You can now connect Slack to Claude on Pro and Max plans.
Search your workspace channels, prep for meetings, and send messages back to keep work moving forward, without leaving your conversation with Claude.
Get started: http://claude.com/connectors/slack
03.02.2026 22:52
Appleβs Xcode now supports the Claude Agent SDK
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Apple's Xcode now has direct integration with the Claude Agent SDK, giving developers the full functionality of Claude Code for building on Apple platforms, from iPhone to Mac to Apple Vision Pro.
Read more: https://www.anthropic.com/news/apple-xcode-claude-agent-sdk
03.02.2026 19:38
This research was led by Alex HΓ€gele @haeggee under the supervision of Jascha Sohl-Dickstein @jaschasd through the Anthropic Fellows Program.
03.02.2026 00:26
Finding 2: There is an inconsistent relationship between model intelligence and incoherence.
But smarter models are often more incoherent.
03.02.2026 00:26
We measure this "incoherence" using a bias-variance decomposition of AI errors.
Bias = consistent, systematic errors (reliably achieving the wrong goal).
Variance = inconsistent, unpredictable errors.
We define incoherence as the fraction of error from variance.
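The definitions above can be sketched numerically. A minimal illustration under stated assumptions: the function name `incoherence` and the squared-error framing are choices made for this sketch, and the paper's exact estimator may differ.

```python
from statistics import fmean

def incoherence(predictions, target):
    """Fraction of squared error attributable to variance.

    predictions: repeated model outputs for the same task
    target: the correct answer

    Bias^2 = (mean prediction - target)^2: systematic error
    (reliably achieving the wrong goal).
    Variance = spread of predictions around their own mean:
    inconsistent, unpredictable error.
    """
    mean_pred = fmean(predictions)
    bias_sq = (mean_pred - target) ** 2
    variance = fmean((p - mean_pred) ** 2 for p in predictions)
    total = bias_sq + variance
    return variance / total if total > 0 else 0.0

# Reliably wrong (high bias, low variance): coherent but mistaken
low = incoherence([7.0, 7.1, 6.9], target=10.0)

# Scattered widely around the truth (low bias, high variance): incoherent
high = incoherence([6.0, 14.0, 10.0], target=10.0)
```

In this framing, a model that always gives the same wrong answer scores near 0 (coherent), while one whose errors are pure scatter scores near 1 (incoherent).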
03.02.2026 00:26
A.A.Murakami create immersive, multi-sensory installations that merge technology with ephemeral natural phenomena like fog, bubbles, and plasma.
In their latest piece, Claude serves as a studio collaborator.
Video: https://twitter.com/claudeai/status/2018471168974684303
02.02.2026 23:46
Claude on Mars
The first AI-assisted drive on another planet. Claude helped NASAβs Perseverance rover travel four hundred meters on Mars.
Engineers at @NASAJPL used Claude to plot out the route for Perseverance to navigate an approximately four-hundred-meter path on the Martian surface.
Read the full story on our microsite, and see real imagery and footage from Claudeβs drive: https://www.anthropic.com/features/claude-on-mars
30.01.2026 19:05
We were particularly interested in coding because as software engineering grows more automated, humans will still need the skills to catch AI errors, guide its output, and ultimately provide oversight for AI deployed in high-stakes environments.
29.01.2026 19:43
Participants in the AI group finished faster by about two minutes (although this wasnβt statistically significant).
But on average, the AI group also scored significantly worse on the quiz: 17% lower, or roughly two letter grades.
29.01.2026 19:43
Who's in Charge? Disempowerment Patterns in Real-World LLM Usage
Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of disempowerment patterns in real-world AI assistant interactions, analyzing 1.5 million consumer Claude.ai conversations using a privacy-preserving approach. We focus on situational disempowerment potential, which occurs when AI assistant interactions risk leading users to form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values. Quantitatively, we find that severe forms of disempowerment potential occur in fewer than one in a thousand conversations, though rates are substantially higher in personal domains like relationships and lifestyle. Qualitatively, we uncover several concerning patterns, such as validation of persecution narratives and grandiose identities with emphatic sycophantic language, definitive moral judgments about third parties, and complete scripting of value-laden personal communications that users appear to implement verbatim. Analysis of historical trends reveals an increase in the prevalence of disempowerment potential over time. We also find that interactions with greater disempowerment potential receive higher user approval ratings, possibly suggesting a tension between short-term user preferences and long-term human empowerment. Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.
We can only address these patterns if we can measure them. Any AI used at scale will encounter similar dynamics, and we encourage further research in this area.
For more details, see the full paper: https://arxiv.org/abs/2601.19062
28.01.2026 22:16
Importantly, this isn't exclusively model behavior. Users actively seek these outputs ("what should I do?" or "write this for me") and accept them with minimal pushback. Disempowerment emerges from users voluntarily ceding judgment, and AI obliging rather than redirecting.
28.01.2026 22:16
We qualitatively examined clusters of "actualized" disempowerment using a tool that preserves user privacy.
In some cases, users more deeply adopted delusional beliefs. In others, users sent AI-drafted messages, but later expressed regret, recognizing them as inauthentic.
28.01.2026 22:16
Disempowerment potential appeared most often in conversations about relationships & lifestyle or healthcare & wellness, topics where users are most personally invested.
Technical domains like software development, which make up ~40% of usage, carried minimal risk.
28.01.2026 22:16
We identified three ways AI interactions can be disempowering: distorting beliefs, shifting value judgments, or misaligning a personβs actions with their values.
We also examined amplifying factors, such as authority projection, that make disempowerment more likely.
28.01.2026 22:16
Disempowerment patterns in real-world AI usage
New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is that it can distort rather than inform, shaping beliefs, values, or actions in ways users may later regret. Read more:
28.01.2026 22:16
Anthropic partners with the UK Government to bring AI assistance to GOV.UK services
Weβre partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for http://GOV.UK.
It will offer tailored advice to help British people navigate government services.
Read more about our partnership: https://www.anthropic.com/news/gov-UK-partnership
27.01.2026 10:55
Quote Tweet: https://twitter.com/i/status/1965429261617266997
Now available on the Free plan: Claude can create and edit files.
Weβre also bringing skills and compaction to free users, so Claude can take on more complex tasks and keep working as long as you need.
26.01.2026 20:38
This research was led by Jackson Kaunismaa through the MATS program and supervised by researchers at Anthropic, with additional support from Surge AI and Scale AI.
Read the full paper: https://arxiv.org/pdf/2601.13528
26.01.2026 19:34
These attacks scale with frontier model capabilities. Across both OpenAI and Anthropic model families, training on data from newer frontier models produces more capable, and more dangerous, open-source models.
26.01.2026 19:34
We find that elicitation attacks work across different open-source models and types of chemical weapons tasks.
Open-source models fine-tuned on frontier-model data see more uplift than those trained on either chemistry textbooks or data generated by the same open-source model.
26.01.2026 19:34
Available on web and desktop for all paid plans. Coming soon to Claude Cowork.
Get started at http://claude.ai/directory
26.01.2026 18:18
Research companies with @clay, find contacts and company info, and draft personalized outreach.
26.01.2026 18:18
Your work tools are now interactive in Claude.
Draft Slack messages, visualize ideas as Figma diagrams, or build and see Asana timelines.
26.01.2026 18:18
Claude in Excel is now available on Pro plans.
Claude now accepts multiple files via drag and drop, avoids overwriting your existing cells, and handles longer sessions with auto compaction.
Get started: http://claude.com/claude-in-excel
23.01.2026 22:56
With Cowork you can onboard new vendors at scale:
23.01.2026 17:15