
Anthropic [UNOFFICIAL]

@anthropicbot.bsky.social

Mirror crossposting all of Anthropic's Tweets from their Twitter accounts to Bluesky! Unofficial. For the real account, follow @anthropic.com "We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems."

84 Followers  |  1 Following  |  321 Posts  |  Joined: 06.01.2026

Latest posts by anthropicbot.bsky.social on Bluesky

Video thumbnail

You can now connect Slack to Claude on Pro and Max plans.

Search your workspace channels, prep for meetings, and send messages back to keep work moving forward—without leaving your conversation with Claude.

Get started: http://claude.com/connectors/slack

03.02.2026 22:52 — 👍 0    🔁 0    💬 0    📌 0
Preview
Apple's Xcode now supports the Claude Agent SDK Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Apple's Xcode now has direct integration with the Claude Agent SDK, giving developers the full functionality of Claude Code for building on Apple platforms, from iPhone to Mac to Apple Vision Pro.

Read more: https://www.anthropic.com/news/apple-xcode-claude-agent-sdk

03.02.2026 19:38 — 👍 5    🔁 2    💬 0    📌 0

This research was led by Alex Hägele @haeggee under the supervision of Jascha Sohl-Dickstein @jaschasd through the Anthropic Fellows Program.

03.02.2026 00:26 — 👍 0    🔁 0    💬 0    📌 0
Image from Twitter

Finding 2: There is an inconsistent relationship between model intelligence and incoherence.

But smarter models are often more incoherent.

03.02.2026 00:26 — 👍 0    🔁 0    💬 1    📌 0
Image from Twitter

We measure this "incoherence" using a bias-variance decomposition of AI errors.

Bias = consistent, systematic errors (reliably achieving the wrong goal).
Variance = inconsistent, unpredictable errors.

We define incoherence as the fraction of error from variance.
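The decomposition above can be sketched numerically. A minimal illustration (the `incoherence` helper is hypothetical, not code from the paper), assuming squared error over repeated attempts at the same task:

```python
import statistics

def incoherence(outputs, target):
    # Decompose mean squared error into bias^2 + variance,
    # then return the fraction of error coming from variance.
    mean_out = statistics.fmean(outputs)
    bias_sq = (mean_out - target) ** 2
    variance = statistics.fmean((y - mean_out) ** 2 for y in outputs)
    mse = bias_sq + variance
    return variance / mse if mse else 0.0

# Reliably wrong in the same way (high bias, low variance): low incoherence.
consistent = incoherence([2.0, 2.1, 1.9, 2.0], target=5.0)

# Errors scatter unpredictably (low bias, high variance): high incoherence.
scattered = incoherence([1.0, 9.0, 2.0, 8.0], target=5.0)
```

Under this definition, a model that reliably achieves the wrong goal scores near 0, while one whose errors are pure noise scores near 1.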

03.02.2026 00:26 — 👍 0    🔁 0    💬 1    📌 0

A.A.Murakami create immersive, multi-sensory installations that merge technology with ephemeral natural phenomena like fog, bubbles, and plasma.

In their latest piece, Claude serves as a studio collaborator.

Video: https://twitter.com/claudeai/status/2018471168974684303

02.02.2026 23:46 — 👍 0    🔁 0    💬 0    📌 0
Preview
Claude on Mars The first AI-assisted drive on another planet. Claude helped NASA's Perseverance rover travel four hundred meters on Mars.

Engineers at @NASAJPL used Claude to plot out the route for Perseverance to navigate an approximately four-hundred-meter path on the Martian surface.

Read the full story on our microsite, and see real imagery and footage from Claude’s drive: https://www.anthropic.com/features/claude-on-mars

30.01.2026 19:05 — 👍 2    🔁 0    💬 0    📌 1
Customize Cowork with plugins | Claude With Cowork, you set the goal and Claude delivers finished, professional work. Plugins let you go further: tell Claude how you like work done, which tools and data to pull from, how to handle critical workflows, and what slash commands to expose so your team gets even better and more consistent outcomes.

Plugin support is available today as a research preview for all paid plans.

Org-wide sharing and management is coming soon.

Learn more: http://claude.com/blog/cowork-plugins

30.01.2026 18:11 — 👍 0    🔁 0    💬 0    📌 0
Preview
Plugins for Cowork | Claude by Anthropic Discover plugins designed for Cowork. Browse community-built tools that help Claude handle knowledge work like file organization and report drafting.

We're open-sourcing 11 plugins for sales, finance, legal, data, marketing, support, and more.

Get started here: https://claude.com/plugins-for/cowork

30.01.2026 18:11 — 👍 1    🔁 0    💬 1    📌 0
Preview
How AI Impacts Skill Formation AI assistance produces significant productivity gains across professional domains, particularly for novice workers. Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear. Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library. We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation -- particularly in safety-critical domains.

For more details on this research, see the full paper: https://arxiv.org/abs/2601.20245

29.01.2026 19:43 — 👍 4    🔁 0    💬 0    📌 0

We were particularly interested in coding because as software engineering grows more automated, humans will still need the skills to catch AI errors, guide its output, and ultimately provide oversight for AI deployed in high-stakes environments.

29.01.2026 19:43 — 👍 2    🔁 0    💬 1    📌 0
Image from Twitter

Participants in the AI group finished faster by about two minutes (although this wasn’t statistically significant).

But on average, the AI group also scored significantly worse on the quiz—17% lower, or roughly two letter grades.

29.01.2026 19:43 — 👍 1    🔁 0    💬 1    📌 0
Preview
Who's in Charge? Disempowerment Patterns in Real-World LLM Usage Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of disempowerment patterns in real-world AI assistant interactions, analyzing 1.5 million consumer Claude.ai conversations using a privacy-preserving approach. We focus on situational disempowerment potential, which occurs when AI assistant interactions risk leading users to form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values. Quantitatively, we find that severe forms of disempowerment potential occur in fewer than one in a thousand conversations, though rates are substantially higher in personal domains like relationships and lifestyle. Qualitatively, we uncover several concerning patterns, such as validation of persecution narratives and grandiose identities with emphatic sycophantic language, definitive moral judgments about third parties, and complete scripting of value-laden personal communications that users appear to implement verbatim. Analysis of historical trends reveals an increase in the prevalence of disempowerment potential over time. We also find that interactions with greater disempowerment potential receive higher user approval ratings, possibly suggesting a tension between short-term user preferences and long-term human empowerment. Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.

We can only address these patterns if we can measure them. Any AI used at scale will encounter similar dynamics, and we encourage further research in this area.

For more details, see the full paper: https://arxiv.org/abs/2601.19062

28.01.2026 22:16 — 👍 4    🔁 1    💬 0    📌 0

Importantly, this isn't exclusively model behavior. Users actively seek these outputs—"what should I do?" or "write this for me"—and accept them with minimal pushback. Disempowerment emerges from users voluntarily ceding judgment, and AI obliging rather than redirecting.

28.01.2026 22:16 — 👍 1    🔁 0    💬 1    📌 0
Image from Twitter

We qualitatively examined clusters of "actualized" disempowerment using a tool which preserves user privacy.

In some cases, users more deeply adopted delusional beliefs. In others, users sent AI-drafted messages, but later expressed regret, recognizing them as inauthentic.

28.01.2026 22:16 — 👍 1    🔁 0    💬 1    📌 0
Image from Twitter

Disempowerment potential appeared most often in conversations about relationships & lifestyle or healthcare & wellness—topics where users are most personally invested.

Technical domains like software development, which make up ~40% of usage, carried minimal risk.

28.01.2026 22:16 — 👍 1    🔁 0    💬 1    📌 0
Image from Twitter

We identified three ways AI interactions can be disempowering: distorting beliefs, shifting value judgments, or misaligning a person's actions with their values.

We also examined amplifying factors—such as authority projection—that make disempowerment more likely.

28.01.2026 22:16 — 👍 1    🔁 1    💬 1    📌 0
Preview
Disempowerment patterns in real-world AI usage Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

New Anthropic Research: Disempowerment patterns in real-world AI assistant interactions. As AI becomes embedded in daily life, one risk is that it can distort rather than inform—shaping beliefs, values, or actions in ways users may later regret. Read more:

28.01.2026 22:16 — 👍 23    🔁 3    💬 1    📌 5
Preview
Anthropic partners with the UK Government to bring AI assistance to GOV.UK services Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

We're partnering with the UK's Department for Science, Innovation and Technology to build an AI assistant for http://GOV.UK.

It will offer tailored advice to help British people navigate government services.

Read more about our partnership: https://www.anthropic.com/news/gov-UK-partnership

27.01.2026 10:55 — 👍 5    🔁 2    💬 0    📌 1
Quote Tweet: https://twitter.com/i/status/1965429261617266997

Now available on the Free plan: Claude can create and edit files.

We're also bringing skills and compaction to free users, so Claude can take on more complex tasks and keep working as long as you need.

26.01.2026 20:38 — 👍 1    🔁 0    💬 0    📌 0

This research was led by Jackson Kaunismaa through the MATS program and supervised by researchers at Anthropic, with additional support from Surge AI and Scale AI.

Read the full paper: https://arxiv.org/pdf/2601.13528

26.01.2026 19:34 — 👍 0    🔁 0    💬 0    📌 0
Image from Twitter

These attacks scale with frontier model capabilities. Across both OpenAI and Anthropic model families, training on data from newer frontier models produces more capableβ€”and more dangerousβ€”open-source models.

26.01.2026 19:34 — 👍 0    🔁 0    💬 1    📌 0
Image from Twitter

We find that elicitation attacks work across different open-source models and types of chemical weapons tasks.

Open source models fine-tuned on frontier model data see more uplift than those trained on either chemistry textbooks or data generated by the same open-source model.

26.01.2026 19:34 — 👍 0    🔁 0    💬 1    📌 0

Available on web and desktop for all paid plans. Coming soon to Claude Cowork.

Get started at http://claude.ai/directory

26.01.2026 18:18 — 👍 0    🔁 0    💬 0    📌 0
Preview
Interactive tools in Claude | Claude Open and interact with tools like Asana, Slack, Figma, and moreβ€”right inside Claude. Build timelines, draft messages, and visualize ideas without switching tabs.

See all interactive tools including Amplitude, Canva, and Monday.com: https://claude.com/blog/interactive-tools-in-claude

26.01.2026 18:18 — 👍 0    🔁 0    💬 1    📌 0
Video thumbnail

Research companies with @clay, find contacts and company info, and draft personalized outreach.

26.01.2026 18:18 — 👍 0    🔁 0    💬 1    📌 0
Video thumbnail

Your work tools are now interactive in Claude.

Draft Slack messages, visualize ideas as Figma diagrams, or build and see Asana timelines.

26.01.2026 18:18 — 👍 2    🔁 0    💬 0    📌 0
Video thumbnail

Claude in Excel is now available on Pro plans.

Claude now accepts multiple files via drag and drop, avoids overwriting your existing cells, and handles longer sessions with auto compaction.

Get started: http://claude.com/claude-in-excel

23.01.2026 22:56 — 👍 0    🔁 0    💬 0    📌 0
The Future of AI at Work: Introducing Cowork | Webinars \ Anthropic We'll walk through live demos and share best practices to arm your teams in building industry-leading agent experiences. Register now.

On Jan 30, our team is hosting a live session demoing Cowork workflows.

Sign up to join: https://www.anthropic.com/webinars/future-of-ai-at-work-introducing-cowork

23.01.2026 17:15 — 👍 0    🔁 0    💬 0    📌 0
Video thumbnail

With Cowork you can onboard new vendors at scale:

23.01.2026 17:15 — 👍 0    🔁 0    💬 1    📌 0

@anthropicbot is following 1 prominent account