Google | The Midas Project
Updated their Frontier Safety Framework
On the whole, it's good that Google is continuing to update its risk management policies, and it seems to take the issue much more seriously than some competitors.
Read the full diff at our website: www.themidasproject.com/watchtower/g...
25.09.2025 18:10
Remember that in 2024 Google promised to *define* specific risk thresholds, not explore illustrative examples.
25.09.2025 18:10
Additionally, as Zach Stein-Perlman of AI Lab Watch points out, the critical capability levels (CCLs) for misalignment, which used to be a concrete (albeit initial) approach, are now described as "exploratory" and "illustrative."
25.09.2025 18:10
Similarly, for ML R&D, models that "can" accelerate AI development no longer require RAND SL 3; only models that have already been used for this purpose count. But this is a strange ordering: shouldn't the safeguards precede the deployment (and even the training) of such a model?
25.09.2025 18:10
But it's weakened in other ways.
Critical capability levels, which previously focused on capabilities (e.g. "can be used to cause a mass casualty event"), now seem to rely on anticipated outcomes (e.g. "resulting in additional expected harm at severe scale").
25.09.2025 18:10
In their blog post, Google describes this as a strengthening of the policy.
And in some ways, it is: they define a new harmful manipulation risk category, and they even soften v2's claim that they would only follow the framework if every other company did so as well.
25.09.2025 18:10
Date: September 22, 2025
Company: Google
Change: Released v3 of their Frontier Safety Framework
25.09.2025 18:10
Date: February 26 - March 6, 2025
Company: Google
Change: Scrubbed mentions of diversity and equity from the mission description of their Responsible AI team.
08.03.2025 00:19
[Screenshot: text removed from Anthropic's page]
05.03.2025 04:56
[Screenshot: text removed from Anthropic's page]
05.03.2025 04:56
The smaller changes made to Anthropic's practices:
[Screenshot: text added to Anthropic's page]
05.03.2025 04:56
The good news is that the details they provide on internal practices have changed very little (select screenshots are included in the rest of this thread).
Now all they need to do is provide transparency on *all* the commitments they've made, and on when they choose to abandon any.
05.03.2025 04:56
Most surprisingly, there is now no record of the former commitments on Anthropic's Transparency Hub, a web resource they launched to track their compliance with voluntary commitments and which they describe as "raising the bar on transparency."
05.03.2025 04:56
In fact, post-election, multiple tech companies confirmed their commitments hadn't changed.
Perhaps they understood that the commitments were not contingent on which way the political winds blow, but were made to the public at large.
fedscoop.com/voluntary-ai...
05.03.2025 04:56
While there is a new administration in office, nothing in the commitments suggested that the promise was (1) time-bound or (2) contingent on the party affiliation of the sitting president.
05.03.2025 04:56
Company: @anthropic.com
Date: February 27th, 2025
Change: Removed "White House's Voluntary Commitments for Safe, Secure, and Trustworthy AI," seemingly without a trace, from their webpage "Transparency Hub" (formerly "Tracking Voluntary Commitments")
Some thoughts in 🧵
05.03.2025 04:56
Seoul Commitment Tracker
Tracking the progress of voluntary commitments made at the 2024 AI Seoul Summit in Seoul, South Korea
To view all of the policies released by AI companies, and their scorecards, check out our full report at
www.seoul-tracker.org
14.02.2025 23:17
Company: xAI
Date: February 10, 2025
Change: Released Risk Management Framework draft
URL: x.ai/documents/20...
xAI's policy is stronger than others in its use of specific benchmarks, but it lacks threshold details and provides no mitigations.
14.02.2025 23:17
Company: Amazon
Date: February 10, 2025
Change: Released their Frontier Model Safety Framework
URL: amazon.science/publications...
Like Microsoft's, Amazon's policy goes through the motions while setting vague thresholds that aren't clearly connected to specific mitigations.
14.02.2025 23:17
Company: Microsoft
Date: February 8, 2025
Change: Released their Frontier Governance Framework
URL: cdn-dynmedia-1.microsoft.com/is/content/m...
Microsoft's policy is an admirable effort, but, as with the others, it needs further specification. Mitigations should also be connected to specific thresholds.
14.02.2025 23:17
Company: Cohere
Date: February 7th, 2025
Change: Released their "Secure AI Frontier Model Framework"
URL: cohere.com/security/the...
Cohere's framework mostly neglects the most important risks. Like G42, Cohere is not developing frontier models, which makes this more understandable.
14.02.2025 23:17
Company: G42
Date: February 6, 2025
Change: Released their "Frontier AI Framework"
URL: g42.ai/application/...
G42's policy is surprisingly strong for a non-frontier lab. Its biggest issues are a lack of specificity and a failure to define future thresholds for catastrophic risks.
14.02.2025 23:17
Company: Google
Date: February 4, 2025
Change: Released v2 of their Frontier Safety Framework
URL: deepmind.google/discover/blo...
v2 of the framework improves Google's policy in some areas while weakening it in others, most notably by no longer promising to adhere to it if other companies do not.
14.02.2025 23:17
Company: Meta
Date: February 3rd, 2025
Change: Released their "Frontier AI Framework"
URL: ai.meta.com/static-resou...
Meta's policy includes risk thresholds and a commitment to pause development, but it is severely weakened by caveats and loopholes, and it names no mitigations.
14.02.2025 23:17
Over the past few weeks, a number of AI companies have released safety frameworks as promised at the Seoul AI Safety Summit.
Many others have not.
Most that do exist are weak or missing key components.
A 🧵 of the recently released policies, with evaluations from
@themidasproject.bsky.social
14.02.2025 23:17
Company: Google
Date: February 4, 2025
Change: Removed a ban on using AI technology for warfare and surveillance
URL: ai.google/responsibili...
04.02.2025 23:12