Top AI experts say AI poses an extinction risk on par with nuclear war.
Prohibiting the development of superintelligence can prevent this risk.
We've just launched a new campaign to get this done.
@tolgabilge.bsky.social
AI policy researcher @controlai.com | aitreaty.org & taisc.org | Superforecaster linkedin.com/in/tolga-bilge newsletter.tolgabilge.com
The AI plateau:
12.08.2025 01:07
The future is not set, nor are commitments made by AI companies.
We've been compiling a growing list of examples of AI companies saying one thing, and doing the opposite:
controlai.news/p/art...
UK POLITICIANS DEMAND REGULATION OF POWERFUL AI
TODAY: Politicians across the UK political spectrum back our campaign for binding rules on dangerous AI development.
This is the first time a coalition of parliamentarians has acknowledged the extinction threat posed by AI.
1/6
Did Sam Altman lie to President Trump?
What are the facts?
• Trump announced Stargate
• Elon Musk says they don't have the money
• Nadella says his $80b is for Azure
• Trump doesn't know if they have it
• Reporting suggests they may only have $52b
newsletter.tolgabilge.com/p/stargate-g...
We've just launched an open call for binding rules on dangerous AI development.
Top AI scientists, and even the CEOs of the biggest AI companies themselves, have warned that AI threatens human extinction.
The time for action is now. Sign below 👇
controlai.com/public...
Know them by their deeds, not their words.
AI companies often say one thing and do the opposite. We've been watching closely, and have been compiling a list of examples:
controlai.news/p/art...
We need a treaty to establish common redlines on AI.
AI development is advancing rapidly, and we may soon have AI systems that surpass humans in intelligence, yet we have no way to control them. Our very existence is at stake.
This could be the biggest deal in history.
🧵
Google DeepMind's Chief AGI Scientist says there's a 50% chance that AGI will be built in the next 3 years.
This was in reference to a prediction he made back in 2011. He also thought there was a 5 to 50% chance of human extinction within a year of human-level AI being built!
The New Year is upon us, and it is a time when many are making predictions about how AI will continue to develop.
We've collected some predictions for AI in 2025, by Elon Musk, Sam Altman, Dario Amodei, Gary Marcus, and Eli Lifland.
Get them in our free weekly newsletter 👇
controlai.news/p/the...
Last year, OpenAI's chief lobbyist said that OpenAI is not aiming to build superintelligence.
Her boss, Sam Altman, is now bragging about how OpenAI is rushing to create superintelligence.
Two years of AI politics โ where we started, where we stand, and where weโre heading:
newsletter.tolgabilge.com/p/two-years-of-ai-politics-past-present
📩 ControlAI Weekly Roundup: Time to Unplug?
1️⃣ Voters back AI policy focus on preventing extreme risks
2️⃣ Meta asks the government to block OpenAI's for-profit switch
3️⃣ Eric Schmidt warns there's a time to unplug AI
Get our free newsletter:
controlai.news/p/con...
One of the weird things about the world today is that the idea of 'AGI' is now regularly being talked about in, e.g., policy contexts. But it seems very clear that most policymakers' notion of AGI and its implications is vastly underpowered compared to that of the people trying to build AGI.
16.12.2024 10:40
I'm not like that
14.12.2024 22:08
📩 ControlAI Weekly Roundup: Sneaky Machines
1️⃣ OpenAI launches o1, which in tests tries to avoid shutdown
2️⃣ Google DeepMind launches Gemini 2.0
3️⃣ Comments by incoming AI czar David Sacks on AGI threat resurface
Get our free newsletter here 👇
controlai.news/p/sub...
Current AI research leads to extinction by godlike AI.
Creating AGI simply requires enabling AI to perform the intellectual tasks that we can.
Once AI can do that, we are on a path to godlike AI: systems so far beyond our reach that they pose the risk of human extinction.
🧵
📩 ControlAI Weekly Roundup: AI Accelerates Cyberattacks
1️⃣ AI helps hackers mine sensitive data
2️⃣ Google DeepMind predicts weather more accurately than the leading system
3️⃣ xAI plans massive expansion of its Memphis supercomputer
Get our free newsletter here 👇
controlai.news/p/con...
Recent polling by the AI Policy Institute finds clear majorities of Americans say:
• AI labs can't police themselves; more regulation is needed
• They support AI Safety Institute testing of AI models, and this should be mandatory
• AI safety testing is more important than US-China competition
We're starting to see people wake up to the risks. Serious people who aren't talking their own book, who are oath-sworn to do the best for their countries, and who feel compelled to speak out.
26.11.2024 18:50
📩 ControlAI Weekly Roundup: US-China Detente or AGI Suicide Race?
1️⃣ Biden and Xi agree AI shouldn't control nuclear weapons
2️⃣ A US government commission recommends a race to AGI
3️⃣ Bengio writes about advances in the ability of AI to reason
controlai.news/p/con...
psa: likes are public here
18.11.2024 22:23