This is great, but will SB 53 be Congress-proof?
10.07.2025 10:46
@michaelhuang.bsky.social
Reduce extinction risk by pausing frontier AI unless provably safe @pauseai.bsky.social and banning AI weapons @stopkillerrobots.bsky.social | Reduce suffering @postsuffering.bsky.social https://keepthefuturehuman.ai
PRESS RELEASE: Accountable Tech Commends New York State Senate on Passage of RAISE Act, Urges Gov. Hochul to Sign: accountabletech.org/statements/a...
20.06.2025 17:47
🚨 NEW YORKERS: Tell Governor Hochul to sign the RAISE Act 🚨
NY's RAISE Act, which would require the largest AI developers to have a safety plan, just passed the legislature.
Call Governor Hochul at 1-518-474-8390 to tell her to sign the RAISE Act into law.
Do you trust AI companies with your future?
Less than a year ago, Sam Altman said he wanted to see powerful AI regulated by an international agency to ensure "reasonable safety testing"
But now he says "maybe the companies themselves put together the right framework"
Last year, half of OpenAI's safety researchers quit the company.
Sam Altman says "I would really point to our track record"
The track record: the Superalignment team disbanded, and the FT reported last week that OpenAI is cutting safety testing time from months to just *days*.
China is taking advantage of this and NVIDIA is profiting. NVIDIA produced over 1M H20s in 2024, most going to China. Orders from ByteDance and Tencent have spiked following recent DeepSeek model releases.
Chinese AI runs on American tech that we freely give them! That's not "Art of the Deal"!
AI godfather Geoffrey Hinton says in the next 5 to 20 years there's about a 50% chance that we'll have to confront the problem of AIs trying to take over.
11.04.2025 13:42
Frontier AI models are more capable than they've ever been, and they're being rushed out faster than ever. Not a great combination!
OpenAI used to give staff months to safety test. Now it's just days, per great reporting from Cristina Criddle at the FT. 🧵
FT: OpenAI are slashing the time and resources they're spending on safety testing their most powerful AIs.
Safety testers have only been given days to conduct evaluations.
One of the people testing o3 said "We had more thorough safety testing when [the technology] was less important"
NEW: We just launched a new US campaign to advocate for binding AI regulation!
We've made it super easy to contact your senator:
✅ It takes just 60 seconds to fill out our form
✅ Your message goes directly to both of your senators
controlai.com/take-a...
12 ex-OpenAI employees just filed an amicus brief in Elon Musk's lawsuit seeking to block OpenAI from shedding nonprofit control.
The brief was filed by Harvard Law Professor Lawrence Lessig, who also reps OpenAI whistleblowers.
Here are the highlights 🧵
Can regulators really know when AI is in charge of a weapon instead of a human? Zachary Kallenborn explains the principles of drone forensics.
30.03.2025 13:01
How likely is AI to annihilate humanity?
Elon Musk: "20% likely, maybe 10%"
Ted Cruz: "On what time frame?"
Elon Musk: "5 to 10 years"
With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.
That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".
🧵 1/4
I introduced new AI safety & innovation legislation. Advances in AI are exciting & promising. They also bring risk. We need to embrace & democratize AI innovation while ensuring the people building AI models can speak out.
SB 53 does two things: 🧵
💼 Excellent career opportunity from Lex International, who are hiring an Advocacy and Outreach Officer to help advance work towards a treaty on autonomous weapons.
Apply by January 10 at the link in the replies:
Nobel Prize winner Geoffrey Hinton thinks there is a 10-20% chance AI will "wipe us all out" and calls for regulation.
Our proposal is to implement a Conditional AI Safety Treaty. Read the details below.
www.theguardian.com/technology/2...
'Godfather of AI' raises odds of the technology wiping out humanity over next 30 years
27.12.2024 16:12
The tech industry would prefer that Hinton and other experts go away, since they tend to support AI regulation that the industry mostly opposes.
safesecureai.org/experts
It's likely that Hinton lost money personally when he started warning about AI. He resigned from a Vice President position at Google. It would have been more lucrative for him to say nothing and continue his VP role there.
29.12.2024 11:46
Have you heard about OpenAI's recent o1 model trying to avoid being shut down in safety evaluations? ⬇️
New on the FLI blog:
-Why might AIs resist shutdown?
-Why is this a problem?
-What other instrumental goals could AIs have?
-Could this cause a catastrophe?
Read it below:
I'm excited to share the announcement of the International Conference on Large-Scale AI Risks. The conference will take place 26-28th May 2025 at the Institute of Philosophy of KU Leuven in Belgium.
Our keynote speakers:
โข Yoshua Bengio
โข Dawn Song
โข Iason Gabriel
Submit abstracts by 15 February:
I am currently against humanity (or, in fact, a couple of AI corporations) pursuing artificial general intelligence (AGI). That view could change over time, but for now I believe a world with such powerful technologies is too fragile, and we should avoid pursuing that state altogether.
🧵
Your Bluesky Posts Are Probably In A Bunch of Datasets Now
After a machine learning librarian released and then deleted a dataset of one million Bluesky posts, several other bigger datasets have appeared in its place, including one of almost 300 million posts.
www.404media.co/bluesky-post...
it's so over
02.12.2024 04:15
Yet another safety researcher has left OpenAI.
Rosie Campbell says she has been "unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture".
She says she "can't see a place" for her to continue her work internally.
…they invented the reserve parachute.
01.12.2024 14:24
There was someone even more pessimistic than the pessimist…
01.12.2024 14:24
It's so incredibly 2020s coded that Pokemon Go is being used to build an AI system which will almost inevitably end up being used by automated weapons systems to kill people. nianticlabs.com/news/largege...
17.11.2024 23:07