
Michael Huang

@michaelhuang.bsky.social

Reduce extinction risk by pausing frontier AI unless provably safe @pauseai.bsky.social and banning AI weapons @stopkillerrobots.bsky.social | Reduce suffering @postsuffering.bsky.social https://keepthefuturehuman.ai

140 Followers  |  105 Following  |  10 Posts  |  Joined: 30.07.2023

Latest posts by michaelhuang.bsky.social on Bluesky


This is great, but will SB 53 be Congress-proof?

10.07.2025 10:46 — 👍 0    🔁 0    💬 0    📌 0

PRESS RELEASE: Accountable Tech Commends New York State Senate on Passage of RAISE Act, Urges Gov. Hochul to Sign: accountabletech.org/statements/a...

20.06.2025 17:47 — 👍 3    🔁 1    💬 0    📌 0

🚨NEW YORKERS: Tell Governor Hochul to sign the RAISE Act 🚨

NY's RAISE Act, which would require the largest AI developers to have a safety plan, just passed the legislature.

Call Governor Hochul at 1-518-474-8390 to tell her to sign the RAISE Act into law.

17.06.2025 20:29 — 👍 1    🔁 1    💬 1    📌 0


Do you trust AI companies with your future?

Less than a year ago, Sam Altman said he wanted to see powerful AI regulated by an international agency to ensure "reasonable safety testing"

But now he says "maybe the companies themselves put together the right framework"

15.04.2025 00:14 — 👍 2    🔁 1    💬 0    📌 0

Last year, half of OpenAI's safety researchers quit the company.

Sam Altman says "I would really point to our track record"

The track record: Superalignment team disbanded, FT reporting last week that OpenAI is cutting safety testing time down from months to just *days*.

14.04.2025 18:48 — 👍 4    🔁 2    💬 0    📌 0

China is taking advantage of this, and NVIDIA is profiting. NVIDIA produced over 1M H20s in 2024 — most going to China. Orders from ByteDance and Tencent have spiked following recent DeepSeek model releases.

Chinese AI runs on American tech that we freely give them! That's not "Art of the Deal"!

11.04.2025 12:07 — 👍 1    🔁 1    💬 1    📌 0

AI godfather Geoffrey Hinton says in the next 5 to 20 years there's about a 50% chance that we'll have to confront the problem of AIs trying to take over.

11.04.2025 13:42 — 👍 3    🔁 1    💬 0    📌 0

Frontier AI models are more capable than they've ever been, and they're being rushed out faster than ever. Not a great combination!

OpenAI used to give staff months to safety test. Now it's just days, per great reporting from Cristina Criddle at the FT. 🧵

11.04.2025 16:16 — 👍 5    🔁 2    💬 2    📌 0

FT: OpenAI are slashing the time and resources they're spending on safety testing their most powerful AIs.

Safety testers have only been given days to conduct evaluations.

One of the people testing o3 said "We had more thorough safety testing when [the technology] was less important"

11.04.2025 17:11 — 👍 2    🔁 2    💬 1    📌 0
ControlAI: At ControlAI we are fighting to keep humanity in control.

NEW: We just launched a new US campaign to advocate for binding AI regulation!

We've made it super easy to contact your senator:
— It takes just 60 seconds to fill our form
— Your message goes directly to both of your senators

controlai.com/take-a...

11.04.2025 18:40 — 👍 2    🔁 1    💬 0    📌 0

12 ex-OpenAI employees just filed an amicus brief in the Elon Musk lawsuit seeking to block OpenAI from shedding nonprofit control.

The brief was filed by Harvard Law Professor Lawrence Lessig, who also reps OpenAI whistleblowers.

Here are the highlights 🧵

11.04.2025 23:40 — 👍 22    🔁 6    💬 1    📌 4
Verifying Who Pulled the Trigger: Can regulators know when autonomous weapons systems are being used?

Can regulators really know when AI is in charge of a weapon instead of a human? Zachary Kallenborn explains the principles of drone forensics.

30.03.2025 13:01 — 👍 58    🔁 12    💬 1    📌 1

How likely is AI to annihilate humanity?
Elon Musk: "20% likely, maybe 10%"
Ted Cruz: "On what time frame?"
Elon Musk: "5 to 10 years"

20.03.2025 14:22 — 👍 3    🔁 2    💬 0    📌 0

With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.

That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".

🧵 1/4

07.03.2025 19:11 — 👍 11    🔁 8    💬 1    📌 2

I introduced new AI safety & innovation legislation. Advances in AI are exciting & promising. They also bring risk. We need to embrace & democratize AI innovation while ensuring the people building AI models can speak out.

SB 53 does two things: 🧵

28.02.2025 17:33 — 👍 29    🔁 2    💬 2    📌 3

💼 Excellent career opportunity from Lex International, who are hiring an Advocacy and Outreach Officer to help advance work towards a treaty on autonomous weapons.

โœ๏ธ Apply by January 10 at the link in the replies:

03.01.2025 19:49 — 👍 6    🔁 3    💬 2    📌 1

Nobel Prize winner Geoffrey Hinton thinks there is a 10-20% chance AI will "wipe us all out" and calls for regulation.

Our proposal is to implement a Conditional AI Safety Treaty. Read the details below.

www.theguardian.com/technology/2...

01.01.2025 01:34 — 👍 1    🔁 1    💬 0    📌 0
'Godfather of AI' raises odds of the technology wiping out humanity over next 30 years. Geoffrey Hinton says there is 10-20% chance AI will lead to human extinction in next three decades amid fast pace of change. The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has raised the odds of AI wiping…

'Godfather of AI' raises odds of the technology wiping out humanity over next 30 years

27.12.2024 16:12 — 👍 170    🔁 80    💬 36    📌 72
Letter from renowned AI experts | SB 1047 - Safe & Secure AI Innovation

The tech industry would prefer that Hinton and other experts go away, since they tend to support AI regulation that the tech industry mostly opposes.

safesecureai.org/experts

29.12.2024 12:15 — 👍 1    🔁 0    💬 0    📌 0

It's likely that Hinton lost money personally when he started warning about AI. He resigned from a Vice President position at Google. It would have been more lucrative for him to say nothing and continue his VP role there.

29.12.2024 11:46 — 👍 1    🔁 0    💬 0    📌 0

Have you heard about OpenAI's recent o1 model trying to avoid being shut down in safety evaluations? ⬇️

New on the FLI blog:
-Why might AIs resist shutdown?
-Why is this a problem?
-What other instrumental goals could AIs have?
-Could this cause a catastrophe?

🔗 Read it below:

27.12.2024 19:49 — 👍 5    🔁 2    💬 1    📌 2
International Conference on Large-Scale AI Risks

I'm excited to share the announcement of the International Conference on Large-Scale AI Risks. The conference will take place 26-28th May 2025 at the Institute of Philosophy of KU Leuven in Belgium.

Our keynote speakers:
โ€ข Yoshua Bengio
โ€ข Dawn Song
โ€ข Iason Gabriel

Submit abstract by 15 February:

17.12.2024 09:48 — 👍 21    🔁 4    💬 0    📌 0

I am currently against humanity (or in fact, a couple of AI corporations) pursuing artificial general intelligence (AGI). While that view could change over time, I currently believe that a world with such powerful technologies is too fragile, and we should avoid pursuing that state altogether.

🧵

09.12.2024 11:08 — 👍 11    🔁 2    💬 1    📌 0

Your Bluesky Posts Are Probably In A Bunch of Datasets Now

After a machine learning librarian released and then deleted a dataset of one million Bluesky posts, several other bigger datasets have appeared in its place—including one of almost 300 million posts.

๐Ÿ”— www.404media.co/bluesky-post...

03.12.2024 17:28 — 👍 205    🔁 81    💬 15    📌 32

it's so over

02.12.2024 04:15 — 👍 1    🔁 0    💬 0    📌 0

Yet another safety researcher has left OpenAI.

Rosie Campbell says she has been "unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture".

She says she "can't see a place" for her to continue her work internally.

01.12.2024 00:48 — 👍 56    🔁 12    💬 3    📌 0

…they invented the reserve parachute.

01.12.2024 14:24 — 👍 2    🔁 0    💬 0    📌 0

There was someone even more pessimistic than the pessimist…

01.12.2024 14:24 — 👍 2    🔁 0    💬 1    📌 0
Building a Large Geospatial Model to Achieve Spatial Intelligence: At Niantic, we are pioneering the concept of a Large Geospatial Model that will use large-scale machine learning to understand a scene and connect it to millions of other scenes globally.

It's so incredibly 2020s coded that Pokemon Go is being used to build an AI system which will almost inevitably end up being used by automated weapons systems to kill people. nianticlabs.com/news/largege...

17.11.2024 23:07 — 👍 43    🔁 15    💬 0    📌 3
