Still, the initial reason for linking that post was to point at the karma; a post calling for a disavowal of Musk receiving 16 votes but only 6 karma does not indicate the strong consensus I was originally arguing against.
16.03.2025 15:14
That few people are defending a controversial figure under a post calling for public disavowals doesn't shock me. Many forum users believe controversial things that they will not support openly in the comments.
That said, the lack of support is evidence in favour of the claim that the EA community disavows Musk.
16.03.2025 15:11
The funds are a different question, and I should have made sure the link was only to the post. That was my mistake.
16.03.2025 15:10
In some small cases, it might be counter-signalling; the author has Claude write a significant enough chunk of their post that they fear being caught using an LLM, so they sprinkle in a few spelling mistakes to make it seem more human (since LLMs rarely make spelling mistakes of that sort).
16.03.2025 14:35
The specific comment I linked to is also written by someone other than yourself (check the top of the webpage). Still, I apologise for the confusion.
16.03.2025 14:30
I wasn't trying to link a specific comment and I apologise for doing so! I was intending to show the post, not the comment.
I only realised I had done so when I clicked the link. I couldn't figure out how to edit my tweet (apparently it's only available on the app).
Apologies David
16.03.2025 14:25
You got me! "Glimmers" are indeed faint :D
Allow me to rephrase it as "a dim glimmer"
Without getting lost in defining 'glimmers', I don't see the slight 1-5% changes as much to be optimistic about. They're something, but not much of an update. I hope my first comment didn't come across as spiky!
16.03.2025 09:42
Yes, fair. A faint glimmer.
16.03.2025 09:32
My guess is that Peterson, Rogan and Brand weren't the product of Republicans making keen bets on up-and-coming voices.
The popularity of these influencers is probably driven more by demand than by supply.
16.03.2025 09:32
These changes are all pretty small. Consistent, but still small.
16.03.2025 09:27
I didn't pick up on the use of the "royal we."
The linked post indicates that the consensus you claimed doesn't appear to exist (more votes than karma; comments that don't clearly agree in one direction).
16.03.2025 03:49
Who are you speaking for when you say "we disavow him"?
This post on the forum calling for 'EA' to disavow Musk doesn't indicate it's as uniform a view as you imply. forum.effectivealtruism.org/posts/wjBXNj...
16.03.2025 02:01
Further evidence: see well-paid EA community members taking the 1% pledge instead of 10%. 1% is in many ways signalling (and in my eyes, signalling the wrong thing). An increasing share of pledgers just take the 1% path. Better than nothing, but further from ye olden days.
04.12.2024 16:32
An examination of what will replace humans. It seems like there is an absence of this kind of modelling. Do humans stay as they are, become appendages to SAI, become some kind of lightly upgraded selves, or become Jupiter brains? Maybe examine why we become one thing vs another.
28.11.2024 14:48
It's fine if you accidentally learn things, just so long as none of them influence your decision to have kids.
27.11.2024 09:38
@tobyord.bsky.social is missing, as is @wdmacaskill.bsky.social
25.11.2024 14:57
Could also be a nod to Gemini, which wished for a user to die (though, admittedly, that happened pretty recently and maybe before the ad campaign). Or perhaps Bing Chat, for being unhinged.
25.11.2024 14:52
'Lizardman's Constant' fails to account for surveys where the lizardmen themselves are participating
25.11.2024 14:49
Lizardman's constant increasing?
25.11.2024 14:35
This is true. Some people will agree with the above tweet (ingroup) and others won't (outgroup), neatly carving the world into two groups.
25.11.2024 14:32
If humans were immortal, we'd escape the relentless attrition of brilliant minds to old age. Those minds could continue iterating forever and come up with an escape from the "everyone spends 70% of their life in education or employment just to keep society running" status quo.
25.11.2024 13:28
Subscribe to the newsletter for the thoughtful stuff. www.richardhanania.com
Ezra Klein's tweets, articles, clips and podcasts on bluesky.
We are a research institute investigating the trajectory of AI for the benefit of society.
epoch.ai
Techno-optimist, but AGI is not like the other technologies.
Step 1: make memes.
Step 2: ???
Step 3: lower p(doom)
• Founder of Our World in Data
• Professor at the University of Oxford
Data to understand global problems and research to make progress against them.
Storyteller. Pragmatist. Pursue excellence.
AI policy researcher, wife guy in training, fan of cute animals and sci-fi. Started a Substack recently: https://milesbrundage.substack.com/
Effective Altruism and the Human Mind (with Lucius Caviola) is available for free at: https://academic.oup.com/book/56384
For physical and audiobook versions, see: https://stefanschubert.substack.com/p/physical-and-audiobook-versions-of
AI safety at Anthropic, on leave from a faculty job at NYU.
Views not employers'.
I think you should join Giving What We Can.
cims.nyu.edu/~sbowman
Social policy synthesizer. www.secondbest.ca
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
Chief Scientist at the UK AI Security Institute (AISI). Previously DeepMind, OpenAI, Google Brain, etc.
👎: suffering | 👍: EA, AI alignment, decoupling, R, cringe, amateur pharmacology + programming | Georgetown '22 (math+econ+phil) | Career status: 🤷‍♂️
Anthropic and Import AI. Previously OpenAI, Bloomberg, The Register. Weird futures.
Comms officer @ Open Philanthropy, former Magic pro, webfiction connoisseur. https://aarongertler.net/
Research @ Open Philanthropy. Formerly economist at GPI / Nuffield College, Oxford.
Interests: development econ, animal welfare, global catastrophic risks
Associate Professor of Environmental Studies, Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, and Co-Director of the Wild Animal Welfare Program, New York University. jeffsebo.net