
Taylor Beauvais

@taylorbeauvais.bsky.social

PhD candidate at Boston University studying the sociology of AI. Grad fellow at the Rafik Hariri Institute for Computation and Computer Engineering. Teaching fellow for AI Ethics. Machine learning analyst at Open Justice Lab.

405 Followers  |  541 Following  |  59 Posts  |  Joined: 07.02.2024

Latest posts by taylorbeauvais.bsky.social on Bluesky

How to Protest Safely in the Age of Surveillance

Law enforcement has more tools than ever to track your movements and access your communications. Here’s how to protect your privacy if you plan to protest. www.wired.com/story/how-to...

12.06.2025 19:30 — 👍 1359    🔁 829    💬 24    📌 54
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…

Apple out with a new paper that, not coincidentally, explains why they’ve stayed out of the LLM mania.

The “illusion of thinking” — what I and many, many others have been saying for ages, only to get the “nuh uh” from the mentally ill AI chudbro crowd.

This is why β€œagents” will fail, btw.

07.06.2025 23:36 — 👍 5    🔁 2    💬 0    📌 0
On moving fast and breaking things . . . again: social media’s lessons for generative AI governance Generative AI systems are increasingly being employed globally, bringing with them both tremendous promise and substantial potential for harm. As is often the case, governance initiatives are laggi...

On moving fast and breaking things . . . again: social media’s lessons for generative AI governance

www.tandfonline.com/doi/full/10....

05.06.2025 14:35 — 👍 1    🔁 0    💬 0    📌 0
Hassabis said he could also see AI being used to protect people from other algorithms designed by big tech to drain their attention away from more important tasks. “I’m very excited about the idea of a universal AI assistant that knows you really well, enriches your life by maybe giving you amazing recommendations, and helps to take care of mundane chores for you,” he said.

“[It] basically gives you more time and maybe protects your attention from other algorithms trying to gain your attention. I think we can actually use AI in service of the individual.”


Current tech is a collection of guys in hotdog costumes saying they have a new thing to save you from the old thing. www.theguardian.com/technology/2...

03.06.2025 10:08 — 👍 208    🔁 59    💬 14    📌 17
Opinion | Silicon Valley Is at an Inflection Point

"Imagine what would happen if most climate science were done by researchers who worked in fossil fuel companies. That’s what’s happening with AI"

@karenhao.bsky.social on the role of tech firms in society & the importance of independent research in democracy

www.nytimes.com/2025/05/30/o...

02.06.2025 16:24 — 👍 16    🔁 4    💬 0    📌 0

These people can't stop reinventing phrenology

31.05.2025 22:21 — 👍 104    🔁 28    💬 3    📌 0

this is why "diverse" training datasets don't inherently mean those communities end up being the primary beneficiaries of the technology

31.05.2025 22:22 — 👍 94    🔁 24    💬 2    📌 0

to summarise the study: there is negligible productivity or time gain from AI chatbot use, and the only driving factor for mass deployment is the fabricated fear of being left behind by the "AI revolution"

29.05.2025 11:33 — 👍 196    🔁 72    💬 3    📌 2
AI-powered political fanfiction racks up views online Fake stories about real politicians sometimes get more views than real-world reporting that’s not built for the algorithms.

AI-generated slop on Facebook, TikTok, and YouTube has become a barometer of political fame, just as it has of pop culture celebrity — and some lawmakers are starting to worry.

29.05.2025 12:03 — 👍 12    🔁 4    💬 1    📌 1
No One Knows How to Deal With 'Student-on-Student' AI CSAM A new report from Stanford finds that schools, parents, police, and our legal system are not prepared to deal with the growing problem of minors using AI to generate CSAM of other minors.

The AI revolution: opening new frontiers in bullying.

This technology should not be easily accessible to the public; it serves no purpose and provides no benefit.

29.05.2025 13:06 — 👍 3    🔁 2    💬 1    📌 0

The hype cycle, while purporting to be all grown-up and hard-headed, is just another coping strategy for those invested in technological determinism.

27.05.2025 08:44 — 👍 9    🔁 2    💬 0    📌 0

That goes back to point 1 though. The Gov does not have infinite police power. Their losses in court, regarding Harvard cases and otherwise, prove that.

The Gov also loses some power with every loss. Every court win for Harvard empowers other universities to follow suit, using the same playbook.

24.05.2025 16:08 — 👍 1    🔁 0    💬 1    📌 0

Harvard knows the law, the judges, and power better than Trump. Peace of mind may be shaken, but Harvard's power isn't as precarious as the stock market. Harvard remains just as appealing and competitive as ever, if not more so, specifically because they're fighting.

24.05.2025 13:47 — 👍 8    🔁 1    💬 1    📌 0

3. It's perhaps the best-connected higher ed institution in history. If you need resources, it helps to know the richest people in existence. It helps that they have played a role in writing the laws of this country. Even much of the conservative gov was educated by them, which builds goodwill.

24.05.2025 13:47 — 👍 6    🔁 2    💬 1    📌 0

2. People around the world know and revere their research and education. The Gov can talk shit, but if you still have some of the best researchers/labs/curriculum/facilities it doesn't matter. Harvard is made of people and spaces. People know that, and that can't be taken away with an exec order.

24.05.2025 13:47 — 👍 6    🔁 1    💬 1    📌 0

The argument is really shallow though. The Gov wins because they won't stop and people lose confidence? This isn't the stock market.

3 big reasons why this is silly:

1. Harvard has yet to lose in court. The Gov needs to win some for their threat to be effective. People are scared now, not displaced

24.05.2025 13:47 — 👍 14    🔁 1    💬 1    📌 0
Tweet by Sam Bowman
@sleepinyourhat
If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.


welcome to the future, now your error-prone software can call the cops

(this is an Anthropic employee talking about Claude Opus 4)

22.05.2025 20:55 — 👍 3256    🔁 669    💬 170    📌 231
Altman is trying to cut out the middleman and condense digital life into a single, unified piece of hardware and software. The promise is this: Your whole life could be lived through such a device, turning OpenAI’s products into a repository of uses and personal data that could be impossible to leave—just as, if everyone in your family has an iPhone, MacBook, and iCloud storage plan, switching to Android is deeply unpleasant and challenging.


My firm belief is that the reason they are so bad at presenting compelling use cases is that they are trying to sell an empty container built for their purposes but not for actual users.

22.05.2025 12:39 — 👍 211    🔁 55    💬 8    📌 11
Computer Science and the Law | Communications of the ACM

"...some issues... require not just a knowledge of law or of technology, but of both. That is, some problems cannot be discussed purely on technical grounds or purely on legal grounds; the crux of the matter lies in the intersection"

AI "safety " work requires sociology!
dl.acm.org/doi/10.1145/...

22.05.2025 12:59 — 👍 0    🔁 0    💬 0    📌 0
“They’re trying to influence me to gain the more acceptable viewpoint”: The algorithmic imaginaries of politically activated social media users - Raven Maragh-Lloyd, Ryan Stoldt, Javie Ssozie, Kathryn... Links between extremism online and personalization algorithms are, by now, widely accepted. However, discussions surrounding sociopolitical radicalization and i...

Good article pushing back on algorithmic radicalization hypotheses. Users play a role in their own curation.

“They’re trying to influence me to gain the more acceptable viewpoint”: The algorithmic imaginaries of politically activated social media users

journals.sagepub.com/doi/10.1177/...

22.05.2025 12:46 — 👍 1    🔁 0    💬 0    📌 0
Almost half of young people would prefer a world without internet, UK study finds Half of 16- to 21-year-olds support ‘digital curfew’ and nearly 70% feel worse after using social media

46% of 16- to 21-year-olds say they would prefer a world without the internet, and 70% say they feel worse about themselves after using social media.

It’s long past time governments stepped in to address the consequences of leaving the internet to the private sector.

20.05.2025 06:39 — 👍 564    🔁 145    💬 21    📌 50
AI is more persuasive than people in online debates When given information about its human opponents, the large language model GPT-4 was able to make particularly convincing arguments.

www.nature.com/articles/d41...

19.05.2025 23:19 — 👍 2    🔁 0    💬 0    📌 0
Generalization bias in large language model summarization of scientific research | Royal Society Open Science Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when ...

New study finds that AI chatbot summaries were nearly five times more likely than human-written summaries to contain broad generalizations of scientific research.
royalsocietypublishing.org/doi/10.1098/...

19.05.2025 16:57 — 👍 15    🔁 4    💬 0    📌 0
DOGE Loses Battle to Take Over USIP—and Its $500 Million Headquarters A federal judge called DOGE’s actions at the United States Institute of Peace ‘unlawful.’

NEWS: The courts have decided against DOGE and the US government in their legal battle to take full control of the United States Institute of Peace, including a headquarters building with an estimated value of $500 million. www.wired.com/story/usip-d...

19.05.2025 17:08 — 👍 2074    🔁 579    💬 38    📌 48

"The tool put those users’ posts through a large language model, gave each a β€œradical score,” and provided its reason for doing so."

The person quoted here says they have no social science background, and they don't even define what "radical" means...

19.05.2025 14:15 — 👍 38    🔁 16    💬 6    📌 0

There's also a subplot here about academic publishing. It's worth asking why some choose to use AI. Is it laziness, or easy access to code-switching for academic communication? How much of peer review is less about what we say, and more about how we say it?

07.05.2025 14:30 — 👍 0    🔁 0    💬 2    📌 0

One year I attended an ASA conference session, only to discover it was actually a family's memorial service for a recently deceased doctor. It was filmed, there were kids there, and someone even gave a eulogy.

It was pitched as "AI and Medical Sociology". The doctor studied algorithmic bias.

02.05.2025 20:48 — 👍 1    🔁 0    💬 0    📌 0

I was a member for 3 years and got basically nothing out of it. The conference presentations didn't facilitate feedback, the grants/fellowships ended up costing me more than I was ever awarded, and the job postings shared on listservs were mostly underpaid postdocs.

02.05.2025 20:42 — 👍 1    🔁 0    💬 0    📌 0
19.04.2025 17:08 — 👍 20    🔁 3    💬 0    📌 0
