Law enforcement has more tools than ever to track your movements and access your communications. Here's how to protect your privacy if you plan to protest. www.wired.com/story/how-to...
12.06.2025 19:30 · 1359 · 829 · 24 · 54
@taylorbeauvais.bsky.social
PhD candidate at Boston University studying the sociology of AI. Grad fellow at the Rafik Hariri Institute for Computing and Computational Science & Engineering. Teaching fellow for AI Ethics. Machine learning analyst at Open Justice Lab.
Apple out with a new paper that, not coincidentally, explains why they've stayed out of the LLM mania.
The "illusion of thinking": what I and many, many others have been saying for ages, only to get the "nuh uh" from the mentally ill AI chudbro crowd.
This is why "agents" will fail, btw.
On moving fast and breaking things ... again: social media's lessons for generative AI governance
www.tandfonline.com/doi/full/10....
Hassabis said he could also see AI being used to protect people from other algorithms designed by big tech to drain their attention away from more important tasks. "I'm very excited about the idea of a universal AI assistant that knows you really well, enriches your life by maybe giving you amazing recommendations, and helps to take care of mundane chores for you," he said. "[It] basically gives you more time and maybe protects your attention from other algorithms trying to gain your attention. I think we can actually use AI in service of the individual."
Current tech is a collection of guys in hotdog costumes saying they have a new thing to save you from the old thing. www.theguardian.com/technology/2...
03.06.2025 10:08 · 208 · 59 · 14 · 17
"Imagine what would happen if most climate science were done by researchers who worked in fossil fuel companies. That's what's happening with AI"
@karenhao.bsky.social on the role of tech firms in society & the importance of independent research in democracy
www.nytimes.com/2025/05/30/o...
These people can't stop reinventing phrenology
31.05.2025 22:21 · 104 · 28 · 3 · 0
this is why "diverse" training datasets don't inherently mean those communities end up being the primary beneficiaries of the technology
31.05.2025 22:22 · 94 · 24 · 2 · 0
to summarise the study, there is negligible productivity and time gain from AI chatbot use, and the only driving factor for mass deployment is a fabricated fear of being left behind by the "AI revolution"
29.05.2025 11:33 · 196 · 72 · 3 · 2
AI-generated slop on Facebook, TikTok, and YouTube has become a barometer of political fame, just as it has of pop-culture celebrity, and some lawmakers are starting to worry.
29.05.2025 12:03 · 12 · 4 · 1 · 1
The AI revolution: opening new frontiers in bullying.
This technology should not be easily accessible to the public; it serves no purpose and provides no benefit.
The hype cycle, while purporting to be all grown-up and hard-headed, is just another coping strategy for those invested in technological determinism.
27.05.2025 08:44 · 9 · 2 · 0 · 0
That goes back to point 1 though. The Gov does not have infinite police power. Their losses in court, regarding Harvard cases and otherwise, prove that.
The Gov also loses some power with every loss. Every court win for Harvard empowers other universities to follow suit and use the same playbook.
Harvard knows the law, the judges, and power better than Trump. Peace of mind may be shaken, but Harvard's power isn't as precarious as the stock market. Harvard remains just as appealing and competitive as ever, if not more so, specifically because they're fighting.
24.05.2025 13:47 · 8 · 1 · 1 · 0
3. It's perhaps the most well-connected higher-ed institution in history. If you need resources, it helps to know the richest people in existence. It helps that they have played a role in writing the laws of this country. Even much of the conservative gov was educated by them, which builds goodwill.
24.05.2025 13:47 · 6 · 2 · 1 · 0
2. People around the world know and revere their research and education. The Gov can talk shit, but if you still have some of the best researchers/labs/curriculum/facilities it doesn't matter. Harvard is made of people and spaces. People know that, and that can't be taken away with an exec order.
24.05.2025 13:47 · 6 · 1 · 1 · 0
The argument is really shallow though. The Gov wins because they won't stop and people lose confidence? This isn't the stock market.
3 big reasons why this is silly:
1. Harvard has yet to lose in court. The Gov needs to win some for their threat to be effective. People are scared now, not displaced
Tweet by Sam Bowman (@sleepinyourhat): "If it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above."
welcome to the future, now your error-prone software can call the cops
(this is an Anthropic employee talking about Claude Opus 4)
Altman is trying to cut out the middleman and condense digital life into a single, unified piece of hardware and software. The promise is this: Your whole life could be lived through such a device, turning OpenAI's products into a repository of uses and personal data that could be impossible to leave, just as, if everyone in your family has an iPhone, MacBook, and iCloud storage plan, switching to Android is deeply unpleasant and challenging.
My firm belief is that the reason they are so bad at presenting compelling use cases is that they are trying to sell an empty container built for their purposes but not for actual users.
22.05.2025 12:39 · 211 · 55 · 8 · 11
"...some issues... require not just a knowledge of law or of technology, but of both. That is, some problems cannot be discussed purely on technical grounds or purely on legal grounds; the crux of the matter lies in the intersection"
AI "safety " work requires sociology!
dl.acm.org/doi/10.1145/...
Good article pushing back on algorithmic radicalization hypotheses. Users play a role in their own curation.
"They're trying to influence me to gain the more acceptable viewpoint": The algorithmic imaginaries of politically activated social media users
journals.sagepub.com/doi/10.1177/...
46% of 16 to 21 year olds say they would rather live in a world without the internet, and 70% say they feel worse about themselves after using social media.
Itβs long past time governments stepped in to address the consequences of leaving the internet to the private sector.
New study finds that AI chatbots were nearly five times more likely than humans to make broad generalizations about scientific research.
royalsocietypublishing.org/doi/10.1098/...
NEWS: The courts have decided against DOGE and the US government in their legal battle to take full control of the United States Institute of Peace, including a headquarters building with an estimated value of $500 million. www.wired.com/story/usip-d...
19.05.2025 17:08 · 2074 · 579 · 38 · 48
"The tool put those users' posts through a large language model, gave each a 'radical score,' and provided its reason for doing so."
The person quoted here says they have no social science background, and doesn't even define what "radical" means...
There's also a subplot here about academic publishing. It's worth asking why some choose to use AI. Is it laziness or easy access to code-switching for academic communications? How much of peer review is less about what we saw, and more about how we say it?
07.05.2025 14:30 · 0 · 0 · 2 · 0
One year I attended an ASA conference session, only to discover it was actually a family's memorial service for some recently deceased doctor. It was filmed, there were kids there, and someone even gave a eulogy.
It was pitched as "AI and medical Sociology". The doctor studied algorithmic bias.
I was a member for 3 years and ended up getting basically nothing from it. The conference presentations didn't facilitate feedback, the grants/fellowships ended up costing me more than I was ever awarded, and the job postings shared on listservs were mostly underpaid postdocs.
02.05.2025 20:42 · 1 · 0 · 0 · 0