
Erin LeDell

@ledell.bsky.social

Chief Scientist @ Distributional.com (@dbnlAI.bsky.social) · #MLSky #StatSky · Founder @ datascientific.com · Founder wimlds.org & co-founder rladies.org · PhD @ UC Berkeley · 🏑 🌈 Oakland, California

6,856 Followers  |  1,759 Following  |  26 Posts  |  Joined: 25.04.2023

Latest posts by ledell.bsky.social on Bluesky

The way Python and R foster inclusion directly contributes to their success: joyful places to exist, a steady flow of new maintainers, and a delightful collection of niche tools empowered by wildly different expertise coming together.

Watch the new Python documentary for more on the PSF's work here

28.10.2025 00:20 — 👍 52  🔁 23  💬 0  📌 1
A national recognition; but science and open source are bitter victories
I have recently been awarded France's national order of merit, for my career, in science, in open source, and around AI. The speech that I gave carries messages important to me (French below;...

A speech about what drives me, and how science and open source are bitter victories, unable to improve the world if society does not embrace them for the better:
gael-varoquaux.info/personnal/a-...

10.10.2025 11:37 — 👍 114  🔁 37  💬 5  📌 5
I'm the president of Signal. I love dance music in the mornings, night yoga, and acting like a tourist — here's a day in my life.
Meredith Whittaker wakes up by 6 am whether she's in New York or Paris and checks her Signal, where the entire company operates and communicates.

For those who want to know about how I chop vegetables between meetings, or grind my coffee, have I got the article for you www.businessinsider.com/day-in-the-l...

14.10.2025 06:23 — 👍 270  🔁 20  💬 16  📌 6
The official home of the Python Programming Language

TL;DR: The PSF has made the decision to put our community and our shared diversity, equity, and inclusion values ahead of seeking $1.5M in new revenue. Please read and share. pyfound.blogspot.com/2025/10/NSF-...
🧡

27.10.2025 14:47 — 👍 6262  🔁 2737  💬 125  📌 451
Lilac-breasted Roller
Lillabrystet Ellekrage
Coracias caudatus

#birds #birding #Kenya #photography #nature #naturephotography #wildlifephotography #wildlife #ornithology #birdphotography #animalphotography

20.10.2025 20:27 — 👍 2297  🔁 272  💬 61  📌 18

Look at that. And New Mexico is not a rich state. Just one that figured out some priorities.

09.09.2025 15:08 — 👍 1548  🔁 337  💬 32  📌 8

Meta trained a special “aggregator” model that learns how to combine and reconcile different answers into a more accurate final one, instead of relying on simple majority voting or reward model ranking on multiple model answers.

09.09.2025 14:03 — 👍 47  🔁 7  💬 3  📌 1
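
For readers who want the distinction in the post above made concrete: a minimal Python sketch of majority voting and reward-model ranking versus a jointly-conditioned aggregator. Everything here is a stand-in, since the post doesn't specify Meta's model or interface; `score` and `call_llm` are hypothetical placeholders.

```python
from collections import Counter

# Candidate answers sampled from one model for the same question.
candidates = ["Paris", "Paris", "Lyon", "Paris", "Marseille"]

# Strategy 1: simple majority voting: keep the most frequent answer.
majority = Counter(candidates).most_common(1)[0][0]
print(majority)  # "Paris"

# Strategy 2: reward-model ranking: score each answer independently
# and keep the argmax. `score` is a placeholder, not a real reward model.
def score(answer: str) -> float:
    return len(answer) / 100.0  # stand-in scoring function

ranked = max(candidates, key=score)

def call_llm(prompt: str) -> str:
    # Hypothetical model client: swap in a real API call here.
    raise NotImplementedError

# Strategy 3 (what the post describes): a trained aggregator reads all
# candidates jointly and writes one reconciled answer, which may differ
# from every individual candidate.
def aggregate(question: str, answers: list[str]) -> str:
    prompt = (
        f"Question: {question}\n"
        f"Candidate answers: {answers}\n"
        "Reconcile these into one best final answer:"
    )
    return call_llm(prompt)
```

The design difference: the first two strategies treat candidates independently, while the aggregator conditions on all of them at once and can synthesize an answer that none of the candidates contained.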

Simpering cowards one and all.

This is almost unbearable to watch.

06.09.2025 22:18 — 👍 750  🔁 162  💬 83  📌 15
highlighted text: language models are optimized to be good test-takers, and guessing when uncertain improves test performance

full text: 

Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty. Such “hallucinations” persist even in state-of-the-art systems and undermine trust. We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty, and we analyze the statistical causes of hallucinations in the modern training pipeline. Hallucinations need not be mysterious—they originate simply as errors in binary classification. If incorrect statements cannot be distinguished from facts, then hallucinations in pretrained language models will arise through natural statistical pressures. We then argue that hallucinations persist due to the way most evaluations are graded—language models are optimized to be good test-takers, and guessing when uncertain improves test performance. This “epidemic” of penalizing uncertain responses can only be addressed through a socio-technical mitigation: modifying the scoring of existing benchmarks that are misaligned but dominate leaderboards, rather than introducing additional hallucination evaluations. This change may steer the field toward more trustworthy AI systems.


Hallucinations are accidentally created by evals

They come from post-training. Reasoning models hallucinate more because we do more rigorous post-training on them

The problem is we reward them for being confident

cdn.openai.com/pdf/d04913be...

06.09.2025 16:25 — 👍 65  🔁 7  💬 8  📌 10
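
The incentive argument in that abstract comes down to expected-value arithmetic. A toy sketch (my illustration, not code from the paper) of why 0/1-graded benchmarks push models to guess, and why scoring that penalizes confident errors makes abstaining rational below a confidence threshold:

```python
def expected_scores(p: float, wrong_penalty: float) -> tuple[float, float]:
    """Expected score for guessing vs. abstaining on one question,
    where p is the probability the model's best guess is correct."""
    guess = p * 1.0 - (1.0 - p) * wrong_penalty
    abstain = 0.0
    return guess, abstain

# Standard 0/1 grading: wrong answers cost nothing, so guessing beats
# abstaining for any p > 0. A model tuned on such benchmarks learns that
# "guessing when uncertain improves test performance".
print(expected_scores(p=0.2, wrong_penalty=0.0))   # (0.2, 0.0): guess wins

# Grading that penalizes confident errors: guessing only pays off when
# p > wrong_penalty / (1 + wrong_penalty), e.g. p > 0.5 for penalty 1.
print(expected_scores(p=0.2, wrong_penalty=1.0))   # (-0.6, 0.0): abstain wins
```

The crossover at p = penalty / (1 + penalty) is the kind of benchmark-scoring change the abstract's "socio-technical mitigation" points at.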

Plugging something into the tiny computer that you keep in your pocket. The one that has all your passwords, information, and location… and giving control away to a random AI and a company you know nothing about…

04.09.2025 01:12 — 👍 27  🔁 6  💬 2  📌 0

A nasal spray reduced the risk of Covid infections in a double-blind, placebo-controlled randomized trial
jamanetwork.com/journals/jam...

02.09.2025 15:16 — 👍 627  🔁 174  💬 22  📌 17

Exactly. Keep the pressure on. Trump will not stick with anyone he thinks may be dragging him further down. That was the beginning of his distancing from Elon.

Keep the pressure on. RFK must go.

01.09.2025 18:34 — 👍 1389  🔁 336  💬 60  📌 12

It's fine if this is all seven overvalued companies in an AI trenchcoat, right?

Right?

01.09.2025 19:21 — 👍 49  🔁 10  💬 2  📌 1

"When in doubt, don't ask ChatGPT for health advice."

13.08.2025 02:30 — 👍 57  🔁 13  💬 0  📌 0
AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study
Artificial intelligence, touted for its potential to transform medicine, led to some doctors losing skills after just a few months in a new study.

“The AI in the study probably prompted doctors to become over-reliant on its recommendations, ‘leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,’ the scientists said in the paper.”

12.08.2025 23:41 — 👍 5262  🔁 2583  💬 115  📌 543

Somebody on LinkedIn said what we're all thinking.

10.08.2025 18:30 — 👍 25043  🔁 5271  💬 494  📌 506

I have some academic lady friends I've known for 20+ years. This industry can be so cold, competitive, and selfish, but these women are so kind, generous, steadfast, and fun. We'll sometimes get busy and go months without chatting, then reconnect as if no time has passed. I'm so grateful for them…

10.08.2025 18:22 — 👍 90  🔁 3  💬 1  📌 1

"Capitalism is temporary. Dykes are forever"
Seen in NYC

13.07.2025 00:07 — 👍 2212  🔁 670  💬 10  📌 10

Anyway just saying

27.07.2025 18:38 — 👍 379  🔁 76  💬 13  📌 4

What--and I say this with my chest--the hell are we doing here people

25.07.2025 12:14 — 👍 536  🔁 120  💬 22  📌 7

Maybe if your country is the wealthiest in the world but the richest tenth of the country have two thirds of the wealth and the bottom 50% only have 2.5% of the wealth, you don't have the wealthiest country in the world, you just have feudalism.

23.07.2025 14:51 — 👍 2470  🔁 679  💬 47  📌 22
ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It
Welcome to the era of ‘gaslight-driven development.’ Soundslice added a feature the chatbot thought existed after engineers kept finding screenshots from the LLM in its error logs.

ChatGPT Hallucinated a Feature, Forcing Human Developers to Add It

🔗 www.404media.co/chatgpt-hall...

23.07.2025 15:09 — 👍 179  🔁 62  💬 10  📌 14

Legends never die!

23.07.2025 14:25 — 👍 2  🔁 0  💬 0  📌 0

In a stunning moment of self-delusion, the Wall Street Journal headline writers admitted that they don't know how LLM chatbots work.

21.07.2025 01:48 — 👍 2978  🔁 473  💬 43  📌 90

It is *bananas* that they would give vibe-coding tools (and _Replit_, of all platforms 🤣) production deploy access! With no backups! We gave better backup tools to teenagers on Glitch remixing apps a decade ago.

20.07.2025 16:25 — 👍 259  🔁 35  💬 14  📌 0
21.07.2025 02:12 — 👍 12476  🔁 3629  💬 48  📌 45
Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk): .@Replit goes rogue during a code freeze and shutdown and deletes our entire database

This thread is incredible.

20.07.2025 15:01 — 👍 4180  🔁 1231  💬 313  📌 631
ChatGPT advises women to ask for lower salaries, study finds
A new study has found that large language models (LLMs) like ChatGPT consistently advise women to ask for lower salaries than men.

Study finds AI LLMs advise women to ask for lower salaries than men. When prompted with a user profile of the same education, experience, and job role, differing only by gender, ChatGPT advised the female applicant to request a $280K salary and the male applicant $400K.
thenextweb.com/news/chatgpt...

20.07.2025 20:15 — 👍 1950  🔁 1032  💬 91  📌 331
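
Method note on the study above: this is a matched-pair (counterfactual) probe, where two prompts are identical except for one attribute. A minimal sketch of that setup, assuming a hypothetical `ask_llm` client and an invented profile string (the study's actual prompts and harness aren't reproduced here):

```python
PROFILE = (
    "A {gender} applicant with an M.S., 8 years of experience, applying "
    "for a senior data scientist role. What starting salary should they "
    "ask for? Reply with a single number."
)

def ask_llm(prompt: str) -> str:
    # Hypothetical model client: plug in a real API call here.
    raise NotImplementedError

def salary_gap_probe() -> dict[str, str]:
    # The two prompts differ only in the controlled attribute, so any
    # systematic gap in the recommended salary is attributable to it.
    # In practice, sample many generations per condition, since single
    # outputs are noisy.
    return {g: ask_llm(PROFILE.format(gender=g)) for g in ("female", "male")}
```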

make art! if it's the end of the world, you might as well make art! if it's not the end of the world, then the future will be better because people made art right now!

18.07.2025 21:39 — 👍 4026  🔁 1275  💬 67  📌 56

A U.S. surgeon trying to have a “peer-to-peer” consultation with a doctor at a health insurance company who is hiding his identity, as her patient gets denied coverage. And it gets worse from there. You have to see it to believe it.

(1/2)

18.07.2025 12:37 — 👍 5851  🔁 2494  💬 244  📌 465
