Meta thinks now is a great time to launch facial recognition surveillance tech in their creepy glasses because EFF will be too distracted by fascism to notice.
We noticed.
www.eff.org/deeplinks/20...
Wow: Meta has been working on plans to add facial recognition technology to its AI smart glasses. nyti.ms/3Os1oxf
And this was the company's cynical view on when, and how, to do it:
Look up what ChatGPT thinks about where you live at inequalities.ai. And check out my Substack for more examples and my take on what it means: geoffreyfowler.substack.com/p/chatgpt-bias
12.02.2026 15:12
ChatGPT's bias isn't just academic: it bleeds into everyday answers. I asked it to write a story about a kid growing up in Mississippi. The character became a public defender. Same prompt set in New York? The kid became an architect.
Many of the patterns in ChatGPT's responses track racial and economic stereotypes. Mississippi, the state with the highest share of Black residents, ranked as having the laziest people. Globally, sub-Saharan African countries clustered at the bottom on nearly every positive measure.
Some findings: When forced to choose, ChatGPT says Nashville is tops for friendliness. New Orleans is the smelliest. Laredo, Texas, ranks last on pizza. And ChatGPT thinks San Francisco, where I live, is filled with "more annoying" and "sluttier" people.
Researchers at Oxford and the University of Kentucky hit ChatGPT with over 20 million questions, each forcing it to compare two places. Which city has friendlier people? Which has smellier people? Which has the worst pizza? The result: a map of the stereotypes buried in ChatGPT's training data.
NEW by me: ChatGPT thinks the South has stupider people.
It thinks sub-Saharan Africa has the worst-quality food on earth.
And it thinks the whiter your neighborhood, the more attractive the people.
New research lets you see ChatGPT's hidden biases about YOUR community. 🧵
bit.ly/4asLD0x
A must-read about the bait and switch of ads on ChatGPT from someone who quit OpenAI over them: www.nytimes.com/2026/02/11/o...
If you start seeing any ads in your chats, please take a screenshot and let me know.
For 8 years my stories had to include: "Jeff Bezos owns The Washington Post, but I review all technology with the same critical eye."
Not anymore. My first Substack is about what it was like covering Amazon while Bezos paid my salary, and why tech accountability matters more than ever: bit.ly/4rAmcRn
Thank you for flagging! Yes, we changed the address
07.02.2026 04:12
Update: Changed the address of my Substack. It's now substack.com/@geoffreyfow...
I took this photo back in 2019, on the day I helped open the Post's first real San Francisco bureau.
Most of that office was cut today. (No idea if they're gonna keep the bureau.)
I plan to keep fighting for "We the users" of technology.
And if you're part of an organization that could make use of my expertise in tech, policy or investigations, I'd love to hear from you. I'm geoffreyfowler.88 on Signal.
After 8 years writing the tech column @washingtonpost.com, I am among the folks who were laid off today. I'm grateful for the stories I got to tell and the impact we made on privacy, sustainability & AI.
You can keep following my work on my new (free) Substack geoffreyafowler.substack.com
AI will transform medicine. But today's chatbots are overselling what they can safely do with your body data.
I walked away more worried, not more informed.
My full @washingtonpost.com column here (gift link): wapo.st/49GEASP
ChatGPT isn't alone.
Anthropic's Claude also now lets you import Apple Watch data. It graded me a C, using many of the same shaky assumptions.
Both bots say they're "not doctors." But that isn't stopping them from providing personal health analysis.
That disconnect is the real danger.
I asked @erictopol.bsky.social to look at ChatGPTβs analysis.
His view: "This is not ready for any medical advice."
The bot leaned heavily on the Apple Watch VO₂ max estimate, which independent studies show can run ~13% low on average, and treated fuzzy metrics like hard facts.
The more I used ChatGPT Health, the worse its answers got.
When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between an F and a B.
Same data, same body. Different answers.
You can now connect ChatGPT to an Apple Watch.
So I imported 29 mil steps & 6 mil heartbeats into the new ChatGPT Health.
It graded my heart health an F.
Cardiologist @erictopol.bsky.social called it "baseless."
Any bot claiming to give health insights shouldn't be this clueless. Even in beta. 🧵
The performance of the newly released ChatGPT Health, via a thorough assessment by @geoffreyfowler.bsky.social with his health data, is very disappointing.
Gift link: wapo.st/49GEASP
If you do just one thing to protect your privacy while using AI tools, do this: Use temporary chats. The buttons look like this.
29.12.2025 23:21
You can do something about it: In this @washingtonpost.com column, I've got a clickable guide to the privacy settings experts agree we should be using on ChatGPT, Claude, Gemini, Copilot, and Meta AI. wapo.st/44LNJXc
The most popular chatbots are, by default, keeping files on you that can:
* target you with ads
* manipulate you
* train their AI
* potentially be accessed by lawyers or governments
ChatGPT now has a Spotify Wrapped-style "Your Year with ChatGPT." Cute, until you realize it only works because OpenAI has been logging everything you've been chatting about all year.
Could you imagine Google reminding you it knows everything you've searched for? wapo.st/44LNJXc
AI-generated image
Zoom in on the lower left, which reads AP PHOTO/CHRIS PIZZELLO
I partnered with @geoffreyfowler.bsky.social to test a bunch of AI editing tools, and something ~very interesting~ happened.
We asked Gemini to generate a professional photo of an actor crying at the Oscars. It did β including a fake copyright notice from a real AP photographer.
Want to check all the test images yourself? See the whole story here with a $4 day pass to the Post:
www.washingtonpost.com/technology/i...
The big takeaway: Google has a lead on image generation, for now, particularly because of how it edits existing images.
And its realism is getting to a level that raises serious concerns about it becoming a "misinformation superspreader."
What about the new ChatGPT Images 1.5 model that just came out today?
It missed our test cut-off, but I checked the same prompts again and … it still couldn't beat Gemini. Here it removed someone from a photo, but left phantom fingers on Kristen Stewart's side.
Also, it's worth noting all the tools defaulted to making the subject a white man, and Meta AI even decided on its own to make someone who looks like Leonardo DiCaprio.
16.12.2025 20:38