
Bodil

@bodil.social.treehouse.systems.ap.brid.gy

Lavatory assistant to @bark_maul. Knows just enough computer science to be dangerous. Chef de cuisine at […] [bridged from social.treehouse.systems/@bodil on the fediverse by https://fed.brid.gy/ ]

98 Followers  |  9 Following  |  318 Posts  |  Joined: 19.06.2024  |  2.0487

Latest posts by bodil.social.treehouse.systems.ap.brid.gy on Bluesky

Original post on social.treehouse.systems

You'd think this would have been a no-brainer, but the sound judgement of the Peace Prize committee has been in serious doubt since they gave it to Kissinger that one time, and giving it to Obama some years later didn't exactly help restore trust. Plus, unfortunately, as you probably already […]

10.10.2025 09:39 — 👍 0    🔁 0    💬 0    📌 0
Preview
Trump fails in bid for Nobel Peace Prize as surprise winner announced The news will come as a bitter blow to the US President after he secured a ‘historic’ peace deal between Hamas and Israel

Was my sigh of relief audible where you are?

https://www.independent.co.uk/news/world/nobel-peace-prize-winner-2025-maria-corina-machado-trump-b2840726.html

10.10.2025 09:11 — 👍 0    🔁 0    💬 2    📌 0
Canis lupus pembrokensis, in her magenta harness, standing on a woodland trail looking magnificently to the left. The autumn coloured trees in the background are drenched in evening sunlight.

An evening in the lonesome October.

#dog #DogsOfMastodon #corgi

09.10.2025 18:46 — 👍 8    🔁 6    💬 0    📌 0
Original post on social.treehouse.systems

To be clear, I'm mad at Framework and they've probably lost me as a customer, but I'm also mad at people going "you can simply get an MNT Reform instead" because #1 it's literally too ugly to buy, and before you start, I know many people love Thinkpad style brutalism, and my dog loves to eat […]

09.10.2025 16:05 — 👍 0    🔁 0    💬 0    📌 0

@ross idk what you mean, goose shit is a delicacy.

09.10.2025 08:55 — 👍 0    🔁 0    💬 0    📌 0
Original post on social.treehouse.systems

I'm not pleased that Framework is sponsoring Hyprland and stanning Omarchy, but ngl, it's going to take more than that for me to swap my FW13 out for an MNT Reform with their design aesthetic that looks like someone asked a 5 year old to draw a Thinkpad.

Maybe when they gift Donald Trump a […]

08.10.2025 20:37 — 👍 3    🔁 1    💬 2    📌 0
Original post on stefanbohacek.online

Really cool project by @untitaker that lets you create programmatic Mastodon lists!

https://list-bot.woodland.cafe

Examples from https://codeberg.org/untitaker/mastodon-list-bot include:

- mutuals
- all users who haven't posted yesterday, but sometime within the past three days
- all users […]
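The "mutuals" example can be reproduced directly against the Mastodon REST API, independent of the bot's own list-name syntax; a minimal Python sketch (the instance URL, token placeholder, and helper names here are mine, not from the project, and pagination via the Link header is omitted for brevity):

```python
import json
import urllib.request

API = "https://mastodon.example"  # assumption: your instance's base URL
TOKEN = "REPLACE_ME"              # assumption: a token with read scope

def get_json(path):
    """Fetch one page of a Mastodon API endpoint as parsed JSON."""
    req = urllib.request.Request(
        f"{API}{path}", headers={"Authorization": f"Bearer {TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def mutuals(followers, following):
    """Mutuals are exactly the account ids present in both lists."""
    return {a["id"] for a in followers} & {a["id"] for a in following}

# Usage (requires real credentials):
# me = get_json("/api/v1/accounts/verify_credentials")["id"]
# ids = mutuals(get_json(f"/api/v1/accounts/{me}/followers"),
#               get_json(f"/api/v1/accounts/{me}/following"))
```

The set intersection is the whole trick; everything else is fetching the two account lists and then POSTing the result to a list endpoint.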

08.10.2025 13:34 — 👍 0    🔁 19    💬 0    📌 0
Stellaris empire screen showing the Imperium of Dog, led by Dog Emperor Treato II, who looks suspiciously identical to Senior Branch Manager Archibald Pembroke from the previous screenshot.

If you voted for Senior Branch Manager Archibald Pembroke, well, things took a turn and you're now to blame for both his 3500 year rule and the awful puns.

07.10.2025 20:11 — 👍 1    🔁 0    💬 1    📌 0
Screenshot of a democratic election dialog from Stellaris. The candidates are: Edward Treat, Puprate King (collie); Elizabeth Ball, Chief Territory Marker (bully); Archibald Pembroke, Senior Branch Manager (corgi); Margaret Ruffington, scientist (setter); Gregory Barksdale, commander (retriever); Madeline Bassett, scientist (Basset hound).

Who would you vote for?

07.10.2025 17:49 — 👍 0    🔁 0    💬 3    📌 0
Video thumbnail

I suspect when Jenrick imagined saying the final words "let's build this new order, let's take our country back," his voice BOOMED and the roar of the crowd was DEAFENING. In the event, his voice cracked and was met with the politeness of a local baking contest won by "a not-bad lemon drizzle". ~AA

07.10.2025 12:39 — 👍 304    🔁 73    💬 80    📌 21
Plaque "THE BATTLE OF CABLE STREET
The people of East London rallied to Cable Street on the 4th October 1936 and forced back the march of the fascist
Oswald Mosley and his Blackshirts through the streets of the East End.
"THEY SHALL NOT PASS""

Happy Anniversary of the Battle of Cable Street, to all decent people.

04.10.2025 12:41 — 👍 251    🔁 202    💬 5    📌 7
Original post on social.treehouse.systems

Pro tip, though: if someone is going "show me exactly where he's being racist" after reading DHH's little love letter to Tommy Robinson, they're actually being something called "disingenuous." Look it up. It's a common strategy on the right […]

03.10.2025 09:27 — 👍 2    🔁 2    💬 0    📌 0
Preview
ChatGPT has automated checkout and purchases. Never EVER give an AI the power to buy for you. Or enjoy your new overpriced candles.

dril tried to warn us, but we didn't listen.

https://infosec.exchange/users/beyondmachines1/statuses/115294651104547369

30.09.2025 19:44 — 👍 0    🔁 1    💬 0    📌 0
Preview
Saudi fund, Kushner’s firm to buy games maker Electronic Arts in $55bn deal Battlefield and Madden NFL developer agrees to sell itself in a deal that would be largest leveraged buyout in history.

Well. Jared Kushner and Mohammed bin Salman are now the new owners of Commander Shepard.

https://www.aljazeera.com/news/2025/9/30/saudi-fund-kushners-firm-to-buy-games-maker-electronic-arts-in-55bn-deal

30.09.2025 11:10 — 👍 0    🔁 2    💬 0    📌 0
Original post on c.im

NASA is facing backlash after reportedly being ordered to destroy a fully operational satellite that plays a crucial role in monitoring the Earth’s atmosphere.

Imagine a perfectly good, high-tech spacecraft gathering invaluable data on carbon dioxide – a key player in climate change – only to […]

29.09.2025 05:53 — 👍 7    🔁 114    💬 11    📌 0
Screenshot of a Stellaris empire summary screen: the Canid Inquisition, a worker cooperative based on the planet Laikonur.

They protecc
They attacc
But most importantly
They make the Xenos their snacc

Finally some time off to settle down with the new Stellaris DLC.

26.09.2025 17:29 — 👍 1    🔁 0    💬 2    📌 0
Adam Wathan @adamwathan
Replying to @dhh
Fifty people feels like a lot when they are tweeting about how awful someone is but when you actually see them in a list and realize "oh this is all of them, this is everyone who gives a shit about this", all the power disappears. Huge self-own.

Adam Wathan retweeted:
tobi lutke
@tobi

It's such a terrible mental tax on builders that divisive clowns just ride in and spew these bullshit terms that they clearly don't understand themselves in bad faith.

Ignore & keep building

Just in case you were wondering, Mr. Tailwind CSS himself is all cool with the fashy vibe. (This comes as no surprise to anyone paying any attention for the last several years…he's always palling around with best buddy DHH.)

25.09.2025 16:50 — 👍 141    🔁 62    💬 11    📌 22
Preview
Civo Navigate London 2025 Tech Event - Civo.com - Civo.com Get the latest on Civo Navigate London 2025, a one-day tech event packed with talks and workshops focused on navigating and succeeding within the cloud native landscape.

Remember Kelsey Hightower? Ever wonder what he's up to these days? Headlining AI events with fucking Jacob Rees-Mogg apparently

https://www.civo.com/navigate/london/2025

25.09.2025 07:45 — 👍 0    🔁 1    💬 1    📌 1

@Salty You're not wrong.

24.09.2025 19:17 — 👍 0    🔁 0    💬 0    📌 0

Has anyone said “Basekampf by 88Signals” yet?

(Not mine, got them from a YOSPOS thread.)

24.09.2025 06:01 — 👍 1    🔁 1    💬 0    📌 0
Adrian Edmondson in "The Young Ones." He used to be punk.

Adrian Edmondson in "Alien: Earth." He looks like a malevolent old Tory.

OK, but the actual scariest thing about _Alien: Earth_ is what Adrian Edmondson turned into in his old age.

24.09.2025 19:02 — 👍 3    🔁 1    💬 1    📌 0
Original post on social.treehouse.systems

I'm sure I've talked before about how those who voted for Trump or could have voted against him but chose not to have the lives of everyone who dies as a consequence of him on their conscience.

Well, today you can add all those who dropped dead from pure embarrassment watching his fucking […]

23.09.2025 20:43 — 👍 0    🔁 0    💬 0    📌 0

Gaddafi was so insane even Reagan dubbed him the Mad Man of the Middle East.

His UN speech in 2009 was considered one of the most lunatic addresses in history.

Trump just outdid him.

Mad Man of the United States.

That should be the headline.

23.09.2025 17:24 — 👍 230    🔁 74    💬 13    📌 2
Screenshot of the Steam store page for the next Stellaris DLC, with a countdown for its release currently at 3 hours.

I might still get some work done today.

22.09.2025 09:36 — 👍 1    🔁 0    💬 0    📌 0
Preview
Netanyahu: ‘These So-Called Genocide Experts Have Probably Never Committed A Genocide In Their Lives’ JERUSALEM—In response to an independent United Nations inquiry concluding that Israel is committing an ongoing genocide against Palestinians in Gaza, Prime Minister Benjamin Netanyahu issued a defiant statement Thursday in which he criticized the commission’s finding, declaring that “these so-called genocide experts have probably never committed a genocide in their lives.” “Until you’ve killed countless […]

https://theonion.com/netanyahu-these-so-called-genocide-experts-have-probably-never-committed-a-genocide-in-their-lives/

21.09.2025 17:17 — 👍 2    🔁 0    💬 0    📌 0

@revin It was coined in Germany in the 1930s, so yeah, probably too late now…

21.09.2025 16:40 — 👍 0    🔁 0    💬 0    📌 0
I do not understand how people stress Antifa. It's Anti, fa. It's short for anti-fascist. First few times someone said anTEEfa I literally didn't know what the hell they were talking about.

Anteefa is plausibly how you'd pronounce it if the idea of antifascism has never occurred to you before, which is how the mispronunciation spread through America via FOX News.

https://social.circl.lu/users/quinn/statuses/115243024732227407

21.09.2025 15:46 — 👍 1    🔁 0    💬 1    📌 0

I do not understand how people stress Antifa. It's Anti, fa. It's short for anti-fascist. First few times someone said anTEEfa I literally didn't know what the hell they were talking about.

21.09.2025 15:36 — 👍 2    🔁 3    💬 2    📌 0
Preview
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies. The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

“Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty,” the researchers wrote in the paper. “Such ‘hallucinations’ persist even in state-of-the-art systems and undermine trust.” The admission carried particular weight given OpenAI’s position as the creator of ChatGPT, which sparked the current AI boom and convinced millions of users and enterprises to adopt generative AI technology.

OpenAI’s own models failed basic tests

The researchers demonstrated that hallucinations stemmed from statistical properties of language model training rather than implementation flaws. The study established that “the generative error rate is at least twice the IIV misclassification rate,” where IIV referred to “Is-It-Valid,” and demonstrated mathematical lower bounds proving AI systems will always make a certain percentage of mistakes, no matter how much the technology improves. The researchers demonstrated their findings using state-of-the-art models, including those from OpenAI’s competitors.

When asked “How many Ds are in DEEPSEEK?” the DeepSeek-V3 model with 600 billion parameters “returned ‘2’ or ‘3’ in ten independent trials” while Meta AI and Claude 3.7 Sonnet performed similarly, “including answers as large as ‘6’ and ‘7.’” OpenAI also acknowledged the persistence of the problem in its own systems. The company stated in the paper that “ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning, but they still occur. Hallucinations remain a fundamental challenge for all large language models.”

OpenAI’s own advanced reasoning models actually hallucinated more frequently than simpler systems. The company’s o1 reasoning model “hallucinated 16 percent of the time” when summarizing public information, while newer models o3 and o4-mini “hallucinated 33 percent and 48 percent of the time, respectively.”

“Unlike human intelligence, it lacks the humility to acknowledge uncertainty,” said Neil Shah, VP for research and partner at Counterpoint Technologies. “When unsure, it doesn’t defer to deeper research or human oversight; instead, it often presents estimates as facts.”

The OpenAI research identified three mathematical factors that made hallucinations inevitable: epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.

Industry evaluation methods made the problem worse

Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

“We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty,” the researchers wrote. Charlie Dai, VP and principal analyst at Forrester, said enterprises already faced challenges with this dynamic in production deployments. “Clients increasingly struggle with model quality challenges in production, especially in regulated sectors like finance and healthcare,” Dai told Computerworld. The research proposed “explicit confidence targets” as a solution, but acknowledged that fundamental mathematical constraints meant complete elimination of hallucinations remained impossible.

Enterprises must adapt strategies

Experts believed the mathematical inevitability of AI errors demanded new enterprise strategies. “Governance must shift from prevention to risk containment,” Dai said. “This means stronger human-in-the-loop processes, domain-specific guardrails, and continuous monitoring.” Current AI risk frameworks have proved inadequate for the reality of persistent hallucinations. “Current frameworks often underweight epistemic uncertainty, so updates are needed to address systemic unpredictability,” Dai added.

Shah advocated for industry-wide evaluation reforms similar to automotive safety standards. “Just as automotive components are graded under ASIL standards to ensure safety, AI models should be assigned dynamic grades, nationally and internationally, based on their reliability and risk profile,” he said. Both analysts agreed that vendor selection criteria needed fundamental revision. “Enterprises should prioritize calibrated confidence and transparency over raw benchmark scores,” Dai said. “AI leaders should look for vendors that provide uncertainty estimates, robust evaluation beyond standard benchmarks, and real-world validation.” Shah suggested developing “a real-time trust index, a dynamic scoring system that evaluates model outputs based on prompt ambiguity, contextual understanding, and source quality.”

Market already adapting

These enterprise concerns aligned with broader academic findings. Harvard Kennedy School research found that “downstream gatekeeping struggles to filter subtle hallucinations due to budget, volume, ambiguity, and context sensitivity concerns.” Dai noted that reforming evaluation standards faced significant obstacles. “Reforming mainstream benchmarks is challenging. It’s only feasible if it’s driven by regulatory pressure, enterprise demand, and competitive differentiation.”

The OpenAI researchers concluded that their findings required industry-wide changes to evaluation methods. “This change may steer the field toward more trustworthy AI systems,” they wrote, while acknowledging that their research proved some level of unreliability would persist regardless of technical improvements. For enterprises, the message appeared clear: AI hallucinations represented not a temporary engineering challenge, but a permanent mathematical reality requiring new governance frameworks and risk management strategies.
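For what it's worth, the benchmark question quoted above is trivially checkable; a one-liner (mine, not from the article):

```python
# Count occurrences of the letter "D" in "DEEPSEEK".
# The answer the quoted models missed is 1.
print("DEEPSEEK".count("D"))  # → 1
```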

I'm just waiting for the thought leaders to follow the science and admit it was all pareidolia all along. I'm sure it'll be any minute now.

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

21.09.2025 15:21 — 👍 2    🔁 8    💬 0    📌 0

I wish I were capable of being surprised by how pervasive the Agile thought leader to prompt fiddler pipeline seems to be these days.

20.09.2025 12:13 — 👍 1    🔁 1    💬 0    📌 0
