
Inte Mitt Default

@imdef.bsky.social

Computational scientist, swedish, don't like to make it too easy for social media companies to collect data. Some statistics/decision theory thoughts at https://intemittdefault.wordpress.com/

3 Followers  |  27 Following  |  24 Posts  |  Joined: 18.11.2024

Latest posts by imdef.bsky.social on Bluesky


Does social media harm everyone?

No. But it harms *most* adolescents.

However, not all platforms are harmful.

An analysis of 44,211 diaries from 479 adolescents over 100 days finds that 60% of adolescents experienced small, negative effects of social media
link.springer.com/article/10.1...

06.01.2026 19:10 — 👍 69    🔁 28    💬 6    📌 4
Uppsala researcher wanted to become a docent – external reviewer in Lund made up articles he had never written. "If a student did the equivalent, he would of course be suspended."

The cat is out of the bag - this is what I wrote about here a few weeks ago.

Good that it is now being raised in the media, because it is so incredibly serious.

(I serve on the recruitment committee at Uppsala University that is handling this case.)

www.sydsvenskan.se/lund/uppsala...

26.12.2025 16:07 — 👍 81    🔁 33    💬 15    📌 7

It would be interesting to also vary the consequences/costs of the event occurring/not occurring in the poll. For example, if you judge that there's a 1% probability that your plane is gonna crash, you don't call it "virtually certain" that it won't.

03.01.2026 17:44 — 👍 1    🔁 0    💬 1    📌 0
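The point above can be sketched as a tiny expected-cost calculation; the probabilities and the cost figures below are illustrative assumptions, not real numbers:

```python
def expected_loss(p_event: float, cost_event: float) -> float:
    """Expected cost if the event occurs with probability p_event."""
    return p_event * cost_event

# Same 1% probability, very different stakes (illustrative units):
picnic = expected_loss(0.01, 100)         # rained-out picnic
flight = expected_loss(0.01, 10_000_000)  # plane crash
print(picnic, flight)  # -> 1.0 100000.0
```

With identical probabilities, the expected costs differ by five orders of magnitude, which is why "99% safe" reads as "virtually certain" in one case and not the other.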
Will you incorporate LLMs and AI prompting into the course in the future?
No.

Why won’t you incorporate LLMs and AI prompting into the course?
These tools are useful for coding (see this post for my personal take).

However, they’re only useful if you know what you’re doing first. If you skip the learning-the-process-of-writing-code step and just copy/paste output from ChatGPT, you will not learn. You cannot learn. You cannot improve. You will not understand the code.


That post warns that you cannot use it as a beginner:

…to use Databot effectively and safely, you still need the skills of a data scientist: background and domain knowledge, data analysis expertise, and coding ability.

There is no LLM-based shortcut to those skills. You cannot LLM your way into domain knowledge, data analysis expertise, or coding ability.

The only way to gain domain knowledge, data analysis expertise, and coding ability is to struggle. To get errors. To google those errors. To look over the documentation. To copy/paste your own code and adapt it for different purposes. To explore messy datasets. To struggle to clean those datasets. To spend an hour looking for a missing comma.

This isn’t a form of programming hazing, like “I had to walk to school uphill both ways in the snow and now you must too.” It’s the actual process of learning and growing and developing and improving. You’ve gotta struggle.


This Tumblr post puts it well (it’s about art specifically, but it applies to coding and data analysis too):

Contrary to popular belief the biggest beginner’s roadblock to art isn’t even technical skill it’s frustration tolerance, especially in the age of social media. It hurts and the frustration is endless but you must build the frustration tolerance equivalent to a roach’s capacity to survive a nuclear explosion. That’s how you build on the technical skill. Throw that “won’t even start because I’m afraid it won’t be perfect” shit out the window. Just do it. Just start. Good luck. (The original post has disappeared, but here’s a reblog.)

It’s hard, but struggling is the only way to learn anything.


You might not enjoy code as much as Williams does (or I do), but there’s still value in maintaining coding skills as you improve and learn more. You don’t want your skills to atrophy.

As I discuss here, when I do use LLMs for coding-related tasks, I purposely throw as much friction into the process as possible:

To avoid falling into over-reliance on LLM-assisted code help, I add as much friction into my workflow as possible. I only use GitHub Copilot and Claude in the browser, not through the chat sidebar in Positron or Visual Studio Code. I treat the code it generates like random answers from StackOverflow or blog posts and generally rewrite it completely. I disable the inline LLM-based auto complete in text editors. For routine tasks like generating {roxygen2} documentation scaffolding for functions, I use the {chores} package, which requires a bunch of pointing and clicking to use.

Even though I use Positron, I purposely do not use either Positron Assistant or Databot. I have them disabled.

So in the end, for pedagogical reasons, I don’t foresee myself incorporating LLMs into this class. I’m pedagogically opposed to it. I’m facing all sorts of external pressure to do it, but I’m resisting.

You’ve got to learn first.


Some closing thoughts for my students this semester on LLMs and learning #rstats datavizf25.classes.andrewheiss.com/news/2025-12...

09.12.2025 20:17 — 👍 330    🔁 100    💬 13    📌 31

Fun!

08.12.2025 11:12 — 👍 0    🔁 0    💬 0    📌 0
Number of ‘unsafe’ publications by psychologist Hans Eysenck could be ‘high and far reaching’ Hans Eysenck A “high and far reaching” number of papers and books by Hans Eysenck could be “unsafe,” according to an updated statement from King’s College London, where the psychologist was a profe…

Diederik Stapel was a massive watershed moment in psychology.

However, he was -- and let's be slightly glib here -- some guy from The Netherlands who wrote social psychology papers.

The full accounting of the Eysenck case is approx, at minimum, TWO STAPELS.

retractionwatch.com/2025/12/03/n...

03.12.2025 17:34 — 👍 109    🔁 69    💬 4    📌 22
This is how school should work – if the brain got to decide - I hjärnan på Louise Epstein. There is alarm about the Swedish school system, and the debate over the causes of declining results is fierce. But if the brain got to decide – how should school work?

This was good. www.sverigesradio.se/avsnitt/sa-b...

06.11.2025 18:41 — 👍 7    🔁 1    💬 0    📌 0
A real Dammsugare (vacuum cleaner) in green

The pastry dammsugare

A vacuum cleaner of the Hugin Top vac brand.

The appearance of these older vacuum cleaners gave its name to the pastry, which is also called punschrulle.

20.08.2025 12:16 — 👍 117    🔁 17    💬 9    📌 3

Interesting paper that I'll have to spend slow mathy times with later, but had a real record-scratch-freeze-frame moment while skimming...

21.08.2025 08:12 — 👍 32    🔁 2    💬 2    📌 0

😯 I recently came across a sobering paper on here: among 16,649 hypothesis tests reported in political science, the median test has only about 10% power to detect the consensus effect size reported in the literature. Fewer than one in ten tests reach the conventional 80% power threshold.

13.08.2025 11:56 — 👍 60    🔁 14    💬 3    📌 2
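For intuition about numbers like these, the power of a simple two-sided two-sample z-test can be computed in a few lines; the effect size and sample sizes below are illustrative assumptions, not figures from the paper:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test to detect a
    standardized effect size d with n_per_group observations per arm."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality of the z statistic
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# A modest true effect (d = 0.2) with 50 subjects per group:
print(round(two_sample_power(0.2, 50), 2))   # -> 0.17, nowhere near 80%
# Reaching the conventional 80% target for the same effect takes ~400 per group:
print(round(two_sample_power(0.2, 400), 2))
```

Small true effects with typical sample sizes land in exactly the low-power regime the paper describes.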
The Logic of Causal Models - PhilSci-Archive

A logic, with syntactic rules for proofs, corresponding to Pearl's causal inference. Fun stuff.
philsci-archive.pitt.edu/26213/

13.08.2025 12:29 — 👍 2    🔁 0    💬 0    📌 0
Meet the AI vegans | Arwa Mahdawi They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point, writes Guardian columnist Arwa Mahdawi

Meet the AI vegans | Arwa Mahdawi

06.08.2025 10:39 — 👍 128    🔁 30    💬 163    📌 212

Thoughtful post by @daisychristo.bsky.social on learning to write in an era of AI: substack.nomoremarking.com/p/what-is-th...

18.07.2025 10:33 — 👍 36    🔁 12    💬 1    📌 1

With arithmetic, people understand why. Same with learning the letters of the alphabet and then learning to spell out words. With AI, many struggle to understand, because they don't expect the ability to do the task the AI does for them to be a required ingredient in future learning.

18.07.2025 13:18 — 👍 1    🔁 0    💬 0    📌 0

I think the major point is that many things you learn are supposed to be building blocks in future learning. E.g., the first years in school, students do endless exercises in basic arithmetic... 5+7=?, 13-6=?, ... Why not just let them use a calculator instead? >

18.07.2025 13:15 — 👍 1    🔁 0    💬 1    📌 0

Becker et al., "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity"
metr.org/Early_2025_A...

10.07.2025 23:34 — 👍 0    🔁 0    💬 0    📌 0
Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed thr...

A glimpse of the AI-assisted future
arxiv.org/abs/2506.08872

21.06.2025 20:44 — 👍 0    🔁 0    💬 0    📌 0

I'm not so optimistic, because it requires a great deal of linguistic proficiency and discipline to check that the thoughts were actually expressed correctly. I think what will happen is that even the technical literature becomes filled with AI slop.

17.05.2025 07:54 — 👍 0    🔁 0    💬 1    📌 0
The pervasive twoishness of statistics; in particular, the “sampling distribution” and the “likelihood” are two different models, and that’s a good thing | Statistical Modeling, Causal Inference, and...

“Lots of good statistical methods make use of two models…

Perhaps, rather than trying to systematize all statistical learning into a single inferential framework, we would be better off embracing our twoishness” statmodeling.stat.columbia.edu/2011/06/20/t... via @seabbs.bsky.social

09.05.2025 06:50 — 👍 20    🔁 3    💬 2    📌 0
People are using artificial intelligence wrong, says researcher Inga Strümke. Human laziness leads to incorrect use of AI, the researcher believes.

www.nrk.no/tromsogfinnm...

05.05.2025 06:56 — 👍 0    🔁 0    💬 0    📌 0
Plausibility and probability in deductive reasoning We consider the problem of rational uncertainty about unproven mathematical statements, remarked on by Gödel and others. Using Bayesian-inspired arguments we build a normative model of fair bets under...

Another one, a bit less concrete, in the same genre
arxiv.org/abs/1708.09032

29.04.2025 09:59 — 👍 0    🔁 0    💬 0    📌 0
Logical Induction We present a computable algorithm that assigns probabilities to every logical statement in a given formal language, and refines those probabilities over time. For instance, if the language is Peano ar...

The most concrete proposal I've ever seen for reasoning about logical uncertainty
arxiv.org/abs/1609.03543

21.04.2025 18:55 — 👍 0    🔁 0    💬 1    📌 0

1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.

12.04.2025 23:43 — 👍 7925    🔁 3622    💬 121    📌 448
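One defensive habit this attack suggests: never install a dependency just because generated code imports it. A minimal sketch, where the allowlist is a purely illustrative stand-in for an audited lockfile:

```python
# Approve only dependencies a human has vetted; hallucinated package
# names then fail closed instead of resolving to squatted malware.
# VETTED_PACKAGES is an illustrative stand-in for an audited lockfile.
VETTED_PACKAGES = {"numpy", "pandas", "requests"}

def safe_to_install(name: str, allowlist: set[str] = VETTED_PACKAGES) -> bool:
    """Return True only for packages on the vetted allowlist."""
    return name.lower() in allowlist

print(safe_to_install("requests"))       # -> True
print(safe_to_install("requestz-core"))  # -> False: plausible-sounding hallucination
```

Merely checking that a name exists on the package index is not enough, precisely because the attack registers the hallucinated names.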

today we will all read imbens 2021 on statistical significance and p values, which is a strong contender for having the best opening paragraph of any stats paper

pubs.aeaweb.org/doi/pdf/10.1...

06.04.2025 02:05 — 👍 710    🔁 111    💬 26    📌 9
Have humans passed peak brain power? Data across countries and ages reveal a growing struggle to concentrate, and declining verbal and numerical reasoning

www.ft.com/content/a801...

14.03.2025 14:57 — 👍 0    🔁 1    💬 0    📌 0

OTOH, the Blue Bus Company will be under-penalized if they are never sanctioned when a pedestrian is hit and can't remember the color of the bus.

IMO, legal ideas like, e.g., the presumption of innocence are a poor fit for probabilistic reasoning. They are closer to some kind of default logic.

24.03.2025 08:00 — 👍 1    🔁 0    💬 0    📌 0

This is a fundamental misunderstanding of how to use such services. To expect them to correctly cite sources is to use them incorrectly. The output is of varying quality and accuracy. If one is not prepared to detect and correct factual errors, then simply don't use AI services.

15.03.2025 10:06 — 👍 0    🔁 0    💬 0    📌 0
Ranking Paradoxes, From Least to Most Paradoxical
YouTube video by Chalk Talk

Fun list, wonderfully simply presented
www.youtube.com/watch?v=1vJs...

15.02.2025 21:59 — 👍 0    🔁 0    💬 0    📌 0
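The best-known entry in this genre, the Condorcet paradox, fits in a few lines; the three cyclic ballots below are the standard textbook example:

```python
# Three voters with cyclic preferences: every pairwise majority vote
# has a winner, yet "a majority prefers x to y" is not transitive.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

print(majority_prefers("A", "B"))  # -> True
print(majority_prefers("B", "C"))  # -> True
print(majority_prefers("C", "A"))  # -> True: a cycle, so no Condorcet winner
```

Each candidate beats one rival and loses to the other, so no ranking of the three is consistent with all the pairwise majorities.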
The spinorial ball: A macroscopic object of spin-1/2 Editor's Note: If you ever wanted to hold a spin-1/2 particle in your hands, this paper is for you. If you follow the authors' instructions, you will end up wit

These researchers designed a neat macroscopic object to mimic the spinorial behavior of a spin-1/2 fermion, e.g. an electron. The object is a football made up of pentagons and hexagons, whose colors are determined by the spin-up and spin-down components, respectively.
pubs.aip.org/aapt/ajp/art...

28.01.2025 17:05 — 👍 1    🔁 0    💬 0    📌 0

Top notch introduction to a scientific paper!

14.01.2025 14:38 — 👍 0    🔁 0    💬 0    📌 0
