
Michael Lin, MD PhD

@michaelzlin.bsky.social

Harvard → UCLA → HMS → UCSD → Associate Prof. of Neurobiology & Bioengineering at Stanford → Molecules, medicines, & SARS-CoV-2. Bad manners blocked.

2,657 Followers  |  207 Following  |  167 Posts  |  Joined: 26.11.2023

Latest posts by michaelzlin.bsky.social on Bluesky

Would you mind posting a link to the article? I couldn't find it. Thanks!

04.10.2025 15:39 — 👍 0    🔁 0    💬 1    📌 0
[Post contains three images]

Had the pleasure of visiting Prague as part of an advisory commission for the Czech Academy of Sciences Institute of Biotechnology. Got to check out exciting science and the impressive ultra-high resolution MS machine.

Great to see people working hard to expand knowledge, with public support too!

30.09.2025 14:32 — 👍 3    🔁 0    💬 0    📌 0
Kennedy’s Vaccine Panel Votes to Limit Access to Covid Shots

I cannot overstate how remarkable it is that under GOP rule, US federal health regulations have been captured by fringe crackpots who espouse views that the vast majority of the US public—and nearly 100% of health professionals—reject.

Gift link:

20.09.2025 04:18 — 👍 4263    🔁 1563    💬 138    📌 85
Post image

First clouds over Stanford since spring

11.09.2025 01:30 — 👍 9    🔁 0    💬 1    📌 0

I addressed this as well in the original thread. Thanks Christophe for linking to it

06.09.2025 06:45 — 👍 1    🔁 0    💬 0    📌 0

Thus the arbitrary 95% standard, and how it is applied, leads to contradictory conclusions, making scientists seem hapless and clueless. So it harms public understanding and support for science to insist on painting results in black or white rather than as they actually are: shades of gray.

28.08.2025 15:44 — 👍 2    🔁 0    💬 1    📌 0

And this is not just an academic exercise. How many times do you read in the news that there is no association between risk factor X and outcome Y, only to read the opposite a few months later? These inconsistencies are often due to Type 2 errors: declaring no difference when there was one.

28.08.2025 15:44 — 👍 1    🔁 0    💬 1    📌 0
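A quick illustrative simulation (not from the thread; the effect size, sample size, and distributions below are hypothetical) of how repeated underpowered studies of a real effect flip between "association" and "no association":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup: a real effect exists (group B averages 50% higher
# than group A), but each "study" samples only 10 subjects per group.
n_per_group, n_studies = 10, 1000
hits = 0
for _ in range(n_studies):
    a = rng.normal(loc=1.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=1.5, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(b, a)
    hits += p < 0.05

print(f"{hits / n_studies:.0%} of studies call the difference 'significant';")
print(f"{1 - hits / n_studies:.0%} report 'no difference' despite a real effect.")
```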

It's more informative, accurate, and comprehensive than our current rules of saying yes or no when the answer is almost always different degrees of maybe. It would do justice to the concept of statistics, which is supposed to be the science of quantifying degrees of certainty.

28.08.2025 15:44 — 👍 1    🔁 0    💬 2    📌 0

Then one can calmly and rationally consider whether that result provides some support for a hypothesis, together with what is mechanistically likely.

Again, this would be for the 95% of non-clinical experiments that aren't addressing a hypothesis with treatment-changing or financial implications.

28.08.2025 15:44 — 👍 0    🔁 0    💬 1    📌 0

This would be much more factual than "There was no significant difference between Groups A and B" or, even worse but too common, "There was no difference between Groups A and B".

28.08.2025 15:44 — 👍 0    🔁 0    💬 1    📌 0

Allow papers and proposals to show a graph of outcome distributions by condition and to state any possible or likely differences with the actual confidence level. For example, "Group B had 50% higher levels than Group A on average; the distributions were 90% likely non-random".

28.08.2025 15:44 — 👍 2    🔁 0    💬 1    📌 0
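As an illustration of that reporting style, here is a minimal Python sketch (not part of the original posts; the data are simulated, and the "(1 - p) x 100%" figure simply mirrors the informal "likely non-random" wording above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical data: group B averages roughly 50% higher than group A.
group_a = rng.normal(loc=1.0, scale=0.6, size=8)
group_b = rng.normal(loc=1.5, scale=0.6, size=8)

t_stat, p = stats.ttest_ind(group_b, group_a)
pct_higher = 100 * (group_b.mean() - group_a.mean()) / group_a.mean()
confidence = 100 * (1 - p)  # the post's informal "likely non-random" framing

print(f"Group B was {pct_higher:.0f}% higher than Group A on average; "
      f"the difference was {confidence:.0f}% likely non-random (p = {p:.2f}).")
```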

The defense of these arbitrary requirements is that they are necessary to prevent a high false-positive rate. But we don't have to generate a bunch of false negatives and throw out all discussion of actual likely differences to counteract that. There is a simple, easy, clear, and logical solution.

28.08.2025 15:44 — 👍 1    🔁 0    💬 1    📌 0

Thus the arbitrary 95% threshold, and its enforcement by forbidding discussion of the data, leads to a lot of false-negative conclusions. Essentially, real differences are being suppressed and thrown aside if they don't reach 95% confidence. It's wasteful and leads to actually wrong conclusions.

28.08.2025 15:44 — 👍 0    🔁 0    💬 1    📌 0

What makes the situation harmful is that we have imposed this arbitrary threshold of 95% confidence onto all experimental results, and reviewers for grants and papers are being instructed to not allow any discussion of differences if that threshold is not met.

28.08.2025 15:44 — 👍 2    🔁 0    💬 1    📌 0

In reality, most experiments where p values are calculated aren't powered to detect a predicted effect size, and so are underpowered. And many actual differences are reported in a conceptually and statistically incorrect manner as "no difference" when the accurate statement is "a difference not reaching the 95% confidence level".

28.08.2025 15:44 — 👍 7    🔁 0    💬 1    📌 0
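A hedged example of what "underpowered" means in practice, using statsmodels (assumed available; the effect size and sample size are hypothetical, not taken from any specific experiment):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a typical small experiment (n = 8 per group) to detect a
# fairly large standardized effect (Cohen's d = 0.8) at alpha = 0.05.
power = analysis.power(effect_size=0.8, nobs1=8, alpha=0.05)
print(f"Power with n = 8 per group: {power:.2f}")  # well under the conventional 0.8

# Sample size per group needed to reach 80% power for that same effect.
n_needed = analysis.solve_power(effect_size=0.8, power=0.8, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")
```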

For other research it can also be worth setting a rigorous 95% threshold, say for the final conclusive hypothesis test in a preclinical study, to get that level of certainty.

But let's be honest: 95% of the experiments out there for which p values are calculated don't need that...

28.08.2025 15:44 — 👍 2    🔁 0    💬 1    📌 0

One place where there is an absolute need for an arbitrary threshold is registrational clinical trials, where an adequate level of statistical confidence needs to be agreed upon in advance and then met to gain approval.

28.08.2025 15:44 — 👍 4    🔁 0    💬 1    📌 0

I think having an arbitrary threshold for statistical significance does more harm than good. It creates artificially black-or-white conclusions (there was a significant difference or not, with the word significant often omitted, leading to bad misunderstandings) when knowledge is actually all gray.

28.08.2025 15:44 — 👍 25    🔁 2    💬 1    📌 2

“A legitimate PhD-level expert in anything,” they said.

“Show me a diagram of the US presidents since FDR, with their names and years in office under their photos,” I said.

08.08.2025 18:54 — 👍 3103    🔁 741    💬 335    📌 606

And these insights from imaging fast interneuron spiking over several days are something that only genetically encoded voltage indicators can provide.

24.07.2025 22:34 — 👍 0    🔁 0    💬 0    📌 0

Thus interneurons do learn, but there is a hierarchy of specificity, where pyramidals > PV > SST. And the role of PV appears to be to engage in negative feedback that enhances contrast between odor-encoding pyramidals (required to link the memory of CS and US) and non-encoding pyramidals.

24.07.2025 22:33 — 👍 0    🔁 0    💬 1    📌 0
Post image

Interestingly, they could use ASAP3 to record spiking responses of the same neurons over days, before and after training, allowing them to show that about half of the PV interneurons that respond to odor one day remain responsive the next day. Pyramidal cells learn more stably; SST interneurons, less so.

24.07.2025 22:33 — 👍 0    🔁 0    💬 1    📌 0

They find that PV neurons are activated by stimulus presentation to overall suppress pyramidal spiking for a few seconds. The effect is to reduce background spiking, allowing odor-specific and time-specific pyramidal cell firing to show more contrast over background.

24.07.2025 22:10 — 👍 1    🔁 0    💬 1    📌 0

And now the 3rd paper this week using ASAP-family voltage indicators in mice to look at fast neuronal activity.

In this case, collaborators Jiannis Taxidis and Peyman Golshani used ASAP3 to record spiking in PV and SST interneurons in the hippocampus.

Published yesterday in @natneuro.nature.com

24.07.2025 21:55 — 👍 7    🔁 1    💬 2    📌 0

I think it's been pushed by journal editors looking for citation metrics, but reviewers should try to push back on this trend with critical thinking.

Anyway, I will get off the soapbox now.

20.07.2025 20:02 — 👍 0    🔁 0    💬 0    📌 0

Computer calculations were thought to be resource-unlimited, but the energy demands of AI mean that is no longer true. And the odd thing is that science has always been resource-limited, so this shift toward one expensive description over multiple mechanistic investigations is self-defeating.

20.07.2025 20:02 — 👍 0    🔁 0    💬 1    📌 0

Of course it's not the technology itself that is to blame, but poor training and lack of selectivity in its use. In science that means reviewers and journal editors choosing to accept expensive high-throughput surveys of biology and rejecting experiments that will generate new mechanistic insights.

20.07.2025 20:00 — 👍 1    🔁 0    💬 2    📌 0

We see it in the overuse of energy-intensive AI for simple queries, the writing of overly long and sloppy code by AI, the overuse of high-throughput sequencing methods to generate data in lieu of any mechanistic discovery, etc.

20.07.2025 19:59 — 👍 2    🔁 0    💬 1    📌 0

A negative side-effect of technological abundance is that it does not train people to deal with resource-limited conditions. Growing up with the easy ability to produce productivity outputs, people lose appreciation for, and ability to identify, creative and economically resourceful solutions.

20.07.2025 19:57 — 👍 6    🔁 0    💬 1    📌 1

Thanks; indeed, electrical signals propagate and decay in unexpected ways on length scales from microns to meters, so there's a lot to learn.

17.07.2025 23:22 — 👍 1    🔁 0    💬 0    📌 0
