This all ignores the authors' COI: they stand to benefit from their startup's value if authors, as a population, seek out or start using LLM-assisted review tools.
Some of the downstream implications feel tenuous to me. In a status quo world of humans everywhere in the process, LLM-assisted review surely has some advantages.
A reasonable follow-up question: in a world where LLM-assisted review is the de facto standard at (some) journals, what else also changes?
In my graduate class on marine policy, I learned that the US never signed on to the UN Convention on the Law of the Sea.
Not because of any particular objections, but because the US doesn't want to be bound by international rules...
"When you see the opposite of what your hypothesis predicts, that means either the hypothesis is wrong or one of your auxiliary hypotheses is wrong."
reminds me of this classic
arxiv.org/pdf/1103.5672
The torment nexus was inside the customer support SOP all along.
"Because nobody is required to change anything. Because the indifference is structural. Because fuck you, but politely, with a ticket number."
fireborn.mataroa.blog/blog/because...
seconded on both points: the .m3u8 doesn't contain the actual media data, and ffmpeg is the tool for the job
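For anyone following along: an .m3u8 file is an HLS playlist that points at the media segments, not the media itself, and ffmpeg can fetch and stitch those segments. A minimal sketch (the URL and output filename here are placeholders, not from the thread):

```shell
# The .m3u8 is just a playlist of segment URLs; ffmpeg follows it,
# downloads each segment, and remuxes them into one file.
# -c copy avoids re-encoding, so this is fast and lossless.
ffmpeg -i "https://example.com/stream/playlist.m3u8" -c copy output.mp4
```

If the stream is encrypted or behind auth headers, extra flags are needed, but for a plain public playlist this usually suffices.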
No, because a DOI resolves to the published item, so it doesn't offer the kind of tracking publishers want (i.e., different links that end up at the same location but let them count who clicked each link, and how many times).
100%
I don't think any previous technology or experience has really prepared humans to deal with "errors" in LLM output.
(error is in quotes because the notion that an LLM can produce an error feels rather like suggesting that a magic 8-ball can lie to you)
but everyone loves lots of irritating superfluous parentheses
If I have a personal relationship and I receive LLM prose, I might feel bad that the sender feels it necessary.
At the moment, I would try not to view it negatively on the part of the sender.
(but I acknowledge each person's inbox is different, as are their pet peeves)
Admittedly, it can be annoying to parse through LLM verbosity to glean the important details of an email.
On the other hand, some people bury the lede in their emails regardless.
¯\_(ツ)_/¯
Most website copy and press releases don't speak to me anyway.
My feelings around personal messages and DMs are more complex. Many folks who aren't native English speakers (and who are often critiqued for their writing not sounding professional) likely see some benefit to using LLMs to support their writing...
There's huge variation in how people write and what they consider to be a "first draft".
I think there are at least some parallels with the distinction between external processors and internal processors.
I have yet to make ramen in a coffee machine, but I'm still curious to try it...
I am not sure quite how it happened, but after the cameo in Into the Spider-Verse, an eight-episode live action series of Nic Cage as Spider-Noir was greenlit and made and is coming out in May??
but I thought showing that the data aren't perfectly normal and centered around 0 and then claiming that as proof of one's pet alternative explanation is how science is done. /s
Key to efficient learning is realizing how we ACTUALLY learn, not just what FEELS like learning. I wrote a Claude Skill for some friends to help them think about this and they've liked it -- see Principles for some directions you could explore
github.com/DrCatHicks/l...
"move fast and break things" has never respected the maintenance and glue work required to keep systems running.
Wouldn't it be cheaper to give the board cookies?
Cynically, I think this might presume that an executive search is prioritizing competence in core functions vs. being able to gin up impressive BI metrics for fundraising purposes and/or keep board members happy.
TIL in the UK, BOGOF is preferred over BOGO
I find I enjoy Scalzi best when he indulges in humorous setups (e.g. Redshirts, The Android's Dream); some of his more serious stuff (e.g. OMW), I think is only good, not great.
It is baffling that the NIH / NCBI doesn't just devote a special team to creating and maintaining an automated system for grabbing AAMs, leveraging its position to make publishers integrate it.
But I suppose mandates that impose additional burdens on individuals are the US way...
IMO, there's a huge gap between casual data science that takes all the numbers at face value and assumes all errors are independent, and the type of high-quality systematic reviews that happen in some kinds of clinical areas.
The team needs sufficient expertise to identify potential biases in different methodological approaches, and to quantify the direction and degree of biases.
(Also, the team needs to agree~~~)
I would like to hope that there is some growing awareness of effective triangulation / evidence synthesis practices.
Unfortunately, doing it well is pretty effort-intensive...
Agree! The triangulation checklist from Munafò and Davey Smith 2018 is great for this: https://www.nature.com/articles/d41586-018-01023-3
Does anyone else get tripped up by formatting links in markdown as [text](url)?
I often first write it the other way around as [url](text)... maybe I've had too much formative time with writing html links as <a href="url">text</a>
Happy Birthday!