And in theory any videos made should come with a visible and invisible watermark. I'd be interested to know if they do! dig.watch/updates/ai-g...
02.12.2025 20:23
@lizziegibney.bsky.social
Senior reporter at Nature, views my own. Journalist covering physics, AI, policy. Attempting to stop lurking and start posting. See my stories at nature.com/news
I write a lot about AI, and in AI policy circles I kept hearing one thing -- China is the country talking loudest about wanting to regulate the technology at a global level.
Here's my explainer on what that could look like
🧪🤖
www.nature.com/articles/d41...
Rather than just scaling, is it time to bring the neural networks behind LLMs together with old school rule-based 'symbolic' AI?
Lots of opportunities and challenges in this great story by @nicolakimjones.bsky.social 🧪🤖
www.nature.com/articles/d41...
[Image: a robot pulls on the ends of a thread containing a slipknot]
In a gear change from my last few stories... by acting as a "mechanical fuse", this simple little slipknot, added to surgical thread, can radically improve how surgeons perform sutures & lead to better outcomes
A lovely intersection of mechanics, geometry and medicine 🧪
www.nature.com/articles/d41...
This announcement brimmed with hype, but at its heart are interesting questions: what if models could learn from high-end scientific (rather than everyday) data? And are there specific science questions that AI can help answer?
Done well, AI for science yields amazing stuff (see AlphaFold). But done badly brings risks
DeepMind has long been seen as the scientists' AI firm: focusing on ethics, publishing prolifically & tackling problems researchers want solved.
But since the advent of LLMs, the pace of AI has changed and commercial imperatives abound. Can DeepMind stay on top?
www.nature.com/articles/d41...
🧪🤖
This millimetre-size robot from @ethz.ch can be steered around the body in real time using magnetic fields.
Once it arrives at a blockage or tumour, it delivers its cargo of drug and dissolves. So far only in pigs and sheep; human trials are next.
My story here: www.nature.com/articles/d41... 🧪
There was a lot of hubbub last month about this tiny, one-author model that beats some big LLMs on a prominent benchmark supposed to test intelligence (ARC-AGI).
Right now it mainly does sudokus, but could this technique eventually help AI go beyond LLMs?
www.nature.com/articles/d41...
🧪🤖
This is pretty cool. How do you train an LLM? Yes but reeeeally how? This 200+ page (!) blog from @hf.co shows how to train a model from start to finish, bugs, warts & all. Loads of interesting details I hadn't thought about huggingface.co/spaces/Huggi...
31.10.2025 11:30
I'm also not sure how exciting it is to do slightly better NMR - millions of dollars of hardware kind of exciting?
And, as is often the case with these quantum advantage claims, researchers will now try to beef up the classical calculations, so the claim may not last long
What's cool? These QC measurements can tease out otherwise hard-to-get information & look suited to mapping onto NMR-type problems to reveal features of molecular structure. The studies are rigorous.
BUT the work is very proof of principle. "Showing promise" does not equate to "this will happen"
Another day, another claim of quantum advantage -- this time, with hints of (someday) doing something useful 🧪⚛️
www.nature.com/articles/d41...
with @shannonvallor.bsky.social @anilseth.bsky.social @williamis.bsky.social
21.10.2025 06:20
It was packed with AI royalty (& a sprinkling of IRL celebs) & an excellent overview of what the world is getting right (& wrong) on AI. One point shone through -- in striving for AGI we might be getting AI very wrong
You can still watch the whole event here: www.youtube.com/live/GmnBTCK...
Where do the world-famous LLM sceptic @garymarcus.bsky.social & his bud Laurence Fishburne (AKA Morpheus) like to hang out? The Royal Society, of course! 🧪🤖
Here's my write-up from @unisouthampton.bsky.social's Celebrating the 75th Anniversary of the Turing Test event www.nature.com/articles/d41...
Next week @jameszou.bsky.social & colleagues will host a conference where all the papers are written by AI agents & reviewed by them too.
What do you reckon? A good chance to put AIs through their paces? Or a way to divert AI slop from elsewhere? 🧪🤖
My story here:
www.nature.com/articles/d41...
Not common to see a 7-byline story in the wild, but that's what the situation calls for when the government shuts down and every science agency needs to be checked in on for RIFs, grant terminations, and general dysfunction.
Our update on what the chaos means for science here:
I'm pretty sceptical about 'AI scheming', as it's so easy to anthropomorphise & experiments often involve telling the AI to do the bad thing they end up doing.
To understand what's behind the hype, read this smart & sober overview from @silverjacket.bsky.social 🧪🤖
www.nature.com/articles/d41...
Over at @naturepodcast.bsky.social we also made a super short vid on this year's physics Nobel Prize. Quantum phenomena at the macroscopic scale in two minutes... go! With me and Ben Thompson, & camera work by the fab @emzywb.bsky.social 🧪⚛️
www.youtube.com/shorts/krita...
John Martinis' wife didn't wake him in the middle of the night (California time) to tell him he had won a Nobel. "I got up a little bit before 6. Then I opened my computer and saw John and Michel's and my pictures."
Story by @lizziegibney.bsky.social and me
www.nature.com/articles/d41...
Have you heard loads about "AI agents", but have little idea what they are, what they can do for researchers and whether to believe the hype? Then this is for you! 🧪🤖 https://www.nature.com/articles/d41586-025-03246-7
06.10.2025 08:22
Is there a reason we picked this slightly terrifying still? 🤣
02.10.2025 09:18
Nature's news team is now recruiting for our paid US intern position for January to June 2026. It's a great opportunity to come join our team & publish important stories 🧪
Apply by 16 Oct springernature.wd3.myworkdayjobs.com/SpringerNatu...
I'm sure there's valuable science in there, but experts I consulted were not super convinced by the paper (why does it work better with noise? Why didn't they compare to the best available classical algo?) We won't cover it. Alas, others already have & without outside comment www.ft.com/content/d9d4...
26.09.2025 11:35
I was going to post a rant about the perils of quantum computing hype, following the claim by #HSBC & #IBM that they have improved bond market predictions using a QC, but instead I will just link to Scott Aaronson's blog, which says it all scottaaronson.blog?p=9170 🧪⚛️ cc @bullshitquantum.bsky.social
26.09.2025 11:35
Also read @helenpearson.bsky.social & @heidiledford.bsky.social's excellent story unpicking the origins of Trump's paracetamol/autism claims www.nature.com/articles/d41...
This quote sums it up: "We do not think that taking acetaminophen is in any way contributing to actually causing autism"
And here's Nature's take on why more AI developers should follow suit and put their LLMs through the peer review wringer. The process is far from perfect, but it seems a valuable counterbalance against AI hype and good for clarity & safety www.nature.com/articles/d41...
17.09.2025 19:10
The paper is here -- and kudos to #deepseek for going through the peer review process for such a cutting-edge, general model www.nature.com/articles/s41...
The peer review exchanges with 8 external experts (linked from the paper) are well worth a read
Remember DeepSeek's R1 model that crashed the US stock market in Jan? DeepSeek has said it did not boost the model by training on OpenAI outputs. This and much more (eg $$ to train & technical details) revealed in the firm's peer-reviewed paper out in Nature today 🧪🤖 www.nature.com/articles/d41...
17.09.2025 19:10
The top line is that we're never going to get rid of hallucinations, as it's just the way LLMs are built: they're not understanding, they're guessing based on stats. But maybe LLMs can be better fine-tuned to sound less confident, so humans aren't so taken in by them & use them more appropriately?
11.09.2025 09:11