Dunno if it’s a skill issue but LLMs, while they make me a good deal more productive, do not make me wildly more productive. Mostly they let me do things I wasn’t going to do otherwise
man uses GPT to guide him through sequencing his dying dog's DNA, identifying the mutated genes, and developing a bespoke cancer mRNA vaccine
www.theaustralian.com.au/business/tec...
bsky.app/profile/hars...
**LoRA accelerates**
Open-source code to LoRA fine-tune all parameters of Kimi-K2-thinking!
www.workshoplabs.ai/blog/post-tr...
Agreed. Now apply this thinking to "crypto is for scams".
People should be free to use technology. Condemn people for using tech to do something bad, not the technology itself.
dear fellow ai-likers.
give. it. a. fucking. rest.
you simply do *not* need to respond to every bad take out there.
you know how we currently have the least powerful AI that will exist in the future?
we're currently seeing the *least bad ai takes* we will see in the future.
pace yourselves.
🧵New paper: "Lost in Backpropagation: The LM Head is a Gradient Bottleneck"
The output layer of LLMs destroys 95-99% of your training signal during backpropagation, and this significantly slows down pretraining 👇
Characteristic velocity of 2.97 km/s! I think T1100G was at 2.7 km/s so pretty nice improvement.
The new record-holder for the highest strength fiber, and also the highest specific strength commercially available material, is Toray's T1200.
8 GPa Tensile Strength, 1820 kg/m^3, 12.4% better specific strength than the previous record-holder T1100 fiber:
www.toraycma.com/toray-develo...
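A quick sanity check on the numbers above. The T1200 figures (8 GPa, 1820 kg/m^3) are from the post; the T1100 comparison values (7.0 GPa, 1790 kg/m^3) are my assumptions, chosen only to illustrate how the 12.4% and 2.97 km/s figures fall out of specific strength σ/ρ and characteristic velocity sqrt(2σ/ρ):

```python
import math

# T1200 figures from the post; the T1100 values below are ASSUMED
# for comparison (roughly 7.0 GPa, 1790 kg/m^3), not from the post.
t1200_strength, t1200_density = 8.0e9, 1820.0   # Pa, kg/m^3
t1100_strength, t1100_density = 7.0e9, 1790.0   # Pa, kg/m^3 (assumed)

def specific_strength(sigma, rho):
    """Tensile strength per unit density (J/kg)."""
    return sigma / rho

def characteristic_velocity(sigma, rho):
    """v_c = sqrt(2*sigma/rho): the natural speed scale for tethers (m/s)."""
    return math.sqrt(2.0 * sigma / rho)

gain = specific_strength(t1200_strength, t1200_density) / \
       specific_strength(t1100_strength, t1100_density) - 1.0
print(f"specific-strength gain over T1100: {gain:.1%}")                     # ~12.4%
print(f"T1200 v_c: {characteristic_velocity(t1200_strength, t1200_density) / 1000:.2f} km/s")  # ~2.97
```

With those assumed T1100 values the gain comes out to the post's 12.4%, and the T1200 characteristic velocity to 2.97 km/s.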
Defuddle now returns Youtube transcripts!
Paste a YouTube link into defuddle·md to get a markdown transcript with timestamps, chapters, and pretty good diarization
...or if you just want to read it, try the new Reader mode in
@obsidian.md Web Clipper powered by Defuddle
Are there any rocky planets large enough that chemical rockets are infeasible but staged nuclear rockets would make launch feasible?
Thinking about that Fermi paradox argument that species can get "stuck" on a planet that's too large.
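A back-of-envelope version of the question via the Tsiolkovsky rocket equation. All numbers here are my assumptions, not from the post: Earth-to-LEO Δv ≈ 9.4 km/s, a hypothetical super-Earth needing roughly double that, chemical exhaust velocity ≈ 4.4 km/s, nuclear-thermal ≈ 8.5 km/s:

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: required initial/final mass ratio."""
    return math.exp(delta_v / v_exhaust)

# ASSUMED ballpark numbers, for illustration only:
dv_earth, dv_super = 9.4e3, 18.8e3   # m/s to orbit; super-Earth ~2x Earth
ve_chemical = 4.4e3                  # m/s, good chemical engine in vacuum
ve_nuclear = 8.5e3                   # m/s, nuclear-thermal ballpark

print(f"Earth, chemical:       {mass_ratio(dv_earth, ve_chemical):.0f}x")  # ~8x
print(f"Super-Earth, chemical: {mass_ratio(dv_super, ve_chemical):.0f}x")  # ~72x
print(f"Super-Earth, nuclear:  {mass_ratio(dv_super, ve_nuclear):.0f}x")   # ~9x
```

Under these assumptions the chemical mass ratio on the super-Earth blows up to ~72x (far beyond what tankage and staging realistically allow), while the higher exhaust velocity of a nuclear rocket pulls it back to roughly Earth-chemical territory, which is the shape of the "stuck planet" argument.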
www.percepta.ai/blog/can-llm...
As a research lark at Percepta, Christos embedded a computer into an LLM, showed that it could solve the hardest Sudokus, and then as a side bonus built an exponentially faster attention
Love this. A team with clarity about what it will take to apply AI to every domain: humility, trying stuff, and staying immersed in reality.
Hopefully the Iran stuff will provide Xi with another example of how going to war for purely narrative reasons is a bad idea.
"The large diversity of neuronal properties is actually tightly regulated to ensure energy-efficient signaling in different contexts." www.biorxiv.org/content/10.6...
"Some relationships deepen when you tell the truth and some end" by Henrik Karlsson www.henrikkarlsson.xyz/p/going-your...
Ukraine used a drone-mounted laser to destroy enemy fiber-optic drones. Fiber-optic drones are a challenge bc they are unjammable.
Laser tech continues to percolate into the battlefield.
www.youtube.com/watch?v=m2-p...
More in this thread:
x.com/michaelandre...
More skeptical take:
x.com/DanTurnerEva...
People underrate progress in brain emulations.
This team took a fly connectome, guessed neurotransmitters, and ran a model of brain/body in a simulated environment!
theinnermostloop.substack.com/p/the-first-...
Today I announce Cantrip: On summoning entities from language in circles.
In this book I unify the paradigm behind base models, chatbots, coding agents, RLMs, and RL agents, through the metaphor of magic. Code is provided.
deepfates.com/cantrip
Do LLMs Benefit from Their Own Words?🤔
In multi-turn chats, models are typically given their own past responses as context.
But do their own words always help…
Or are they more often a waste of compute and a distraction?
🧵
arxiv.org/abs/2602.24287
Related: "Take any extremely smart and experienced software engineer and put them into a new highly complex domain and have them solve a problem without giving them enough time to understand the problem. They will, without fail, deliver a solution of spectacular complexity"
bsky.app/profile/timk...
Rest is critical to clear thinking. With rapid feedback cycles, need to pause and think about whether you're headed the right direction.
Better to go the correct direction slowly than wrong direction quickly.
thingofthings.substack.com/p/ideologica...
Abi Olvera has a nice post on why low-wage work is different from how it's portrayed in the media, and how well-meaning policies do more harm than good.
abio.substack.com/p/low-wage-w...
My wish came true!
Seems that Taalas is not promising for edge devices.
It will be expensive and best suited to highly interactive applications with humans.
www.zach.be/p/taalas-is-...
Since we're doing consciousness discourse, might I point out that we've been arguing about this for thousands of years with little progress?
Instead, people should propose real-world tests to determine moral patienthood:
splittinginfinity.substack.com/p/use-prefer...
Excellent post!
Do all of these considerations apply to EMPs as well? So SolidGround would basically protect our grid from them too?
splittinginfinity.substack.com/p/breakthrou...
splittinginfinity.substack.com/p/on-ai-scal...
Recursive self improvement is still a concern! Especially as AI builds its own data sets and interacts with the real world.
But the idea that automated AI research plus an internet text dataset was going to take over the world was always silly.
See also ...
Nice, this will finally put to rest worries about (limited-sense) recursive self-improvement and algorithmic progress.
You can watch the researcher produce significant improvements and then exhaust all the useful tricks.
Forcefully makes the point that data is the real bottleneck.