Joachim W Pedersen

@joachimwpedersen.bsky.social

Bio-inspired AI, meta-learning, evolution, self-organization, developmental algorithms, and structural flexibility. Postdoc @ ITU of Copenhagen. https://scholar.google.com/citations?user=QVN3iv8AAAAJ&hl=en

658 Followers  |  474 Following  |  28 Posts  |  Joined: 11.08.2024

Posts by Joachim W Pedersen (@joachimwpedersen.bsky.social)

EvoSelf 2026

My colleagues are hosting a workshop at this year's #GECCO: evolving-self-organisation-workshop.github.io/gecco-2026/

The headline is Evolving Self-Organisation. Can't wait to see all your interesting submissions! Great for #ALife and #ALICE researchers as well!

24.02.2026 12:51 — 👍 3    🔁 1    💬 0    📌 0
Neural Network Quine Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The n...

Neural Network Quines: training a model to output its own weights

arxiv.org/abs/1803.05859

13.10.2025 22:18 — 👍 27    🔁 3    💬 1    📌 3

Looking forward to this!

11.09.2025 09:40 — 👍 4    🔁 0    💬 0    📌 0
Knowledge Work Is Dying—Here’s What Comes Next While AI devours information-based roles, OpenAI, Alphabet, and Apple are investing in wisdom work—and you can, too

These perspectives formulated by Joe Hudson resonate a lot with me, as an AI researcher with a background in psychology.
every.to/thesis/knowl...

29.06.2025 18:42 — 👍 2    🔁 0    💬 0    📌 0

Introducing The Darwin Gödel Machine

sakana.ai/dgm

The Darwin Gödel Machine is a self-improving agent that can modify its own code. Inspired by evolution, we maintain an expanding lineage of agent variants, allowing for open-ended exploration of the vast design space of such self-improving agents.

30.05.2025 02:29 — 👍 37    🔁 9    💬 1    📌 3
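The "expanding lineage of agent variants" idea can be illustrated with a toy archive-based evolutionary loop. This is only a sketch of the open-ended-archive pattern, not the DGM itself: here an "agent" is a bitstring and "self-modification" is a random bit flip, whereas the real system rewrites its own code and is scored on coding benchmarks (the DGM also samples parents with performance/novelty weighting rather than uniformly, as assumed below).

```python
import random

random.seed(0)

# Toy stand-in for an agent: a bitstring. In the DGM the agent
# instead modifies its own code and is evaluated on benchmarks.
def mutate(agent):
    """'Self-modification' placeholder: flip one random bit."""
    i = random.randrange(len(agent))
    return agent[:i] + (1 - agent[i],) + agent[i + 1:]

def fitness(agent):
    """Placeholder score: number of ones."""
    return sum(agent)

archive = [tuple([0] * 16)]           # expanding lineage of variants
for _ in range(100):
    parent = random.choice(archive)   # any ancestor can branch again
    child = mutate(parent)
    archive.append(child)             # keep all variants, not just the best

best = max(archive, key=fitness)
```

Keeping every variant in the archive (instead of greedily replacing the population) is what allows stepping-stone solutions to be revisited later, which is the open-endedness argument in the post.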

“Continuous Thought Machines”

Blog → sakana.ai/ctm

Modern AI is powerful, but it's still distinct from human-like flexible intelligence. We believe neural timing is key. Our Continuous Thought Machine is built from the ground up to use neural dynamics as a powerful representation for intelligence.

12.05.2025 02:33 — 👍 73    🔁 15    💬 3    📌 5
Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems The advent of large language models (LLMs) has catalyzed a transformative shift in artificial intelligence, paving the way for advanced intelligent agents capable of sophisticated reasoning, robust pe...

arxiv.org/abs/2504.01990

05.04.2025 11:41 — 👍 1    🔁 1    💬 0    📌 0

New submission deadline: April 2nd!
So there is still some time to put together interesting thoughts on Evolving Self-Organisation!

Also: We are very fortunate to have the great Risto Miikkulainen as the keynote speaker at the workshop!

Can't wait to see you all there! 🤩🙌
#Evolution #Gecco #ALife

26.03.2025 09:51 — 👍 4    🔁 0    💬 0    📌 0

Very satisfying to see one's code run on actual real-world robots and not just simulation.
Check out the paper here:
arxiv.org/pdf/2503.12406

19.03.2025 10:25 — 👍 4    🔁 0    💬 0    📌 0
[IROS25] Bio-Inspired Plastic Neural Nets for Zero-Shot Out-of-Distribution Generalization in Robots
YouTube video by Worasuchad Haomachai

www.youtube.com/watch?v=jnoa...
Bio-Inspired Plastic Neural Nets that continually adapt their own synaptic strengths can make for extremely robust locomotion policies!
Trained exclusively in simulation, the plastic networks transfer easily to the real world, even under various extra OOD situations.

19.03.2025 10:24 — 👍 20    🔁 5    💬 2    📌 0
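The "continually adapt their own synaptic strengths" part refers to Hebbian plasticity rules applied during the network's lifetime. A minimal numpy sketch of a generalized (ABCD-style) Hebbian layer follows; the coefficients here are random placeholders, whereas in this line of work they are found by evolutionary optimization, and the layer sizes are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 3 inputs -> 2 outputs, with per-synapse Hebbian
# coefficients A, B, C, D (random placeholders; normally evolved).
n_in, n_out = 3, 2
W = rng.normal(size=(n_out, n_in))            # synaptic strengths
A, B, C, D = (rng.normal(size=(n_out, n_in)) for _ in range(4))
eta = 0.01                                    # plasticity rate

def step(x, W):
    """One forward pass plus a generalized Hebbian weight update."""
    y = np.tanh(W @ x)
    # ABCD rule: dw = eta * (A*pre*post + B*pre + C*post + D)
    dW = eta * (A * np.outer(y, x) + B * x[None, :] + C * y[:, None] + D)
    return y, W + dW

x = rng.normal(size=n_in)
for _ in range(10):
    y, W = step(x, W)
```

Because the weights keep updating at deployment time, the policy can keep re-adapting to perturbations it never saw in simulation, which is the mechanism behind the OOD robustness claimed in the post.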

Remember that 4-page submissions of early results are also welcome!

Also, does anyone know if #GECCO has an official 🦋 account? I cannot seem to find it...

26.02.2025 18:35 — 👍 1    🔁 0    💬 0    📌 0

Both 4-pagers of early research as well as 8-page papers with more substantial results are welcome!

10.02.2025 13:46 — 👍 1    🔁 0    💬 0    📌 0

Join us for the Evolving Self-Organisation workshop at #GECCO this year! Great chance to submit your favourite ideas concerning self-organisation processes and evolution, and how they interact.
Relevant for Alifers #ALife and anyone interested in #evolution, #self-organisation, and #ComplexSystems.

10.02.2025 13:46 — 👍 14    🔁 3    💬 1    📌 2

Very cool! And great aesthetics as well 🙌 😊

21.01.2025 18:22 — 👍 3    🔁 0    💬 0    📌 0

Ever wish you could coordinate thousands of units in games such as StarCraft through natural language alone?

We are excited to present our HIVE approach, a framework and benchmark for LLM-driven multi-agent control.

21.01.2025 12:39 — 👍 38    🔁 11    💬 2    📌 0

With all the research coming from Sakana AI, this figure needs to be updated fast! direct.mit.edu/isal/proceed...

#LLM #ALife #ArtificialIntelligence

15.01.2025 13:03 — 👍 13    🔁 1    💬 0    📌 0

Transformer²: Self-adaptive LLMs

arxiv.org/abs/2501.06252

Check out the new paper from Sakana AI (@sakanaai.bsky.social). We show the power of an LLM that can self-adapt its weights to its environment!

15.01.2025 05:56 — 👍 46    🔁 10    💬 1    📌 3
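The core self-adaptation operation in this line of work is scaling the singular values of a pretrained weight matrix with a small learned vector, so each "expert" is just one scalar per singular value. A minimal numpy sketch of that idea (matrix shapes and names here are illustrative, not from the paper; the paper also trains the z vectors with RL and composes them at inference):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for a pretrained weight matrix of a real LLM layer.
W = rng.normal(size=(8, 6))

# Decompose once; adaptation only trains z, one scalar per
# singular value, making each expert extremely compact.
U, S, Vt = np.linalg.svd(W, full_matrices=False)

def adapt(z):
    """Return the adapted matrix W' = U diag(S * z) Vt."""
    return (U * (S * z)) @ Vt          # column-wise scaling of U

z_identity = np.ones_like(S)           # z = 1 recovers the base model
z_expert = 1.0 + 0.1 * rng.normal(size=S.shape)  # a hypothetical learned expert
W_adapted = adapt(z_expert)
```

Scaling singular values rather than adding low-rank updates keeps the adaptation full-rank but tightly constrained, which is why such tiny expert vectors can still steer behavior.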

We have put together a starter pack of researchers and representatives from ITU on Bluesky. Meet them here 👇
go.bsky.app/E8WJwXS

14.01.2025 13:03 — 👍 23    🔁 9    💬 0    📌 0

Can Dynamic Neural Networks boost Computer Vision and Sensor Fusion?
We are very happy to share this awesome collection of papers on the topic!

08.01.2025 09:33 — 👍 6    🔁 2    💬 0    📌 0

If microchip ~= silicon
then AGI ~= huge pile of sand

22.12.2024 09:42 — 👍 19    🔁 2    💬 1    📌 0

Neural Attention Memory Models are evolved to optimize the performance of Transformers by actively pruning the KV cache memory. Surprisingly, we find that NAMMs are able to zero-shot transfer their performance gains across architectures, input modalities, and even task domains! arxiv.org/abs/2410.13166

10.12.2024 01:41 — 👍 57    🔁 9    💬 1    📌 0
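A heavily simplified sketch of the underlying operation: score each cached token from the attention it receives, then evict low-scoring entries. Everything here (shapes, the sum-of-attention score, the eviction quantile) is an illustrative stand-in; the actual NAMM feeds a spectrogram of attention values to a small evolved network to produce the scores.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 10, 4                        # cached tokens, head dimension
keys = rng.normal(size=(T, d))
values = rng.normal(size=(T, d))
attn = rng.random(size=(32, T))     # recent attention rows onto the cache
attn /= attn.sum(axis=1, keepdims=True)

# Placeholder score: total attention each cached token has received.
# (A NAMM instead computes scores with an evolved network over a
# spectrogram of these attention values.)
scores = attn.sum(axis=0)

keep = scores >= np.quantile(scores, 0.3)   # evict the bottom 30%
keys, values = keys[keep], values[keep]
```

Because the pruning decision only looks at attention values, not at the model's weights, the same memory model can in principle be dropped into a different architecture unchanged, which is the transfer property the post highlights.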

3) Optimizer optimization: Think hyperparameter tuning, e.g., the learning rate. The search within the inner loop is altered.

We use meta-learning to achieve improved inner-loop optimization, so it is well worth considering exactly how our double-loop achieves this!
#meta-learning #deeplearning #ai

04.12.2024 03:21 — 👍 2    🔁 0    💬 0    📌 0

1) Starting-point optimization: Think MAML. Move the initial point of the inner-loop search to a better place to learn quickly.
2) Loss-landscape optimization: Think neural architecture search. The loss landscape of the inner loop is transformed.

04.12.2024 03:21 — 👍 1    🔁 0    💬 1    📌 0

This can be thought of independently of which optimizer is used in the inner loop.
In any meta-learning approach, the outer-loop optimization will transform the inner-loop optimization process in at least one of three ways, and often in a combination of these.

04.12.2024 03:21 — 👍 1    🔁 0    💬 1    📌 0

In deep learning research, we often categorize meta-learning approaches as either gradient-based or black-box meta-learning. In my PhD thesis, I argued that it can sometimes be useful to classify approaches based on how the outer-loop optimization affects the inner-loop optimization.

04.12.2024 03:21 — 👍 6    🔁 2    💬 1    📌 0
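The three categories in this thread can be made concrete with a toy double loop. This is only an illustrative sketch: the inner loop is plain gradient descent on a 1-D quadratic, and the outer loop is crude random search over the three knobs (real meta-learning would use gradients or evolution in the outer loop, and the specific ranges below are arbitrary).

```python
import random

random.seed(0)

def inner_loop(x0, curvature, lr, steps=5, target=3.0):
    """Plain gradient descent on f(x) = curvature * (x - target)^2."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * curvature * (x - target)
        x -= lr * grad
    return (x - target) ** 2            # final inner-loop loss

# Outer loop: random search over the three knobs from the thread.
best = (float("inf"), None)
for _ in range(200):
    x0 = random.uniform(-5, 5)          # 1) starting-point optimization
    curvature = random.uniform(0.1, 2)  # 2) loss-landscape optimization
    lr = random.uniform(0.01, 0.5)      # 3) optimizer optimization
    loss = inner_loop(x0, curvature, lr)
    if loss < best[0]:
        best = (loss, (x0, curvature, lr))
```

Each knob improves the same inner loop through a different mechanism: moving where the search starts, reshaping what it descends on, or changing how it steps.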

Like 130,000 others, I made a starter pack. This one is people working on or with evolutionary computation in its many forms: genetic algorithms, genetic programming, evolution strategies.

If you'd like to be added, or want to suggest someone else, message me or reply to this post.

28.11.2024 04:58 — 👍 40    🔁 12    💬 7    📌 2

Thanks for making a pack putting the spotlight on evolutionary computation! I would love to join the list :)

28.11.2024 08:08 — 👍 1    🔁 0    💬 1    📌 0

watermark.silverchair.com/isal_a_00759...

24.11.2024 11:45 — 👍 0    🔁 0    💬 0    📌 0

Would also like to plug this perspective paper: From Text to Life: On the Reciprocal Relationship between Artificial Life and Large Language Models, which I was proud to contribute to along with all the talented colleagues on the author list :)

24.11.2024 11:45 — 👍 1    🔁 0    💬 1    📌 0

In the same vein as NDPs:
arxiv.org/pdf/2405.08510

I have also been interested in synaptic plasticity for a while (arxiv.org/pdf/2104.07959) and in how plasticity can be used to achieve structural flexibility in neural networks: dl.acm.org/doi/pdf/10.1...

24.11.2024 11:45 — 👍 2    🔁 0    💬 1    📌 0