@sakanaai.bsky.social

Sakana AI is an AI R&D company based in Tokyo, Japan. 🗼🧠 https://sakana.ai/careers

1,004 Followers  |  0 Following  |  255 Posts  |  Joined: 01.12.2024

Latest posts by sakanaai.bsky.social on Bluesky

We're pleased to announce that Sakana AI is co-hosting the "AI for Science: Algorithms to Atoms" social event and panel discussion during #NeurIPS2025 with Yann LeCun, Bill Dally, Anima Anandkumar, and Max Welling!

If you'll be at NeurIPS San Diego, here's the link to join: luma.com/AI-for-Scien...

24.11.2025 02:02 · 👍 8    🔁 2    💬 0    📌 0

Yutaro Yamada, a Research Scientist at Sakana AI, will speak at the public forum of ICONIP 2025, currently being held in Okinawa.

ใใฎไป–ใ€ALife Institute ๅฒก็‘ž่ตทๅ…ˆ็”ŸใฎๅŸบ่ชฟ่ฌ›ๆผ”ใ‚„ใƒ‘ใƒใƒซ่จŽ่ซ–ใ‚‚ใ‚ใ‚Šใพใ™ใ€‚ใœใฒใ”่ฆงใใ ใ•ใ„ใ€‚

11/24 (national holiday), from 13:00

On-site + online (free; registration required)

iconip2025.apnns.org/forum/

22.11.2025 11:59 · 👍 3    🔁 1    💬 0    📌 0

Excited to announce Sakana AI's Series B! 🐟
sakana.ai/series-b

From day one, Sakana AI has done things differently. Our research has always focused on developing efficient AI technology sustainably, driven by the belief that resource constraints, not limitless compute, are key to true innovation.

17.11.2025 00:03 · 👍 26    🔁 2    💬 3    📌 1

Our company is dedicated to deploying our R&D into Japan's key business and public sectors. This funding will accelerate our mission: to develop AI sustainably and implement technology that truly benefits Japan.

17.11.2025 00:01 · 👍 1    🔁 0    💬 0    📌 0

We're proud of our deep work with enterprise clients like MUFG and Daiwa Securities Group in finance, and are now expanding this focus to include other key areas like defense and the industrial sector.

17.11.2025 00:01 · 👍 1    🔁 0    💬 1    📌 1

Over the past year, we've built a healthy, growing enterprise AI business where we have partnered with some of the largest enterprises in Japan, focusing on AI applications that deliver real, practical benefits to our clients.

17.11.2025 00:01 · 👍 1    🔁 0    💬 1    📌 0

This principle extends directly to our business model, which we built with that same focus on sustainability and profitability.

17.11.2025 00:00 · 👍 1    🔁 0    💬 1    📌 0
At present, we are seeing record amounts of capital pouring into AI compute at a rate that may not be sustainable, funding AI businesses without a clear path to profitability and consuming unprecedented levels of energy, all to develop AI models under the assumption of near-limitless resources.

But is this really the future that we want to build? As an AI R&D company in Tokyo, we also question whether this is the right approach for Japan, a nation with limited resources, to build its Sovereign AI.

We believe that intelligent life has arisen not from an abundance of resources but rather from the lack of them. Nature ultimately selects systems that are able to do more with less. It is debatable whether such constraints are a requirement for intelligence to emerge, but it is undeniable that our own intelligence is a result of resource constraints.

Sakana AI has always done things differently. Since the beginning, our research has focused on developing efficient AI technology sustainably. For instance, rather than reinventing the wheel by training yet another large foundation model from scratch, we pioneered ways of using evolution to merge existing open-source models, tree search to combine closed models, and model self-improvement to push the frontier of AI capabilities.
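To make the model-merging idea above concrete, here is a minimal toy sketch of evolving per-tensor interpolation coefficients between two models and keeping whichever candidate scores best on a validation metric. The (1+1) hill-climbing loop, the dict-of-arrays model format, and the fitness callable are illustrative assumptions, not Sakana AI's actual Evolutionary Model Merge recipe.

    # Toy sketch of evolutionary model merging (illustrative assumptions only;
    # not Sakana AI's actual Evolutionary Model Merge recipe).
    import random

    def merge(model_a, model_b, coeffs):
        """Blend matching tensors: w = c * w_a + (1 - c) * w_b, one c per tensor."""
        return {name: c * wa + (1 - c) * model_b[name]
                for (name, wa), c in zip(model_a.items(), coeffs)}

    def evolve_merge(model_a, model_b, fitness, generations=50, sigma=0.1):
        """fitness: callable that scores a merged state dict on a validation task."""
        coeffs = [0.5] * len(model_a)                  # start from a plain average
        best = fitness(merge(model_a, model_b, coeffs))
        for _ in range(generations):                   # simple (1+1) hill climbing
            child = [min(1.0, max(0.0, c + random.gauss(0, sigma))) for c in coeffs]
            score = fitness(merge(model_a, model_b, child))
            if score > best:
                coeffs, best = child, score
        return coeffs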

Last year, we also pioneered the use of LLM agents to automate AI science, enabling more efficient algorithms to be discovered by AI itself. We developed energy-efficient language models that work on edge devices, and we are also working on entirely new AI architectures that may carry us into a vastly more efficient AI paradigm.

We also created our business model with sustainability and profitability in mind. This year, we built a healthy and growing enterprise AI business, partnering with some of the largest enterprises in Japan to deploy AI-enabled business applications that deliver real return on investment to our clients.


We're thrilled to announce Sakana AI has raised ¥20B in our Series B! 🐟

From day one, we've taken a different path. Our research has always focused on developing efficient AI technology sustainably, driven by the belief that resource constraints, not limitless compute, are key to true innovation.

17.11.2025 00:00 · 👍 1    🔁 0    💬 1    📌 0

Announcing our Series B 🐟

sakana.ai/series-b

16.11.2025 23:59 · 👍 14    🔁 2    💬 1    📌 1

Great to see Tarin Clanuwat featured for her amazing work. She has a deep love for Japanese classical literature and is using AI to build bridges to that past for everyone.

www.tokyoupdates.metro.tokyo.lg.jp/post-1670/

We're lucky to have her driving this at Sakana AI.

14.11.2025 03:55 · 👍 6    🔁 2    💬 0    📌 0

"I want to build AI models that can accurately understand Japanese culture and social context."

ๆฑไบฌ้ƒฝใฎๅ…ฌๅผใƒกใƒ‡ใ‚ฃใ‚ขใ€ŒTOKYO UPDATESใ€ใซใ€Sakana AIใƒชใ‚ตใƒผใƒใ‚ตใ‚คใ‚จใƒณใƒ†ใ‚ฃใ‚นใƒˆใฎใ‚ซใƒฉใƒผใƒŒใƒฏใƒƒใƒˆใƒปใ‚ฟใƒชใƒณใฎใ‚คใƒณใ‚ฟใƒ“ใƒฅใƒผใŒๆŽฒ่ผ‰ใ•ใ‚Œใพใ—ใŸใ€‚

www.tokyoupdates.metro.tokyo.lg.jp/post-1670/

ๆ—ฅๆœฌใฎ่ฒด้‡ใชๆ–‡ๅŒ–่ณ‡ๆบใ‚’ๆดป็”จใ™ใ‚‹ใ“ใจใงใ€ๆ—ฅๆœฌใซๆ นใ–ใ—ใŸAIๆŠ€่ก“ใฎๅฎŸ็พใ‚’็›ฎๆŒ‡ใ™ๅ–ใ‚Š็ต„ใฟใซใคใ„ใฆใ€ใœใฒใ”ไธ€่ชญใใ ใ•ใ„ใ€‚

14.11.2025 03:40 · 👍 4    🔁 0    💬 0    📌 1
From GRPO to GPT-5: Sudoku Variants

The benchmark continues to reveal gaps between AI computation and human-like reasoning.

🔗 Blogpost: pub.sakana.ai/sudoku-gpt5/
📊 Leaderboard: pub.sakana.ai/sudoku/
📄 Report: arxiv.org/abs/2505.16135
💻 GitHub: github.com/SakanaAI/Sudoku-Bench

11.11.2025 08:07 · 👍 3    🔁 0    💬 1    📌 0

Our GRPO and "Thought Cloning" experiments (learning from expert solvers) show current methods struggle with spatial reasoning and creative insights humans use naturally.

11.11.2025 08:07 · 👍 2    🔁 0    💬 1    📌 0
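As background on the GRPO experiments mentioned above, here is a minimal sketch of the group-relative advantage computation that GRPO (Group Relative Policy Optimization) is built around. This is the generic formulation with a made-up reward example, not the Sudoku-Bench training code.

    # Minimal sketch of GRPO-style group-relative advantages (generic form,
    # not the Sudoku-Bench training code).
    import numpy as np

    def group_relative_advantages(rewards, eps=1e-8):
        """Normalize rewards within one group of completions sampled for a prompt."""
        rewards = np.asarray(rewards, dtype=np.float64)
        return (rewards - rewards.mean()) / (rewards.std() + eps)

    # Hypothetical rewards for four sampled solutions to one puzzle (1.0 = solved).
    print(group_relative_advantages([1.0, 0.0, 0.25, 0.0]))
    # Above-average completions get positive advantages that weight the
    # policy-gradient update on their tokens; below-average ones get negative.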

Unlike Chess or Go, these puzzles require understanding novel rules through meta-reasoning, then maintaining consistency over long reasoning chains.

11.11.2025 08:06 · 👍 1    🔁 0    💬 1    📌 0

GPT-5 on Sudoku-Bench 🧩

GPT-5 now leads our Sudoku-Bench leaderboard with a 33% solve rate, ~2x the previous best, and is the first LLM to solve a 9x9 modern Sudoku.

Still, 67% of puzzles remain unsolved.

Read more about our update here:
🔗 Blogpost → pub.sakana.ai/sudoku-gpt5/

🧵 Thread 👇

11.11.2025 08:04 · 👍 18    🔁 2    💬 1    📌 0

Excited to release our new work: Petri Dish Neural Cellular Automata!

pub.sakana.ai/pdnca

We investigate how multi-agent NCAs can develop into artificial life 🦠 exhibiting complex, emergent behaviors like cyclic dynamics, territorial defense, and spontaneous cooperation.

05.11.2025 00:47 · 👍 33    🔁 10    💬 0    📌 0

Learn more about our approach.

GitHub: github.com/SakanaAI/pet...
Online Technical Report: pub.sakana.ai/pdnca

05.11.2025 00:28 · 👍 2    🔁 0    💬 0    📌 0

Petri Dish Neural Cellular Automata (PD-NCA) is a new ALife substrate that consists of a differentiable world where multiple NCAs learn to self-replicate and grow via ongoing gradient descent. Every individual is constantly trying to grow, all the while learning to adapt and out-compete its neighbors.

05.11.2025 00:28 · 👍 2    🔁 0    💬 1    📌 0

Introducing Petri Dish Neural Cellular Automata (PD-NCA)

pub.sakana.ai/pdnca/

In this work we explore the role of continual adaptation in artificial life, where the cellular automata in our system do not rely on a fixed set of parameters, but rather learn continuously during the simulation itself.

05.11.2025 00:26 · 👍 21    🔁 3    💬 1    📌 1
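A toy sketch of the continual-adaptation idea described in this thread: two small NCAs share one differentiable grid and take gradient steps on their own growth objectives during the rollout itself, instead of relying on parameters fixed before the simulation. The grid size, channel layout, and losses are assumptions for illustration, not the released SakanaAI/petri-dish-nca code.

    # Toy sketch of continually-adapting multi-agent NCA (assumed details only;
    # see github.com/SakanaAI/petri-dish-nca for the real system).
    import torch
    import torch.nn as nn

    class TinyNCA(nn.Module):
        def __init__(self, channels=4):
            super().__init__()
            self.update = nn.Sequential(
                nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, channels, 1))

        def forward(self, grid):
            return grid + 0.1 * self.update(grid)        # small residual update step

    agents = [TinyNCA(), TinyNCA()]
    opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in agents]
    world = torch.zeros(1, 4, 32, 32)                    # shared differentiable world
    world[0, 0, 8, 8] = 1.0                              # seed agent 0's territory channel
    world[0, 1, 24, 24] = 1.0                            # seed agent 1's territory channel

    for step in range(100):
        for i, (agent, opt) in enumerate(zip(agents, opts)):
            new_world = agent(world)
            loss = -new_world[0, i].clamp(0, 1).mean()   # hypothetical objective: grow your own channel
            opt.zero_grad()
            loss.backward()
            opt.step()
            world = new_world.detach()                   # the world persists; learning happens mid-simulation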
How the 'Attention is all you need' paper was born from freedom, not pressure

To underscore his point, Jones described the conditions that allowed transformers to emerge in the first place, a stark contrast to today's environment. The project, he said, was "very organic, bottom up," born from "talking over lunch or scrawling randomly on the whiteboard in the office."

Critically, "we didn't actually have a good idea, we had the freedom to actually spend time and go and work on it, and even more importantly, we didn't have any pressure that was coming down from management," Jones recounted. "No pressure to work on any particular project, publish a number of papers to push a certain metric up."

That freedom, Jones suggested, is largely absent today. Even researchers recruited for astronomical salaries ("literally a million dollars a year, in some cases") may not feel empowered to take risks. "Do you think that when they start their new position they feel empowered to try their wild ideas and more speculative ideas, or do they feel immense pressure to prove their worth and once again, go for the low hanging fruit?" he asked.


How the 'Attention is all you need' paper was born from freedom, not pressure:

24.10.2025 13:34 · 👍 8    🔁 0    💬 1    📌 0
Why one AI lab is betting that research freedom beats million-dollar salaries

Jones's proposed solution is deliberately provocative: Turn up the "explore dial" and openly share findings, even at competitive cost. He acknowledged the irony of his position. "It may sound a little controversial to hear one of the Transformers authors stand on stage and tell you that he's absolutely sick of them, but it's kind of fair enough, right? I've been working on them longer than anyone, with the possible exception of seven people."

At Sakana AI, Jones said he's attempting to recreate that pre-transformer environment, with nature-inspired research and minimal pressure to chase publications or compete directly with rivals. He offered researchers a mantra from engineer Brian Cheung: "You should only do the research that wouldn't happen if you weren't doing it."

One example is Sakana's "continuous thought machine," which incorporates brain-like synchronization into neural networks. An employee who pitched the idea told Jones he would have faced skepticism and pressure not to waste time at previous employers or academic positions. At Sakana, Jones gave him a week to explore. The project became successful enough to be spotlighted at NeurIPS, a major AI conference.

Jones even suggested that freedom beats compensation in recruiting. "It's a really, really good way of getting talent," he said of the exploratory environment. "Think about it, talented, intelligent people, ambitious people, will naturally seek out this kind of environment."


Sakana AI's CTO (Llion Jones) says he's 'absolutely sick' of transformers, the tech that powers every major AI model

"You should only do the research that wouldn't happen if you weren't doing it." (Brian Cheung) 🧠💡

venturebeat.com/ai/sakana-ai...

23.10.2025 17:30 · 👍 15    🔁 0    💬 1    📌 0

Use an LLM to create a new constructed language (ConLang) like Klingon, Vulcan, etc., where an LLM designs phonology, builds grammar, generates a lexicon, creates orthography, and even writes a mini grammar book.

IASC: Interactive Agentic System for ConLangs

11.10.2025 01:55 · 👍 35    🔁 8    💬 3    📌 0
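A minimal sketch of what such a staged agentic pipeline can look like: one LLM call per stage (phonology, grammar, lexicon, orthography), each conditioned on the previous stage's output. The prompts and the call_llm placeholder are hypothetical illustrations, not the actual IASC implementation (see github.com/SakanaAI/IASC for that).

    # Hypothetical sketch of a staged ConLang pipeline (illustrative only;
    # not the SakanaAI/IASC code).
    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM API call; plug in your provider's client here."""
        raise NotImplementedError

    STAGES = [
        ("phonology", "Design a phoneme inventory and syllable structure for a new language."),
        ("grammar", "Using this phonology, specify word order, case marking, and verb morphology:\n{prev}"),
        ("lexicon", "Generate 50 core vocabulary items consistent with this grammar:\n{prev}"),
        ("orthography", "Design a romanization and a native-script description for:\n{prev}"),
    ]

    def build_conlang():
        artifacts, prev = {}, ""
        for name, template in STAGES:
            prev = call_llm(template.format(prev=prev))  # each stage builds on the last
            artifacts[name] = prev
        return artifacts  # these pieces could then be compiled into a mini grammar book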
IASC: Interactive Agentic System for ConLangs – We present a system that uses LLMs as a tool in the development of Constructed Languages. The system is modular in that one first creates a target phonology for the language using an agentic approach ...

There's a fairly wide gulf in capabilities both among different LLMs and across different linguistic specifications, with systems finding it notably easier to handle settings that are more common cross-linguistically than those that are rarer.

PDF arxiv.org/abs/2510.07591
Code github.com/SakanaAI/IASC

10.10.2025 04:58 · 👍 4    🔁 0    💬 0    📌 0

Our goals with IASC:

1/ We hope that these tools will be fun to use for creating artificially constructed languages.

2/ We are interested in exploring what LLMs 'know' about language: not what they know about any particular language, but how much they know about and understand linguistic concepts.

10.10.2025 04:56 · 👍 4    🔁 0    💬 1    📌 0

We are happy to announce the release of IASC, an Interactive Agentic System for ConLangs (Constructed Languages).

GitHub: github.com/SakanaAI/IASC

10.10.2025 04:55 · 👍 3    🔁 0    💬 1    📌 0

IASC: Interactive Agentic System for ConLangs

arxiv.org/abs/2510.07591

If you're a fan of science fiction or fantasy, you've probably heard of made-up languages like Elvish from "The Lord of the Rings" or Klingon from "Star Trek."

Can LLM agents create new artificial languages?

10.10.2025 04:54 · 👍 6    🔁 1    💬 1    📌 2

By making ShinkaEvolve open-source, our goal is to democratize access to advanced discovery tools. We envision it as a companion to help scientists and engineers, building efficient, nature-inspired systems to unlock the future of AI research.

GitHub Project: github.com/SakanaAI/Shi...

25.09.2025 05:59 · 👍 6    🔁 0    💬 0    📌 0

ShinkaEvolve's efficiency comes from three key innovations (sketched in code after this post):

1) Adaptive parent sampling to balance exploration and exploitation.

2) Novelty-based rejection filtering to avoid redundant work.

3) A bandit-based LLM ensemble that dynamically picks the best model for the job.

25.09.2025 05:58 · 👍 3    🔁 0    💬 1    📌 0
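A schematic sketch of how the three components above can fit together in one evolutionary step: fitness-weighted parent sampling, similarity-based rejection of near-duplicate proposals, and a UCB-style bandit over the LLM ensemble. The function names and scoring details are assumptions for illustration, not the ShinkaEvolve API.

    # Schematic sketch of one ShinkaEvolve-style step (illustrative assumptions;
    # see github.com/SakanaAI/ShinkaEvolve for the actual implementation).
    import difflib
    import math
    import random

    def sample_parent(archive, temperature=1.0):
        # Adaptive parent sampling: fitness-weighted choice balances exploiting
        # strong programs against exploring weaker but diverse ones.
        weights = [math.exp(p["fitness"] / temperature) for p in archive]
        return random.choices(archive, weights=weights, k=1)[0]

    def is_novel(candidate_code, archive, threshold=0.95):
        # Novelty-based rejection: discard proposals too similar to existing programs.
        return all(
            difflib.SequenceMatcher(None, candidate_code, p["code"]).ratio() < threshold
            for p in archive)

    def pick_llm(stats):
        # Bandit-based LLM ensemble: UCB1-style choice of which model to query next.
        total = sum(s["pulls"] for s in stats.values()) or 1
        def ucb(name):
            s = stats[name]
            if s["pulls"] == 0:
                return float("inf")
            return s["reward"] / s["pulls"] + math.sqrt(2 * math.log(total) / s["pulls"])
        return max(stats, key=ucb)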

3/ LLM Training: It discovered a novel load balancing loss for MoE models, improving model performance and perplexity.

25.09.2025 05:58 · 👍 2    🔁 0    💬 1    📌 0
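For context on what a load-balancing loss does, here is a minimal sketch of the widely used Switch-Transformer-style auxiliary loss, which pushes the router to spread tokens evenly across experts. This is shown only as background; the novel variant discovered by ShinkaEvolve is described in the report, not reproduced here.

    # Standard MoE load-balancing auxiliary loss (Switch-Transformer style),
    # shown for background only; not the loss discovered by ShinkaEvolve.
    import torch
    import torch.nn.functional as F

    def load_balancing_loss(router_logits, num_experts):
        """router_logits: [num_tokens, num_experts] pre-softmax routing scores."""
        probs = F.softmax(router_logits, dim=-1)              # router probabilities
        top1 = probs.argmax(dim=-1)                           # expert chosen per token
        frac = torch.bincount(top1, minlength=num_experts).float() / router_logits.shape[0]
        mean_prob = probs.mean(dim=0)                         # mean router prob per expert
        return num_experts * torch.sum(frac * mean_prob)      # minimized by a uniform spread

    aux = load_balancing_loss(torch.randn(128, 8), num_experts=8)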

2/ Competitive Programming: On ALE-Bench, it improved an existing agent's solution, turning a 5th place result into a 2nd place leaderboard rank for one task.

25.09.2025 05:58 · 👍 2    🔁 0    💬 1    📌 0