1/30 https://arxiv.org/abs/2511.08892
2/30 https://arxiv.org/abs/2511.12414
3/30 https://arxiv.org/abs/2511.10647
4/30 https://arxiv.org/abs/2512.04047
5/30 https://arxiv.org/abs/2511.15304
6/30 https://arxiv.org/abs/2511.09030
7/30 https://arxiv.org/abs/2511.07416
8/30 https://arxiv.org/abs/2512.02556
9/30 https://arxiv.org/abs/2511.16652
10/30 https://arxiv.org/abs/2511.15935
11/30 https://arxiv.org/abs/2511.18659
12/30 https://arxiv.org/abs/2511.08923
13/30 https://arxiv.org/abs/2511.18538
14/30 https://arxiv.org/abs/2511.14593
15/30 https://arxiv.org/abs/2511.14993
16/30 https://arxiv.org/abs/2511.20626
17/30 https://arxiv.org/abs/2511.06876
18/30 https://arxiv.org/abs/2511.15848
19/30 https://arxiv.org/abs/2511.22982
20/30 https://arxiv.org/abs/2511.11793
21/30 https://arxiv.org/abs/2511.18423
22/30 https://arxiv.org/abs/2511.21689
23/30 https://arxiv.org/abs/2511.22699
24/30 https://arxiv.org/abs/2511.20785
25/30 https://arxiv.org/abs/2511.19399
26/30 https://arxiv.org/abs/2511.17879
27/30 https://arxiv.org/abs/2512.04677
28/30 https://arxiv.org/abs/2511.13254
29/30 https://arxiv.org/abs/2511.06221
30/30 https://arxiv.org/abs/2511.13612
Top 30 most popular arXiv papers in the last 30 days.
07.12.2025 00:06
Most applications of generative AI involve a sequential interaction in which a person inputs a prompt and waits for a response, so reaction time and adaptivity are not important factors.
In contrast, live jamming is a collaborative interaction that requires real-time coordination and adaptation, without access to the other player's future moves, while preserving diversity to sustain a creative flow.
Reinforcement learning post-training enables effective adaptation through on-policy interaction, but it often reduces output diversity because it exploits coherence-based rewards.
This collapse, known as "reward hacking", affects many RL post-training pipelines and is especially harmful in live jamming, where musical creativity depends on dynamic variation and mutual responsiveness.
In this paper, we propose a new adversarial training method on policy-generated trajectories to mitigate reward hacking in RL post-training for melody-to-chord accompaniment.
A co-evolving discriminator separates policy trajectories from the data distribution, while the policy maximizes the discriminator output in addition to coherence rewards, preventing collapse to trivial outputs.
We evaluate accompaniment quality and output diversity in simulation with both fixed test melodies and learned melody agents, and conduct a user study with the model deployed in a real-time interactive system with expert musicians.
Quantitative evaluation and user feedback demonstrate improved output diversity, harmonic coherence, adaptation speed, and user agency.
These results demonstrate a simple yet effective method for mitigating reward hacking in RL post-training of generative sequence models.
2511.17879
07.12.2025 00:06
Most applications of generative AI involve a sequential interaction in which a person inputs a prompt and waits for a response, and where reaction time and adaptivity are not important factors.
In contrast, live jamming is a collaborative interaction that requires real-time coordination and adaptation without access to the other player's future moves, while preserving diversity to sustain a creative flow.
Reinforcement learning post-training enables effective adaptation through on-policy interaction, yet it often reduces output diversity by exploiting coherence-based rewards.
This collapse, known as "reward hacking", affects many RL post-training pipelines, but is especially harmful in live jamming, where musical creativity relies on dynamic variation and mutual responsiveness.
In this paper, we propose a novel adversarial training method on policy-generated trajectories to mitigate reward hacking in RL post-training for melody-to-chord accompaniment.
A co-evolving discriminator separates policy trajectories from the data distribution, while the policy maximizes the discriminator output in addition to coherence rewards to prevent collapse to trivial outputs.
We evaluate accompaniment quality and output diversity in simulation with both fixed test melodies and learned melody agents, and we conduct a user study with the model deployed in a real-time interactive system with expert musicians.
Quantitative evaluation and user feedback demonstrate improved output diversity, harmonic coherence, adaptation speed and user agency.
Our results demonstrate a simple yet effective method to mitigate reward hacking in RL post-training of generative sequence models.
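A minimal sketch of the adversarial reward idea described above, assuming a PyTorch setup; this is not the authors' code, and TRAJ_DIM, LAMBDA_ADV, the discriminator architecture, and coherence_reward are illustrative assumptions. The discriminator is trained to separate dataset trajectories from policy-generated ones, and the policy's reward adds log D(trajectory) to the coherence reward.

```python
# Minimal sketch (not the authors' code) of adversarial reward augmentation for
# RL post-training: a discriminator D is trained to separate dataset
# trajectories from policy-generated ones, and the policy's reward adds
# log D(trajectory) to the coherence reward to discourage collapse to trivial
# outputs. TRAJ_DIM and LAMBDA_ADV are illustrative.
import torch
import torch.nn as nn

TRAJ_DIM = 64      # assumed size of a flattened trajectory feature vector
LAMBDA_ADV = 0.1   # assumed weight on the adversarial bonus

discriminator = nn.Sequential(
    nn.Linear(TRAJ_DIM, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def discriminator_step(policy_traj, data_traj):
    """One update pushing D toward 1 on data trajectories and 0 on policy ones."""
    d_opt.zero_grad()
    loss = -(torch.log(discriminator(data_traj) + 1e-8).mean()
             + torch.log(1.0 - discriminator(policy_traj) + 1e-8).mean())
    loss.backward()
    d_opt.step()
    return loss.item()

def shaped_reward(policy_traj, coherence_reward):
    """Reward for the policy update: coherence reward plus adversarial bonus."""
    with torch.no_grad():
        adv_bonus = torch.log(discriminator(policy_traj) + 1e-8).squeeze(-1)
    return coherence_reward + LAMBDA_ADV * adv_bonus
```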
[26/30] 140 Likes, 50 Comments, 1 Post
2511.17879, cs.LG | cs.SD, 26 Nov 2025
Generative Adversarial Post-Training Mitigates Reward Hacking in Live Human-AI Music Interaction
Yusong Wu, Stephen Brade, Teng Ma, Tia-Jane Fowler, Enning Yang, Berker Banar, Aaron Courville, Natasha Jaques, Che...
07.12.2025 00:06
Existing diffusion-based video generation methods are fundamentally constrained by sequential computation and long-horizon inconsistency, limiting their practical adoption in real-time, streaming audio-driven avatar synthesis.
We present Live Avatar, an algorithm-system co-designed framework that enables efficient, high-fidelity, infinite-length avatar generation with a 14-billion-parameter diffusion model.
Our approach introduces Timestep-forcing Pipeline Parallelism (TPP), a distributed inference paradigm that pipelines denoising steps across multiple GPUs, effectively removing the autoregressive bottleneck and guaranteeing stable, low-latency real-time streaming.
To further improve temporal consistency and mitigate identity drift and color artifacts, we propose the Rolling Sink Frame Mechanism (RSFM), which maintains sequence fidelity by dynamically recalibrating appearance using a cached reference image.
In addition, we leverage Self-Forcing Distribution Matching Distillation to enable causal, streamable adaptation of large-scale models without sacrificing visual quality.
Live Avatar demonstrates state-of-the-art performance, reaching 20 FPS end-to-end generation on 5 H800 GPUs; to the best of our knowledge, it is the first to achieve practical, real-time, high-fidelity avatar generation at this scale.
Our work establishes a new paradigm for deploying advanced diffusion models in industrial long-form video synthesis applications.
2512.04677
07.12.2025 00:06
Existing diffusion-based video generation methods are fundamentally constrained by sequential computation and long-horizon inconsistency, limiting their practical adoption in real-time, streaming audio-driven avatar synthesis.
We present Live Avatar, an algorithm-system co-designed framework that enables efficient, high-fidelity, and infinite-length avatar generation using a 14-billion-parameter diffusion model.
Our approach introduces Timestep-forcing Pipeline Parallelism (TPP), a distributed inference paradigm that pipelines denoising steps across multiple GPUs, effectively breaking the autoregressive bottleneck and ensuring stable, low-latency real-time streaming.
To further enhance temporal consistency and mitigate identity drift and color artifacts, we propose the Rolling Sink Frame Mechanism (RSFM), which maintains sequence fidelity by dynamically recalibrating appearance using a cached reference image.
Additionally, we leverage Self-Forcing Distribution Matching Distillation to facilitate causal, streamable adaptation of large-scale models without sacrificing visual quality.
Live Avatar demonstrates state-of-the-art performance, reaching 20 FPS end-to-end generation on 5 H800 GPUs, and, to the best of our knowledge, is the first to achieve practical, real-time, high-fidelity avatar generation at this scale.
Our work establishes a new paradigm for deploying advanced diffusion models in industrial long-form video synthesis applications.
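A hedged sketch of the pipelining idea behind TPP as the abstract describes it (one denoising step per device, frames streamed through the stages); the paper's actual implementation, stage count, and scheduling are not shown here, and NUM_STAGES and denoise_step are placeholders.

```python
# Hedged sketch (not the paper's implementation) of the pipeline idea behind
# TPP: each stage owns one denoising step, and frame chunks stream through the
# stages, so once the pipeline is full a fully denoised chunk exits every tick
# instead of waiting for all steps sequentially. NUM_STAGES and denoise_step
# are placeholders for per-GPU diffusion steps.
from collections import deque

NUM_STAGES = 5  # e.g. one denoising step per GPU (illustrative)

def denoise_step(chunk, step):
    """Stand-in for applying diffusion denoising step `step` to a latent chunk."""
    return f"{chunk}|s{step}"

def run_pipeline(chunks):
    stages = [None] * NUM_STAGES            # stages[i] holds the chunk awaiting step i
    outputs, pending = [], deque(chunks)
    while pending or any(s is not None for s in stages):
        # one parallel tick: every occupied stage applies its own denoising step
        processed = [denoise_step(c, i) if c is not None else None
                     for i, c in enumerate(stages)]
        if processed[-1] is not None:       # the last stage emits a finished chunk
            outputs.append(processed[-1])
        # shift: a new chunk enters stage 0, everything else moves one stage forward
        stages = [pending.popleft() if pending else None] + processed[:-1]
    return outputs

print(run_pipeline([f"frame{i}" for i in range(8)]))
```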
[27/30] 135 Likes, 4 Comments, 1 Post
2512.04677, cs.CV, 04 Dec 2025
Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
Yubo Huang, Hailong Guo, Fangtai Wu, Shifeng Zhang, Shijie Huang, Qijun Gan, Lin Liu, Sirui Zhao, Enhong Chen, Jiaming Liu, Steven Hoi
07.12.2025 00:06
1/30 https://arxiv.org/abs/2511.08892
2/30 https://arxiv.org/abs/2511.12414
3/30 https://arxiv.org/abs/2511.10647
4/30 https://arxiv.org/abs/2512.04047
5/30 https://arxiv.org/abs/2511.15304
6/30 https://arxiv.org/abs/2511.09030
7/30 https://arxiv.org/abs/2511.07416
8/30 https://arxiv.org/abs/2512.02556
9/30 https://arxiv.org/abs/2511.16652
10/30 https://arxiv.org/abs/2511.15935
11/30 https://arxiv.org/abs/2511.08923
12/30 https://arxiv.org/abs/2511.18538
13/30 https://arxiv.org/abs/2511.18659
14/30 https://arxiv.org/abs/2511.14593
15/30 https://arxiv.org/abs/2511.14993
16/30 https://arxiv.org/abs/2511.20639
17/30 https://arxiv.org/abs/2511.04570
18/30 https://arxiv.org/abs/2511.20626
19/30 https://arxiv.org/abs/2511.06876
20/30 https://arxiv.org/abs/2511.15848
21/30 https://arxiv.org/abs/2511.11793
22/30 https://arxiv.org/abs/2511.22982
23/30 https://arxiv.org/abs/2511.18423
24/30 https://arxiv.org/abs/2511.21689
25/30 https://arxiv.org/abs/2511.20785
26/30 https://arxiv.org/abs/2511.22699
27/30 https://arxiv.org/abs/2511.19399
28/30 https://arxiv.org/abs/2511.13254
29/30 https://arxiv.org/abs/2511.06221
30/30 https://arxiv.org/abs/2511.13612
Top 30 most popular arXiv papers in the last 30 days.
06.12.2025 00:06
In democracies, major policy decisions typically require some form of majority or consensus, so elites must secure mass support in order to govern.
Historically, elites could shape support only through limited instruments such as schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of opinion formation, making the distribution of preferences itself an object of deliberate design.
We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority-rule constraint.
With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles (a "polarization pull"), and improvements in persuasion technology accelerate this drift.
When two opposed elites alternate in power, the same technology also creates incentives to park society in "semi-lock" regions where opinions are more cohesive and harder for a rival to overturn.
Taken together, cheaper persuasion technologies recast polarization not as a purely emergent social byproduct but as a strategic instrument of governance, with important implications for democratic stability as AI capabilities advance.
2512.04047
06.12.2025 00:06
How elites could shape mass preferences as AI reduces persuasion costs | Hacker News
(1/1) 675 Likes, 640 Comments, 04 Dec 2025, Hacker News
06.12.2025 00:06
In democracies, major policy decisions typically require some form of majority or consensus, so elites must secure mass support to govern.
Historically, elites could shape support only through limited instruments like schooling and mass media; advances in AI-driven persuasion sharply reduce the cost and increase the precision of shaping public opinion, making the distribution of preferences itself an object of deliberate design.
We develop a dynamic model in which elites choose how much to reshape the distribution of policy preferences, subject to persuasion costs and a majority rule constraint.
With a single elite, any optimal intervention tends to push society toward more polarized opinion profiles - a "polarization pull" - and improvements in persuasion technology accelerate this drift.
When two opposed elites alternate in power, the same technology also creates incentives to park society in "semi-lock" regions where opinions are more cohesive and harder for a rival to overturn, so advances in persuasion can either heighten or dampen polarization depending on the environment.
Taken together, cheaper persuasion technologies recast polarization as a strategic instrument of governance rather than a purely emergent social byproduct, with important implications for democratic stability as AI capabilities advance.
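One way to make the abstract's setup concrete; this is an illustrative formalization rather than the paper's exact model, and the functional forms V, C, kappa, and theta* are assumptions.

```latex
% Illustrative formalization of the setup (not the paper's exact model):
% an elite reshapes the preference distribution F into F' at a persuasion
% cost, subject to a majority-rule constraint.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
In each period the elite solves
\begin{equation}
  \max_{F'} \; V(F') - \frac{1}{\kappa}\, C(F, F')
  \qquad \text{s.t.} \qquad
  \Pr_{\theta \sim F'}\!\bigl[\theta \geq \theta^{*}\bigr] \geq \tfrac{1}{2},
\end{equation}
where $F$ is the current distribution of policy preferences $\theta$,
$\theta^{*}$ is the support threshold for the elite's preferred policy,
$C(F, F')$ is a persuasion cost of moving the distribution (e.g.\ a divergence
or transport cost), and $\kappa$ indexes persuasion technology.
Cheaper AI-driven persuasion corresponds to a larger $\kappa$, so larger
reshapings of the distribution become optimal; the paper's ``polarization
pull'' is the tendency of such optimal interventions to concentrate mass at
more extreme values of $\theta$.
\end{document}
```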
[4/30] 675 Likes, 640 Comments, 1 Post
2512.04047, econ.GN | cs.AI | cs.CY, 03 Dec 2025
Polarization by Design: How Elites Could Shape Mass Preferences as AI Reduces Persuasion Costs
Nadav Kunievsky
06.12.2025 00:06
We use the Tomonaga-Schwinger (TS) formulation of quantum field theory to determine when state-dependent additions to the local Hamiltonian density (i.e., modifications to linear Schrodinger evolution) violate relativistic covariance.
We derive new operator integrability conditions required for foliation independence, including the Frechet derivative terms that arise from state dependence.
Nonlinear modifications of quantum mechanics affect operator relations at spacelike separation, leading to violation of the integrability conditions.
2511.15935
06.12.2025 00:06
We use the Tomonaga-Schwinger (TS) formulation of quantum field theory to determine when state-dependent additions to the local Hamiltonian density (i.e., modifications to linear Schrodinger evolution) violate relativistic covariance.
We derive new operator integrability conditions required for foliation independence, including the Frechet derivative terms that arise from state-dependence.
Nonlinear modifications of quantum mechanics affect operator relations at spacelike separation, leading to violation of the integrability conditions.
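For context, a compact statement of the standard Tomonaga-Schwinger equation and integrability condition the abstract builds on; the state-dependent correction in the last display is only schematic and is not claimed to be the paper's exact expression.

```latex
% Background sketch: the standard Tomonaga-Schwinger equation and its
% integrability condition. The state-dependent (Frechet-derivative) correction
% in the last display is schematic, not the paper's exact expression.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The TS equation evolves the state functional under deformations of a spacelike
hypersurface $\sigma$,
\begin{equation}
  i\,\frac{\delta \Psi[\sigma]}{\delta \sigma(x)} = \mathcal{H}(x)\,\Psi[\sigma],
\end{equation}
and foliation independence requires the usual integrability condition for
spacelike-separated points $x$ and $y$,
\begin{equation}
  \bigl[\mathcal{H}(x),\, \mathcal{H}(y)\bigr] = 0 .
\end{equation}
If the Hamiltonian density is state-dependent, $\mathcal{H}(x) \to
\mathcal{H}(x;\Psi)$, requiring that two surface deformations commute produces
additional Fr\'echet-derivative terms, schematically
\begin{equation}
  \Bigl( \bigl[\mathcal{H}(x;\Psi),\, \mathcal{H}(y;\Psi)\bigr]
  + D_{\Psi}\mathcal{H}(x;\Psi)\bigl[\mathcal{H}(y;\Psi)\Psi\bigr]
  - D_{\Psi}\mathcal{H}(y;\Psi)\bigl[\mathcal{H}(x;\Psi)\Psi\bigr] \Bigr) \Psi = 0 ,
\end{equation}
which is the type of condition the abstract reports is violated by nonlinear
modifications of quantum mechanics.
\end{document}
```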
[10/30] 286 Likes, 103 Comments, 2 Posts
2511.15935, hep-th, 19 Nov 2025
Relativistic Covariance and Nonlinear Quantum Mechanics: Tomonaga-Schwinger Analysis
Stephen D. H. Hsu
06.12.2025 00:06
Multi-agent systems (MAS) extend large language models (LLMs) from independent single-model reasoning to coordinated system-level intelligence.
Whereas existing LLM agents rely on text-based mediation for reasoning and communication, we take a step forward by enabling models to collaborate directly within a continuous latent space.
We introduce LatentMAS, an end-to-end training-free framework that enables pure latent collaboration among LLM agents.
In LatentMAS, each agent first performs auto-regressive latent thought generation through last-layer hidden embeddings.
A shared latent working memory then preserves and transfers each agent's internal representations, guaranteeing lossless information exchange.
We prove, with supporting theoretical analysis, that LatentMAS attains higher expressiveness and lossless information preservation at substantially lower complexity than vanilla text-based MAS.
In addition, empirical evaluations across 9 comprehensive benchmarks spanning math and science reasoning, commonsense understanding, and code generation show that LatentMAS consistently outperforms strong single-model and text-based MAS baselines, achieving up to 14.6% higher accuracy, reducing output token usage by 70.8%-83.7%, and delivering 4x-4.3x faster end-to-end inference.
These results show that our new latent collaboration framework improves system-level reasoning quality while offering substantial efficiency gains without any additional training.
Code and data are fully open-sourced at https://github.com/Gen-Verse/LatentMAS.
2511.20639
06.12.2025 00:06
LatentMAS - agent collaboration from token space into the model's latent space | Hacker News
(3/3) 3 Likes, 1 Comment, 03 Dec 2025, Hacker News
06.12.2025 00:06
Paper page - Latent Collaboration in Multi-Agent Systems
(1/3) 109 Likes, 12 Comments, 27 Nov 2025, Hugging Face
06.12.2025 00:06
Multi-agent systems (MAS) extend large language models (LLMs) from independent single-model reasoning to coordinative system-level intelligence.
While existing LLM agents depend on text-based mediation for reasoning and communication, we take a step forward by enabling models to collaborate directly within the continuous latent space.
We introduce LatentMAS, an end-to-end training-free framework that enables pure latent collaboration among LLM agents.
In LatentMAS, each agent first performs auto-regressive latent thoughts generation through last-layer hidden embeddings.
A shared latent working memory then preserves and transfers each agent's internal representations, ensuring lossless information exchange.
We provide theoretical analyses establishing that LatentMAS attains higher expressiveness and lossless information preservation with substantially lower complexity than vanilla text-based MAS.
In addition, empirical evaluations across 9 comprehensive benchmarks spanning math and science reasoning, commonsense understanding, and code generation show that LatentMAS consistently outperforms strong single-model and text-based MAS baselines, achieving up to 14.6% higher accuracy, reducing output token usage by 70.8%-83.7%, and providing 4x-4.3x faster end-to-end inference.
These results demonstrate that our new latent collaboration framework enhances system-level reasoning quality while offering substantial efficiency gains without any additional training.
Code and data are fully open-sourced at https://github.com/Gen-Verse/LatentMAS.
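A schematic of the latent-thought loop the abstract describes, written against the Hugging Face transformers API; the official implementation is in the linked repository. Feeding last-layer hidden states straight back as input embeddings, plus the choice of MODEL_NAME and num_thoughts, are simplifying assumptions made here.

```python
# Schematic of the latent-thought loop the abstract describes, written against
# the Hugging Face transformers API; the official code is in the linked repo.
# Feeding last-layer hidden states straight back as input embeddings is a
# simplifying assumption made here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative small model
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def latent_thoughts(prompt, num_thoughts=8):
    """Run a few autoregressive 'latent thought' steps without decoding tokens."""
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model(input_ids=ids, output_hidden_states=True, use_cache=True)
    past = out.past_key_values
    step_embed = out.hidden_states[-1][:, -1:, :]      # last-layer, last position
    latents = []
    for _ in range(num_thoughts):
        out = model(inputs_embeds=step_embed, past_key_values=past,
                    output_hidden_states=True, use_cache=True)
        past = out.past_key_values
        step_embed = out.hidden_states[-1][:, -1:, :]  # next latent "thought"
        latents.append(step_embed)
    # The stacked latents and the KV cache are what a shared latent working
    # memory would hand to the next agent instead of generated text.
    return torch.cat(latents, dim=1), past
```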
[16/30] 217 Likes, 72 Comments, 3 Posts
2511.20639, cs.CL | cs.AI | cs.LG, 25 Nov 2025
Latent Collaboration in Multi-Agent Systems
Jiaru Zou, Xiyuan Yang, Ruizhong Qiu, Gaotang Li, Katherine Tieu, Pan Lu, Ke Shen, Hanghang Tong, Yejin Choi, Jingrui He, James Zou, Mengdi Wang, Ling Yang
06.12.2025 00:06
Large language models are powerful generalists, yet solving deep, complex problems such as those in Humanity's Last Exam (HLE) remains conceptually challenging and computationally expensive.
We show that a small orchestrator managing other models and a variety of tools can both raise the ceiling of intelligence and improve the efficiency of solving hard agentic tasks.
We introduce ToolOrchestra, a method for training small orchestrators that coordinate intelligent tools.
ToolOrchestra explicitly uses reinforcement learning with rewards that account for outcome, efficiency, and user preference.
Using ToolOrchestra, we produce Orchestrator, an 8B model that achieves higher accuracy at lower cost than previous tool-use agents, while aligning with user preferences about which tools to use for a given query.
On HLE, Orchestrator achieves a score of 37.1%, outperforming GPT-5 (35.1%) while being 2.5x more efficient.
On tau2-Bench and FRAMES, Orchestrator outperforms GPT-5 by a wide margin while incurring only about 30% of the cost.
Extensive analysis shows that Orchestrator achieves the best trade-off between performance and cost under multiple metrics, and generalizes robustly to unseen tools.
These results demonstrate that composing diverse tools with a lightweight orchestration model is more efficient and effective than existing methods, paving the way toward practical, scalable tool-augmented reasoning systems.
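A minimal sketch of a composite reward in the spirit of the outcome-, efficiency-, and preference-aware training signal described above; the weights, field names, and Rollout structure are illustrative assumptions, not NVIDIA's implementation.

```python
# Illustrative composite reward in the spirit of the abstract's outcome-,
# efficiency-, and user-preference-aware training signal (not NVIDIA's
# implementation; weights and field names are assumptions).
from dataclasses import dataclass

@dataclass
class Rollout:
    correct: bool               # did the orchestrated answer solve the task?
    cost_usd: float             # total cost of all tool/model calls in the rollout
    tools_used: frozenset       # tools the orchestrator invoked
    preferred_tools: frozenset  # tools the user said they prefer for this query

def reward(r: Rollout, w_outcome=1.0, w_eff=0.2, w_pref=0.3, cost_budget=1.0):
    outcome = 1.0 if r.correct else 0.0
    efficiency = max(0.0, 1.0 - r.cost_usd / cost_budget)  # cheaper rollouts score higher
    preference = (len(r.tools_used & r.preferred_tools) / len(r.tools_used)
                  if r.tools_used else 0.0)
    return w_outcome * outcome + w_eff * efficiency + w_pref * preference

# Example: a correct, cheap rollout that used one preferred tool out of two.
print(reward(Rollout(True, 0.25, frozenset({"search", "code"}), frozenset({"search"}))))
```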
2511.21689
06.12.2025 00:06
Nvidia ToolOrchestra - 8B model "manager" improves intelligence and efficiency | Hacker News
(3/3) 5 Likes, 0 Comments, 28 Nov 2025, Hacker News
06.12.2025 00:06