Paper + code + interactive demos: gabegrand.github.io/battleship
Special shoutout to @valeriopepe.bsky.social (co-first author), who is super talented and currently on the PhD job market!
Thanks to Valerio Pepe, Josh Tenenbaum, and Jacob Andreas for long-horizon collaboration and planning: this line of Battleship work has been *2 years* in the making!
Bottom line: The future of AI-driven discovery isn't just bigger models; it's smarter inference. By combining LMs with rational planning strategies, we can build agents that ask better questions, make better decisions, and collaborate effectively with humans.
Why does this matter? Discovery-driven AI (scientific experiments, theorem proving, drug discovery) requires finding needles in combinatorially vast haystacks. If we want agents that explore rationally, we need to go beyond prompting.
Key takeaway: current LMs aren't rational information seekers. They struggle to ground answers in context, generate informative queries, and balance exploration vs. exploitation. But Bayesian inference at test time can close these gaps dramatically, and do so efficiently.
Does this generalize? YES. We replicated our results on "Guess Who?" from TextArena and saw similar gains: GPT-4o (61.7% → 90.0%), Llama-4-Scout (30.0% → 72.4%). The framework works across information-seeking domains with combinatorial hypothesis spaces.
Deciding when to explore vs. act is also key. Skilled players (humans + GPT-5) spread their questions out over the course of the game; weak LMs spam all 15 upfront. The trick isn't asking MORE questions; it's asking BETTER questions at the RIGHT time. Quality > quantity.
Here's the kicker: asking high-EIG questions alone doesn't guarantee wins. Weaker models struggle to convert information into good moves. Bayes-M, which explicitly marginalizes over beliefs, is crucial for translating questions into action.
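For intuition, here's a minimal sketch of that marginalization step (a toy 3x3 board with a hand-written hypothesis list, not the code released with the paper): keep only the boards consistent with what the Captain has observed, then score each candidate tile by the fraction of those boards in which it holds a ship.

```python
# Toy illustration of belief marginalization (not the paper's code): each
# hypothesis is a set of ship tiles on a 3x3 board, and we pick the move
# with the highest marginal hit probability under the posterior.

HYPOTHESES = [
    {(0, 0), (0, 1), (0, 2)},  # horizontal ship in row 0
    {(0, 0), (1, 0), (2, 0)},  # vertical ship in column 0
    {(1, 0), (1, 1), (1, 2)},  # horizontal ship in row 1
]

def consistent(hyp, observations):
    """Keep hypotheses that agree with every observed (tile, was_hit) pair."""
    return all((tile in hyp) == was_hit for tile, was_hit in observations)

def bayes_m(observations, candidate_tiles):
    """Return the candidate tile with the highest marginal hit probability."""
    posterior = [h for h in HYPOTHESES if consistent(h, observations)]

    def hit_prob(tile):
        return sum(tile in h for h in posterior) / len(posterior)

    return max(candidate_tiles, key=hit_prob), {t: hit_prob(t) for t in candidate_tiles}

# We shot (0, 0) and hit; only the first two hypotheses survive, so (0, 1)
# and (1, 0) each have hit probability 0.5, while (2, 2) has probability 0.
best, probs = bayes_m(observations=[((0, 0), True)],
                      candidate_tiles=[(0, 1), (1, 0), (2, 2)])
print(best, probs)
```

In the actual games the consistent boards come from sampled ship placements; the three-element list above just stands in for those samples.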
Our approach leverages inference scaling to enable models to ask more informative questions. Bayes-Q boosts EIG by up to 0.227 bits (94.2% of the theoretical ceiling) and virtually eliminates redundant questions (18.5% → 0.2% for Llama-4-Scout).
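To make the EIG numbers concrete, here's a toy sketch of how expected information gain can be scored for a yes/no question over a sampled posterior (my own minimal version, not necessarily the paper's exact estimator). For a noiseless yes/no answer, EIG is just the entropy of the predicted answer distribution, which is why questions whose answers are already determined by the board score 0 bits and get pruned as redundant.

```python
import math

# Toy posterior over hidden boards, represented as sets of ship tiles.
# (A hand-written stand-in for boards sampled during play.)
POSTERIOR = [
    {(0, 0), (0, 1)},
    {(1, 0), (1, 1)},
    {(0, 0), (1, 0)},
    {(2, 2), (2, 1)},
]

def eig(question):
    """Expected information gain of a yes/no question, in bits.

    For a deterministic yes/no answer, EIG equals the entropy of the
    predicted answer distribution under the current posterior.
    """
    p_yes = sum(question(h) for h in POSTERIOR) / len(POSTERIOR)
    if p_yes in (0.0, 1.0):
        return 0.0  # answer is already known -> question is redundant
    return -(p_yes * math.log2(p_yes) + (1 - p_yes) * math.log2(1 - p_yes))

# Candidate questions, phrased as predicates over a hypothesis.
questions = {
    "Is any ship in row 0?":   lambda h: any(r == 0 for r, c in h),
    "Is (2, 2) occupied?":     lambda h: (2, 2) in h,
    "Is the board non-empty?": lambda h: len(h) > 0,  # always True -> 0 bits
}
for text, q in questions.items():
    print(f"{text:28s} EIG = {eig(q):.3f} bits")
```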
In head-to-head comparisons, both GPT-4o and Llama-4-Scout now beat GPT-5 while costing 2.8x and 99.7x less, respectively.
With all three Bayesian components (+Bayes-QMD), Llama-4-Scout jumps from near-random guessing (0.367 F1) to super-human level (0.764 F1). GPT-4o sees similar gains (0.450 → 0.782 F1). The deltas are really striking.
We developed three Bayesian strategies inspired by Bayesian Experimental Design (BED):
- Question (Bayes-Q): Optimizes expected info gain (EIG)
- Move (Bayes-M): Maximizes hit probability
- Decision (Bayes-D): Decides when to ask vs. shoot using one-step lookahead (sketch below)
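Here's a minimal sketch of the one-step lookahead behind Bayes-D (a simplified decision rule for illustration, not necessarily the exact criterion from the paper): ask only if the expected best hit probability after hearing the answer beats the best hit probability available right now by more than a threshold.

```python
# Simplified one-step lookahead: compare the value of shooting now against
# the expected value of shooting after hearing the answer to a question.

def best_hit_prob(posterior, tiles):
    """Best marginal hit probability over candidate tiles under a posterior."""
    return max(sum(t in h for h in posterior) / len(posterior) for t in tiles)

def should_ask(posterior, tiles, question, threshold=0.05):
    yes = [h for h in posterior if question(h)]
    no = [h for h in posterior if not question(h)]
    if not yes or not no:
        return False  # answer is already determined -> just shoot
    p_yes = len(yes) / len(posterior)
    value_now = best_hit_prob(posterior, tiles)
    value_after = (p_yes * best_hit_prob(yes, tiles)
                   + (1 - p_yes) * best_hit_prob(no, tiles))
    return value_after - value_now > threshold

posterior = [{(0, 0), (0, 1)}, {(1, 0), (1, 1)}]
tiles = [(0, 0), (0, 1), (1, 0), (1, 1)]
ask_row0 = lambda h: any(r == 0 for r, c in h)
print(should_ask(posterior, tiles, ask_row0))  # True: knowing the row makes a tile certain
```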
In our second set of experiments, we turned to the challenge of building rational question-asking agents to play the Captain role.
We find that having models write Python functions to answer questions boosts accuracy by +14.7 percentage points and complements CoT reasoning.
One useful trick to improve answering accuracy is to use code generation. Code grounds reasoning in executable logic, not just vibes.
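Concretely, the executor can be as simple as the following sketch (hypothetical interface; the released code may differ): the Spotter model writes a small Python function over the ground-truth board, and the Yes/No answer comes from running that function rather than from free-text generation.

```python
# Hypothetical executor sketch (not the paper's exact interface): the Spotter
# LM writes a Python function over the true board, and we run it to get a
# grounded Yes/No answer instead of a free-text guess.

# True board known to the Spotter: 0 = water, positive ints = ship ids.
BOARD = [
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [2, 0, 3, 3],
    [2, 0, 0, 0],
]

# Question: "Is there a ship touching the top row?"
# A model-generated answer program might look like this:
GENERATED_CODE = """
def answer(board):
    return any(cell > 0 for cell in board[0])
"""

def run_answer_program(code: str, board) -> str:
    namespace = {}
    exec(code, namespace)  # defines answer() from the generated source
    return "Yes" if namespace["answer"](board) else "No"

print(run_answer_program(GENERATED_CODE, BOARD))  # -> Yes
```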
Many LMs really struggle with questions that require grounding answers in the board and dialogue context. GPT-4o drops from 72.8% → 60.4% accuracy on context-dependent questions. Llama-4-Scout: 68.0% → 54.0%. Humans? Basically flat (92.8% vs. 91.9%).
Overall, humans are really reliable at answering questions on BattleshipQA (92.5% accuracy). In contrast, LM accuracy ranges widely, from near-random (52.5%, GPT-4o-mini) to human-level (92.8%, o3-mini). But there's a catch...
In our first experiment, we looked at QA accuracy in the Spotter role. This is an important sanity check for how well players (humans & agents) can understand and reason about the game state.
To understand how people strategize & collaborate, we ran a two-player synchronous human study (N=42) and collected full action trajectories and chat dialogues. Our "BattleshipQA" dataset provides a rich, multimodal benchmark for comparing human and agent behavior.
We created "Collaborative Battleship": a two-player game where a Captain (who only sees a partial board) must balance asking questions vs. taking shots, while a Spotter (who sees everything) can only answer Yes/No. It's deceptively simple but cognitively demanding.
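For a rough picture of the information asymmetry, here's an illustrative state sketch (my own schematic, not the released game code; the 15-question budget matches the limit mentioned above).

```python
# Illustrative game state (my own schematic, not the released implementation).
# The Spotter sees the full board; the Captain only sees shot outcomes plus
# the Spotter's Yes/No answers, and has a limited question budget.

HIDDEN = "?"

class CollaborativeBattleship:
    def __init__(self, true_board, question_budget=15):
        self.true_board = true_board  # Spotter's full view
        n = len(true_board)
        self.captain_view = [[HIDDEN] * n for _ in range(n)]  # Captain's partial view
        self.questions_left = question_budget

    def shoot(self, r, c):
        """Captain fires at a tile; only the outcome is revealed to them."""
        hit = self.true_board[r][c] > 0
        self.captain_view[r][c] = "hit" if hit else "miss"
        return hit

    def ask(self, question):
        """Captain spends a question; the Spotter answers Yes/No from the full board."""
        assert self.questions_left > 0, "question budget exhausted"
        self.questions_left -= 1
        return "Yes" if question(self.true_board) else "No"

game = CollaborativeBattleship([[0, 1], [0, 1]])
print(game.ask(lambda board: any(cell > 0 for cell in board[0])))  # "Yes"
print(game.shoot(0, 1))                                            # True (a hit)
```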
But LMs are trained to *answer* queries, not *ask* them. Can they learn to explore intelligently?
Many high-stakes AI applications require asking data-driven questions: think scientific discovery, medical diagnosis, or drug development.
Do AI agents ask good questions? We built "Collaborative Battleship" to find out, and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost.
Paper, code & demos: gabegrand.github.io/battleship
Here's what we learned about building rational information-seeking agents...
Hello! Late to the party, but still excited to join this brave blue world!