draft lab ai policy, feel free to use, modify, or discuss! todd.gureckislab.org/2026/03/06/g...
07.03.2026 02:18 — 👍 44 🔁 9 💬 1 📌 1
Graph of award probability for R35 and R01 grants from the NIH Data Book as a function of review rank percentile. As is apparent, 2025 is a significant departure from the norm, with lower award probabilities at all percentiles below 40; even being in the top 10% is no longer a near-certain indicator of success. Data source: https://report.nih.gov/nihdatabook/report/302
The data is in: the NIH goalposts have shifted.
What were once near-certain fundable scores have become coin flips, and what used to be likely grants have become aspirational, leading to fewer awards.
Another manifestation of how HHS policies have led to fewer awards and less science.
RIP redundancy reduction?
Beautiful work by Liu & colleagues showing that neural redundancy increases with learning, as predicted by a Bayesian model:
www.science.org/doi/10.1126/...
Why we value things more as we are about to lose them
A novel theory that explains why subjective value and effort-based decisions change over time, shedding light on reference-dependent behavior in both everyday and high-stakes decision-making contexts
www.sciencedirect.com/science/arti...
my course notes on a Bayesian workflow for (single-agent) cognitive modeling are now fully revised and online: fusaroli.github.io/AdvancedCogn...
Predictive checks, updating checks, sensitivity analyses, and simulation-based calibration in @mc-stan.org
Feedback is very welcome!
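As a rough illustration of the simulation-based calibration (SBC) step mentioned above: draw a parameter from the prior, simulate data, fit the model, and check that the rank of the true parameter among posterior draws is uniform. This is a toy sketch with an analytically tractable conjugate normal model, not code from the course notes; all constants and names here are invented for illustration.

```python
import numpy as np

# Minimal SBC sketch for a conjugate model:
#   theta ~ N(0, 1),  y_i ~ N(theta, 1),  i = 1..N_OBS
# Because the posterior is analytic, ranks should be exactly uniform.
rng = np.random.default_rng(0)
N_OBS, N_SIMS, N_POST = 10, 1000, 99

ranks = np.empty(N_SIMS, dtype=int)
for s in range(N_SIMS):
    theta = rng.normal(0.0, 1.0)                # draw from the prior
    y = rng.normal(theta, 1.0, size=N_OBS)      # simulate a dataset
    # Conjugate posterior: precision = 1 + N_OBS, mean = sum(y) / (1 + N_OBS)
    sd_post = 1.0 / np.sqrt(1.0 + N_OBS)
    mu_post = sd_post**2 * y.sum()
    post = rng.normal(mu_post, sd_post, size=N_POST)
    # Rank of the true theta among the posterior draws (0..N_POST)
    ranks[s] = int((post < theta).sum())

# A calibrated sampler yields ranks uniform on {0, ..., N_POST};
# gross deviations in this histogram signal a bug or misspecification.
counts, _ = np.histogram(ranks, bins=10, range=(0, N_POST + 1))
print(counts)
```

In a real workflow the analytic posterior draw would be replaced by a Stan fit per simulated dataset, and deviations from uniformity (e.g. U- or hump-shaped rank histograms) diagnose over- or under-dispersed posteriors.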
New paper out in Child Development (@srcdorg.bsky.social) with Dave Sobel (@candmlab.bsky.social)! ✨ We investigated how 5- to 7-year-old children decide to take on easy versus hard tasks while pursuing a goal. doi.org/10.1093/chid...
04.03.2026 15:52 — 👍 40 🔁 16 💬 3 📌 2
This is consistent with earlier psychometric work suggesting that 5-7 response scale options are best, but good to see that the finding holds up in contemporary research. Also good to see that labeling the scale points, whether anchored or not, has little impact on findings. academic.oup.com/ijpor/articl...
02.03.2026 00:21 — 👍 116 🔁 39 💬 1 📌 2
📢New paper out today in @cognitionjournal.bsky.social!
Does the value of an unchosen option — inferred through counterfactual reasoning — spread to related items in memory, similar to how the value of a chosen option — acquired through direct experience — does?
In short, yes!
I'm a cognitive scientist with an interest in epistemic vigilance, and this essay that's been going around gave me pause.
I don't think it's straightforward to apply the concept of epistemic vigilance to interactions with LLMs, as this essay does.
🧵/
sbgeoaiphd.github.io/rotating_the...
Check out our new paper which isolates a human brain signal that specifically tracks the growing urgency to commit to a choice pubmed.ncbi.nlm.nih.gov/41611534/. This one was a long time coming! Sterling work from @harveymccone.bsky.social and a bunch of past lab members!
26.02.2026 13:12 — 👍 16 🔁 11 💬 1 📌 0
Happy to share my first first-author paper, new in Science Advances: Deciding for others alters metacognition leading to responsibility aversion www.science.org/doi/10.1126/... #ScienceAdvancesResearch @zne-uzh.bsky.social @econ.uzh.ch
25.02.2026 19:50 — 👍 19 🔁 9 💬 2 📌 1
Ultrasound gives our brain a nudge in the right direction 🧠
👀 Look to your left, look to your right!
We used #ultrasound to stimulate the brain and it changed human choice behavior within a fraction of a second. No surgery, no implants.
Link to paper ⬇️
www.nature.com/articles/s41...
Come join us at BAMB! to learn all about modelling behavior and what Barcelona’s beaches have to offer 🏐🏄‍♂️🏊‍♂️
Applications for 2026 are open here: www.bambschool.org
🚨 JOB alert: 📢
We are looking for a PhD student to work on our international @wellcometrust.bsky.social project on information gathering in OCD and Schizophrenia!
If you have a background in computational psychiatry / neuroimaging and speak German, apply here: devcompsy.org/wp-content/u...
This Wednesday, February 25! Dr. Michael Treadway (Emory University) is presenting in the
@motcogmeet.bsky.social series: "Effort-Based Decision-Making and Its Discontents: Precision medicine approaches for understanding the pathophysiology and treatment of motivational deficits in mental illness" 1/
Where you look next isn’t arbitrary.
In our new paper, we model human eye movements in immersive visual search as reinforcement learning under cognitive constraints. 🧵
in my decision-making course we devote one class to a group exercise in which the students need to use what they learned in Act 1 ("Rational Decision Making") to shut down a rogue AI in the semi-distant future; this is the intro.
23.02.2026 14:46 — 👍 22 🔁 5 💬 1 📌 2
Purdue banning all Chinese students solely for national origin. No other reason. The Harper's Letter crowd must be crafting a humdinger of a new letter over this one.
www.theguardian.com/us-news/2026...
I reviewed 5+ fMRI papers on response inhibition within roughly the last year, and the same points come up over and over again. So I wrote a short note last week entitled "The unique limitations of BOLD-fMRI in the study of response inhibition". You can read it here.
osf.io/preprints/ps...
We are recruiting! Postdoctoral research fellow at www.sdn-lab.org, studying the computational & neural basis of social decision-making. Birmingham is a fantastic & affordable place to live, with one of the youngest populations in Europe & over 600 parks. Please share!
www.jobs.ac.uk/job/DQO275/p...
Two side-by-side images depicting the nested hierarchical IPOMDP and the non-hierarchical x-IPOMDP mechanism.
What happens when we can't use recursive belief to compete? We can use anomaly detection instead!
Here, we (led by soon-to-be-Dr @nitalon.bsky.social ) devise a multi-agent account where compression & reward expectation are used to notice deception
jair.org/index.php/ja...
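The core intuition above — that deceptive behavior can be flagged when it compresses poorly against a model of normal play — can be sketched very crudely with an off-the-shelf compressor. To be clear, this toy uses zlib as a stand-in surprisal measure and invented action sequences; the paper's actual x-IPOMDP mechanism combining compression with reward expectation is far richer than this.

```python
import zlib

def surprisal(baseline: bytes, sequence: bytes) -> int:
    """Extra compressed bytes needed to encode `sequence` after `baseline`.

    A sequence that follows the baseline's regularities compresses nearly
    for free; one that breaks them costs more, which we treat as anomaly.
    """
    base = len(zlib.compress(baseline, 9))
    both = len(zlib.compress(baseline + sequence, 9))
    return both - base

# Invented example: 'L'/'R' action streams from a repeated-game opponent.
normal_play = b"LLRLRLLRLR" * 50    # long record of regular behavior
honest = b"LLRLRLLRLR" * 5          # continuation of the same pattern
deceptive = b"RRRLLLRRLRRLLRLRRRLLLRLLRRLRLLRLRRLLRLLRRRLLRLRLRR"  # breaks it

# The pattern-breaking sequence should incur a larger compression cost.
print(surprisal(normal_play, honest), surprisal(normal_play, deceptive))
```

A real agent would of course use a learned predictive model rather than a generic compressor, and weigh this anomaly signal against expected reward before inferring deception.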
Applications are now open for the summer school: 𝐌𝐚𝐭𝐡𝐞𝐦𝐚𝐭𝐢𝐜𝐚𝐥 𝐌𝐞𝐭𝐡𝐨𝐝𝐬 𝐢𝐧 𝐂𝐨𝐦𝐩𝐮𝐭𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐍𝐞𝐮𝐫𝐨𝐬𝐜𝐢𝐞𝐧𝐜𝐞
🧠 Apply before March 15: www.compneuronrsn.org
📍 Located in beautiful Eresfjord 🇳🇴
🗓️ Between July 6-24
Supported by the @kavlifoundation.org
In collaboration with @kavlintnu.bsky.social
Beyond grateful to be selected as a 2026 Sloan Research Fellow in Neuroscience! 🧠🤓
It takes a village, and this wouldn't be possible without my amazing team, mentees, mentors, collaborators and colleagues! Very excited to continue our work on the neuroscience of social learning. #SloanFellow
Book cover. A silhouette of a person's head filled with colorful geometric shapes—perhaps symbolizing cognitive resources or deployment thereof. The style is attractive and modern, if generic. Text: The Rational Use of Cognitive Resources. Falk Lieder, Frederick Callaway, Thomas L. Griffiths
I'm excited to announce that I had my first (co-authored) book published today! "The Rational Use of Cognitive Resources" with Falk Lieder and Tom Griffiths (@cocoscilab.bsky.social ). You can read it for free! (see thread)
18.02.2026 01:05 — 👍 142 🔁 45 💬 2 📌 0
We're hiring! This is a unique opportunity to translate our understanding of neural computation - from circuit-level mechanisms to computational principles - into the human brain, through the establishment of cutting-edge human neural recording capabilities with collaborators in London and abroad.
We’re hiring a Group Leader!
Join us to lead a transformative initiative in human systems neuroscience.
Find out more and apply ⤵️
www.sainsburywellcome.org/content/curr...
"While most AI tries to fix humans @simile_ai is building AI that understands them. They build digital twins that capture someone’s worldview, then simulate how customers, employees or entire populations will actually respond to change. Born out of Stanford generative agent research. Now backed by $100M to turn that into a category. AI is getting smarter and Simile is making it more human. We're proud to be in their corner."
A proposed solution is to build generative agents that represent specific individuals (Box 1). One such study [6] recruited a sample of ~1000 US participants nationally representative for age, gender, race, region, education, and political ideology; programmed an LLM chatbot to interview each participant for 2 h; and asked the participants to complete a battery of questionnaires and tasks. They then used the interview transcripts to prompt ~1000 LLM agents to role-play each of the human participants on the same questionnaires and tasks. Observing a high correspondence between the responses of the generative agents and their human counterparts, the researchers concluded that LLMs prompted in this way can capture the ‘idiosyncratic nature’ of real people across a range of situations [57]. Some researchers propose making generative agents even more representative by training them on their human counterparts’ ‘emails, messages and social media posts’, as well as ‘text generated by friends, family or coworkers’ [23]. (We note this raises critical questions about informed consent; see Outstanding questions.) The logic here is that, because generative agents are built to represent a diverse sample of specific individuals, researchers could then run thousands of experiments on the generative agents and feel confident that the resultant data are faithful to the original samples. Researchers could even populate virtual worlds with generative agents, running large-scale simulations to test interventions and policies (Box 2).
Nevertheless, the generative agents paradigm faces hard limits to its potential representativeness. By design, generative agents can only represent individuals who consent to sharing sensitive data with scientists, which carries substantial privacy risks [6,58]. Given these risks, people with stronger privacy concerns are less likely to consent to such studies. Members of marginalized groups in the USA, including women, gender minorities, people of color, and disabled people, have heightened privacy concerns and more negative attitudes about AI [59,60]ii–iv. These groups have historically faced disproportionate surveillance [61,62] and theft of their biometric and behavioral data for scientific research [63–65], including training machine learning models [66]. Regimes of digital surveillance spread globally [67], creating frictions where global north ideologies touch down in the global south [68]. These entrenched and repeating patterns raise cascading problems for the generative agents approach: first, members of marginalized groups are less likely to participate and, second, those who do will be less representative of their groups. Any attempt to build AI Surrogates that are truly representative of diverse populations will likely face a hard limit that marginalized people are (justifiably) less willing to entrust their data to scientists.
Box 2. Generative agents and simulated worlds
Researchers note that ‘many of the most interesting research questions, such as the psychology of world leaders, the effects of large-scale policy change, or the effects of large-scale events on the general public’ are ‘logistically infeasible’ to study in the laboratory ‘with any realistic amount of resources’ [23]. In response, generative agents populating simulated worlds are seen as promising research paths. For example, researchers could create generative agents based on the profiles of Palo Alto residents and simulate how the community would respond to different pandemic interventionsv. Much of the technical research on artificial agents acting in simulated worlds originates in fields beyond cognitive science, including computer science, sociology, economics, political science, computational social science, as well as private industry [9,112–116]. Developers of these agent architectures have lofty ambitions. They believe that this technology can ‘test interventions and theories and gain real-world insights’ [58], serving as ‘a high-fidelity platform for policy outcome evaluation’ to enable ‘data-driven policy selection’ [115]. Given these ambitions, validating that these models can generalize to the real world is imperative [116], and some researchers caution that ‘current architectures must cover some distance before their use is reliable’ [58]. Yet, such validation faces a paradox: these models can only be validated against the ground truth of real-world data, but their appeal lies in simulating scenarios where ground truth is not available. Some researchers [22] propose to meet this challenge by identifying ‘the most proximal cases for which ground-truth data from human subjects is available’ and using those cases to validate the simulation’s predictions ‘before turning the model to a domain in which no ground truth exists’. However, there is currently ‘no consensus’ around how proximal is proximal enough [116].
Stanford CS researchers just got a huge payday for promising AI agents that can simulate the real world. @mjcrockett.bsky.social and I wrote about these researchers' vision. Screenshotting quite a lengthy part of our paper, because we spent A LOT of time thinking about the paucity of this promise
13.02.2026 14:43 — 👍 82 🔁 24 💬 5 📌 6
Can you easily distinguish between value, valence, and salience?
Probably not, but the prefrontal cortex of mice seems to achieve this by creating a sort of multidimensional orthogonal neural space, where each dimension corresponds to one of these subjective elements
www.nature.com/articles/s41...
Very excited that this paper is out!
www.science.org/doi/full/10....
Led by the fabulous @dorsaamir.bsky.social with invaluable contributions from many awesome collaborators.
Another rigorous study calls into question the ability of animal models of alcohol use disorder to predict the discovery and development of new drug treatments.
Kudos to the authors for this solid, albeit negative, translational work!
www.nature.com/articles/s41...
Check out the chapter @orielf.bsky.social & I wrote, "Emotion and Choice: The Integral Role of Emotion in Constructing Value", in the new volume Neuroeconomics: Core Topics and Current Directions, edited by @dvsmith.bsky.social @thepsychologist.bsky.social & @dfareri.bsky.social
doi.org/10.1007/978-...