We recently released new work, Society of Agents and Economics. Check out the blog below.
05.11.2025 17:49
@gaganbansal.bsky.social
Senior Researcher at Microsoft Research | Human-AI Interaction | Building AutoGen at Microsoft
Version 0.4.0.dev13 is here!
The release removes previously deprecated features, so ensure your code runs without warnings on dev12 before upgrading.
An initial migration guide is available: microsoft.github.io/autogen/0.4....
We're nearing the full 0.4.0 release!
AutoGen is now on BlueSky!
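A minimal way to verify the "no warnings on dev12" condition, assuming the deprecation notices are emitted as standard Python DeprecationWarnings (this is generic Python warning handling, not an AutoGen-specific API):

```python
# Minimal check on 0.4.0.dev12: promote deprecation warnings to errors so any
# code path that still relies on a deprecated (soon-removed) feature fails loudly.
import warnings

warnings.simplefilter("error", DeprecationWarning)

# ... import and exercise your AutoGen-based code here as usual ...
```

Equivalently, from the command line: python -W error::DeprecationWarning your_script.py (where your_script.py is a placeholder for your own entry point).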
18.12.2024 02:27
We are following Russell and Norvig's definition, as mentioned in the introduction.
04.12.2024 04:03
Joint work with my wonderful colleagues:
@jennwv.bsky.social
Dan Weld
Saleema Amershi
@erichorvitz.bsky.social
@adamfourney.bsky.social
Hussein Mozannar
Victor Dibia
#AIAgents #LLMs #TechNews
5/
We're calling on researchers and practitioners to prioritize these issues and enhance transparency, control, and trust in AI agents! Read full details at microsoft.com/en-us/resear...
4/
Why does this matter?
Without proper grounding, we risk safety failures, loss of user control, and ineffective collaboration. Trust and transparency in AI systems hinge on addressing these challenges. We illustrate each challenge with examples.
3/
Some challenges focus on how agents can convey necessary information to help users form accurate mental models (A1-5). Others address enabling users to express their goals, preferences, and constraints to guide agent behavior (U1-3). We also cover several overarching issues (X1-4).
2/
New paper!
Generative AI agents are powerful but complex: how do we design them for transparency and human control?
At the heart of this challenge is establishing common ground, a concept from human communication. We identify 12 key challenges in improving common ground between humans & agents.