If you don't have the capacity to distinguish between what's true and what's not, your truths are just as incidental as your lies.
02.03.2025 03:54 · 3 likes, 0 reposts, 0 replies, 0 quotes
@notmahi.bsky.social
Building generally intelligent robots that *just work* everywhere, out of the box, at Berkeley AI Research (BAIR) and Meta FAIR. Previously at NYU Courant and MIT, and a visiting researcher at Meta AI. https://mahis.life/
Reading comprehension is an important but easily overlooked quality IMO
01.03.2025 15:22 · 3 likes, 0 reposts, 1 reply, 0 quotes
Ever struggled with multi-sensor data from cameras, depth sensors, and other custom sensors? Meet AnySense, an iPhone app for effortless data acquisition and streaming. Working with multimodal sensor data will never be a chore again!
26.02.2025 16:49 · 5 likes, 2 reposts, 1 reply, 0 quotes
We just released AnySense, an iPhone app for effortless data acquisition and streaming for robotics. We leverage Apple's development frameworks to record and stream:
1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
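Streams like these arrive at different rates, so downstream tooling typically has to time-align them. As a minimal sketch (the `Frame` layout and `align_to_frames` helper are hypothetical illustrations, not AnySense's actual API), nearest-timestamp alignment of external sensor samples to camera frames might look like:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: float      # capture timestamp in seconds
    rgb: object   # RGB image payload (placeholder)
    depth: object  # depth map payload (placeholder)
    pose: tuple   # camera pose, e.g. (x, y, z, qx, qy, qz, qw)

def align_to_frames(frames, sensor_samples):
    """Attach each (timestamp, value) sensor sample to the nearest frame.

    Returns a dict mapping frame index -> list of sensor values, so a
    Bluetooth sensor running at a different rate than the camera can be
    indexed per video frame.
    """
    aligned = {i: [] for i in range(len(frames))}
    for t, value in sensor_samples:
        nearest = min(range(len(frames)), key=lambda i: abs(frames[i].t - t))
        aligned[nearest].append(value)
    return aligned
```

For example, with frames at t = 0.0, 0.1, 0.2 s, a sample stamped 0.19 s lands on the third frame.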
Just found a new winner for the most hype-baiting, unscientific plot I have seen. (From the recent Figure AI release)
20.02.2025 22:01 · 37 likes, 6 reposts, 1 reply, 1 quote
One reason to be intolerant of misleading hype in tech and science is that tolerating the small lies and deception is how you get tolerance of big lies
20.02.2025 18:17 · 185 likes, 27 reposts, 4 replies, 0 quotes
Can we extend the power of world models beyond just online model-based learning? Absolutely!
We believe the true potential of world models lies in enabling agents to reason at test time.
Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
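To make "reasoning at test time" concrete, here is a rough sketch of planning with a world model via random shooting in a learned latent space. Everything here (`dynamics`, `encode`, the sampling scheme, all parameters) is a generic illustration of the idea, not the DINO-WM implementation:

```python
import numpy as np

def plan_with_world_model(dynamics, encode, obs, goal_obs, horizon=5,
                          n_samples=256, action_dim=2, rng=None):
    """Zero-shot planning by random shooting over a learned latent space.

    dynamics(z, a) -> next latent; encode(obs) -> latent.
    Samples candidate action sequences, rolls each out with the world
    model, and returns the sequence whose final latent lands closest to
    the encoded goal.
    """
    rng = np.random.default_rng(rng)
    z0, z_goal = encode(obs), encode(goal_obs)
    candidates = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, action_dim))
    costs = np.empty(n_samples)
    for i, seq in enumerate(candidates):
        z = z0
        for a in seq:
            z = dynamics(z, a)  # imagined rollout, no environment steps
        costs[i] = np.linalg.norm(z - z_goal)
    return candidates[np.argmin(costs)]
```

In an MPC loop one would execute only the first action of the returned sequence and replan; no reward model or environment interaction is needed at planning time, only the encoder and dynamics.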
My advisor warned me that academics trend towards bitterness. He encouraged me to intentionally resist this, remember where I came from, and never forget the privilege of getting to spend a life working with knowledge and ideas. He, too, said that bitterness and resentment are easy.
04.01.2025 16:17 · 252 likes, 38 reposts, 2 replies, 5 quotes
This is super helpful for a non-sim person, thanks for the perspective!
21.12.2024 00:05 · 12 likes, 1 repost, 0 replies, 0 quotes
New paper! We show that by using a keypoint-based image representation, robot policies become robust to different object types and background changes.
We call this method Prescriptive Point Priors for robot Policies or P3-PO in short. Full project is here: point-priors.github.io
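A minimal sketch of why a keypoint representation buys this robustness (the helper below is a hypothetical illustration, not the P3-PO code): the policy consumes only normalized keypoint coordinates, so background pixels never enter its input at all.

```python
def keypoints_to_policy_input(keypoints, image_size):
    """Flatten tracked 2D keypoints into a normalized policy input vector.

    keypoints: list of (u, v) pixel coordinates of semantically chosen
    points on the object; image_size: (width, height). Because the policy
    only ever sees these coordinates, changing the background or swapping
    in a differently textured object with the same keypoints leaves the
    input unchanged.
    """
    w, h = image_size
    flat = []
    for u, v in keypoints:
        flat.extend([u / w, v / h])  # normalize to [0, 1]
    return flat
```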
Modern policy architectures are unnecessarily complex. In our #NeurIPS2024 project called BAKU, we focus on what really matters for good policy learning.
BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and importantly fully open-source!
Since we are nearing the end of the year, I'll revisit some of our work I'm most excited about from the last year, and maybe give a sneak peek of what we are up to next.
To start off, Robot Utility Models, which enables zero-shot deployment. In the video below, the robot hasn't seen these doors before.
I agree, the paper could definitely be clearer. My assumption is that "same training loop" implies "all else being equal", but that can be totally incorrect.
05.12.2024 23:56 · 1 like, 0 reposts, 0 replies, 0 quotes
AFAIK it's the same dataset; they just use the larger pretrained model as the teacher model. Screenshot is from the DINOv2 paper, section 5: arxiv.org/abs/2304.07193
05.12.2024 22:43 · 1 like, 0 reposts, 1 reply, 0 quotes
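The teacher-student setup mentioned above can be sketched with a generic feature-distillation objective, here a plain cosine loss between student and frozen-teacher embeddings. This is only an analogy for intuition; the actual DINOv2 objective is a DINO-style cross-entropy over prototype scores, not this loss:

```python
import numpy as np

def distill_loss(student_feats, teacher_feats):
    """1 - mean cosine similarity between student and teacher embeddings.

    Both inputs have shape (batch, dim); the teacher is frozen, so
    gradients would flow only through the student in a real trainer.
    """
    s = student_feats / np.linalg.norm(student_feats, axis=-1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=-1, keepdims=True)
    return 1.0 - float(np.mean(np.sum(s * t, axis=-1)))
```

Perfectly aligned features give a loss of 0; orthogonal features give 1.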
I'd like to introduce what I've been working on at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration.
Check it out: github.com/hello-robot/...
Thread ->
Turns out Aria glasses are a very useful tool for demonstrating actions to robots: based on egocentric video, we track dynamic changes in a scene graph and use that representation to replay or plan interactions for robots
Project: behretj.github.io/LostAndFound/
Paper: arxiv.org/abs/2411.19162
Video: youtu.be/xxMsaBSeMXo
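The replay idea above can be sketched as a diff over scene-graph snapshots (a hypothetical toy representation, not the paper's code): objects whose locations changed between two egocentric observations become pick-and-place actions for the robot.

```python
def diff_scene_graphs(before, after):
    """Compare two {object: location} scene-graph snapshots and emit
    pick-and-place actions that would replay the observed changes."""
    actions = []
    for obj, loc in after.items():
        if obj in before and before[obj] != loc:
            actions.append(("pick", obj, before[obj]))
            actions.append(("place", obj, loc))
    return actions
```

So if the glasses observe a cup moving from the table to the sink while the book stays put, only the cup generates a pick and a place.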
A reminder for folks in financial need: many PhD applications have application fee waivers, those waivers are not super onerous, and they are usually granted (at least at the two schools I'm familiar with). Please take advantage of them.
01.12.2024 16:49 · 26 likes, 10 reposts, 1 reply, 0 quotes
I wish it were only podcasts; I'm seeing form steamroll over content in academic papers more and more these days.
01.12.2024 16:42 · 1 like, 0 reposts, 0 replies, 0 quotes
[emoji-only post]
30.11.2024 15:38 · 0 likes, 0 reposts, 1 reply, 0 quotes
Would like to be added!
28.11.2024 11:49 · 1 like, 0 reposts, 1 reply, 0 quotes
I collected some folk knowledge for RL and stuck it in my lecture slides a couple of weeks back: web.mit.edu/6.7920/www/l... See Appendix B... sorry, I know, the appendix of a lecture slide deck is not the best for discovery. Suggestions very welcome.
27.11.2024 13:36 · 113 likes, 17 reposts, 3 replies, 3 quotes
On one of the first projects I supervised in my PhD, a student repeatedly ignored suggestions to commit and then accidentally deleted the project at the end of the semester. Please use git! There are even "fun" games you can use to learn it:
learngitbranching.js.org
We took a bunch of them in robot learning and made a tutorial about them! I tried to put everything I find myself regularly telling my students somewhere in there. I really think it can save anywhere from days to months of a new grad student's life.
supervised-robot-learning.github.io
Interesting article but the author drank the Kool-Aid and never sought out other viewpoints: "Foundation models like GPT-4 have largely subsumed [previous] models that help robots with planning and vision, and locomotion and dexterity will probably soon be subsumed, too."
26.11.2024 16:40 · 27 likes, 4 reposts, 1 reply, 0 quotes
I'll be presenting AnySkin at the Stanford Center for Design Research today at 2pm! Stop by for a chat and try the sensor out!
More info: any-skin.github.io
A reminder that many feeds here are non-algorithmic, so reposting is more helpful than it is on Twitter
25.11.2024 15:59 · 23 likes, 4 reposts, 1 reply, 0 quotes
I was presenting this at NEMS, yes :)
24.11.2024 18:33 · 0 likes, 0 reposts, 0 replies, 0 quotes
This week's #PaperILike is "Robots for Humanity: In-Home Deployment of Stretch RE2" (Ranganeni et al., HRI 2024).
This is probably the most inspiring robot video/demo that I've ever seen.
Video: www.youtube.com/watch?v=K2U7...
Paper: dl.acm.org/doi/abs/10.1...