Fwiw Tesla isn't the first to build an LFP plant in the US.
Looks like LGES started production in May. (I think it's a joint venture with GM)
Tesla just has such a huge marketing megaphone so you hear about them more
@joshmcclellan.bsky.social
I study generalization for reinforcement learning
Tesla has a lot more cash (big margins in 2020-2024, and it can sell stock, etc.), and got a head start on batteries.
But the other US automakers are working on this.
GM has a joint LFP plant set for 2027.
LGES already opened an LFP plant in the US.
GM's Ultium platform is also snazzy.
I heard someone once say that Tesla's best-selling product is its stock lol.
13.04.2025 17:20
I'm looking to hire a student researcher to work on an exciting project for 6 months at DeepMind Montreal.
Requirements:
- Full-time master's/PhD student
- Substantial expertise in multi-agent RL, ideally including publication(s)
- Strong Python coding skills
Is this you? Get in touch!
Super excited to share that our paper, Simplifying Deep Temporal Difference Learning, has been accepted as a spotlight at ICLR! My fab collaborator Matteo Gallici and I have written a three-part blog on the work, so stay tuned for that! :)
@flair-ox.bsky.social
arxiv.org/pdf/2407.04811
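For readers new to the topic: temporal difference learning bootstraps value estimates from successor states. A minimal tabular TD(0) sketch on a 5-state random walk (my own illustrative toy, not the paper's deep-TD algorithm):

```python
import numpy as np

# Tabular TD(0) on a random-walk chain: states 1..5, terminals 0 and 6,
# reward 1.0 only on reaching the right terminal.
n_states = 5
V = np.zeros(n_states + 2)   # value estimates; terminals stay at 0
alpha, gamma = 0.1, 1.0
rng = np.random.default_rng(0)

for _ in range(1000):
    s = 3  # every episode starts in the middle state
    while s not in (0, n_states + 1):
        s_next = s + rng.choice([-1, 1])
        r = 1.0 if s_next == n_states + 1 else 0.0
        # TD(0): nudge V(s) toward the bootstrapped target r + gamma*V(s').
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

# True values for this walk are 1/6, 2/6, ..., 5/6 for states 1..5,
# so V should end up increasing from left to right.
```

The paper's contribution is about making the deep, function-approximation version of this update simple and stable; the toy above only shows the core bootstrapping step.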
NOETIX robot: 44 lbs, under 4 feet tall, 18 DoF, Jetson on board. Starting at $5.5k. At this rate I am fairly convinced there will be robots absolutely everywhere within 5 years, although probably in more form factors than just humanoids.
15.03.2025 20:23
OpenAI has many problems, but I can think of few outcomes worse than Musk gaining control over it.
He will continue to drum up fear about rogue AI being an existential threat to justify his consolidation of power, and use it to disenfranchise people.
finance.yahoo.com/news/elon-mu...
We've built a simulated driving agent that we trained on 1.6 billion km of driving with no human data.
It is SOTA on every planning benchmark we tried.
In self-play, it goes 20 years between collisions.
1. Today the NIH director issued a new directive slashing overhead rates to 15%.
I want to provide some context on what that means and why it matters.
grants.nih.gov/grants/guide...
A Song of Ice and Fire! I especially love the audiobooks
A couple of the early Witcher books are good too
We will be presenting this tomorrow at NeurIPS in the evening poster session! Come stop by to chat!
13.12.2024 03:05
This robustness stems directly from its symmetry guarantees, allowing it to lose less performance when adapting to new scenarios.
If you'll be at NeurIPS, come visit our poster next week to learn more and discuss the exciting future of MARL!
E2GN2 also shines when it comes to generalization. In tests where agents are trained on one SMACv2 scenario and then tested on a different one, E2GN2 demonstrates up to 5x greater performance than standard approaches.
06.12.2024 15:20
How much better is E2GN2? We see a remarkable 2x-5x improvement in sample efficiency over standard graph neural networks on the challenging SMACv2 benchmark. This means faster training times, leading to more rapid progress in MARL research.
06.12.2024 15:20
Imagine teaching a robot to play soccer. If it learns to pass the ball to the right, it should easily grasp how to pass to the left due to the inherent symmetries. E2GN2 bakes this concept of symmetry into the network architecture, allowing agents to learn more effectively.
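The soccer intuition above can be checked numerically. Here is a toy sketch of the equivariance property itself (my own illustration, not E2GN2's actual architecture): a hand-built policy that moves toward the centroid of teammates transforms like a vector, so reflecting the world reflects the action.

```python
import numpy as np

def equivariant_action(agent_pos, teammate_pos):
    # Toy equivariant policy: move toward the centroid of teammates.
    # Relative positions transform like vectors under rotations and
    # reflections, so the output transforms the same way (equivariance).
    rel = teammate_pos - agent_pos
    return rel.mean(axis=0)

rng = np.random.default_rng(0)
agent = rng.normal(size=2)
mates = rng.normal(size=(3, 2))

R = np.array([[-1.0, 0.0], [0.0, 1.0]])  # reflection across the y-axis

a = equivariant_action(agent, mates)
a_reflected = equivariant_action(agent @ R.T, mates @ R.T)

# Equivariance check: reflecting the inputs reflects the action.
assert np.allclose(a @ R.T, a_reflected)
```

A learned MLP generally fails this check; an equivariant architecture satisfies it by construction, which is where the sample-efficiency gains come from.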
06.12.2024 15:20
Traditional neural networks (i.e., MLPs, GNNs) learn input/output relationships with few constraints, structure, or priors on the learned policies. These generic architectures lack a strong inductive bias, making them inefficient in terms of the training samples required.
06.12.2024 15:20
Our work focuses on addressing the challenges of sample inefficiency and poor generalization in Multi-Agent Reinforcement Learning (MARL), a crucial area of AI research with applications in robotics, game playing, and more.
06.12.2024 15:20
I'm excited to share that our paper, "Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance," has been accepted to NeurIPS 2024!
#NeurIPS #MARL #AI #ReinforcementLearning #MachineLearning #Equivariance #GraphNeuralNetworks