And I only want
the wave that rises
from the last sigh of a second
to carry me,
rocked,
to the next one
@alexhergar.bsky.social
Assistant professor at Université de Montréal and Mila · machine learning for science · climate change and health · open science · he/él/il #PalestinianLivesMatter 🍉 alexhernandezgarcia.github.io
Happy to have contributed to and now finally share LeMat-GenBench, a new open benchmark + leaderboard for generative crystalline materials models! ⚛️✨
It provides standardised metrics for validity, stability, & much more. Already includes results for 12 models!
🔗 Paper: arxiv.org/abs/2512.04562
1/4
A room at Mila filled with students attending a panel discussion with three professors, inspired by Growing up in Science.
Since @neuripsconf.bsky.social and co. keep organising conferences in states that don't let most of our students in and many try to reduce air travel, at @mila-quebec.bsky.social we keep organising alternative events for those who stay home.
This was a session inspired by Growing up in Science!
Very interesting and relevant work! Congrats!
08.12.2025 18:11 — 👍 1 🔁 0 💬 0 📌 0
1/4 Psyched to have our paper, 'Irresponsible AI: big tech's influence on AI research and associated impacts', at the Algorithmic Collective Action NeurIPS workshop today! This was a collaboration with @alexhergar.bsky.social, Alexandra Volokhova, and @dounia-kabakibo.bsky.social. Details below...
06.12.2025 16:17 — 👍 7 🔁 1 💬 1 📌 2
Thrilled to have contributed to "Irresponsible AI: big tech's influence on AI research and associated impacts."
My first project outside physics, and I learned a lot! Check out our paper on arXiv or Ezekiel's post for an overview, and visit our poster today at 11:15 am PT if you're at NeurIPS in San Diego!
Yeah I agree
07.12.2025 19:29 — 👍 0 🔁 0 💬 0 📌 0
Not that I love it, but I've already switched to using Semantic Scholar more than Google.
06.12.2025 03:21 — 👍 5 🔁 1 💬 1 📌 0
🏹 Job alert: Principal Investigators in artificial intelligence and machine learning at @ellisinstitute.fi
📍 Helsinki 🇫🇮
📅 Apply by Jan 12th
🔗 https://www.ellisinstitute.fi/PI-recruit-2026
I need everyone, esp anyone working in education or tech (but really everyone) to WATCH THIS CLIP of @drtanksley.bsky.social discussing the technologies infiltrating our schools & psyches and how she is addressing it with our young people. youtu.be/5mtcSL4S3HQ
22.11.2025 13:43 — 👍 598 🔁 297 💬 11 📌 55
I'd like to hire one or two graduate students for the autumn 2026 term.
I'm particularly eager to find students interested in working on active learning / Bayesian optimisation with generative models for scientific discovery applications.
If that resonates with you: mila.quebec/en/prospecti...
AI4Good Lab is a seven-week program that equips women and gender-diverse individuals with the skills necessary to develop their own machine learning projects.
Applications open now!
Do you still find generative flow networks (GFlowNets) a mysterious, weird kind of method?
Check out this interactive article about GFlowNets from collaborators at Johannes Kepler University Linz!
gfn-playground.caleydoapp.org
Yes, applications are open till December 1st. I recommend selecting more than one potential advisor whose research is well aligned with your interest to increase your chances.
05.11.2025 00:17 — 👍 0 🔁 0 💬 1 📌 0
I'm looking to recruit a post-doc to help push forward our growing interests in insect ecotoxicology.
Apply here by Nov 30th!
(thanks for reposting)
career5.successfactors.eu/sfcareer/job...
Would you like to join Mila @mila-quebec.bsky.social as an MSc or PhD student?
Application from now until Dec 1 for Fall 2026 admission!
Mila is a vibrant institute for deep learning research and its applications.
I am looking for students interested in active learning for scientific discoveries.
I would even omit "outside the scientific community" from the conditional.
12.10.2025 03:07 — 👍 6 🔁 0 💬 0 📌 0
"If you share your knowledge outside the scientific community, then you're already exerting societal influence, according to Judi Mesman. This isn't problematic for the prestige of science: 'Pretending to be neutral doesn't make sense to me.'"
www.nwo.nl/en/there-is-...
Very well matched couple of posts on my feed.
Links to them:
- bsky.app/profile/iris...
- bsky.app/profile/jane...
Free translation: Carbon offsets are, unsurprisingly, useless because of capitalism
Or: The master’s tools will never dismantle the master’s house
Small, old abandoned car with graffiti.
04.10.2025 21:54 — 👍 0 🔁 0 💬 0 📌 0
Please join me in congratulating Woman on her appointment
03.10.2025 11:02 — 👍 16534 🔁 2602 💬 404 📌 107
Montreal gems
02.10.2025 22:17 — 👍 3 🔁 0 💬 1 📌 0
"People think hope is wishful thinking, and actually hope is about action." Quote from the Wiser Than Me podcast with Julia Louis-Dreyfus.
"Only when our clever brain and our human heart work together in harmony can we achieve our true potential." Graphic by the Jane Goodall Institute of South Africa.
"You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make." Graphic by Arianna Huffington on Instagram.
My favourite Jane Goodall quotes....so many, so to the point, so wise. How different the world would be if enough people listened to her.
02.10.2025 02:35 — 👍 659 🔁 196 💬 14 📌 17
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:
- Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated.
- AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligent AI technologies should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.
- Accountability: only humans have moral and legal agency; AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutes, and governments. AI cannot be granted legal personhood or "rights".
- Life-and-death decisions: AI systems must never be allowed to make life-or-death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare or judicial decisions. Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
- Stewardship: governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance.
- Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.
- No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized.
- No human devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable.
- Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.
- No irresponsible global competition: we must avoid an irresponsible race between corporations and countries towards ever more powerful AI.
I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities
coexistence.global
Banner with a photo of the speaker, Alex Hernandez-Garcia, and the dates of the event, 24 and 25 of September and the slogans: J'y serai and I'm attending.
🎤 This week I’ll participate at ALL IN 2025, Canada’s largest AI event!
I'll be on the panel: AI and Climate Change: Can AI Help Solve the Planet's Biggest Challenges?
I hope it will be a good opportunity to discuss the impacts and opportunities of AI on Climate.
www.allinevent.ai #ALLIN2025
Wow, I thought the EU had regulations about labelling of advertising in the media, but I may be misremembering this.
22.09.2025 01:04 — 👍 1 🔁 0 💬 0 📌 0
Was this not clearly labelled as an ad???
21.09.2025 18:54 — 👍 1 🔁 0 💬 1 📌 0
Photo of the conference room hosting the Climate Mis/Disinformation Summit. Slide of the panel Governing the Climate and Information Crisis.
I'm attending the Climate Mis/Disinformation Summit at @uottawa.ca. The keynote by @katharinehayhoe.com was wonderful and the various panels are super interesting!
I'll add my humble two cents this afternoon in the panel on Digital Platforms, AI and the Climate Information Environment.
I'm kind of done with the review culture and system in machine learning...
- R1: good, accept!
- R2: everything is well done, but I don't like that. Bad score.
- R3: great, accept!
- R4: great, accept!
AC: One [loud] reviewer didn't like one thing, so reject.