Alex Hernandez-Garcia's Avatar

Alex Hernandez-Garcia

@alexhergar.bsky.social

Assist. prof. at Université de Montréal and Mila · machine learning for science · climate change and health · open science · he/él/il #PalestinianLivesMatter 🍉 alexhernandezgarcia.github.io

450 Followers  |  251 Following  |  60 Posts  |  Joined: 20.10.2024

Latest posts by alexhergar.bsky.social on Bluesky

And I only want
the wave that rises
from the last sigh of a second
to carry me,
rocked,
to the next one

10.12.2025 21:05 — 👍 0    🔁 0    💬 0    📌 0
LeMat-GenBench: A Unified Evaluation Framework for Crystal Generative Models Generative machine learning (ML) models hold great promise for accelerating materials discovery through the inverse design of inorganic crystals, enabling an unprecedented exploration of chemical spac...

Happy to have contributed to and now finally share LeMat-GenBench, a new open benchmark + leaderboard for generative crystalline materials models! ⚛️✨

It provides standardised metrics for validity, stability, & much more. Already includes results for 12 models!

🔗 Paper: arxiv.org/abs/2512.04562
1/4

09.12.2025 17:05 — 👍 8    🔁 2    💬 1    📌 0
A room at Mila filled with students attending a panel discussion with three professors, inspired by Growing up in Science.

Since @neuripsconf.bsky.social and co. keep organising conferences in states that don't let most of our students in, and many of us try to reduce air travel, at @mila-quebec.bsky.social we keep organising alternative events for those who stay home.

This was a session inspired by Growing up in Science!

08.12.2025 20:46 — 👍 2    🔁 0    💬 0    📌 0

Very interesting and relevant work! Congrats!

08.12.2025 18:11 — 👍 1    🔁 0    💬 0    📌 0

1/4 Psyched to have our paper, 'Irresponsible AI: big tech's influence on AI research and associated impacts', at the Algorithmic Collective Action NeurIPS workshop today! This was a collaboration with @alexhergar.bsky.social, Alexandra Volokhova, and @dounia-kabakibo.bsky.social. Details below...

06.12.2025 16:17 — 👍 7    🔁 1    💬 1    📌 2

Thrilled to have contributed to "Irresponsible AI: big tech’s influence on AI research and associated impacts."
My first project outside physics, and I learned a lot! Check out our paper on arXiv or Ezekiel’s post for an overview, and visit our poster today at 11:15 am PT if you’re at NeurIPS in San Diego!

06.12.2025 19:29 — 👍 6    🔁 1    💬 0    📌 0

Yeah I agree

07.12.2025 19:29 — 👍 0    🔁 0    💬 0    📌 0

Not that I love it, but I've already switched to using Semantic Scholar more than Google.

06.12.2025 03:21 — 👍 5    🔁 1    💬 1    📌 0
Principal Investigator positions at ELLIS Institute Finland | ELLIS Institute Finland Call for new PIs in artificial intelligence and machine learning

🏹 Job alert: Principal Investigators in artificial intelligence and machine learning at @ellisinstitute.fi

📍 Helsinki 🇫🇮
📅 Apply by Jan 12th
🔗 https://www.ellisinstitute.fi/PI-recruit-2026

25.11.2025 11:05 — 👍 6    🔁 5    💬 0    📌 0
Howard University AI Panel (YouTube video by Tiera Tanksley)

I need everyone, esp anyone working in education or tech (but really everyone) to WATCH THIS CLIP of @drtanksley.bsky.social discussing the technologies infiltrating our schools & psyches and how she is addressing it with our young people. youtu.be/5mtcSL4S3HQ

22.11.2025 13:43 — 👍 598    🔁 297    💬 11    📌 55

I'd like to hire one or two graduate students for the autumn 2026 term.

I'm particularly eager to find students interested in working on active learning / Bayesian optimisation with generative models for scientific discovery applications.

If that resonates with you: mila.quebec/en/prospecti...

19.11.2025 20:34 — 👍 2    🔁 0    💬 0    📌 0

AI4Good Lab is a seven-week program that equips women and gender-diverse individuals with the skills necessary to develop their own machine learning projects.

Applications open now!

11.11.2025 23:55 — 👍 2    🔁 0    💬 0    📌 0

Do you still find generative flow networks (GFlowNets) a mysterious, weird kind of method?

Check out this interactive article about GFlowNets from collaborators at Johannes Kepler University Linz!

gfn-playground.caleydoapp.org

10.11.2025 15:16 — 👍 4    🔁 0    💬 0    📌 0

Yes, applications are open till December 1st. I recommend selecting more than one potential advisor whose research is well aligned with your interests to increase your chances.

05.11.2025 00:17 — 👍 0    🔁 0    💬 1    📌 0
Career Opportunities: Postdoctoral researcher in toxin susceptibility and evolution of resistance in insects (22517)

I'm looking to recruit a post-doc to help push forward our growing interests in insect ecotoxicology.
Apply here by Nov 30th!
(thanks for reposting)

career5.successfactors.eu/sfcareer/job...

16.10.2025 12:18 — 👍 30    🔁 37    💬 0    📌 1

Would you like to join Mila @mila-quebec.bsky.social as an MSc or PhD student?

Applications are open from now until Dec 1 for Fall 2026 admission!

Mila is a vibrant institute for deep learning research and its applications.

I am looking for students interested in active learning for scientific discoveries.

15.10.2025 14:07 — 👍 7    🔁 0    💬 1    📌 1

I would even omit "outside the scientific community" from the conditional.

12.10.2025 03:07 — 👍 6    🔁 0    💬 0    📌 0
‘There is no clear boundary between science and activism’ | NWO If you share your knowledge outside the scientific community, then you’re already exerting societal influence, according to Judi Mesman. This isn’t problematic for the prestige of science: ‘Pretending to be neutral doesn’t make sense to me.’

“If you share your knowledge outside the scientific community, then you’re already exerting societal influence, according to Judi Mesman. This isn’t problematic for the prestige of science: ‘Pretending to be neutral doesn’t make sense to me.’”

www.nwo.nl/en/there-is-...

11.10.2025 23:34 — 👍 830    🔁 191    💬 21    📌 13

A very well-matched couple of posts on my feed.

Links to them:
- bsky.app/profile/iris...
- bsky.app/profile/jane...

11.10.2025 16:28 — 👍 9    🔁 4    💬 1    📌 0

Free translation: Carbon offsets are, unsurprisingly, useless due to capitalism

Or: The master’s tools will never dismantle the master’s house

07.10.2025 13:41 — 👍 3    🔁 1    💬 0    📌 0
Small, old abandoned car with graffiti.

04.10.2025 21:54 — 👍 0    🔁 0    💬 0    📌 0

Please join me in congratulating Woman on her appointment

03.10.2025 11:02 — 👍 16534    🔁 2602    💬 404    📌 107

Montreal gems

02.10.2025 22:17 — 👍 3    🔁 0    💬 1    📌 0
"People think hope is wishful thinking, and actually hope is about action." Quote from the Wiser Than Me podcast with Julia Louis-Dreyfus.

"Only when our clever brain and our human heart work together in harmony can we achieve our true potential." Graphic by the Jane Goodall Institute of South Africa.

"You cannot get through a single day without having an impact on the world around you. What you do makes a difference, and you have to decide what kind of difference you want to make." Graphic by Arianna Huffington on Instagram.

My favourite Jane Goodall quotes... so many, so to the point, so wise. How different the world would be if enough people listened to her.

02.10.2025 02:35 — 👍 659    🔁 196    💬 14    📌 17
In this spirit of fraternity, hope and caution, we call upon your leadership to uphold the following principles and red lines to foster dialogue and reflection on how AI can best serve our entire human family:

    Human life and dignity: AI must never be developed or used in ways that threaten, diminish, or disqualify human life, dignity, or fundamental rights. Human intelligence – our capacity for wisdom, moral reasoning, and orientation toward truth and beauty – must never be devalued by artificial processing, however sophisticated. 

    AI must be used as a tool, not an authority: AI must remain under human control. Building uncontrollable systems or over-delegating decisions is morally unacceptable and must be legally prohibited. Therefore, development of superintelligence (as mentioned above) AI technologies should not be allowed until there is broad scientific consensus that it will be done safely and controllably, and there is clear and broad public consent.

    Accountability: only humans have moral and legal agency and AI systems are and must remain legal objects, never subjects. Responsibility and liability reside with developers, vendors, companies, deployers, users, institutes, and governments. AI cannot be granted legal personhood or “rights”. 

    Life-and-death decisions: AI systems must never be allowed to make life or death decisions, especially in military applications during armed conflict or peacetime, law enforcement, border control, healthcare or judicial decisions.


    Independent testing and adequate risk assessment must be required before deployment and throughout the entire lifecycle.
    Stewardship: Governments, corporations, and anyone else should not weaponize AI for any kind of domination, illegal wars of aggression, coercion, manipulation, social scoring, or unwarranted mass surveillance. 

    Responsible design: AI should be designed and independently evaluated to avoid unintentional and catastrophic effects on humans and society, for example through design giving rise to deception, delusion, addiction, or loss of autonomy.  

    No AI monopoly: the benefits of AI – economic, medical, scientific, social – should not be monopolized. 

    No Human Devaluation: design and deployment of AI should make humans flourish in their chosen pursuits, not render humanity redundant, disenfranchised, devalued or replaceable. 

    Ecological responsibility: our use of AI must not endanger our planet and ecosystems. Its vast demands for energy, water, and rare minerals must be managed responsibly and sustainably across the whole supply chain.

    No irresponsible global competition: We must avoid an irresponsible race between corporations and countries towards ever more powerful AI.


I was part of a working group on AI and Fraternity assembled by the Vatican. We met in Rome and worked on this over two days. I am happy to share the result of that intense effort: a Declaration we presented to the Pope and other government authorities

coexistence.global

23.09.2025 17:33 — 👍 283    🔁 106    💬 7    📌 13
Banner with a photo of the speaker, Alex Hernandez-Garcia, the dates of the event, 24 and 25 of September, and the slogans: J'y serai and I'm attending.

🎤 This week I’ll participate in ALL IN 2025, Canada’s largest AI event!

I’ll be on the panel: AI and Climate Change: Can AI Help Solve the Planet’s Biggest Challenges?

I hope it will be a good opportunity to discuss the impacts and opportunities of AI on Climate.

www.allinevent.ai #ALLIN2025

22.09.2025 20:42 — 👍 4    🔁 1    💬 0    📌 0

Wow, I thought the EU had regulations about labelling of advertising in the media, but I may be misremembering this.

22.09.2025 01:04 — 👍 1    🔁 0    💬 0    📌 0

Was this not clearly labelled as an ad???

21.09.2025 18:54 — 👍 1    🔁 0    💬 1    📌 0
Photo of the conference room hosting the Climate Mis/Disinformation Summit. Slide of the panel Governing the Climate and Information Crisis.

I'm attending the Climate Mis/Disinformation Summit at @uottawa.ca. The keynote by @katharinehayhoe.com was wonderful and the various panels are super interesting!

I'll add my humble two cents this afternoon in the panel on Digital Platforms, AI and the Climate Information Environment.

19.09.2025 17:25 — 👍 3    🔁 0    💬 0    📌 0

I'm kind of done with the review culture and system in machine learning...

- R1: good, accept!
- R2: everything is well done, but I don't like that. Bad score.
- R3: great, accept!
- R4: great, accept!

AC: One [loud] reviewer didn't like one thing, so reject.

18.09.2025 19:00 — 👍 3    🔁 0    💬 0    📌 0

@alexhergar is following 20 prominent accounts