I agree. This is really tricky, especially since we didn't have an open call. Biases remain even with human evaluation.
Contacted researchers were asked to write an application, and the final selection came from there. But the LLM+ML models helped us pinpoint a select group of people.
30.08.2025 20:49 —
👍 0
🔁 0
💬 0
📌 0
The purpose of using LLMs is to scan through loads of papers and label them according to previously human-defined labels from a smaller set of relevant papers. Then other ML models are used on top to 'predict' and give an indication of what is promising and what fits our mission.
30.08.2025 20:40 —
👍 0
🔁 0
💬 0
📌 0
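The two-stage pipeline described in that message could be sketched roughly as follows. This is a minimal illustration only, assuming scikit-learn, a toy human-labelled seed set, and a simple TF-IDF classifier standing in for the second-stage ML models; in the described workflow an LLM would first propagate the human-defined labels across a much larger corpus:

```python
# Illustrative sketch of the two-stage idea: a small human-labelled seed
# set, then a lightweight ML model scoring unseen papers for mission fit.
# The data and model choice here are assumptions, not the real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small human-labelled seed set (1 = fits the climate-innovation mission).
seed_abstracts = [
    "Novel perovskite solar cell with improved conversion efficiency",
    "Electrocatalyst for low-cost green hydrogen production",
    "Survey of medieval manuscript preservation techniques",
    "Deep learning for protein folding in drug discovery",
]
seed_labels = [1, 1, 0, 0]

# Second stage stand-in: a classifier over TF-IDF features. In the
# described workflow, an LLM would first expand the labelled set.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_abstracts, seed_labels)

# Score a new, unlabelled paper as an indication of mission fit;
# a human would still verify any paper surfaced this way.
new_paper = ["Carbon capture membrane scalable for industrial flue gas"]
score = model.predict_proba(new_paper)[0][1]
print(f"mission-fit score: {score:.2f}")
```

The probability output is only a ranking signal for human reviewers, matching the constant-human-verification point made elsewhere in the thread.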
To expand on this, the purpose of the group is to find climate innovation in scientific papers, and to be proactive in contacting researchers, saying 'hey! we think your paper tackles climate change due to X and has the potential to be commercialised'.
30.08.2025 20:35 —
👍 0
🔁 0
💬 0
📌 0
Hey Jase, I have a high-level blog post I wrote a couple of months ago if that's useful. We are still working on validating this process in the real world, as we just awarded our first cohort.
Can't say whether this is more effective, as our remit and mission are specific to finding climate innovation.
30.08.2025 20:28 —
👍 1
🔁 0
💬 1
📌 0
Hi guys, as one of the people mentioned in the article, I'm happy to clarify any questions about the workflow. There is constant human verification throughout the process of finding research and researchers. I would not trust a fully automated LLM system, and that is mentioned at the end of the article.
30.08.2025 19:08 —
👍 0
🔁 0
💬 1
📌 0