
Valentina Giunchiglia

@valegiunca.bsky.social

PhD student at Imperial College London. Visiting Researcher at Harvard University. Multimodal foundation models for precision medicine.

25 Followers  |  13 Following  |  11 Posts  |  Joined: 05.12.2024

Latest posts by valegiunca.bsky.social on Bluesky

LOGML 2025 London Geometry and Machine Learning Summer School, July 7-11 2025

🌟Applications open: LOGML 2025🌟

πŸ‘₯Mentor-led projects, expert talks, tutorials, socials, and a networking night
✍️Application form: logml.ai
πŸ”¬Projects: www.logml.ai/projects.html
πŸ“…Apply by 6th April 2025
βœ‰οΈQuestions? logml.committee@gmail.com

#MachineLearning #SummerSchool #LOGML #Geometry

11.03.2025 15:24 β€” πŸ‘ 20    πŸ” 9    πŸ’¬ 2    πŸ“Œ 1

The organisation and scientific advisory committees: @simofoti.bsky.social, @valegiunca.bsky.social, @pragya-singh.bsky.social, @daniel-platt.bsky.social, Vincenzo Marco De Luca, Massimiliano Esposito, Arne Wolf, Zhengang Zhong, Rahul Singh

11.03.2025 15:24 β€” πŸ‘ 4    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0
LOGML 2025 London Geometry and Machine Learning Summer School, July 7-11 2025

Apply by the 16th February!

If you have any specific questions, contact: logml.committee@gmail.com

02.02.2025 12:58 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
LOGML 2025 London Geometry and Machine Learning Summer School, July 7-11 2025

We are currently recruiting mentors to lead up to 6 students in a week-long project at the intersection of geometry and ML. Mentors can be PhD students (beyond their first year), postdocs, or lecturers! Many projects result in top conference and journal publications. Mentors' expenses will be covered.

02.02.2025 12:58 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
LOGML 2025 London Geometry and Machine Learning Summer School, July 7-11 2025

LOGML (London Geometry and Machine Learning) summer school is back and we are looking for mentors!

@logml.bsky.social aims to bring together mathematicians and computer scientists to collaborate on problems at the intersection of geometry and ML.

More information is available at www.logml.ai.

02.02.2025 12:58 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

@simofoti.bsky.social @pragya-singh.bsky.social @valegiunca.bsky.social @daniel-platt.bsky.social

@mmbronstein.bsky.social @marinkazitnik.bsky.social

22.01.2025 13:00 β€” πŸ‘ 5    πŸ” 2    πŸ’¬ 0    πŸ“Œ 0

⭐️Mentor applications open⭐️

We're excited to announce that LOGML summer school will return to London, July 7-11 2025. We are seeking mentors to lead group projects at the intersection of geometry and machine learning. Find out more and apply:

logml.ai

22.01.2025 13:00 β€” πŸ‘ 13    πŸ” 6    πŸ’¬ 1    πŸ“Œ 2
ProCyon: A multimodal foundation model for protein phenotypes

[Figures 1-3 from the preprint]

ProCyon: A multimodal foundation model for protein phenotypes [new]

16.12.2024 05:57 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

ProCyon: A multimodal foundation model for protein phenotypes https://www.biorxiv.org/content/10.1101/2024.12.10.627665v1

16.12.2024 05:50 β€” πŸ‘ 1    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

@imperialcollegeldn.bsky.social
@harvard.edu
@kingscollegelondon.bsky.social
@imperialbrains.bsky.social

24.12.2024 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

I am happy to finally share ProCyon, a multimodal multiscale model that integrates protein sequences, structures, and natural language to predict and generate protein phenotypes.

Paper: www.biorxiv.org/content/10.1...
Blog post: kempnerinstitute.harvard.edu/research/dee...

24.12.2024 22:03 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

ProCyon: A multimodal foundation model for protein phenotypes https://www.biorxiv.org/content/10.1101/2024.12.10.627665v1 🧬πŸ–₯️πŸ§ͺ https://github.com/mims-harvard/ProCyon

18.12.2024 21:00 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

#Neuroscience #Imperial #Cognition #CognitiveNeuroscience

13.12.2024 12:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We tested it on 12 online tasks collected with Cognitron.

Compared to standard measures of RT and accuracy, IDoCT's measures of ability:

- have more interpretable latent cognitive factors
- are less sensitive to device
- have higher sensitivity and specificity

13.12.2024 12:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We tested the model on simulated data, and IDoCT could reliably recover ground-truth measures of trial difficulty, ability, and visuomotor delay.

13.12.2024 12:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

IDoCT comes with a nice set of features:

- Robust: Works with as few as 100 participants
- Efficient: Scales up inexpensively to > 100K participants
- Flexible: Can work with potentially any online task collecting trial-by-trial responses

13.12.2024 12:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

IDoCT derives specific estimates of ability and visuomotor delay from trial-by-trial measures of reaction time (RT) and accuracy, while also providing data-driven trial-difficulty scales that detect the most challenging aspects/dimensions of each task.

13.12.2024 12:13 β€” πŸ‘ 0    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Post image

🚨 It took two years but it finally happened!

Excited to share IDoCT - a novel computational model that disentangles the motor and cognitive components of participants' performance in online cognitive tasks - now published in npj Digital Medicine.

13.12.2024 12:13 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
