Exciting news! An article on India BioImaging (IBI) has been published in "Microscopy and Analysis" (@wileylifesci.bsky.social), Issue 2/2026.
Featured under "Building the India Bioimaging Community," it highlights IBI's vision, activities, and growing impact in strengthening bioimaging across India.
04.03.2026 17:22
Congratulations!
02.03.2026 21:16
Excited to share our new paper (CVPR 2026): "MuViT: Multi-Resolution Vision Transformers for Learning Across Scales in Microscopy", which enables local predictions to use global context.
Great work led by @albertdm.bsky.social and another fun collab with @gioelelamanno.bsky.social! @scadsai.bsky.social
02.03.2026 13:42
It was super cool to hear @alex-krull.bsky.social talk about his research at @humantechnopole.bsky.social. Alex never fails to inspire his audience.
11.02.2026 10:46
Yeah, exactly! I think the training objective (minimizing the drift between p and q) will inherently prioritize perception (low FID) over distortion, and with one-shot prediction the chance of hallucination looks high (similar to GANs). But let's see!
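To make the perception-distortion point concrete, here is a toy numpy sketch (illustrative only, all names my own): for a bimodal posterior over {-1, +1}, the distortion-optimal MMSE estimate is ~0, which never actually occurs under the posterior, while any single posterior sample stays on the data manifold at the cost of higher squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal posterior: the true signal is -1 or +1 with equal probability.
samples = rng.choice([-1.0, 1.0], size=10_000)

# MMSE (distortion-optimal) point estimate = posterior mean, ~0 here.
mmse = samples.mean()

# Compare expected squared error of the MMSE estimate vs. a single
# posterior sample, against independent draws of the "truth".
truth = rng.choice([-1.0, 1.0], size=10_000)
err_mmse = np.mean((truth - mmse) ** 2)       # ~1.0
err_sample = np.mean((truth - samples) ** 2)  # ~2.0 (independent pairs)

print(err_mmse, err_sample)
# A posterior sample is "perceptual" (always on the manifold {-1, +1})
# but has higher distortion; the MMSE estimate has lower distortion but
# is off-manifold (0 is never a valid signal).
```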
08.02.2026 14:51
Haha, true! Aside from these, I find the idea of training-time distribution evolution intriguing. Still trying to understand the implications of drifting models for inverse problems, though :)
07.02.2026 18:55
π€©
19.01.2026 14:00
YouTube video by Jia-Bin Huang
How Residual Connections Are Getting an Upgrade [mHC]
Residual connections are adopted in virtually every deep learning model.
But can we improve them further? Hyper-connections are an exciting recent exploration that generalizes residual connections.
Check out the video explaining manifold-constrained hyper-connections!
youtu.be/jYn_1PpRzxI
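As a rough intuition for the idea in the video, here is a toy numpy sketch (my own simplification, not the paper's formulation): a standard residual connection keeps one stream, y = x + f(x), while a hyper-connection keeps n parallel residual streams, with learnable weights that mix the streams into the sub-layer input and re-mix the streams themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 4  # feature dim, number of residual streams

def f(x):
    # Stand-in for a transformer sub-layer (any function R^d -> R^d).
    return np.tanh(x)

# Standard residual connection: one stream, y = x + f(x).
def residual(x):
    return x + f(x)

# Toy hyper-connection: n parallel residual streams. A learnable vector
# mixes streams into the layer input ("depth" connection); a learnable
# n x n matrix re-mixes the streams ("width" connection). With n=1 and
# all weights fixed to 1 this reduces to the standard residual above.
alpha = rng.normal(size=n)                       # stream -> layer-input weights
beta = rng.normal(size=n)                        # layer-output -> stream weights
W = np.eye(n) + 0.01 * rng.normal(size=(n, n))   # stream-mixing matrix

def hyper_connection(H):          # H: (n, d) stack of residual streams
    layer_in = alpha @ H          # (d,) weighted combination of streams
    out = f(layer_in)             # (d,) sub-layer output
    return W @ H + np.outer(beta, out)  # re-mixed streams + broadcast output

x = rng.normal(size=d)
H = np.tile(x, (n, 1))            # initialize all streams with the input
print(residual(x).shape, hyper_connection(H).shape)  # (8,) (4, 8)
```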
12.01.2026 22:11
Image of the Human Technopole
https://it.wikipedia.org/wiki/Human_Technopole#/media/File:Human_Technopole_Milano.jpg
I'm super excited to announce that I will join the Human Technopole @humantechnopole.bsky.social in Milan in April 2026 as a Research Group Leader.
05.01.2026 11:49
Deep Learning for Microscopy Image Analysis
Topics: The following will be covered extensively during lectures, exercises, and project work: image denoising and restoration (fully supervised and self-supervised), image translation (e.g.,
Alarm!!!
AI/ML course for microscopy image analysis!
In 2026 at Janelia (@hhmijanelia.bsky.social), no tuition, housing and meals provided! Isn't that borderline unbelievable?!?
20 students, ~14 TAs and lecturers
Course dates: June 4-18, 2026
Application deadline: Jan 15, 2026
Please share!!
www.janelia.org/you-janelia/...
28.12.2025 19:05
Welcome to @humantechnopole.bsky.social, Jan!
19.12.2025 02:30
Okay, maybe I didn't pay much attention to this before, but there are some elegant works that truly amaze me. This is one of them: a task-agnostic setup built on flow matching that just works across problems. I really like this work.
04.12.2025 15:09
Rich Sutton's invited talk at @neuripsconf.bsky.social presented the OaK architecture:
a "scaffold" for superintelligent agents in which human inputs (at design time) are deprecated.
03.12.2025 17:36
We moved the AI@MBL course "Deep Learning for Microscopy Image Analysis" to HHMI Janelia (@hhmijanelia.bsky.social).
Join us for two weeks of intense lectures, exercises, and hands-on project work!
Course dates: June 4-18 2026
Application by: January 15 2026
www.janelia.org/you-janelia/...
17.11.2025 19:44
Congratulations!
15.11.2025 18:34
DeepInverse Joins the PyTorch Ecosystem: the library for solving imaging inverse problems with deep learning β PyTorch
DeepInverse is now part of the official PyTorch Landscape!
We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.
pytorch.org/blog/deepinv...
05.11.2025 17:31
Anirban Ray, Vera Galinova, Florian Jug
ResMatching: Noise-Resilient Computational Super-Resolution via Guided Conditional Flow Matching
https://arxiv.org/abs/2510.26601
31.10.2025 06:41
Needless to say! And much appreciated.
27.10.2025 21:00
Super cool work!
27.10.2025 19:13
"We may not win every battle, but we will win the war." Such an apt characterization of posterior samplers: each posterior sample fights its own battle against noise and degradation; some win, some lose. But the MMSE estimate wins the war.
#iykyk #ImageRestoration
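The battle/war metaphor can be checked numerically. A minimal numpy sketch (a toy Gaussian denoising posterior, all names my own): individual posterior samples incur the posterior variance as extra squared error, while their average, the MMSE estimate, always does at least as well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: x ~ N(0, 1), observation y = x + noise (sigma = 0.5).
x_true = rng.normal()
y = x_true + rng.normal(scale=0.5)

# Gaussian posterior p(x | y) in closed form (conjugate prior):
post_mean = y / (1 + 0.25)
post_var = 0.25 / 1.25

# "Each posterior sample fights its own battle": per-sample squared error.
samples = post_mean + np.sqrt(post_var) * rng.normal(size=100_000)
mse_samples = np.mean((samples - x_true) ** 2)

# "...but the MMSE estimate wins the war": the average of posterior
# samples satisfies E[(x - s)^2] = (x - E[s])^2 + Var[s], so it has
# strictly lower squared error than the samples on average.
mmse_estimate = samples.mean()
mse_mmse = (mmse_estimate - x_true) ** 2

print(f"per-sample MSE: {mse_samples:.3f}, MMSE-estimate MSE: {mse_mmse:.3f}")
```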
19.10.2025 09:33
Diffusion Transformers with Representation Autoencoders by Boyang Zheng, et al. (arxiv.org/abs/2510.116...)
Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that such encoders' information compression is ill-suited to generative modeling!
14.10.2025 19:08
πππ
23.09.2025 08:39
We had an awesome #OMIBS2025!
Thanks to all the lecturers, staff members, vendor faculty, sponsors, and participants for making this an amazing course year!
26.08.2025 18:05
Introducing Latent-X, our all-atom frontier AI model for protein binder design.
State-of-the-art lab performance, widely accessible via the Latent Labs Platform.
Free tier: platform.latentlabs.com
Blog: latentlabs.com/latent-x/
Technical report: tinyurl.com/latent-X
22.07.2025 06:21
π
17.07.2025 15:47