
Anirban Ray

@anirbanray.bsky.social

PhD Student working on bioimaging inverse problems with @florianjug.bsky.social at @humantechnopole.bsky.social + @tudresden.bsky.social | Prev: computer vision at Hitachi R&D, Tokyo. 🔗: https://rayanirban.github.io/ Likes 🐸🏋️🏔️🍓 and ✈️

263 Followers  |  341 Following  |  59 Posts  |  Joined: 17.11.2024

Posts by Anirban Ray (@anirbanray.bsky.social)

Exciting news! An article on India BioImaging (IBI) has been published in "Microscopy and Analysis" (@wileylifesci.bsky.social) (Issue 2/2026)

Featured under "Building the India Bioimaging Community," it highlights IBI's vision, activities & growing impact in strengthening bioimaging across India.

04.03.2026 17:22 — 👍 2    🔁 2    💬 1    📌 0

Congratulations 🎊🍾🎉

02.03.2026 21:16 — 👍 1    🔁 0    💬 0    📌 0

Excited to share our new paper (CVPR 2026 🚀): "MuViT: Multi-Resolution Vision Transformers for Learning Across Scales in Microscopy", which enables local predictions to use global context.
Great work led by @albertdm.bsky.social & another fun collab w @gioelelamanno.bsky.social! @scadsai.bsky.social

02.03.2026 13:42 — 👍 49    🔁 19    💬 1    📌 2

It was super cool to hear @alex-krull.bsky.social talking about his research at @humantechnopole.bsky.social. Alex never fails to inspire his audience 🫠🤩

11.02.2026 10:46 — 👍 10    🔁 1    💬 0    📌 0

Yeah, exactly! I think the training objective (minimizing the drift between p and q) will inherently prioritize perception (low FID) over distortion, and with one-shot prediction I feel the risk of hallucination is high (similar to GANs), but let's see!

08.02.2026 14:51 — 👍 0    🔁 0    💬 0    📌 0

Haha, true! Aside from these, I find the idea of distribution evolution during training intriguing. Still trying to understand the implications of drifting models for inverse problems, though :)

07.02.2026 18:55 — 👍 1    🔁 0    💬 1    📌 0

Excellent talk by Matteo Hessel from Google DeepMind at @humantechnopole.bsky.social earlier today!
Thanks for dropping by and infecting us with the Meta-RL bug!

05.02.2026 12:38 — 👍 3    🔁 1    💬 0    📌 0

🤩

19.01.2026 14:00 — 👍 1    🔁 0    💬 0    📌 0

ResMatching has been accepted to IEEE ISBI 2026 🎉. We extend Guided CFM to Computational Super-Resolution under extreme noise, achieving SOTA performance and calibrated posterior sampling that reflects uncertainty in real data. Learn more here: www.linkedin.com/posts/anirba...
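For readers new to flow matching, here is a minimal numpy sketch of the plain conditional flow matching (CFM) training target, not ResMatching itself; the function name `cfm_training_pair` and the toy shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_training_pair(x0, x1, t):
    # sample a point on the straight-line path between x0 and x1 and
    # return it together with the path's constant target velocity
    xt = (1.0 - t) * x0 + t * x1   # linear interpolation path
    ut = x1 - x0                   # velocity the model should regress
    return xt, ut

# in guided/conditional variants, the velocity model additionally sees
# side information, e.g. the noisy low-resolution input in
# computational super-resolution
x0 = rng.normal(size=8)   # source sample (e.g. Gaussian noise)
x1 = rng.normal(size=8)   # target sample (e.g. a high-resolution image)
xt, ut = cfm_training_pair(x0, x1, t=0.3)
```

Training then regresses a neural velocity field v(x_t, t, c) onto `ut`; sampling integrates that field from t=0 to t=1.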

15.01.2026 04:04 — 👍 5    🔁 0    💬 0    📌 0
How Residual Connections Are Getting an Upgrade [mHC]

Residual connections are adopted in virtually every deep learning model.

BUT, can we improve them further? Hyper-connections are an exciting recent exploration that generalizes residual connections.

Check out the video explaining manifold-constrained hyper-connections (mHC)!
youtu.be/jYn_1PpRzxI
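The generalization can be sketched in a few lines of numpy; this is a simplified toy, not the paper's exact parameterization, and `hyper_connection_step` is a hypothetical name:

```python
import numpy as np

def block(x):
    # stand-in for a transformer block's residual branch f(x)
    return np.tanh(x)

def residual_step(h):
    # classic residual connection: h' = h + f(h)
    return h + block(h)

def hyper_connection_step(streams, alpha, beta):
    # hyper-connections keep n parallel copies ("streams") of the hidden
    # state; the block reads a learned mix of them, its output is written
    # back with learned weights, and the streams themselves are remixed.
    #   streams: (n, d); alpha: (n,) read/write weights; beta: (n, n) mixer
    x = alpha @ streams                          # learned read
    y = block(x)                                 # block computation
    return beta @ streams + np.outer(alpha, y)   # remix + write back

# with n = 1, alpha = [1] and beta = [[1]], one hyper-connection step
# collapses to the ordinary residual update
h = np.array([0.2, -0.5, 1.0])
hc = hyper_connection_step(h[None, :], np.array([1.0]), np.eye(1))
```

The residual connection is recovered as the n = 1 special case, which is what "generalize" means here; the extra streams and mixing weights give the network more freedom in how information flows across depth.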

12.01.2026 22:11 — 👍 17    🔁 1    💬 1    📌 0
Self-Supervised Learning from Noisy and Incomplete Data

📖 Together with Mike Davies, we put together a review of self-supervised learning for inverse problems, covering the main approaches in the literature with unified notation and analysis.

arxiv.org/abs/2601.03244

08.01.2026 12:37 — 👍 8    🔁 4    💬 1    📌 0
Image of the Human Technopole

https://it.wikipedia.org/wiki/Human_Technopole#/media/File:Human_Technopole_Milano.jpg


I'm super excited to announce that I will join the Human Technopole @humantechnopole.bsky.social in Milan in April 2026 as a Research Group Leader.

05.01.2026 11:49 — 👍 70    🔁 6    💬 6    📌 0
Deep Learning for Microscopy Image Analysis (course page)

🚨 Alarm!!! 🚨
AI/ML course for microscopy image analysis!!! 🧐
In 2026 at Janelia (@hhmijanelia.bsky.social), no tuition; housing and meals provided! Isn't that borderline unbelievable?!?

20 students, ~14 TAs and lecturers
🗓️ June 4-18 2026

✍️ Apply by Jan 15 2026 ✍️

🔁 pls!!

www.janelia.org/you-janelia/...

28.12.2025 19:05 — 👍 78    🔁 62    💬 0    📌 3

Welcome to @humantechnopole.bsky.social, Jan 🤩

19.12.2025 02:30 — 👍 2    🔁 0    💬 1    📌 0

Okay, maybe I didn't pay much attention to this before, but there are some elegant works that truly amaze me. This is one of them: a task-agnostic setup built on flow matching that just works across problems. I really like this work. 😀😃

04.12.2025 15:09 — 👍 3    🔁 0    💬 0    📌 0

Rich Sutton's invited talk at @neuripsconf.bsky.social presented the OaK architecture:
A "scaffold" for superintelligent agents in which human input (at design time) is deprecated. 🧵

03.12.2025 17:36 — 👍 4    🔁 1    💬 1    📌 0

"Back to Basics: Let Denoising Generative Models Denoise" by
Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel-space, without VAE, with clean image prediction = nice generation results. Not a new framework but a nice exploration of the design space of the diffusion models.
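The "clean image prediction" objective amounts to changing the regression target from the added noise to the image itself; here is a hedged toy sketch (the cosine schedule, function names, and shapes are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffuse(x0, t, eps):
    # toy variance-preserving forward process: x_t = a(t) x0 + s(t) eps
    a, s = np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)
    return a * x0 + s * eps

def x_prediction_loss(model, x0, t, eps):
    # "clean image prediction": the network regresses x0 itself
    xt = diffuse(x0, t, eps)
    return np.mean((model(xt, t) - x0) ** 2)

def eps_prediction_loss(model, x0, t, eps):
    # conventional noise prediction, shown for contrast
    xt = diffuse(x0, t, eps)
    return np.mean((model(xt, t) - eps) ** 2)

x0 = rng.normal(size=16)           # a "clean image" (pixel space, no VAE)
eps = rng.normal(size=16)
oracle = lambda xt, t: x0          # hypothetical perfect x0-predictor
loss = x_prediction_loss(oracle, x0, 0.7, eps)   # 0.0 for the oracle
```

Since x0, eps, and x_t are linearly related at each t, the two targets are mathematically interchangeable; the interesting question the paper explores is which one trains better in pixel space.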

18.11.2025 17:05 — 👍 13    🔁 2    💬 0    📌 1

We moved the AI@MBL course "Deep Learning for Microscopy Image Analysis" to HHMI Janelia (@hhmijanelia.bsky.social).

Join us for two weeks of intense lectures, exercises, and hands-on project work!

Course dates: June 4-18 2026
Application by: January 15 2026

www.janelia.org/you-janelia/...

17.11.2025 19:44 — 👍 18    🔁 9    💬 1    📌 1

Congratulations 🎉🎊

15.11.2025 18:34 — 👍 1    🔁 0    💬 1    📌 0
DeepInverse Joins the PyTorch Ecosystem: the library for solving imaging inverse problems with deep learning – PyTorch

💥 DeepInverse is now part of the official PyTorch Landscape 💥

We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.

pytorch.org/blog/deepinv...

05.11.2025 17:31 — 👍 10    🔁 5    💬 1    📌 0

I'm at #AIS25 today and tomorrow…
Where scientists, industry leaders, investors, and policymakers meet to explore the transformative impact of artificial intelligence on scientific discovery.
I think this is a very important conversation we must have NOW! 👏

03.11.2025 13:27 — 👍 18    🔁 1    💬 4    📌 0

Anirban Ray, Vera Galinova, Florian Jug
ResMatching: Noise-Resilient Computational Super-Resolution via Guided Conditional Flow Matching
https://arxiv.org/abs/2510.26601

31.10.2025 06:41 — 👍 0    🔁 1    💬 0    📌 0

needless to say! and much appreciated 😊

27.10.2025 21:00 — 👍 0    🔁 0    💬 0    📌 0

super cool work 😎

27.10.2025 19:13 — 👍 1    🔁 0    💬 1    📌 0

"We may not win every battle, but we will win the war." --- Such an apt characterization of posterior samplers. Each posterior sample fights its own battle against noise and degradation; some win, some lose. But the MMSE estimate wins the war 😉.
#iykyk #ImageRestoration
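The quip can be made literal with a toy numpy sketch (a hedged illustration, not any specific sampler): the MMSE estimate is the posterior mean, so averaging samples cancels their independent errors and beats essentially every individual sample in MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy restoration problem: many posterior samples of a clean signal
truth = np.ones(64)
samples = truth + 0.5 * rng.normal(size=(200, 64))  # posterior draws

# each sample fights its own battle: its individual MSE vs the truth
per_sample_mse = np.mean((samples - truth) ** 2, axis=1)

# the MMSE estimate is the posterior mean, i.e. the average of samples;
# averaging "wins the war" in the squared-error sense
mmse = samples.mean(axis=0)
mmse_mse = np.mean((mmse - truth) ** 2)
```

The flip side, of course, is the perception-distortion tradeoff: the MMSE average is blurrier than any single posterior sample, which is exactly why one draws samples in the first place.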

19.10.2025 09:33 — 👍 0    🔁 0    💬 0    📌 0

Diffusion Transformers with Representation Autoencoders by Boyang Zheng et al. (arxiv.org/abs/2510.116...)

Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that encoders' semantic compression makes them ill-suited for generative modeling!

14.10.2025 19:08 — 👍 15    🔁 1    💬 0    📌 1

πŸ‘πŸ‘πŸ‘

23.09.2025 08:39 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We had an awesome #OMIBS2025

Thanks to all the lecturers, staff members, vendor faculty, sponsors, and participants for making this an amazing course year!

26.08.2025 18:05 — 👍 1    🔁 1    💬 0    📌 0

Introducing Latent-X — our all-atom frontier AI model for protein binder design.

State-of-the-art lab performance, widely accessible via the Latent Labs Platform.

Free tier: platform.latentlabs.com
Blog: latentlabs.com/latent-x/
Technical report: tinyurl.com/latent-X

22.07.2025 06:21 — 👍 12    🔁 4    💬 2    📌 1

👍

17.07.2025 15:47 — 👍 1    🔁 0    💬 0    📌 0