Where to Attend: A Principled Vision-Centric Position Encoding with Parabolas
Paper: arxiv.org/abs/2602.01418
Website: chrisohrstrom.github.io/parabolic-po...
Code: github.com/DTU-PAS/para...
@rgring.bsky.social @lanalpa.bsky.social
What if position encodings were designed for vision from scratch? We introduce PaPE (Parabolic Position Encoding). It outperforms RoPE on 7/8 datasets and extrapolates to higher resolutions without fine-tuning or position interpolation. Paper, code, and website in the thread below.
04.02.2026 08:22
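For context on the baseline named above: RoPE (rotary position encoding) rotates each consecutive feature pair of a query or key by a position-dependent angle, so attention dot products depend only on relative offsets. A minimal NumPy sketch of that standard mechanism (this illustrates RoPE only, not PaPE itself, whose parabolic formulation is in the paper):

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position encoding to the last axis of x (even length)."""
    d = x.shape[-1]
    # One rotation frequency per consecutive feature pair.
    inv_freq = base ** (-np.arange(0, d, 2) / d)   # shape (d/2,)
    ang = pos * inv_freq
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: the q·k score depends only on the
# position difference, not on absolute positions.
q = np.random.default_rng(0).normal(size=4)
k = np.random.default_rng(1).normal(size=4)
assert np.allclose(rope(q, 3) @ rope(k, 1), rope(q, 7) @ rope(k, 5))
```

The final assertion checks the property that makes RoPE attractive for attention: shifting both positions by the same amount leaves the score unchanged.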
Kudos to the authors: Junru Ren, Abhijoy Mandal, Rama El-khawaldeh, Shi Xuan Leong, @profhein.bsky.social, @aspuru.bsky.social, @lanalpa.bsky.social and Kourosh Darvish.
[6/6]
This approach enables more reliable real-time monitoring for automated workflows such as liquid–liquid extraction, distillation, and crystallization, bringing us closer to truly adaptive, autonomous chemistry labs.
[5/6]
Now in Digital Discovery: Context-aware computer vision for chemical reaction state detection.
pubs.rsc.org/en/content/a...
[1/6]
Exciting new research from our group!
03.11.2025 16:25
Great talk! Very thought provoking!
24.06.2025 11:20
Join us and revolutionize Life Science Lab Automation!
I am hiring a Postdoc in Robotics and Computer Vision for Life Science Laboratory Automation, in Copenhagen, Denmark.
Is that you?
efzu.fa.em2.oraclecloud.com/hcmUI/Candid...
This is part of a strategic partnership between DTU - Technical University of Denmark and Novo Nordisk to tackle one of the life science industry's biggest challenges: Closed-Loop Design and Optimization of Biologics.
20.02.2025 11:41
The goal of this position is to work as part of a larger cross-disciplinary team and contribute by enhancing the visual perception capabilities of robots, enabling them to interact autonomously with unknown and dynamic objects.
20.02.2025 11:41
I am hiring a Postdoctoral Researcher to work on Computer Vision for Autonomous Robots in Life Science Automation.
#ComputerVision #Robotics #LabAutomation
Apply here: efzu.fa.em2.oraclecloud.com/hcmUI/Candid...
** Update: paper accepted at ICRA 2025 **
If you are attending #ICRA2025 in Atlanta, we will be happy to meet you and discuss our SteeredMarigold method!
For us, it is an important step in our RoBétArmé Horizon Europe project, funded by @ec.europa.eu, improving the semantic understanding capabilities of our robotic system.
Special thanks to our project partner Christiansen og Essenbæk A/S for organizing construction site access!
We hope our data is an accelerator for everyone training and deploying deep learning models in construction. Everyone has faced the problem of limited data availability, which we hope to alleviate.
29.01.2025 11:39
This journal article introduces a new publicly available dataset captured in construction environments. We have around 15k images, a mix of self-captured and publicly available images, with segmentation labels for defects in reinforced concrete.
29.01.2025 11:39
NEW DATASET ALERT!
We are happy to present our latest research work in @elsevierconnect.bsky.social "Automation in Construction" journal: doi.org/10.1016/j.au...
A work driven by my PhD student, Patrick Schmidt!
Find the corresponding repo here: github.com/DTU-PAS/ConR...
Depth completion for real-world depth sensors?
Check out our latest work: steeredmarigold.github.io
Our paper "A Survey on Dynamic Neural Networks: from Computer Vision to Multi-modal Sensor Fusion" is out as a preprint!
By myself, @sscardapane.bsky.social, @rgring.bsky.social and @lanalpa.bsky.social
arxiv.org/abs/2501.07451
@tpjenkins.bsky.social
14.01.2025 22:54
Can Dynamic Neural Networks boost Computer Vision and Sensor Fusion?
We are very happy to share this awesome collection of papers on the topic!
Dynamic NNs can significantly improve the perception capabilities of resource-constrained platforms, such as robots! I am looking forward to seeing what is possible!
08.01.2025 09:29
[Last Chance]
Two more days before the deadline for the Lecturer in Computer Vision post @bristoluni.bsky.social [deadline 6 Jan]
This is a great opportunity to establish your own research group in a supportive environment. Please circulate and apply.
www.jobs.ac.uk/job/DKR225/l...
Of course, we traveled a lot, and of course, we produced great research!
I wonder what 2025 will bring us!
We had great gatherings and academic discussions throughout the year, the best of which was hosted by our friends and colleagues at @aicentre.dk Pioneer Centre for AI!
20.12.2024 16:05
Ronja Güldenring @rgring.bsky.social was awarded the prestigious "Young Researcher Award" from DTU - Technical University of Denmark for her PhD thesis!
20.12.2024 16:05
The academic journey is wonderful when you share it with a great team!
I am very proud of my two PhD students who graduated in 2024, Dimitrios Arapis (now with Novo Nordisk) and Ronja Güldenring (continuing with us in DTU Electro).
Computer Vision: Fact & Fiction is now available on YouTube. I made a playlist for it with the seven chapters. Enjoy this time capsule from two decades ago!
19.12.2024 16:50
Great to see similar ideas to our own "SteeredMarigold" zero-shot depth completion method!
arxiv.org/abs/2409.10202
We were there too!
29.11.2024 19:48
Maybe this is it? arxiv.org/abs/2409.10202 @jakubgregorek.bsky.social
28.11.2024 09:29