
Kajetan Schweighofer

@kschweig.bsky.social

ELLIS PhD Student @ JKU supervised by Sepp Hochreiter Working on Predictive Uncertainty in ML

224 Followers  |  331 Following  |  4 Posts  |  Joined: 11.11.2024

Latest posts by kschweig.bsky.social on Bluesky

Introducing TiRex - xLSTM based time series model | NXAI TiRex model at the top 🦖 We are proud of TiRex - our first time series model based on #xLSTM technology. Key takeaways: 🥇 Ranked #1 on official international leaderboards ➡️ Outperforms models ...

TiRex 🦖, our xLSTM-based time series model, ranked #1 on all leaderboards.

➡️ Outperforms models by Amazon, Google, Datadog, Salesforce, Alibaba

➡️ Built for industrial applications

➡️ Works with limited data

➡️ Runs on embedded AI and edge devices

➡️ Europe is leading

Code: lnkd.in/eHXb-XwZ
Paper: lnkd.in/e8e7xnri

shorturl.at/jcQeq

02.06.2025 12:11 — 👍 5    🔁 5    💬 0    📌 0
Post image

Happy to introduce 🔥LaM-SLidE🔥!

We show how trajectories of spatial dynamical systems can be modeled in latent space by

--> leveraging IDENTIFIERS.

📚 Paper: arxiv.org/abs/2502.12128
💻 Code: github.com/ml-jku/LaM-S...
📝 Blog: ml-jku.github.io/LaM-SLidE/
1/n

22.05.2025 12:24 — 👍 7    🔁 8    💬 1    📌 1
Post image

1/11 Excited to present our latest work "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics" at #ICLR2025 on Fri 25 Apr at 10 am!
#CombinatorialOptimization #StatisticalPhysics #DiffusionModels

24.04.2025 08:57 — 👍 16    🔁 7    💬 1    📌 0
Video thumbnail

โš ๏ธ Beware: Your AI assistant could be hijacked just by encountering a malicious image online!

Our latest research exposes critical security risks in AI assistants. An attacker can hijack them by simply posting an image on social media and waiting for it to be captured. [1/6] ๐Ÿงต

18.03.2025 18:25 โ€” ๐Ÿ‘ 8    ๐Ÿ” 8    ๐Ÿ’ฌ 1    ๐Ÿ“Œ 3
X-IL: Exploring the Design Space of Imitation Learning Policies Designing modern imitation learning (IL) policies requires making numerous decisions, including the selection of feature encoding, architecture, policy representation, and more. As the field rapidly a...

Exploring imitation learning architectures: Transformer, Mamba, xLSTM: arxiv.org/abs/2502.12330
*LIBERO: "xLSTM shows great potential"
*RoboCasa: "xLSTM models, we achieved success rate of 53.6%, compared to 40.0% of BC-Transformer"
*Point Clouds: "xLSTM model achieves a 60.9% success rate"

19.02.2025 19:43 — 👍 6    🔁 3    💬 0    📌 0

Ever wondered why presenting more facts can sometimes *worsen* disagreements, even among rational people? 🤔

It turns out, Bayesian reasoning has some surprising answers - no cognitive biases needed! Let's explore this fascinating paradox quickly ☺️

07.01.2025 22:25 — 👍 233    🔁 77    💬 8    📌 2
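One way this paradox can arise (a toy sketch, not necessarily the thread's exact argument): two perfectly Bayesian agents who interpret the same evidence through different likelihood models update in opposite directions. All numbers below are made up for illustration.

```python
def bayes_update(prior, lik_h, lik_not_h):
    # Posterior P(H | E) from prior P(H) and likelihoods P(E | H), P(E | not H).
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

# Both agents start at the same prior belief in hypothesis H ...
belief_a = belief_b = 0.5
for _ in range(3):  # ... and observe the same three pieces of evidence.
    # Agent A reads each piece as supporting H, agent B as undermining it.
    belief_a = bayes_update(belief_a, lik_h=0.8, lik_not_h=0.3)
    belief_b = bayes_update(belief_b, lik_h=0.3, lik_not_h=0.8)

print(belief_a, belief_b)  # same facts, yet the beliefs move further apart
```

No bias is needed: each update is textbook Bayes, and the divergence comes entirely from the differing likelihood models.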

LLMs often hallucinate because of semantic uncertainty arising from missing factual training data. We propose a method that detects such uncertainty from a single generated output sequence - a highly efficient way to detect hallucinations in LLMs.

20.12.2024 12:52 — 👍 15    🔁 3    💬 0    📌 2
Rethinking Uncertainty Estimation in Natural Language Generation Large Language Models (LLMs) are increasingly employed in real-world applications, driving the need to evaluate the trustworthiness of their generated text. To this end, reliable uncertainty estimatio...

๐—ก๐—ฒ๐˜„ ๐—ฃ๐—ฎ๐—ฝ๐—ฒ๐—ฟ ๐—”๐—น๐—ฒ๐—ฟ๐˜: Rethinking Uncertainty Estimation in Natural Language Generation ๐ŸŒŸ

Introducing ๐—š-๐—ก๐—Ÿ๐—Ÿ, a theoretically grounded and highly efficient uncertainty estimate, perfect for scalable LLM applications ๐Ÿš€

Dive into the paper: arxiv.org/abs/2412.15176 ๐Ÿ‘‡

20.12.2024 11:44 โ€” ๐Ÿ‘ 9    ๐Ÿ” 5    ๐Ÿ’ฌ 0    ๐Ÿ“Œ 1
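In spirit, G-NLL scores uncertainty as the negative log-likelihood of the single greedy-decoded sequence, avoiding the many samples that entropy-style estimators need. A minimal sketch with made-up per-token probabilities (see the paper for the actual estimator):

```python
import math

def g_nll(token_probs):
    """G-NLL-style score: negative log-likelihood of the greedy-decoded
    sequence. token_probs are the probabilities the model assigned to each
    greedily chosen token; a higher score means the model was less sure
    of its single most likely answer."""
    return -sum(math.log(p) for p in token_probs)

# Hypothetical probabilities for two answers of three tokens each.
confident = g_nll([0.95, 0.90, 0.99])   # model sure at every step
uncertain = g_nll([0.50, 0.40, 0.60])   # model hedging at every step
print(confident < uncertain)  # the confident sequence scores lower
```

Because only one forward pass and one decoded sequence are needed, the cost is a fraction of sampling-based uncertainty estimates.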
Post image

🔊 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!

🔗 website: sites.google.com/view/fpiwork...

🔥 Call for papers: sites.google.com/view/fpiwork...

more details in thread below 👇 🧵

18.12.2024 19:09 — 👍 84    🔁 19    💬 2    📌 3
Post image

Just 10 days after o1's public debut, we're thrilled to unveil the open-source version of the technique behind its success: scaling test-time compute.

By giving models more "time to think," Llama 1B outperforms Llama 8B in math, beating a model 8x its size. The full recipe is open-source!

16.12.2024 21:42 — 👍 82    🔁 18    💬 4    📌 2
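To build intuition for why spending more compute at inference helps, here is a toy sketch of majority voting (self-consistency), one common test-time-compute strategy; the "model" below is just a stand-in that answers correctly with a fixed probability, not the actual recipe.

```python
import random
from collections import Counter

def sample_answer(p_correct, rng):
    # Toy stand-in for one model generation: returns the correct answer
    # "42" with probability p_correct, else one of two wrong answers.
    return "42" if rng.random() < p_correct else rng.choice(["41", "43"])

def majority_vote(n_samples, p_correct, rng):
    # Self-consistency: sample n answers, keep the most common one.
    answers = [sample_answer(p_correct, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

def accuracy(n_samples, trials=2000, p_correct=0.55, seed=0):
    rng = random.Random(seed)
    hits = sum(majority_vote(n_samples, p_correct, rng) == "42"
               for _ in range(trials))
    return hits / trials

print(accuracy(1), accuracy(33))  # more samples -> higher accuracy
```

A model that is right only 55% of the time per sample becomes highly reliable once 33 answers are aggregated, which is the basic mechanism a small model can exploit to catch a larger one.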
Post image

Proud to announce our NeurIPS spotlight, which was in the works for over a year now :) We dig into why decomposing aleatoric and epistemic uncertainty is hard, and what this means for the future of uncertainty quantification.

📖 arxiv.org/abs/2402.19460 🧵 1/10

03.12.2024 09:45 — 👍 74    🔁 12    💬 3    📌 2
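For context, the decomposition usually studied is the entropy-based one: total predictive uncertainty splits into expected entropy (aleatoric) plus mutual information (epistemic). A minimal sketch over a toy two-member ensemble, not code from the paper:

```python
import math

def entropy(probs):
    # Shannon entropy (in nats) of a categorical distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def decompose(ensemble_probs):
    """Entropy-based split of predictive uncertainty over an ensemble:
    total = aleatoric (expected entropy) + epistemic (mutual information)."""
    n = len(ensemble_probs)
    mean = [sum(member[k] for member in ensemble_probs) / n
            for k in range(len(ensemble_probs[0]))]
    total = entropy(mean)                                    # predictive entropy
    aleatoric = sum(entropy(m) for m in ensemble_probs) / n  # expected entropy
    return total, aleatoric, total - aleatoric               # epistemic = MI

# Members agree -> epistemic ~ 0 (all uncertainty is aleatoric).
print(decompose([[0.7, 0.3], [0.7, 0.3]]))
# Members confidently disagree -> the epistemic term dominates.
print(decompose([[0.9, 0.1], [0.1, 0.9]]))
```

The hard part the paper examines is whether these two terms actually track their intended sources of uncertainty in practice.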

Cool work!

03.12.2024 15:29 — 👍 0    🔁 0    💬 0    📌 0
Post image

Thrilled to share our NeurIPS spotlight on uncertainty disentanglement! ✨ We study how well existing methods disentangle different sources of uncertainty, like epistemic and aleatoric. While all tested methods fail at this task, there are promising avenues ahead. 🧵 👇 1/7

📖: arxiv.org/abs/2402.19460

03.12.2024 13:38 — 👍 57    🔁 7    💬 4    📌 1
ML for molecules and materials in the era of LLMs [ML4Molecules] ELLIS workshop, HYBRID, December 6, 2024

The Machine Learning for Molecules workshop 2024 will take place THIS FRIDAY, December 6.

Tickets for in-person participation are SOLD OUT.

We still have a few free tickets for online/virtual participation!

Registration link here: moleculediscovery.github.io/workshop2024/

03.12.2024 12:35 — 👍 19    🔁 14    💬 0    📌 0

🙌

29.11.2024 12:15 — 👍 0    🔁 0    💬 0    📌 0

Would love to join, working on Bayesian ML

27.11.2024 14:43 — 👍 1    🔁 0    💬 0    📌 0

Very cool Michael, congrats! 😄

23.11.2024 07:45 — 👍 0    🔁 0    💬 1    📌 0
