
Matthias Lanzinger

@mlanzinger.bsky.social

Assistant Professor TU Wien, previously Uni of Oxford | Research in #databasetheory, #AI & #GNNs

47 Followers  |  46 Following  |  8 Posts  |  Joined: 13.11.2024

Latest posts by mlanzinger.bsky.social on Bluesky


TU Wien Hauptgebäude (main building)

Hello, Bluesky community!
TU Wien will now be sharing exciting news about research and science here on Bluesky.
Photo © Matthias Heisler

03.03.2025 07:51 — 👍 75    🔁 10    💬 1    📌 1

New paper: Simulating Time With Square-Root Space

people.csail.mit.edu/rrw/time-vs-...

It's still hard for me to believe it myself, but I seem to have shown that TIME[t] is contained in SPACE[sqrt{t log t}].

To appear in STOC. Comments are very welcome!
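In standard notation, the simulation claimed in the post reads (restated here for clarity, not part of the original):

```latex
\[
  \mathsf{TIME}[t(n)] \;\subseteq\; \mathsf{SPACE}\!\big[\sqrt{t(n)\,\log t(n)}\big]
\]
% Any multitape Turing machine running in time t(n) can be simulated
% using only O(sqrt(t(n) log t(n))) workspace.
```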

21.02.2025 22:19 — 👍 264    🔁 74    💬 17    📌 14

The (soft) deadline for this position is on Friday.

17.02.2025 07:16 — 👍 5    🔁 5    💬 0    📌 0

Anyone want to start a betting pool on the % of #iclr meta-reviews that are fully LLM generated?

22.01.2025 17:32 — 👍 1    🔁 0    💬 0    📌 0

5/5
Finally, a prototype implementation demonstrates the practical impact on query evaluation. Early experiments show potential for large performance gains on difficult queries in standard database systems (e.g., PostgreSQL, SparkSQL).

Check it out: arxiv.org/pdf/2412.11669

20.12.2024 08:40 — 👍 1    🔁 0    💬 0    📌 0

4/
β€’ SHW can achieve smaller widths than HW while remaining tractable to compute, boosting efficiency.
β€’ SHW induces a hierarchy of width measures that equal generalised hypertree width in the limit.

20.12.2024 08:39 — 👍 1    🔁 0    💬 1    📌 0

3/ 🚀 Our contribution:
We propose Soft Hypertree Width (SHW) – a relaxed version of hypertree width that adds algorithmic flexibility, making it possible to efficiently incorporate preferences and constraints into query decompositions.

20.12.2024 08:39 — 👍 0    🔁 0    💬 1    📌 0

2/ The problem: Theoretical performance of queries in databases depends on the "width" of decompositions of the query. The smaller the "width", the more efficient evaluation is. But real-world performance isn’t just about the smallest width… constraints & preferences matter too!

20.12.2024 08:38 — 👍 0    🔁 0    💬 1    📌 0
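The "width" notion from this post can be sketched in a few lines of Python. This is a toy illustration, not code from the paper: each bag of a decomposition is charged the minimum number of hyperedges needed to cover it, and the width of the decomposition is the worst bag.

```python
from itertools import combinations

def cover_width(bag, edges):
    """Smallest number of hyperedges whose union covers `bag` (brute force)."""
    bag = set(bag)
    for k in range(1, len(edges) + 1):
        for combo in combinations(edges, k):
            if bag <= set().union(*combo):
                return k
    return None  # bag not coverable by the given edges

def decomposition_width(bags, edges):
    """Width of a decomposition = max cover cost over all its bags."""
    return max(cover_width(bag, edges) for bag in bags)

# Toy "triangle" join query R(a,b) ⋈ S(b,c) ⋈ T(a,c):
edges = [frozenset("ab"), frozenset("bc"), frozenset("ac")]
# A trivial decomposition with one bag holding all variables needs 2 edges:
print(decomposition_width([{"a", "b", "c"}], edges))  # 2
```

Soft hypertree width, as the thread describes it, relaxes how this cover cost is charged so that preferences and constraints can be folded into the decomposition.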

1/ 📢 Excited to share our new paper: “Soft and Constrained Hypertree Width” – a step forward in optimizing database queries by introducing a flexible and effective hypergraph decomposition framework. 🧵 arxiv.org/pdf/2412.11669

20.12.2024 08:38 — 👍 3    🔁 0    💬 1    📌 0

Open Tenure Track Positions at the University of Vienna! 👨‍🔬 👩‍🔬
#univie is currently accepting applications for several
#tenuretrack positions across a range of disciplines. 🧪 🏛
Work with us to find answers to the questions for tomorrow. 🔍 ⬇ #academicsky
jobs.univie.ac.at/go/Tenure-Tr...

06.12.2024 09:46 — 👍 19    🔁 12    💬 1    📌 1
#5QW: Stefan Neumann “It’s important to understand networks in terms of what’s going on within them and how they influence our behaviors, relationships, and societal structures.”

Happy to share my recent interview with TU Wien Informatics, where I discuss algorithms, social network analysis, and the beauty of theoretical computer science.

Check it out here: informatics.tuwien.ac.at/news/2713

#TCS #Algorithms #SocialNetworks

02.12.2024 15:20 — 👍 4    🔁 1    💬 1    📌 0

The PCP theorem, a jewel of theoretical computer science, establishes that any NP statement can be assessed by a randomized verifier who only checks a vanishing fraction of the proof (indeed, a constant # of characters!)

This has had incredible impact, most notably on how ML reviews are conducted
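For reference, the standard complexity-theoretic statement of the PCP theorem summarized in this post (not part of the original):

```latex
\[
  \mathsf{NP} \;=\; \mathsf{PCP}\big[\,O(\log n),\; O(1)\,\big]
\]
% The verifier tosses O(log n) random coins and reads only O(1)
% symbols of the proof, yet accepts correct proofs with certainty
% and rejects false claims with constant probability.
```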

26.11.2024 05:33 — 👍 155    🔁 20    💬 7    📌 4
Homomorphism Counts as Structural Encodings for Graph Learning Graph Transformers are popular neural networks that extend the well-known Transformer architecture to the graph domain. These architectures operate by applying self-attention on graph nodes and incorp...

Also more recently arxiv.org/abs/2410.18676 :)

25.11.2024 14:18 — 👍 1    🔁 0    💬 0    📌 0

Emily and I have been working towards this moment for years. Two big papers just to set up one obscure reference to 90s hip-hop. Feeling proud to have finally achieved it.

25.11.2024 11:31 — 👍 25    🔁 1    💬 2    📌 0
