Srishti

@srishtiy.bsky.social

ELLIS PhD Fellow @belongielab.org | @aicentre.dk | University of Copenhagen | @amsterdamnlp.bsky.social | @ellis.eu Multi-modal ML | Alignment | Culture | Evaluations & Safety| AI & Society Web: https://www.srishti.dev/

313 Followers  |  424 Following  |  14 Posts  |  Joined: 27.01.2025

Posts by Srishti (@srishtiy.bsky.social)


Which, whose, and how much knowledge do LLMs represent?

I'm excited to share our preprint answering these questions:

"Epistemic Diversity and Knowledge Collapse in Large Language Models"

πŸ“„Paper: arxiv.org/pdf/2510.04226
πŸ’»Code: github.com/dwright37/ll...

1/10

13.10.2025 11:25 β€” πŸ‘ 89    πŸ” 26    πŸ’¬ 2    πŸ“Œ 1

Happy to share that our work on multi-modal framing analysis of news was accepted to #EMNLP2025!

Understanding news coverage and its embedded biases is especially important in today's environment, and it's imperative to take a holistic look at it.

Looking forward to presenting it in Suzhou!

21.08.2025 13:24 β€” πŸ‘ 25    πŸ” 6    πŸ’¬ 1    πŸ“Œ 0

πŸŽ“ Looking for PhD opportunities in #NLProc for a start in Spring 2026?

πŸ—’οΈ Add your expression of interest to join @copenlu.bsky.social here by 20 July: forms.office.com/e/HZSmgR9nXB

Selected candidates will be invited to submit a DARA fellowship application with me: daracademy.dk/fellowship/f...

27.06.2025 06:51 β€” πŸ‘ 14    πŸ” 13    πŸ’¬ 0    πŸ“Œ 0

πŸ“£ I am happy to support Ph.D. applications to the Danish Advanced Research Academy. My main areas of research include multimodal learning and tokenization-free language processing. Feel free to reach out if you have similar interests! Applications are due August 29: www.daracademy.dk/fellowship/f...

26.06.2025 14:40 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Congratulations Andrew Rabinovich (PhD β€˜08) on winning the Longuet-Higgins Prize at #CVPR2025! (1/2)

13.06.2025 19:56 β€” πŸ‘ 17    πŸ” 5    πŸ’¬ 2    πŸ“Œ 0

My favorite part of going to conferences: @belongielab.org alumni get-togethers! A big thank you to Menglin for coordinating the lunch at @cvprconference.bsky.social πŸ™

Left: Tsung-Yi Lin, Guandao Yang, Katie Luo, Boyi Li; Right: Menglin Jia, Subarna Tripathi, Ph.D., Srishti, Xun Huang

13.06.2025 00:07 β€” πŸ‘ 19    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Panel talk happening right now at @vlms4all.bsky.social! Come join us at #CVPR25 (room: 104E)

12.06.2025 22:37 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0
[EvalEval Infra] Better Infrastructure for LM Evals

Welcome to the EvalEval Working Group on Infrastructure! Please help us get set up by filling out this form - we are excited to get to know you! This is an interest form to contribute/collaborate on a research project building standardized infrastructure for AI evaluation.

Status quo: The AI evaluation ecosystem currently lacks standardized methods for storing, sharing, and comparing evaluation results across different models and benchmarks. This fragmentation leads to unnecessary duplication of compute-intensive evaluations, challenges in reproducing results, and barriers to comprehensive cross-model analysis.

What's the project? We plan to address these challenges by developing a comprehensive standardized format for capturing the complete evaluation lifecycle. This format will provide a clear and extensible structure for documenting evaluation inputs (hyperparameters, prompts, datasets), outputs, metrics, and metadata. This standardization enables efficient storage, retrieval, sharing, and comparison of evaluation results across the AI research community. Building on this foundation, we will create a centralized repository with both raw data access and API interfaces that allow researchers to contribute evaluation runs and access cached results. The project will integrate with popular evaluation frameworks (LM-eval, HELM, Unitxt) and provide SDKs to simplify adoption. Additionally, we will populate the repository with evaluation results from leading AI models across diverse benchmarks, creating a valuable resource that reduces computational redundancy and facilitates deeper comparative analysis.

Tasks? As a collaborator, you would be expected to work towards merging/integrating popular evaluation frameworks (LM-eval, HELM, Unitxt):
- Group 1 - Extend to Any Task: Design universal metadata schemas that work for ANY NLP task, extending beyond current frameworks like lm-eval/DOVE to support specialized domains (e.g., machine translation).
- Group 2 - Save the Relevant: Develop efficient query/download systems for accessing only relevant data subsets from massive repositories (DOVE: 2TB, HELM: extensive metadata).
The result will be open infrastructure for the AI research community, plus an academic publication.

When? We're looking for researchers who can join ASAP and work with us for at least 5 to 7 months. We are hoping to find researchers who would take this on as an active project (8+ hours/week) in this period.

πŸš€ Technical practitioners & grads β€” join to build an LLM evaluation hub!
Infra Goals:
πŸ”§ Share evaluation outputs & params
πŸ“Š Query results across experiments

Perfect for 🧰 hands-on folks ready to build tools the whole community can use

Join the EvalEval Coalition here πŸ‘‡
forms.gle/6fEmrqJkxidy...

12.06.2025 15:01 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Please join us for the FGVC workshop at CVPR 2025 @cvprconference.bsky.social on Wed 11th of June. The full schedule and list of fantastic speakers can be found on our website:
sites.google.com/view/fgvc12

09.06.2025 10:43 β€” πŸ‘ 10    πŸ” 4    πŸ’¬ 0    πŸ“Œ 0

Can you train a performant language model using only openly licensed text?

We are thrilled to announce the Common Pile v0.1, an 8TB dataset of openly licensed and public domain text. We train 7B models for 1T and 2T tokens and match the performance of similar models like LLaMA 1 & 2.

06.06.2025 19:18 β€” πŸ‘ 147    πŸ” 59    πŸ’¬ 2    πŸ“Œ 2

"Large [language] models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated." henryfarrell.net/wp-content/u...

07.06.2025 16:59 β€” πŸ‘ 80    πŸ” 18    πŸ’¬ 2    πŸ“Œ 5
NeurIPS participation in Europe We seek to understand if there is interest in being able to attend NeurIPS in Europe, i.e. without travelling to San Diego, US. In the following, assume that it is possible to present accepted papers ...

Would you present your next NeurIPS paper in Europe instead of traveling to San Diego (US) if this was an option? SΓΈren Hauberg (DTU) and I would love to hear the answer through this poll: (1/6)

30.03.2025 18:04 β€” πŸ‘ 280    πŸ” 160    πŸ’¬ 6    πŸ“Œ 12
β€œI don’t want to outsource my brain”: How political cartoonists are bringing AI into their work Pulitzer-winning cartoonists are experimenting with AI image generators.

"I don’t want to just be entering text prompts for the rest of my life."

I spoke to political cartoonists, including Pulitzer-winner Mark Fiore, about how they are using AI image generators in their work. My latest for @niemanlab.org.
www.niemanlab.org/2025/06/i-do...

03.06.2025 18:10 β€” πŸ‘ 6    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0
Culture is not trivia: sociocultural theory for cultural NLP. By Naitian Zhou and David Bamman from the Berkeley School of Information and Isaac L. Bleaman from Berkeley Linguistics.

There's been a lot of work on "culture" in NLP, but not much agreement on what it is.

A position paper by me, @dbamman.bsky.social, and @ibleaman.bsky.social on cultural NLP: what we want, what we have, and how sociocultural linguistics can clarify things.

Website: naitian.org/culture-not-...

1/n

18.02.2025 20:45 β€” πŸ‘ 121    πŸ” 35    πŸ’¬ 5    πŸ“Œ 3

Check out our new preprint TensorGRaD.
We use a robust decomposition of the gradient tensors into low-rank + sparse parts to reduce optimizer memory for Neural Operators by up to 75%, while matching the performance of Adam, even on turbulent Navier-Stokes (Re 10e5).

03.06.2025 03:16 β€” πŸ‘ 30    πŸ” 7    πŸ’¬ 2    πŸ“Œ 2

PhD student Srishti Yadav and her collaborators are out with new, interdisciplinary work πŸ‘‡

02.06.2025 18:09 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0

Check out our new paper led by @srishtiy.bsky.social and @nolauren.bsky.social! This work brings together computer vision, cultural theory, semiotics, and visual studies to provide new tools and perspectives for the study of ~culture~ in VLMs.

02.06.2025 12:37 β€” πŸ‘ 26    πŸ” 8    πŸ’¬ 1    πŸ“Œ 0

A delight to work with great colleagues to bring theory around visual culture and cultural studies to how we think about visual language models.

02.06.2025 10:42 β€” πŸ‘ 16    πŸ” 5    πŸ’¬ 0    πŸ“Œ 0

This work was an amazing collaboration with @nolauren.bsky.social @mariaa.bsky.social @taylor-arnold.bsky.social @jiaangli.bsky.social Siddhesh Pawar, Antonia Karamolegkou, @scfrank.bsky.social @zhaochongan.bsky.social Negar Rostamzadeh, @danielhers.bsky.social @serge.belongie.com Ekaterina Shutova

02.06.2025 10:36 β€” πŸ‘ 5    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We find that decades of visual cultural studies offer powerful ways to decode cultural meaning in images! Rather than proposing yet another benchmark, our goal with this paper was to revisit and re-contextualize foundational theories of culture so that they can pave the way for more inclusive frameworks.

02.06.2025 10:36 β€” πŸ‘ 2    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

We then propose 5 frameworks to evaluate cultures in VLMs:
1️⃣ Processual Grounding - who defines culture?
2️⃣ Material Culture - what is represented?
3️⃣ Symbolic Encoding - how is meaning layered?
4️⃣ Contextual Interpretation - who understands and frames meaning?
5️⃣ Temporality - when is culture situated?

02.06.2025 10:36 β€” πŸ‘ 2    πŸ” 2    πŸ’¬ 1    πŸ“Œ 0

In this paper, we call for integrating methods from 3 fields:
πŸ“š Cultural Studies – how values, beliefs & identities are shaped through cultural forms like images.
πŸ” Semiotics – how signs & symbols convey meaning
🎨 Visual Studies – how visuals communicate across time & place

02.06.2025 10:36 β€” πŸ‘ 3    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0

Modern Vision-Language Models (VLMs) often fail at cultural understanding. But culture isn't just recognizing things like food, clothes, and rituals. It's how meaning is made and understood; it's also about symbolism, context, and how these things evolve over time.

02.06.2025 10:36 β€” πŸ‘ 2    πŸ” 1    πŸ’¬ 1    πŸ“Œ 0
Paper title "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory"

I am excited to announce our latest work πŸŽ‰ "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory". We review recent works on culture in VLMs and argue for deeper grounding in cultural theory to enable more inclusive evaluations.

Paper πŸ”—: arxiv.org/pdf/2505.22793

02.06.2025 10:36 β€” πŸ‘ 57    πŸ” 18    πŸ’¬ 3    πŸ“Œ 5

This morning at P1, a handful of lucky lab members got to see the telescope while centre secretary BjΓΆrg had the dome open for a building tour πŸ”­ (1/7)

09.05.2025 22:30 β€” πŸ‘ 16    πŸ” 3    πŸ’¬ 1    πŸ“Œ 1

πŸš€New PreprintπŸš€
Can Multimodal Retrieval Enhance Cultural Awareness in Vision-Language Models?

Excited to introduce RAVENEA, a new benchmark aimed at evaluating cultural understanding in VLMs through RAG.
arxiv.org/abs/2505.14462

More details:πŸ‘‡

23.05.2025 17:04 β€” πŸ‘ 17    πŸ” 7    πŸ’¬ 1    πŸ“Œ 2

When you have a lot of work before the deadline push, you keep thinking of other things (distractions) you'd like to do. The day you get free, those things suddenly don't seem important anymore. And you kind of miss work! πŸ™„

23.05.2025 17:38 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

This is amazing!! I saw that the dataset's original webpage was being archived this month. I was wondering what'll happen to this data.

20.05.2025 12:18 β€” πŸ‘ 3    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Screenshot of the dataset viewer on the Hugging Face Hub. Shows a set of metadata for the newspaper navigator dataset. It also has previews of a few rows showing images alongside metadata columns.

πŸ—žοΈ Just released a Parquet version of the Newspaper Navigator dataset on @hf.co!

- 3M+ visual elements from historic US newspapers β€” photos, maps, cartoons, OCR + metadata.
- Parquet = fast filters, easier analysis.
- Great for ML + cultural research.

πŸ‘‰ huggingface.co/datasets/big...

20.05.2025 11:50 β€” πŸ‘ 14    πŸ” 7    πŸ’¬ 1    πŸ“Œ 0

We work under this telescope and sometimes get to visit it!

10.05.2025 06:29 β€” πŸ‘ 10    πŸ” 1    πŸ’¬ 0    πŸ“Œ 0