
Mina Huh

@minahuh.bsky.social

Ph.D. student @UTCompSci | Google Ph.D. Fellow | Research Scientist Intern @AdobeResearch | GenAI for Creativity & Accessibility

47 Followers  |  79 Following  |  9 Posts  |  Joined: 05.08.2023

Latest posts by minahuh.bsky.social on Bluesky

πŸ“„ arxiv.org/abs/2502.10190
This is work done during my internship @ Adobe Research with awesome collaborators @dingzeyuli.bsky.social, Kim Pimmel, Hijung Valentina Shin, @amypavel.bsky.social, and Mira Dontcheva.

27.03.2025 01:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0
Before and after of a version edited by the user with the prompt: "Shorten the part about dining halls". The edited version has a shorter Dining and Housing section.

Users can also narrow down the options by sorting and further customize them by refining and regenerating AI suggestions. For each new version, VideoDiff provides a diff summary highlighting how it differs from the original.

27.03.2025 01:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Before and after of a version edited by the user with the prompt: "Shorten the part about dining halls". The edited version has a shorter Dining and Housing section.

VideoDiff generates diverse AI recommendations for making rough cuts, inserting B-roll, and adding text effects. It supports easy comparison by aligning videos and highlighting differences using Timeline and Transcript views.

27.03.2025 01:35 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
Screenshot of the paper with the author list and the teaser figure. Author list: Mina Huh, Dingzeyu Li, Kim Pimmel, Hijung Valentina Shin, Amy Pavel, Mira Dontcheva.
Teaser figure shows the VideoDiff workflow in three steps: (1) VideoDiff supports easy comparison by aligning videos and highlighting differences using timeline and transcript views; (2) users can narrow down options by sorting and further customize them by refining and regenerating AI suggestions; (3) timelines and transcripts of videos are color highlighted to show which parts are common and which are unique.

Recent AI models can suggest endless video edits, offering many alternatives to video creators. But how can we easily sift through them all?

In our #CHI2025 paper, we propose VideoDiff, an AI video editing tool designed for editing with alternatives.

27.03.2025 01:35 β€” πŸ‘ 4    πŸ” 1    πŸ’¬ 2    πŸ“Œ 0

Congrats Peya!! πŸŽ‰πŸŽ‰

26.03.2025 01:56 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0

Following inquiries, we are extending the deadline to the 20th. We look forward to your submission!

15.03.2025 03:13 β€” πŸ‘ 4    πŸ” 3    πŸ’¬ 0    πŸ“Œ 0

and awesome team of organizers: Kate Glazko, Jazette Johnson, @amypavel.bsky.social @jcmankoff.bsky.social

27.02.2025 02:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 0    πŸ“Œ 0

We have awesome panelists including Stephanie Valencia^2, Dhruv Jain, @arianaaboulafia.bsky.social @chanceyfleet.bsky.social

27.02.2025 02:29 β€” πŸ‘ 1    πŸ” 0    πŸ’¬ 1    πŸ“Œ 0
The image displays information about an event called "GAi+A11y @ CHI 2025: Generative AI and Accessibility Workshop." The workshop includes invited panelists Stephanie Valencia from the University of Maryland, Dhruv Jain from the University of Michigan, Ariana Aboulafia, a disability rights attorney, and Chancey Fleet from NY Public Library. Organizers are listed as Kate Glazko from the University of Washington, Mina Huh from UT Austin, Jazette Johnson from the University of Washington, Amy Pavel from UT Austin, and Jennifer Mankoff from the University of Washington. The submission deadline for the workshop is March 12th, 2025, and the website for more information is "https://accessgaichi.com." The logos of CHI 2025, UT Austin, and UW appear at the bottom.

Introducing the 1st workshop on Generative AI and Accessibility (GAI+A11y@CHI25)! Come join us to build a community and discuss the opportunities and risks of GAI+Accessibility. More information on accessgaichi.com #CHI2025

27.02.2025 02:26 β€” πŸ‘ 10    πŸ” 4    πŸ’¬ 1    πŸ“Œ 2
