arxiv.org/abs/2502.10190
This is work done during my internship at Adobe Research with the awesome @dingzeyuli.bsky.social, Kim Pimmel, Hijung Valentina Shin, @amypavel.bsky.social, and Mira Dontcheva.
@minahuh.bsky.social
Ph.D. student @UTCompSci | Google Ph.D. Fellow | Research Scientist Intern @AdobeResearch | GenAI for Creativity & Accessibility
Before and after of a version edited by the user with the prompt "Shorten the part about dining halls". The edited version has a shorter Dining and Housing section.
Users can also narrow down the options by sorting and further customize them by refining and regenerating AI suggestions. For each new version, VideoDiff provides a diff summary highlighting how it differs from the original.
27.03.2025 01:35
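As a rough illustration of the kind of transcript diff such a summary could surface (not the paper's implementation; the transcripts below are made up), a minimal sketch using Python's difflib:

```python
import difflib

# Hypothetical transcripts: the original cut and an AI-edited alternative.
original = [
    "Welcome to the campus tour.",
    "The dining halls serve breakfast, lunch, and dinner daily.",
    "There are five dining halls across campus.",
    "Housing options include dorms and apartments.",
]
edited = [
    "Welcome to the campus tour.",
    "The dining halls serve three meals daily.",
    "Housing options include dorms and apartments.",
]

# Summarize the edit as kept / rewritten / removed / added transcript segments.
summary = []
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=original, b=edited).get_opcodes():
    if op == "equal":
        summary.append(f"kept {i2 - i1} segment(s)")
    elif op == "replace":
        summary.append(f"rewrote {i2 - i1} segment(s) as {j2 - j1}")
    elif op == "delete":
        summary.append(f"removed {i2 - i1} segment(s)")
    elif op == "insert":
        summary.append(f"added {j2 - j1} segment(s)")

print("; ".join(summary))
```

For the hypothetical "shorten the dining halls part" edit above, this prints a compact summary of which transcript segments were kept and which were condensed.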
VideoDiff generates diverse AI recommendations for making rough cuts, inserting B-roll, and adding text effects. It supports easy comparison by aligning videos and highlighting differences using Timeline and Transcript views.
27.03.2025 01:35
Screenshot of the paper with the author list and the teaser figure. Author list: Mina Huh, Dingzeyu Li, Kim Pimmel, Hijung Valentina Shin, Amy Pavel, Mira Dontcheva. The teaser figure shows the VideoDiff workflow in three steps: (1) VideoDiff supports easy comparison by aligning videos and highlighting differences using timeline and transcript views; (2) users can narrow down options by sorting and further customize them by refining and regenerating AI suggestions; (3) timelines and transcripts of videos are color-highlighted to show which parts are common and which are unique.
Recent AI models can suggest endless video edits, offering many alternatives to video creators. But how can we easily sift through them all?
In our #CHI2025 paper, we propose VideoDiff, an AI video editing tool designed for editing with alternatives.
Congrats Peya!!
26.03.2025 01:56
Following inquiries, we are extending the deadline to the 20th. We look forward to your submission!
15.03.2025 03:13
and an awesome team of organizers: Kate Glazko, Jazette Johnson, @amypavel.bsky.social, and @jcmankoff.bsky.social
27.02.2025 02:29
We have awesome panelists including Stephanie Valencia^2, Dhruv Jain, @arianaaboulafia.bsky.social, and @chanceyfleet.bsky.social
27.02.2025 02:29
The image displays information about an event called "GAI+A11y @ CHI 2025: Generative AI and Accessibility Workshop." The workshop's invited panelists are Stephanie Valencia from the University of Maryland, Dhruv Jain from the University of Michigan, Ariana Aboulafia, a disability rights attorney, and Chancey Fleet from the NY Public Library. Organizers are listed as Kate Glazko from the University of Washington, Mina Huh from UT Austin, Jazette Johnson from the University of Washington, Amy Pavel from UT Austin, and Jennifer Mankoff from the University of Washington. The submission deadline for the workshop is March 12th, 2025, and more information is at https://accessgaichi.com. The logos of CHI 2025, UT Austin, and UW appear at the bottom.
Introducing the 1st workshop on Generative AI and Accessibility (GAI+A11y@CHI25)! Come join us to build a community and discuss the opportunities and risks of GAI+Accessibility. More information on accessgaichi.com #CHI2025
27.02.2025 02:26