Is Cluely...somehow ethical now?
04.08.2025 16:40
PhD student @ltiatcmu.bsky.social. Working on NLP that centers worker agency. Otherwise: coffee, fly fishing, and keeping peach pits around, for...some reason https://siree.sh
02.08.2025 07:29
The military keynesianism of our day (though it's of course possible that military keynesianism is the military keynesianism of our day, and this is just _another_ one)
01.08.2025 13:06
I need one of these sized for a cone!!
30.07.2025 07:20
Womp, wrong hashtag: #acl2025
28.07.2025 15:42
We argue that models need to be better along four axes: they need to be accessible, personalizable, supportive of iteration, and socially aware.
How might we do that? Come to the poster to find out!
I had so much fun doing this work with Nupoor, @jerelev.bsky.social and @strubell.bsky.social!
Modern tools struggle to support many of these use cases. Even where research exists for certain problems, such as terminology drift, using it effectively requires writing code, making it inaccessible, while the accessible tools do not address those use cases at all.
28.07.2025 15:26
We also saw that they maintained deeply personal methods for reading documents, which led to idiosyncratic, iteratively constructed mental models of the corpora.
Our findings echo early findings in STS, most notably Bruno Latour's account of the social construction of facts!
Most notably, experts tended to maintain and use nuanced models of the social production of the documents they read. In the sciences, this might look like asking whether a paper follows standard practices in a field, where it was published, and whether it looks "too neat".
28.07.2025 15:26
Screenshot of paper title "Beyond Text: Characterizing Domain Expert Needs in Document Research"
Coming soon (6pm!) to the #ACL poster session: how do experts work with collections of documents, and do LLMs do those things?
tl;dr: only sometimes! While we have good tools for things like information extraction, the way that experts read documents goes deeper - come to our poster to learn more!
@mariaa.bsky.social's: bsky.app/profile/did:...
28.07.2025 08:28
Yay, thank you! Was going to do this later today, and now I don't have to π
26.07.2025 07:21
It also feels like a credibility thing - having been in industry without a PhD, you rarely have the leeway to push for even slightly risky things. By the time you've built that credibility, it's hard not to have internalized the practice of risk minimization
25.07.2025 16:31
Excited to listen to this! Personalizable viz is so promising, and it's still *so* complicated, at least in my experience
25.07.2025 16:19
This thread really does make me wonder why we moved away from soft prompt tuning. I can see the affordance benefit of being able to write the prompts, but it doesn't feel like there is necessarily a "theory" of prompt optimization in discrete space that makes it worth keeping prompts in language
16.07.2025 22:09
The answer is yes - East if you're from west of here, Midwest if you're from further east. And also a secret third thing! (rust belt)
13.07.2025 19:47
This looks incredible!! Very excited to read it ππ
02.07.2025 21:10
ok, reading a bit more, I could def see Kenneth Goldsmith advocating this point. Ty for the reference!
23.06.2025 18:33
Do you have a link to this? I'd love to read more - what aesthetic innovation removes the necessity of the human element to appreciation?
23.06.2025 18:23
All of these sound great! I would also maybe suggest rather than (or maybe in addition to) a starter pack, a list of attendees - that way, the list can back a custom feed, capturing things that aren't explicitly tagged for the conference, and the feed also becomes interest-based after the conf
17.06.2025 17:40
This is some excellent viz π€© also love that there's a Sam Learner on the team that built it!
17.06.2025 14:36
Gently, I would like to say: When people tell you that they would appreciate a feature that does something automatically, it's not responsive to that concern to explain that by going through several steps for every individual instance, they can get the same result in each instance.
12.06.2025 13:45
OpenAI has effectively conned people into thinking that Chatbots & AI "Assistants" are The FEWTCHA of AI. Friends, they are most likely *not.* Neither are the big cloud-based Generative AI services.
Small, purpose-fit, on-device models that make your existing activities easier/better? There you go.
When it comes to text prediction, where does one LM outperform another? If you've ever worked on LM evals, you know this question is a lot more complex than it seems. In our new #acl2025 paper, we developed a method to find fine-grained differences between LMs:
🧵 1/9
The MA lottery is the follow-up season - it's the same folks and the same feed!
07.06.2025 16:19
I absolutely love the work coming out of WGBH, about the big dig and the MA lottery!! www.wgbh.org/podcasts/scr...
Both 10ish parters, and lovely to listen to in series.
I'm excited to read (and download/cite, ofc) this when it's out!!
05.06.2025 16:35
π
05.06.2025 16:16
π€©π€©
26.05.2025 21:38
I don't know if conditions being bad in PA is a weather thing, but if you haven't, consider cherry springs state park! www.pa.gov/agencies/dcn...
26.05.2025 21:33