Benjamin Laufer

@laufer.bsky.social

PhD student at Cornell Tech. bendlaufer.github.io

223 Followers  |  181 Following  |  20 Posts  |  Joined: 12.09.2023

Latest posts by laufer.bsky.social on Bluesky

I think this technology is radically changing the practice of research. There are a billion research ethics questions it raises!

03.06.2025 12:52 - 👍 0    🔁 0    💬 0    📌 0

I used to think that AI prompting had no place in the research process, and I was afraid to use AI for anything. Now, AI has permeated my paper-writing process, from brainstorming to code writing (e.g., Tab in Cursor) to proof strategizing to LaTeX formatting to grammar checking and beyond.

03.06.2025 12:52 - 👍 0    🔁 0    💬 1    📌 0

I am finding that AI chatbots and language models are rapidly changing my own research practices, and my own ethical judgments about when using AI is appropriate.

03.06.2025 12:52 - 👍 1    🔁 0    💬 1    📌 0
Algorithmic Displacement of Social Trust

In case it's useful, Helen Nissenbaum and I argue in this piece that AI can undermine the foundations of sound knowledge production, not just by offering a shallow vision of research, but by contaminating the process even for those who avoid it: knightcolumbia.org/content/algo...

03.06.2025 12:39 - 👍 2    🔁 0    💬 0    📌 0

Excited to speak at Princeton @princetoncitp.bsky.social next week!

25.04.2025 03:12 - 👍 3    🔁 0    💬 0    📌 0
I'm hiring for a machine learning data scientist & research assistant for summer 2025!

Join me in working on an invasive species management project: predicting species risk and planning capture strategies. This work will be in partnership with an innovative startup that is working directly to capture environmentally destructive invasive species in the US.

This project involves working with a US-based conservation partner to build predictive machine learning models and design harvest strategies for the removal of invasive animal species. The primary goal is to develop usable ML models and optimization tools to inform practical environmental decision-making on the ground (a toy sketch of this predict-then-plan workflow appears after the application details below).

There will also be opportunities to extend this work into a research publication for a top-tier AI venue.

The project would be:
* Paid, full-time internship for summer 2025
* Possibility of extension beyond the summer (part-time or full-time)
* Remote, but candidates should be authorized to work in the US
* Supervised by Lily Xu, assistant professor at Columbia University

The ideal candidate's background will include:
* Strong background in CS, data science, and/or applied math
* Experience developing machine learning models
* Excellent coding skills in Python
* Excellent writing and interpersonal communication skills
* Genuine interest in conservation/sustainability
* Nice to have: background in optimization methods
* Nice to have: experience with GIS and geospatial data
Candidates would ideally have already completed an undergraduate degree, but exceptional undergrads will also be considered.

How to apply:
Please send a CV and 3–5 paragraphs of your background/interests via email to <lily.x@columbia.edu> with the subject line "Application: ML for invasive species management". 

Applications will be reviewed on a rolling basis.
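A minimal sketch of the kind of predict-then-plan workflow described above, assuming nothing about the partner's actual data, species, or methods: a toy site-level risk classifier plus a greedy, budget-limited allocation of capture effort. All features, names, and the allocation rule here are illustrative placeholders.

```python
# Hedged sketch, not the project's codebase: predict invasive-species risk per
# site, then spend a fixed trap budget on the highest-risk sites.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy site-level features (e.g., temperature, water cover, past sightings)
# and labels for whether the species was detected at each site.
X = rng.normal(size=(500, 3))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

risk_model = RandomForestClassifier(n_estimators=200, random_state=0)
risk_model.fit(X, y)

# Score new candidate sites and allocate a limited number of traps greedily.
candidate_sites = rng.normal(size=(50, 3))
risk = risk_model.predict_proba(candidate_sites)[:, 1]
trap_budget = 10
chosen = np.argsort(risk)[::-1][:trap_budget]
print("Deploy traps at sites:", sorted(chosen.tolist()))
```

In practice the planning step would presumably be a richer optimization (travel costs, trap counts, spatial constraints) rather than a greedy top-k; the greedy rule is only a stand-in.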


I'm hiring for a machine learning data scientist & research assistant for summer 2025!

Join me on a project on invasive species management with an innovative startup doing on-the-ground removal of environmentally destructive invasive animals.

Paid, full-time w/ possibility to extend.

25.04.2025 01:50 - 👍 23    🔁 12    💬 1    📌 0

4) The "most common dog" in NYC is a Yorkshire Terrier named Bella. Jack Russell Terriers are often "Jack" and Charles Spaniels "Charlie." Huskies are always named Luna; the reason for this is unclear (?).

02.04.2025 14:16 - 👍 9    🔁 2    💬 2    📌 0

This was a lot of fun

02.04.2025 14:18 - 👍 2    🔁 0    💬 0    📌 0
Talks at NetSI | Ben Laufer: Regulation along the AI Development Pipeline for Fairness, Safety and Related Goals

More details: www.networkscienceinstitute.org/talks/ben-la...

31.03.2025 21:54 - 👍 0    🔁 0    💬 0    📌 0

I am in Boston, excited to give a talk at Northeastern tomorrow at 11am!

"Regulation along the AI Development Pipeline for Fairness, Safety and Related Goals"

31.03.2025 21:53 - 👍 2    🔁 0    💬 1    📌 0

(1/n) New paper/code! Sparse Autoencoders for Hypothesis Generation

HypotheSAEs generates interpretable features of text data that predict a target variable: What features predict clicks from headlines / party from congressional speech / rating from Yelp review?

arxiv.org/abs/2502.04382

18.03.2025 15:29 - 👍 14    🔁 5    💬 1    📌 1
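A minimal sketch of the recipe the post above describes, not the HypotheSAEs code itself: train a small sparse autoencoder on text embeddings to get interpretable, non-negative features, then fit a sparse linear probe to see which features predict the target. The dimensions, hyperparameters, and toy data are assumptions.

```python
# Hedged sketch of "sparse autoencoder features -> predict a target variable".
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class SparseAutoencoder(nn.Module):
    def __init__(self, d_embed: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_embed, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_embed)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # non-negative feature activations
        return self.decoder(z), z

def train_sae(embeddings, d_hidden=256, l1_weight=1e-3, epochs=200):
    """Reconstruct embeddings while penalizing feature activations (L1)."""
    sae = SparseAutoencoder(embeddings.shape[1], d_hidden)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, z = sae(embeddings)
        loss = ((recon - embeddings) ** 2).mean() + l1_weight * z.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return sae

# Toy stand-ins for text embeddings (e.g., headlines) and a target (clicks).
X = torch.randn(1000, 384)
y = (X[:, 0] > 0).long().numpy()

sae = train_sae(X)
with torch.no_grad():
    _, feats = sae(X)

# A sparse (L1) probe flags which learned features predict the target; those
# features would then be inspected and labelled to form hypotheses.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(feats.numpy(), y)
print("Most predictive SAE features:", probe.coef_[0].argsort()[::-1][:5])
```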

Please repost to get the word out! @nkgarg.bsky.social and I are excited to present a personalized feed for academics! It shows posts about papers from accounts you're following: bsky.app/profile/pape...

10.03.2025 15:12 - 👍 118    🔁 80    💬 6    📌 11

We have a new review on generative AI in medicine, to appear in the Annual Review of Biomedical Data Science! We cover over 250 papers in the recent literature to provide an updated overview of use cases and challenges for generative AI in medicine.

18.12.2024 16:13 - 👍 23    🔁 8    💬 1    📌 2

Thank you for the support, David! It means a lot. :)

15.12.2024 15:52 - 👍 1    🔁 0    💬 0    📌 0
AFME2024 - Accepted Papers & Oral Presentations

Here is a link with the spotlight designation for our work at the Algorithmic Fairness workshop, as well as the fantastic set of other papers appearing there: www.afciworkshop.org/accepted-pap...

13.12.2024 13:34 - 👍 0    🔁 0    💬 0    📌 0

Check out our paper, "Fundamental Limits in the Search for Less Discriminatory Algorithms - and How to Avoid Them," and come to our spotlight talk this Saturday at 5:50pm.

Paper preprint (WIP): mraghavan.github.io/files/LDA_wo...

13.12.2024 13:34 - 👍 0    🔁 0    💬 1    📌 0

🪩New paper🪩 (WIP) appearing at the @neuripsconf.bsky.social Regulatable ML and Algorithmic Fairness (AFME) workshops (oral spotlight).

In collaboration with @s010n.bsky.social and Manish Raghavan, we explore strategies and fundamental limits in searching for less discriminatory algorithms.

13.12.2024 13:34 - 👍 8    🔁 2    💬 2    📌 0
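A minimal sketch of the basic search the paper above studies, on assumed toy data and with an assumed disparity metric (not the authors' method): among candidate models that are roughly tied on accuracy, prefer the one with the smallest selection-rate gap between groups.

```python
# Hedged sketch of a "less discriminatory algorithm" search over candidate models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)                 # protected attribute (0/1)
X = rng.normal(size=(n, 4)) + 0.3 * group[:, None]
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

def selection_rate_gap(yhat, g):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(yhat[g == 0].mean() - yhat[g == 1].mean())

# Cheap model multiplicity: vary regularization strength; in practice one might
# also vary features, seeds, thresholds, or architectures.
candidates = []
for _ in range(20):
    clf = LogisticRegression(C=10.0 ** rng.uniform(-2, 2)).fit(X_tr, y_tr)
    yhat = clf.predict(X_te)
    candidates.append((clf, (yhat == y_te).mean(), selection_rate_gap(yhat, g_te)))

best_acc = max(acc for _, acc, _ in candidates)
tolerance = 0.01   # treat models within 1 point of the best as equally accurate
admissible = [c for c in candidates if c[1] >= best_acc - tolerance]
model, acc, gap = min(admissible, key=lambda c: c[2])
print(f"chosen model: accuracy={acc:.3f}, selection-rate gap={gap:.3f}")
```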
Emerging Scholars in Information Policy - Center for Information Technology Policy

* Emerging Scholars: a 2-year staff position in tech policy for candidates who have Bachelor's degrees. It's an unusual program that combines classes, 1-on-1 mentoring, and work experience with real-world impact. Apply by Jan 10.
citp.princeton.edu/programs/cit...

02.12.2024 21:49 - 👍 8    🔁 2    💬 1    📌 0
Why 'open' AI systems are actually closed, and why this matters - Nature
A review of the literature on artificial intelligence systems to examine openness reveals that open AI systems are actually closed, as they are highly dependent on the resources of a few large corpora...

📢NEW: 'Open' AI systems aren't open. The vague term, combined w/ frothy AI hype, is (mis)shaping policy & practice, assuming 'open source' AI democratizes access & addresses power concentration. It doesn't.

@smw.bsky.social, @davidthewid.bsky.social & I correct the record👇
nature.com/articles/s41...

02.12.2024 14:23 - 👍 967    🔁 341    💬 25    📌 37

howdy!

the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.

i hope you give it a read - the article is just the beginning of this line of work.

www.law.georgetown.edu/georgetown-l...

18.11.2024 16:40 - 👍 50    🔁 15    💬 4    📌 4

genAI has made us more suspicious that emails, cover letters, artworks, etc. are produced by AI. this shift forces us to change our behavior in order to prove our human-ness: a "burden of authenticity".
waking my account up to share a recent blog post on the subject: rajivmovva.com/2024/11/08/g...

30.11.2024 18:09 - 👍 6    🔁 1    💬 1    📌 0

Thank you Seth! :D

26.11.2024 20:25 - 👍 1    🔁 0    💬 0    📌 0

I passed my "A Exam" yesterday, meaning I am officially a "PhD Candidate" rather than a "PhD Student." (Huge title change, I know.)

Thanks to everybody who has supported me along the way!

26.11.2024 17:38 - 👍 14    🔁 0    💬 1    📌 0

I'd love to be added!

26.11.2024 13:50 - 👍 0    🔁 0    💬 0    📌 0

I'd love to be included!

22.11.2024 19:47 - 👍 1    🔁 0    💬 0    📌 0

Hey! @friedler.net made a FAccT starter pack: bsky.app/starter-pack...

19.11.2024 03:52 - 👍 10    🔁 5    💬 0    📌 0

Hi to my new connections. Is Bluesky taking off? I'm excited!!

17.11.2024 22:13 - 👍 2    🔁 0    💬 0    📌 0

In a new essay for @knightcolumbia.org with Helen Nissenbaum, we offer an account of what's wrong with social media, and what's at stake.

We also discuss generative AI and, broadly, the problems posed by untrustworthy algorithmic systems.

05.12.2023 21:29 - 👍 3    🔁 2    💬 0    📌 0

Great paper!
"…algorithmic amplification is problematic because...it chokes out trustworthy processes that we have relied on for guiding valued societal practices and for selecting, elevating, and amplifying content" via @laufer.bsky.social, Helen Nissenbaum
knightcolumbia.org/content/algo...

05.12.2023 21:27 - 👍 2    🔁 3    💬 1    📌 0

I am in Boston giving a talk tomorrow at Harvard's EconCS seminar (1:30pm).

The talk is on genAI/ML technologies billed as "general-purpose". I'll discuss: which purposes, why and how? It's ongoing work with Hoda Heidari and Jon Kleinberg.

HMU if you're around to meet, etc!

02.11.2023 16:14 - 👍 3    🔁 0    💬 0    📌 0
