Daniel Drucker

@danieldrucker.bsky.social

Philosophy professor at UT Austin who thinks about attitudes, epistemology, and communication. https://www.danieldrucker.info/

853 Followers  |  974 Following  |  1,295 Posts  |  Joined: 01.08.2023

Latest posts by danieldrucker.bsky.social on Bluesky

Post image

Why do we derogate effective altruists, activists, & other radically prosocial individuals? In new work, we discuss how doing good that deviates from social norms gets stigmatized. New preprint w/ @dcameron.bsky.social @tlau.bsky.social @desmond-ong.bsky.social: osf.io/preprints/ps...

08.10.2025 18:05 — 👍 32    🔁 15    💬 0    📌 0

Would you double up on "yabba"s in the full phrase or just go with "yabba doo"?

09.10.2025 14:03 — 👍 8    🔁 0    💬 1    📌 0

Fine, I'm not sure that's wrong about guilt-by-association arguments generally, but the associations are extremely weak in this specific case.

06.10.2025 16:32 — 👍 1    🔁 0    💬 0    📌 0

Maybe short-term? Long-term it is not good or rational politics that benefits from those modes of engagement, in my opinion.

06.10.2025 12:32 — 👍 1    🔁 0    💬 0    📌 0

That sounds a little like justifying dishonesty, though?

06.10.2025 12:31 — 👍 0    🔁 0    💬 1    📌 0

Guilt by association arguments are often weak even with strong associations, which is very much not the case here. Argue against the idea itself!

06.10.2025 12:19 — 👍 1    🔁 0    💬 1    📌 0

One thing that's fairly crazy-making to me is how rarely people are suspicious that the things they want politicians to run on are the things that would be most emotionally satisfying to them for the politicians to run on. It's great when they align, but you should be worried you're forcing it.

04.10.2025 20:10 — 👍 1    🔁 0    💬 1    📌 0

Endnotes for a start. Two-column portrait doesn’t work great for reading philosophy imo, but I think that may be more idiosyncratic.

03.10.2025 19:08 — 👍 3    🔁 0    💬 1    📌 0

They brought back endnotes though!

03.10.2025 18:29 — 👍 1    🔁 0    💬 1    📌 0

Why did PPR ruin its layout so badly?? philpapers.org/go.pl?aid=RO... (the paper looks interesting, no shade to the paper)

03.10.2025 15:33 — 👍 12    🔁 1    💬 7    📌 0

Amazing typo

02.10.2025 20:24 — 👍 7    🔁 0    💬 1    📌 1
Post image

SALT 36 will be held at my alma mater, the University of Buenos Aires, on July 29–31, 2026. This will be the first time the conference takes place in South America.

Abstract deadline: Dec 15, 2025
Link: saltconf.github.io/salt36/

02.10.2025 09:21 — 👍 22    🔁 6    💬 0    📌 0

When a post begins by skeptically invoking my training as a philosopher I know I'm in for a good time

29.09.2025 17:58 — 👍 4    🔁 0    💬 1    📌 0

Don't threaten me with a good time etc etc

(I don't mean for it to be a universal norm for all posts, but a lot of stuff has had a "THEY don't want you to know about x" feel for me lately.)

29.09.2025 17:54 — 👍 0    🔁 0    💬 0    📌 0

Right! I mean as a general phenomenon we'd all be better off giving specific quotes, because attribution standards seem problematically loose these days.

29.09.2025 17:52 — 👍 0    🔁 0    💬 0    📌 0

Doesn't have to be here, and it's not really specific to this at all, but I've been seeing so many misattributions of views to Klein lately that I think people in general would be better off giving specific quotes when available.

29.09.2025 17:48 — 👍 0    🔁 0    💬 0    📌 0

Is there anyone who says there's no upside to losing the battle? I think what they'd say is that in most cases where someone's defending losing the battle for coalition inspiration, the consequences of losing the election are dire enough that the hope for greater strength later is not worth it.

29.09.2025 17:43 — 👍 0    🔁 0    💬 0    📌 0

This is my regular reminder to everyone that JSTOR is open to the general public now; a free account there will give you access to 100 papers a year.

29.09.2025 13:19 — 👍 3426    🔁 1215    💬 86    📌 72

There's some good philosophical/academic/scientific discussion on here, since it's where most of the academics are now I think. I'd say it takes roughly as much curation as Twitter as far as who you follow, but the bystanders are better, by and large, once it's not mostly politics.

25.09.2025 19:30 — 👍 2    🔁 0    💬 0    📌 0

I promise there are islands of sanity. My feed's average quality is much higher than my Twitter feed's. (The kind of politics Itai was wading into does bring out this place's worst characteristics, though.)

25.09.2025 19:21 — 👍 2    🔁 0    💬 1    📌 0

Right, I think that's part of it, but then there's a related one that it's easy to confuse with, namely whether they're subject to rational norms, etc. I think it may be a consequence of satisfying the second criterion, but may not be, and I think people care about it independently.

25.09.2025 14:33 — 👍 1    🔁 0    💬 0    📌 0

I suspect 2 is the crucial one, and it needs so much more refinement (what is comprehension?) that all the same worries will arise again.

25.09.2025 14:22 — 👍 1    🔁 0    💬 1    📌 0

Yeah, since ad hocness is essentially a question of motivation, our motivations are most evident given our priors.

25.09.2025 14:15 — 👍 0    🔁 0    💬 0    📌 0

It’s the easiest way. Another way to mitigate the feeling of ad hocness is to give good theoretical grounding for the changes. But if the criteria change too much in close proximity to the realization that they count LLMs as intelligent, the changes will still seem objectionably ad hoc.

25.09.2025 14:09 — 👍 0    🔁 0    💬 2    📌 0

If the criteria change frequently, not in response to actual prior counterexamples (parrots, say) but simply when the theorist recognizes that, by those criteria, LLMs are intelligent.

25.09.2025 14:00 — 👍 0    🔁 0    💬 1    📌 0

Given this, we agree, so long as you don’t do it too much, especially in ways motivated just by classing LLMs themselves as unintelligent.

25.09.2025 13:53 — 👍 1    🔁 0    💬 1    📌 0

Genuinely confused how you got that from what I said? I’m saying that in inquiries where what you care about is finding stuff out, messing with criteria constantly in order to avoid categorizing apparent like with like is not typically reasonable but more like playing a no-lose game.

25.09.2025 13:45 — 👍 2    🔁 0    💬 1    📌 0

(And inquiries guided by our desires in this way are rarely successful or reasonable, we know from long experience.)

25.09.2025 13:29 — 👍 3    🔁 0    💬 0    📌 0

If your criteria constantly change in ad hoc ways to rule out the intelligence of some system your prior criteria would've included, you look less guided by a conception of intelligence that you're articulating sub-optimally, and more guided by the desire to ensure these systems are unintelligent.

25.09.2025 13:29 — 👍 3    🔁 0    💬 2    📌 0

Wouldn't being self-undermining be more, "here's some specifiable alternative superior way of doing things I could adopt, but I'll stick with my way"? Thinking your ways are generically superior goes beyond that, plausibly irrationally I think?

24.09.2025 20:12 — 👍 1    🔁 0    💬 1    📌 0
