
Cameron Domenico Kirk-Giannini

@cdkg.bsky.social

Language 🗣️💬, AI 🤖👾, social philosophy 🏳️‍🌈🏳️‍⚧️, and religion 😇😈 at Rutgers University.

296 Followers  |  61 Following  |  19 Posts  |  Joined: 11.10.2023

Latest posts by cdkg.bsky.social on Bluesky

Super excited to finally be able to share a project I've been working on for quite some time – a new paper on the Singularity Hypothesis! We argue that there are more good arguments for it and fewer good arguments against it than many philosophers assume.

philpapers.org/archive/KIRR...

16.07.2025 15:38 – 👍 2    🔁 0    💬 0    📌 0

Philosophers and AI folks – I'm writing a paper on the singularity hypothesis, and I'm looking for recent (i.e., since late 2024) expressions of skepticism about it from philosophers or ML folks that I can quote. The better known the person, the better! Any ideas?

03.06.2025 10:59 – 👍 1    🔁 0    💬 0    📌 0
Preview: Cameron Domenico Kirk-Giannini, Gaslighting and Epistemic Competence - PhilPapers
Anti-intentionalist, purely epistemic accounts of gaslighting that center its dilemmatic structure have a range of attractive features. However, they appear to face an overgeneration problem: if there...

Social philosophers! Check out this short new paper in which I revisit my dilemmatic account of gaslighting and think about what kind of evidence should lead us to doubt our epistemic competence in different domains.

philpapers.org/rec/KIRGAE

04.05.2025 14:01 – 👍 2    🔁 0    💬 0    📌 0

Excited to share a new review paper I wrote with William D'Alessandro on the range of philosophical and technical work currently being done on AI safety! Forthcoming at Philosophy Compass.

philpapers.org/archive/DALA...

30.04.2025 14:42 – 👍 2    🔁 0    💬 0    📌 0
Preview: AI safety: a climb to Armageddon? - Philosophical Studies
This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, th...

Third, in "AI safety: A climb to Armageddon?" Herman Cappelen, Josh Dever, and John Hawthorne ask a question that gets far too little attention in AI safety: Could the work we're doing simply be ensuring that safety failures will be worse when they occur?

link.springer.com/article/10.1...

07.03.2025 05:26 – 👍 2    🔁 0    💬 0    📌 0

Those without institutional access can download Sven's paper here:

cd.kg/wp-content/u...

07.03.2025 05:25 – 👍 0    🔁 0    💬 0    📌 0
Preview: Off-switching not guaranteed - Philosophical Studies
Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of human-AI cooperation in which AI agents always defer to humans because they are uncertain about our preferences. I explain two rea...

Second, in "Off-Switching Not Guaranteed," Sven Neth describes a number of important problems for Stuart Russell's idea of provably beneficial AI.

link.springer.com/article/10.1...

07.03.2025 05:25 – 👍 1    🔁 0    💬 1    📌 0
Preview: Bias, machine learning, and conceptual engineering - Philosophical Studies
Large language models (LLMs) such as OpenAI's ChatGPT reflect, and can potentially perpetuate, social biases in language use. Conceptual engineering aims to revise our concepts to eliminate such bias....

First, in "Bias, Machine Learning, and Conceptual Engineering," Rachel Rudolph and colleagues explore the connections between LLM training and conceptual engineering, with special attention to questions of bias.

link.springer.com/article/10.1...

07.03.2025 05:24 – 👍 1    🔁 0    💬 0    📌 0

Excited to share *three* important new papers from the special issue on AI safety!

07.03.2025 05:24 – 👍 0    🔁 0    💬 0    📌 0
Preview: AI wellbeing - Asian Journal of Philosophy
Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little ...

It's finally out! 👉 Click to find out whether YOUR AI assistant is a moral patient!

In all seriousness, though, this is an important project and I hope it helps advance discussion of the possible moral properties of artificial systems.

link.springer.com/article/10.1...

01.02.2025 22:03 – 👍 2    🔁 0    💬 0    📌 0
Preview: How to Solve the Gender Inclusion Problem | Hypatia | Cambridge Core

My paper "How to Solve the Gender Inclusion Problem" is now typeset and officially citable!

www.cambridge.org/core/journal...

24.01.2025 14:05 – 👍 2    🔁 0    💬 0    📌 0

Those without institutional access can find the paper here: www.cd.kg/wp-content/u...

22.01.2025 17:37 – 👍 1    🔁 0    💬 0    📌 0
Preview: Deception and manipulation in generative AI - Philosophical Studies
Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance sprea...

Excited to share this paper by Christian Tarsney from the special issue on AI safety I'm editing. It defends a useful new account of deception and manipulation in AI systems.

link.springer.com/article/10.1...

22.01.2025 17:37 – 👍 6    🔁 0    💬 1    📌 0

We argue that the best way to think about AI safety includes *both* work on catastrophic risks and work that's traditionally been situated within AI ethics.

This matters because disciplinary boundaries affect who's treated as an expert and who gets to help set policy.

13.01.2025 19:26 – 👍 4    🔁 0    💬 0    📌 0

By now you've probably heard about AI safety – but have you ever wondered what AI safety actually *is*, or how it's related to AI ethics?

Well, you're in luck! Jacqueline Harding and I have a new paper answering these questions.

philpapers.org/archive/HARW...

13.01.2025 19:26 – 👍 2    🔁 0    💬 1    📌 0

Our goal in the paper is to provide a readable introduction to the main issues in this area, together with references to relevant literature and some of our own takes on the state of the debate. We hope the paper will serve as a go-to reference on AI risk arguments for the next couple of years.

24.01.2024 20:12 – 👍 0    🔁 0    💬 0    📌 0
Preview: Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini, Artificial Intelligence: Argument...
Recent progress in artificial intelligence (AI) has drawn attention to the technology's transformative potential, including what some see as its prospects for causing large-scale harm. We review two...

Philosophers and AI folks – I'm excited to share a new paper on AI and catastrophic risk, coauthored with Adam Bales and Bill D'Alessandro, which is now forthcoming at Phil Compass!

philpapers.org/rec/BALAIA-5

24.01.2024 20:11 – 👍 0    🔁 0    💬 1    📌 0
Preview: AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?
The Turing test, first proposed in 1950 by Alan Turing, was framed as a test that could supposedly tell us whether an AI system could 'think' like a human.

I wrote a short explainer-type piece on the Turing Test with my colleague Simon Goldstein!

16.10.2023 22:04 – 👍 0    🔁 0    💬 0    📌 0

Hello, world!

11.10.2023 20:42 – 👍 7    🔁 0    💬 0    📌 0
