
Duygu Uygun-Tunc

@uyguntunc.bsky.social

Philosophy of science, epistemology, philosophy of mind. Collegiate Assistant Professor and Harper-Schmitt Fellow @ UChicago. PhD from Universität Heidelberg & Helsinki University. www.uyguntunc.com

1,200 Followers  |  1,012 Following  |  55 Posts  |  Joined: 25.09.2023

Latest posts by uyguntunc.bsky.social on Bluesky

The Paul Meehl Graduate School Meta Research Symposium 2025 is on October 17. Keynote speakers are @uyguntunc.bsky.social and @lspitzer.bsky.social. The symposium is free for everyone to attend, even if you are not a PhD student. An extra workshop on the afternoon of the 16th will be announced soon.

25.06.2025 07:49 — 👍 8    🔁 6    💬 1    📌 0

Yeah, obviously. But the important question is, can this be a winning strategy to create an alternative science? I highly doubt it. In the long run, selective application of rigor to bend facts is basically shooting yourself in the foot.

24.05.2025 22:44 — 👍 3    🔁 0    💬 0    📌 0

Setting aside the fact that the history of science is not neatly organisable under one methodological principle, this is not the right time to smuggle in methodological anarchism. We need rigor more than ever, to prevent wacky science from gaining ground and sidelining legitimate science.

24.05.2025 22:35 — 👍 2    🔁 0    💬 0    📌 0

We discuss this in a dedicated section (6). Two questions are being conflated here: one concerns error control, the other evidential readiness. Values clearly apply to the second question but should not influence how we answer the first.

08.05.2025 01:13 — 👍 1    🔁 0    💬 0    📌 0
Is the value-free ideal of science untenable? Part I: Inductive risk - PhilSci-Archive

In our new paper with @mntunc.bsky.social (philsci-archive.pitt.edu/id/eprint/25...) we reassess the Inductive Risk Argument (IRA) and its implications for the value-free ideal of science. We argue that IRA's call for social value-encroachment in scientific inference is mistaken. Here's an overview:

02.05.2025 12:32 — 👍 4    🔁 1    💬 1    📌 0

Finally saw Aristotle's Lyceum. Not much to see, really. It was fun to imagine where he might have sat, though :)

04.05.2025 21:54 — 👍 4    🔁 0    💬 0    📌 0

But it is important to note that convergent evidence can be misleading in the way consensus can be misleading: if convergence is not robust, that is, if the errors of different lines of inquiry are not independent, then convergent evidence strengthens bias.

04.05.2025 13:54 — 👍 0    🔁 0    💬 0    📌 0
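The point about non-independent errors can be made concrete with a small simulation sketch (not from the post; the additive error model and all parameters are illustrative): studies that share a common bias can all "converge" on the same wrong answer far more often than studies whose errors are independent.

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_convergence_rate(shared_bias_sd, n_sims=2000, n_studies=5,
                              n_per_study=100, true_effect=0.0):
    """Fraction of simulated literatures in which every study finds a
    positive mean effect even though the true effect is zero."""
    count = 0
    for _ in range(n_sims):
        # An error common to all studies (e.g., a shared flawed method).
        shared_bias = rng.normal(0.0, shared_bias_sd)
        means = [rng.normal(true_effect + shared_bias, 1.0, n_per_study).mean()
                 for _ in range(n_studies)]
        if all(m > 0 for m in means):
            count += 1
    return count / n_sims

# Independent errors: all five studies agree on a spurious positive
# effect only about 3% of the time.
print("independent errors:", spurious_convergence_rate(shared_bias_sd=0.0))
# Shared (non-independent) errors: convergence on the same wrong
# conclusion becomes far more common.
print("shared bias       :", spurious_convergence_rate(shared_bias_sd=0.5))
```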

I may have missed the whole context, but the term "convergent evidence" is more reflective of the inquiry process precisely because it does not misleadingly focus on its social aspect, which is a consequence, not a cause. When people say science is a 'consensual' activity, they miss its epistemology.

04.05.2025 13:42 — 👍 0    🔁 0    💬 1    📌 0

25/We believe these arguments undermine the epistemic insufficiency thesis. But even if one remains unconvinced, there are further critical problems with legitimate value encroachment, which we explore separately. We'll address those issues in the next thread.
#PhilSci #metasci

02.05.2025 12:32 — 👍 2    🔁 0    💬 0    📌 0

24/Such judgments are fallible, but they aim at epistemic values like coherence or accuracy, which are thought to be reliable guides to better hypotheses, whereas social values have no such property.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

23/When scientists judge that one piece of evidence is more compelling than another, or that one method better tracks truth, they exercise epistemic discretion. So, rational disagreement in science is possible.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

22/Science progresses because it remains open to challenge and revision. Letting social or ethical concerns lead to premature acceptance or rejection of scientific claims threatens this process, leading to epistemic stagnation masked as social legitimacy or moral progress.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

21/The 2nd problem is that when values influence thresholds, they can force scientific questions to settle prematurely, foreclosing debate, testing, and revision. This undermines science’s core virtue: its openness to self-correction.

02.05.2025 12:32 — 👍 1    🔁 0    💬 2    📌 0

20/Adjusting the need for scientific rigor based on social value-judgments risks wishful thinking, because one would systematically lower the standards for pet hypotheses and increase them for alternatives.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

19/Thus, fallibility presents an ever-present challenge for science, but it is a challenge best met by reinforcing epistemic norms, not by opening inference to moral or social encroachment. Otherwise, we risk 1) wishful thinking and 2) premature closure.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

18/ On the contrary, the fallibility of scientific knowledge increases, rather than weakens, the need for epistemic discipline. If science is prone to error, introducing non-epistemic values into evidential reasoning only amplifies risks of distortion and misjudgment.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

17/Instead, they merely reaffirm what **fallibilism** already implies: scientific reasoning involves uncertainty, vagueness, and the need for judgment. But good scientific judgment need not involve extra-scientific considerations.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

16/Therefore, appeals to arbitrariness fail to establish IRA’s stronger claim: that non-epistemic values must systematically influence the internal inferential standards of science.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

15/In fact, **fallibilism** about scientific knowledge already prepares us for this: scientific judgments can be uncertain and revisable, but they can still be disciplined by epistemic norms alone.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

14/Vagueness simply means that **reasonable people may disagree** near the margins. It does not mean that there is no right answer, nor that decisions must be guided by external, non-epistemic considerations.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

13/Similarly, scientific evidential thresholds ("sufficient evidence") can be vague but they are by no means arbitrary. They are guided by epistemic standards, while open to rational contestation, and they usually reflect a strong disciplinary consensus.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

12/The paradox suggests that vagueness leads to indeterminate boundaries, but it doesn't imply that these concepts are arbitrary. In fact, there are many proposed solutions to this paradox, showing that vagueness doesn’t require arbitrary judgments.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

11/The Sorites Paradox highlights a problem with vague predicates: if one grain of sand isn't a heap, and adding one grain doesn't change that, when does a heap actually form? It challenges how we define thresholds for vague terms like "heap" or "significant effect".

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0
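Schematically (my rendering, not from the thread), with H(n) read as "n grains form a heap", the paradox runs:

```latex
\neg H(1), \qquad \forall n\,\bigl(\neg H(n) \rightarrow \neg H(n+1)\bigr)
\;\;\therefore\;\; \forall n\, \neg H(n)
```

Each premise looks true and the inference is ordinary induction, yet the conclusion, that no number of grains ever forms a heap, is plainly false.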

10/Now, some might argue that because evidential thresholds involve vagueness, threshold-setting must be arbitrary. The classic Sorites Paradox is invoked: when is evidence "enough"? But vagueness alone does not entail arbitrariness.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

9/Second, there is **constrained arbitrariness**, where the precise location of the threshold is somewhat vague or flexible but is justified by clear epistemic norms (e.g., reproducibility, accuracy). Scientific thresholds often fall into this second category.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

8/ First, a criterion is arbitrary in one sense if one simply sets the success criteria post hoc. Evidential thresholds typically aren't arbitrary in this sense: thresholds are often set through shared practices and standards within a research community. Even if imperfect, these practices and standards are not without reason.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

7/ We argue that this move is problematic. **Arbitrariness** comes in different senses, and there is no sense that both applies to how scientists set evidential thresholds and lends support to IRA.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

6/So, IRA says social or moral values must guide where to set evidential thresholds, by weighing the practical consequences of different kinds of error (e.g., acceptable Type I and Type II error rates).

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0
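A minimal numerical sketch (not from the paper; the effect size and sample size are assumed purely for illustration) of the trade-off IRA appeals to: moving the evidential threshold (alpha) shifts the balance between Type I errors (false positives) and Type II errors (misses).

```python
from statistics import NormalDist

nd = NormalDist()            # standard normal distribution
effect = 0.5                 # assumed true standardized effect size (illustrative)
n = 50                       # assumed sample size per group (illustrative)
se = (2 / n) ** 0.5          # standard error of a two-sample mean difference

for alpha in (0.10, 0.05, 0.01, 0.001):
    z_crit = nd.inv_cdf(1 - alpha)               # one-sided critical value
    power = 1 - nd.cdf(z_crit - effect / se)     # P(reject H0 | effect is real)
    print(f"alpha={alpha:<6}  Type I rate={alpha:<6}  Type II rate={1 - power:.3f}")
```

Stricter thresholds buy fewer false alarms at the cost of more misses; IRA reads the choice between those costs as a value judgment, which is the claim the thread goes on to dispute.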

5/First, Epistemic Insufficiency. IRA highlights that evidential sufficiency thresholds are not dictated by the *internal* norms and standards of science alone. Scientific inference is not algorithmic, hence evidential thresholds are **arbitrary**.

02.05.2025 12:32 — 👍 1    🔁 0    💬 1    📌 0

4/Thus IRA rests on two claims: (1) Epistemic Insufficiency: epistemic norms alone can’t settle evidential thresholds. (2) Legitimate Value-Encroachment: non-epistemic values should influence scientific inferences. We challenge both claims.

02.05.2025 12:32 — 👍 2    🔁 0    💬 1    📌 0
