#VisionScience community - please repost and share with trainees who might be interested in the job!
17.11.2023 00:22

@neurotheory.bsky.social
Theoretical and Computational Neuroscientist at UCSD, www.ratrix.org. Neural coding, natural scene statistics, visual behavior, value-based decision-making, statistical & non-statistical inference, history & philosophy of science, research rigor, pedagogy.
We are hiring! Opening for a lab manager/animal technician to support behavioral and neurophysiological studies of vision, decision-making and value-based choice in rats and mice. Learn more about the lab at www.ratrix.org; learn more about the job or apply at: employment.ucsd.edu/laboratory-t...
[Image: A rat views a visual stimulus in a psychophysical task. Image credit: Philip Meier.]
17.11.2023 00:20
[Image: Geisel Library at UCSD]
Opening for Postdoc in Data Science at UCSD.
Study multi-timescale correlations and non-stationarity in neural/behavioral data; assess implications for statistical inference; develop improved methods robust to these effects. Co-advisor Armin Schwartzman. Contact: preinagel@ucsd.edu, armins@ucsd.edu
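
For a flavor of why such correlations matter for inference, here is a minimal sketch (my own illustration, not part of the ad), assuming AR(1) serial dependence; all parameter values are arbitrary.

```python
# Toy demonstration: AR(1)-correlated samples analyzed as if independent
# inflate the false positive rate of a one-sample t-test, even though the
# null (true mean zero) holds in both conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ar1(n, phi, rng):
    """n samples of a zero-mean AR(1) process: x[t] = phi*x[t-1] + noise."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

n_sim, n, alpha = 5000, 50, 0.05
for phi in (0.0, 0.5):  # phi = 0 is i.i.d.; phi = 0.5 adds serial correlation
    fp = np.mean([stats.ttest_1samp(ar1(n, phi, rng), 0.0).pvalue < alpha
                  for _ in range(n_sim)])
    print(f"phi = {phi}: false positive rate ~ {fp:.3f}")
```

The correlated condition rejects well above the nominal 5%, which is the kind of miscalibration the posting proposes to quantify and correct.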
Sure. Understanding the effects of each quantitatively and contextually aids interpretation of metascience data, e.g., the likely impact on past literature where a practice has been used; and helps focus attention and education on those practices that are actually responsible for the most harm.
02.11.2023 15:16

I do mention sequential analysis, and cite your fine paper and others, for those who want to learn more. If my simulation reflects their existing practice, prespecifying what they already do may be their preferred choice, and is valid. Bonferroni kills power, so it would be a particularly bad choice.
02.11.2023 15:11

Others may think "I had no idea reporting p values constrained me in such a way! I shouldn't be reporting p values." A difference in our perspective is that I work in fields where most work is not even intended to be confirmatory; people just give p values because they're told to, and don't know better.
02.11.2023 15:07

Actually I think I have a higher opinion of my readers than you do; I don't think they'll glibly walk away with a superficial take. Some will come away with: "I really care about controlling false positives, but cool, I could get more flexibility and statistical power with sequential analysis."
02.11.2023 15:04

Important difference - this would be w = 19 in my simulations (my Fig 1). I claim FP < α(1 + w/2), or < 0.525, which accords. They got much lower than this because they limited the maximum sample size. I focused on the effect of limiting w (only adding data if p is near alpha), which I think reflects common practice.
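
A minimal sketch (my own construction, not the paper's code) of the policy this post describes, assuming the decision rule "add a batch whenever alpha < p <= w*alpha"; the batch size, cap on batches, and variable names are my assumptions.

```python
# N-hacking with decision window w, under the null (true mean is zero):
# start small and add a batch whenever the result is non-significant but
# p <= w*alpha. The post's claimed bound on the false positive rate is
# alpha*(1 + w/2); capping the number of batches (done below for
# practicality) pushes the realized rate lower still.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, w = 0.05, 19                  # w = 19: pursue any result with p <= 0.95
n0, batch, max_batches = 10, 5, 50   # assumed sample sizes (not from the post)

def n_hacked_p(rng):
    x = rng.standard_normal(n0)
    for _ in range(max_batches):
        p = stats.ttest_1samp(x, 0.0).pvalue
        if p <= alpha or p > w * alpha:   # stop: significant, or out of range
            return p
        x = np.append(x, rng.standard_normal(batch))  # "run a few more"
    return stats.ttest_1samp(x, 0.0).pvalue

fp = np.mean([n_hacked_p(rng) <= alpha for _ in range(10000)])
print(f"FP rate ~ {fp:.3f}; bound alpha*(1 + w/2) = {alpha * (1 + w / 2):.3f}")
```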
02.11.2023 14:58

Bottom line: collecting more data to shore up a finding isn't bad science, it's just bad for p-values. My purpose is not to justify or encourage p-hacking, but rather to bring to light some poorly appreciated facts, and enable more informed and transparent choices. Plus, it was a fun puzzle. [3/3]
01.11.2023 23:45

With the goal of teaching the perils of N-hacking, I simulated a realistic lab situation. To my surprise, the increase in false positives was slight. Moreover, N-hacking increased the chance a result would be replicable (the PPV). This paper shows why and when this is the case. [2/3]
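
For the PPV point, a rough sketch of how one might measure it, reusing the incremental policy above but with a narrow window (w = 2, i.e., only pursuing p "near" alpha); the prior on true effects and the effect size are assumptions of mine, and the comparison shifts with them.

```python
# Compare the positive predictive value (PPV) of fixed-N testing against
# incremental sampling that adds data only when alpha < p <= w*alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, w = 0.05, 2.0                 # narrow window: pursue only p <= 0.10
n0, batch, max_batches = 10, 5, 50
prior_true, effect = 0.1, 0.5        # assumed: 10% real effects, d = 0.5

def final_p(mu, incremental, rng):
    x = rng.normal(mu, 1.0, n0)
    if incremental:
        for _ in range(max_batches):
            p = stats.ttest_1samp(x, 0.0).pvalue
            if p <= alpha or p > w * alpha:
                return p
            x = np.append(x, rng.normal(mu, 1.0, batch))
    return stats.ttest_1samp(x, 0.0).pvalue

for incremental in (False, True):
    hits = {True: 0, False: 0}       # significant results, by ground truth
    for _ in range(20000):
        real = rng.random() < prior_true
        if final_p(effect if real else 0.0, incremental, rng) <= alpha:
            hits[real] += 1
    ppv = hits[True] / max(hits[True] + hits[False], 1)  # PPV = TP/(TP+FP)
    print(f"incremental={incremental}: PPV ~ {ppv:.3f}")
```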
01.11.2023 23:42

If a study finds an effect that is not significant, it is considered a "questionable research practice" to collect more data to reach significance. This kind of p-hacking is often cited as a cause of unreproducible results. This is troubling, as the practice is common in biology. [1/3]
01.11.2023 23:41

[Image: "The Crow and the Pitcher" illustration by Kendra Shedenhelm]
Pleased to announce that my paper is out on the consequences of collecting more data to shore up a non-significant result (N-hacking). TLDR: although it's not correct practice, it's not always deleterious. (illustration by Kendra Shedenhelm) doi.org/10.1371/jour...
01.11.2023 23:39

I think we have run up against the limitations of this medium. I actually have no idea what you meant by that post. But what I take away is that we are interested in a related question, we appear to have different views and referents, and it would be worth digging into in some less staccato format.
23.10.2023 04:31

Using methods of measurement whose mechanisms are well understood, thus whose assumptions and limitations are front-of-mind; also frequent use of triangulation: testing a causal model or theory by many distinct techniques that have different assumptions... I can give examples but not briefly. (2/2)
22.10.2023 21:09

it's hard to articulate examples in a few words, as a key characteristic is not thinking "science" can ever be a short, simple, one-shot thing. It takes a long series of purposeful observations, theory, highly constrained models that make specific predictions, many many controlled experiments, (1/2)
22.10.2023 21:01

looks interesting but can't tell from a quick look if it's the same methodology I had in mind, or one that includes both sides of the debate we were having, or something else entirely
22.10.2023 20:53

There may be other reasons to study what scientists do in fields with poor track records, or fields with no track record.
But I think Phil of Sci aimed at improving practice can be advanced by studying what *good* or *great* scientists actually do, if you can identify who those are.
Of course there is a normative judgement there - these scientific fields have been incredibly successful at generating secure knowledge (discoveries that stood the test of time, replicated endlessly, generalized broadly, parsimoniously explained much, led to successful practical applications...)
22.10.2023 17:19

Opposite perspective. As one trained in classical biochemistry, genetics and (early/foundational) molecular biology, I think scientists in these fields have/had some brilliant, innovative, rigorous and productive methods that are not yet well codified, nor yet understood in philosophy or statistics.
22.10.2023 17:14

P.S. examples: some biochemistry, molecular biology, basic research on experimental model organisms. E.g., almost no p values in the landmark papers of molecular biology (Meselson & Stahl, Hershey & Chase, Jacob & Monod, Nirenberg & Matthaei... not even the overtly statistical Luria & Delbruck).
25.09.2023 16:39

the rigor and reliability of the research could be very high, because the p values were not the basis for the conclusions. I think eliminating the reporting of performative p values is very important. Otherwise people outside the field are misled about the epistemic basis and status of the claims.
25.09.2023 16:12

If you ask them what the p value does tell them, they mostly think it is the PPV. This is what they want to know. So with better education they probably wouldn't use p values, and might use Bayesian statistics. In such subfields most p values in the literature are invalid (post hoc), and yet [5/6]
25.09.2023 16:06

Some scientists compute p values as an afterthought, after drawing conclusions, while writing papers. They do it because they think it is expected or required. But often they have strong justifications for their conclusions, which are given in the paper, and which are what sway their readers. [4/n]
25.09.2023 16:01

I asked colleagues how much they rely on p values to decide if they believe something. The only ones who said p was important were trained in psychology (and they said it was the ONLY reason to believe something). Which is interesting. Psychology's excessive reliance may have caused their woes. [3/n]
25.09.2023 15:51

such as epidemiologists, field ecologists, pre-clinical/clinical researchers rely on p values, because they study things with large variability relative to effects, substantial risk of chance associations, and for other reasons. Many other biologists only use p values if/when they are forced to. [2/n]
25.09.2023 15:47

Can't resist this bait. If there's anything I've learned from wading into these waters it's that statements about "scientists" are senseless. Practice and norms differ wildly between disciplines. I used to argue "well, biologists ___..." but that's even too vast an umbrella. Some biologists [1/n]
25.09.2023 15:42

actually, on a re-read, I know that paper.
I will be surprised if we actually disagree; I think we're more likely talking about different claims.
will definitely read!
24.09.2023 15:16

I used to think that, but I was persuaded that NHST requires prospective design: you can only know the type I error rate of the null if the null is well defined, and post hoc you can't know what procedure you might have followed if the data had been otherwise. Did you mean to challenge that view?
24.09.2023 15:12