Academic Chatter

@academic-chatter.bsky.social

Join #AcademicChatter for support & community in higher education.

13,201 Followers  |  552 Following  |  81 Posts  |  Joined: 10.06.2023

Posts by Academic Chatter (@academic-chatter.bsky.social)

US citizen researching in New Zealand (hoping to stay after my studies 🤞🏽).

01.03.2026 05:49 — 👍 1    🔁 1    💬 0    📌 0

One of the things I value about this space is its international reach.
If you’re happy to share, where are you posting from? 🌎❤️

28.02.2026 13:42 — 👍 7    🔁 1    💬 1    📌 0


“once misinformation from AI-generated summaries remains uncorrected and seeps into published theses, research papers, and other outputs, it could contribute to a loop of misinformation.”

24.02.2026 20:32 — 👍 114    🔁 52    💬 4    📌 2
Preview
Assistant Teaching Professor in Computational Social Science and Cognitive Science, University of California, San Diego is hiring. Apply now!

Our department is hiring an Assistant Teaching Professor!! This is a joint-appointed position with Computational Social Sciences (css.ucsd.edu). It's 75+ degrees F and sunny today, just thought I'd mention apol-recruit.ucsd.edu/JPF04461

27.02.2026 14:42 — 👍 39    🔁 27    💬 1    📌 2

This recent RCT of an "AI stethoscope" claims the technology "shows promise" for diagnosing cardiovascular conditions.

It does not.

It is a textbook example of the risks of conducting unprincipled 'per protocol analyses'. Once again, peer review at a major medical journal has failed.

🧵 1/

25.02.2026 16:44 — 👍 414    🔁 184    💬 7    📌 31
Preview
What oral histories can teach us about effective environmental research - LSE Impact What can oral histories tell us about the tacit knowledge required for successful collaborative environmental and sustainability research?

👀 ICYMI: "Through the oral histories, we capture aspects of the experience of research that might be difficult to access in other ways."

✍️ @angecass.bsky.social & Paul Merchant

#ResearchMethods #Sustainability #AcademicSky

25.02.2026 16:44 — 👍 12    🔁 7    💬 0    📌 4

There seems to be a whole (older) cohort of eminent social psychologists who essentially do not understand the concept of sampling error, and I wonder to what extent that lack of understanding was an adaptive trait for a career.

24.02.2026 07:42 — 👍 82    🔁 14    💬 5    📌 3
Post image

198 effect sizes in ego depletion research showed an average effect size of d = 0.62. Preregistered large replications (including some by original authors) yielded an effect size of 0. No one has been able to offer any explanation for this huge research waste other than massive p-hacking.

23.02.2026 20:03 — 👍 126    🔁 45    💬 5    📌 7

Higher Ed is not about job training! STEM has long benefitted from this rhetoric, but we are all threatened by this concept. The University is meant to be broad and forward-thinking, not something that follows the trends of the market, which are inherently short-sighted and lacking in imagination.

23.02.2026 08:34 — 👍 18    🔁 5    💬 1    📌 0

Peer review is not there for you to tell experts in the field how you'd prefer the style of the paper to be, or what method or analysis you have a clear bias for.

Your job is to check for correctness, spot errors or gaps, and protect the integrity of the scientific method, not to piss on others.

23.02.2026 09:57 — 👍 57    🔁 11    💬 4    📌 4
Microsabbaticals at Princeton Psychology Microsabbaticals at Princeton Psychology provide a several-week-long visit to our department for early-career faculty. The program focuses on early-career scholars who would benefit from interactions ...

Are you a junior faculty member interested in spending 2-4 weeks at Princeton Psych? Consider applying for our Microsabbatical program! It’s a fully funded visit for professional development and creating long-term collaborations.
psych.princeton.edu/diversity/mi...

18.02.2026 20:04 — 👍 61    🔁 49    💬 0    📌 2
Preview
Good impact narratives must connect to our everyday lives - LSE Impact Research impact is often communicated in abstract and bureaucratic language. To be meaningful it needs to connect to people's everyday lives.

👀 ICYMI: "despite the emphasis on impact, its meaning remains abstract for non-specialist audiences."

#Impact #SciComm #PublicEngagement

19.02.2026 17:14 — 👍 4    🔁 2    💬 0    📌 0
Reads: Most importantly, there is no AI without massive financial and ideological backing. It is therefore pointless to discuss its techniques or capabilities without asking who controls it, who benefits from it, who builds and deploys it, and what it is doing in the world. As Stafford Beer (2002) argued, the purpose of a system is what it does.


Reads: Though less explicit than Thiel’s call to replace politics with technology, major tech firms have effectively privatised core digital public goods. Platforms like Facebook, Google Search, and OpenAI’s ChatGPT operate at infrastructural scale in Ireland, shaping information, communication, and access to knowledge. Yet their algorithms remain opaque and their governance remains private, with minimal democratic accountability to the public who depend on them, effectively ceding aspects of the democratic process to commercial interests.

The monopolization of digital spaces has turned democracy into something the highest bidder can buy and is degrading the digital public goods themselves. As the AI industry, social media and search platforms grow more extractive and less trustworthy, they erode the foundations of democratic life: trust, dialogue, and accountability, blurring the line between truth and falsehood.

An example is the deepfake video falsely showing President Catherine Connolly withdrawing from the presidential race last October, which amassed over 160,000 Facebook views before being removed.

GenAI’s non-deterministic, stochastic architecture produces plausible output without regard for accuracy or truth.

This makes generative AI a societal disaster and a major threat to truth, democratic processes, information ecosystems, knowledge production, and the social fabric.


Reads: For truth, democracy, and the rule of law to endure in the AI era, we need to cultivate an ecosystem of transparency and accountability. Yet governance by algorithms inherently places our digital public squares and democratic processes in the hands of those building these systems in line with their political and profit-seeking agendas. Without real mechanisms in place, talk of transparency and accountability is an empty gesture.

An internal Meta memo outlining plans to launch facial recognition in smart glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns” illustrates how those advocating for accountability are under-resourced, retaliated against, and targeted.

Large tech and AI companies, despite selling promises of innovation and societal benefit, monetize and undermine the very society they claim to serve. What is needed is not just regulation, but active enforcement.

Given the track record of tech giants, stricter regulation and enforcement is not “anti–freedom of speech” or anti-competitive. It is one of the clearest ways governments can show they serve the public interest. After all, innovation that disregards truth and democratic processes risks undermining democracy itself.


I appeared as an expert witness before the Joint Committee on AI at the Houses of the Oireachtas (the parliament of Ireland) to discuss "AI: truth and democracy" this morning. You can read my opening statement here: www.oireachtas.ie/en/publicati...

17.02.2026 15:01 — 👍 159    🔁 68    💬 6    📌 5
Preview
Why are UK universities failing? - LSE Impact The HE sector in the UK faces the prospect of a university going into administration. How have universities fallen so low and is change possible?

🗣️ "The sector has become a hybrid institution caught between public expectations and market imperatives, governed by leaders rewarded for financial agility, rather than intellectual integrity."

#AcademicSky #HigherEducation

15.02.2026 10:39 — 👍 22    🔁 8    💬 0    📌 4

New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.

13.02.2026 16:50 — 👍 97    🔁 52    💬 6    📌 7

A variation: Scientists who claim they’re “not interested in causality” because they assume the term only applies to deterministic, law-like relationships that are unrealistic in their field. Instead, they’re interested in how “X drives Y”, the effects of X, the “extent to which X matters for Y”.

11.02.2026 06:54 — 👍 93    🔁 17    💬 8    📌 2

🧠 We’ve widened the entry criteria for our MSc in Cognitive Neuroscience at Durham: www.durham.ac.uk/study/course...

We now welcome students from a broad range of backgrounds (Psychology, Biology, Engineering, Physics, and more), as Neuroscience thrives on diverse perspectives.

11.02.2026 22:39 — 👍 12    🔁 4    💬 2    📌 0

Insights from the Perspectives on Scientific Error (#PSE8) conference

🧵 Day 1: what a thought-provoking morning.

Replication remains one of the cornerstones of scientific progress, yet replication studies are still notoriously difficult to publish. Many never make it into print at all.

11.02.2026 09:07 — 👍 3    🔁 1    💬 1    📌 0
Preview
Rethinking Economics, the movement changing how the subject is taught Born of student disquiet after the 2008 crash, the group says it is reshaping economists’ education

“By demanding that economics education should be more pluralist, ethically conscientious, historically aware & oriented towards the real world, Rethinking Economics exposed the staggering deficiency in how economists are educated & induced significant changes in economics teaching around the world”

11.02.2026 08:41 — 👍 30    🔁 10    💬 0    📌 1

Dear Blueskies, question: could you recommend a recent-ish, peer reviewed study on implicit/unconscious bias in faculty hiring? I know the classics, but I am wondering what new(ish) research is out there that could be used in a workshop setting. Thank you very much! #AcademicSky #EduSky 🗃️

10.02.2026 23:33 — 👍 41    🔁 34    💬 1    📌 0
Post image

We just dropped the next 4 essays in Reframing Impact: AI Summit 2026, our new series with Aapti Institute and
@themaybe.org – this time with Nikhil Dey, @timnitgebru.bsky.social, @audreyt.org, and @abeba.bsky.social 🧵

10.02.2026 17:29 — 👍 9    🔁 7    💬 5    📌 1

The UK *is* losing a generation of scientists.

I know lots of brilliant people who have left their jobs / the country because of the limited jobs & funding.

Once lost, they cannot ever be replaced.

The UK government is overseeing the death of UK academic science, and it doesn't seem to care.

09.02.2026 21:57 — 👍 99    🔁 38    💬 6    📌 1

I'm on the edge of the "R Statistics Community" in what looks like an @societyforepi.bsky.social gathering on causal inference with @lizstuart.bsky.social, @jeremylabrecque.bsky.social, @jlrohmann.bsky.social, @idiaz.bsky.social, @mattpfox.bsky.social, @miguelhernan.org, & @noahgreifer.bsky.social

09.02.2026 13:53 — 👍 25    🔁 2    💬 2    📌 0
Preview
Apprenticeships in book publishing matter more than ever Publishing needs to ditch old assumptions and embrace the power of apprenticeships, says writer and communications leader Elinor Potts

"Studying for an apprenticeship breaks down siloes and widens the perspective to a broader marketplace of ideas and ways of working."

@elinorpotts.bsky.social writes on the value of apprenticeships in book publishing for @thebookseller.com

✅ Read more here: www.thebookseller.com/comment/appr...

09.02.2026 09:28 — 👍 4    🔁 4    💬 0    📌 0
Preview
Connecting Research and Learning Across Disciplines | National Humanities Center By combining the analytical rigor and critical thinking skills developed within the sciences and the humanities, we can address complex societal challenges.

We’re thrilled to continue our partnership with @howard.edu's College of Arts & Sciences for β€œConnecting Research and Learning Across Disciplines,” an institute for faculty that will be held August 3–7, 2026, on Howard’s historic campus. Registration closes on 4/30/26. Please share.

06.02.2026 14:08 — 👍 3    🔁 3    💬 0    📌 0
About the PhD: 
Audits and evaluation of AI systems — and the broader context that AI systems operate in — have become central to conceptualising, quantifying, measuring and understanding the operations, failures, limitations, underlying assumptions, and downstream societal implications of AI systems. Existing AI audit and evaluation efforts are fractured, done in a siloed and ad-hoc manner, with little deliberation and reflection around conceptual rigour and methodological validity.

This PhD is for a candidate who is passionate about exploring what conceptually cogent, methodologically sound, and well-founded AI evaluation and safety research might look like. This requires grappling with questions such as:

    What does it mean to represent “ground truth” in proxies, synthetic data, or computational simulation?
    How do we reliably measure abstract and complex phenomena?
    What are the epistemological or methodological implications of the quantification and measurement approaches we choose to employ? In particular, what underlying presuppositions, values, or perspectives do they entail?
    How do we ensure the lived experiences of impacted communities play a critical role in the development and justification of measurement metrics and proxies?

Through exploration of these questions, the candidate is expected to engage with core concepts in the philosophy of science, history of science, Black feminist epistemologies, and similar schools of thought to develop an in-depth understanding of existing practices, with the aim of applying it to advance shared standards and best practice in AI evaluation.

The candidate is expected to integrate empirical (for example, through analysis or evaluation of existing benchmarks) or practical (for example, by executing evaluation of AI systems) components into the overall work.


are you displeased with today’s AI safety evaluation landscape and curious about what greater conceptual clarity, methodological soundness, and rigour in AI evaluation could look like? if so, consider coming to Dublin to pursue a PhD with me

apply here: aial.ie/hiring/phd-a...

pls repost

15.01.2026 11:55 — 👍 190    🔁 140    💬 6    📌 12

Report back afterwards? Your poster looks very interesting!

06.02.2026 17:08 — 👍 2    🔁 0    💬 1    📌 0
Preview
#scientificerror #methodology #statistics #investigativeinterviewing #liedetection #researchmethods | Dr Cody Porter Our poster is almost ready for the Perspectives on Scientific Error (PSE8) conference, taking place next week! The poster focuses on why negative binomial regression models are important for research...

Getting ready for the Perspectives on Scientific Error (PSE8) conference, taking place next week!

@mzloteanu.bsky.social @mattansb.msbstats.info

www.linkedin.com/posts/cody-n...

06.02.2026 15:36 — 👍 3    🔁 3    💬 1    📌 0
University to women: be more assertive
Women: are more assertive
University: there’s been a complaint


A keeper moment from @academic-chatter.bsky.social
#ilooklikeasurgeon

01.02.2026 22:59 — 👍 15    🔁 4    💬 1    📌 0