Folks (it's me: I'm folks) justifiably dog on academic gatekeeping mechanisms, but here's a prime example of their utility: protecting the public from cheaters and grifters.
Good luck during your comprehensive exams and dissertation defense if you’ve offloaded your PhD to chat.
I've read 38 books in the last 15 months. Some of my favs:
Autistic and Black by Kala Omeiza
The Power Fantasy by Kieron Gillen
A Psalm for the Wild-Built by Becky Chambers
The Rise and Fall of the Dinosaurs by Steve Brusatte
The Bones Beneath My Skin by T.J. Klune
Read more books!
I made a deliberate decision to use as many em-dashes in this op-ed as possible, because it'd be funny to me and, like, four other people. Thanks to my co-author for always humoring me.
Please consider reading our op-ed in which we argue that generative AI (GAI) is harmful to the mental health fields (well, and people in general).
Mental health professionals can sign our open letter to take a stance against integrating GAI into mental health care.
docs.google.com/document/u/2...
"you need to learn how to use gAI or you'll get left behind!"
counterpoint!
the absolute fastest way to render yourself unemployable is to build your skillset around a tool that does your thinking for you, that everyone has access to, and that one company can change or take away at any time
Damn opening with “What is a “Place” is such a cool academic vibe. I’m excited to read!
Happy Match Day! Congratulations to all who matched; take some time to relish this milestone. For those who didn’t match today, remember that Phase II exists for a reason and that these results don’t have any bearing on your skills in the field. #matchday
no, it simply isn't.
People are talking about capital using gAI as an excuse to conduct mass lay-offs, about students losing learning because they're being lied to about gAI in education, about the mass theft of the work of artists and authors to train LLMs, about CSAM and nonconsensual sexual images.
Psychotherapists learn a lot of theory and a fair bit of technique. But primarily we are trained to listen - really deeply listen. Only once you’ve listened deeply does the rest come into play.
AI doesn’t listen. AI compiles.
There is no reason in this day & age not to offer hybrid events & the ability to participate in events online as well.
It’s universal design, & we’ve had the tools for years now. Bonus to the organizations holding the events: it means more participation, more excitement, & (cynically) more revenue.
We oppose the implementation of generative AI into mental health care.
We need to have a discussion about the profit-driven forces that, instead of fixing the cracks in our mental health systems, have convinced clinicians that generative AI is the path forward.
#psychscisky
#academicsky
Congratulations on 10 years! Every hour without a cigarette is a win.
Today is 10 months and 8 days since my last cigarette. Like you, and many of us, this marks my third-ish attempt to stop for good.
Thanks -- what precipitated (inspired? relates to?) this, more than Guest et al.'s work, is Jowsey et al.'s work on reflexive thematic analysis (doi.org/10.1177/1077...). E.g., both RTA and therapy are meaning-making, reflexive, human practices.
Around that time I had just gotten out of my, like, sixth faculty meeting in 12 months where AI research tools were being propped up as godsends for productivity, and I was being inundated with the message that I was going to become obsolete if I didn't use them, which is just bullshit marketing.
Learning is good, and being wrong leaves an opening for growth. 🙂
Just for clarification: the integration of gen AI into mental health practice has always been a sore point for me, but I was definitely drinking the Kool-Aid when it came to gen AI tools in research -- I've since done a 180 as my views coalesced.
@olivia.science @irisvanrooij.bsky.social @k2mey.bsky.social @audhd-psychnp.com
Perhaps of interest!
See our open letter above, and our manuscript (below) in which we elaborate at length on the harms of uncritically integrating these tools into mental health care.
We don't blame individual practitioners; we blame billion-dollar technocratic companies for lying to you.
doi.org/10.31234/osf...
Open Letter: Against the Use of Generative AI in Mental Health Care
We cannot sit idly by while market forces drive a deceitful inevitability narrative surrounding AI use in mental health care.
Tools that make people kill themselves are not good for us actually.
docs.google.com/document/d/e...
hey literally everyone in government in both the US and Canada, stop fucking with my trans homies
Led by Grant Bruno, and following from our first Editorial on Global Indigenous Perspectives on autism research, our second Editorial in @journalautism.bsky.social explores integrating Indigenous ways of being, knowing, and doing into autism research. Free access: journals.sagepub.com/doi/10.1177/...
i need authors and agents to be much louder with their publishers how much they hate gen ai and make it embarrassing/financially harmful for publishers to be associated with it at every level (incl marketing, publicity, and sales)
Wow, color me shocked, 5000 angry autistic people because autistic Barbie doesn’t do or have the exact same things they do or have.
It’s almost like there is no one specific way to be autistic and autism doesn’t have a look.
I subsidize my clinical practice by being a full-time professor.
100% of my US practice is billed through insurance. $250/hr for testing is about 40% of what practitioners in my licensed areas are charging. Some of whom charge around $5,000 for a differential psych eval — cash pay.
If you want to know why it's so hard to find a testing psychologist who takes insurance, my testing fee is $250/hour and Anthem pays me the equivalent of $32/hour after I subtract material costs.
🙃
Catching up on email. I received an article review request on Dec. 21, a reminder for that request on CHRISTMAS DAY, and notice of request withdrawal due to lack of response on Dec. 29.
Academic folks, please stop doing this. Breaks exist for a reason. Spend time with family. Touch grass (or snow).
When universities tout their partnerships with OpenAI, parents, faculty and everyone really should be asking how they intend to account for the increasing evidence that their technology is hurting people, especially young and vulnerable people. How many instances of harm will be enough?
a big part of the issue surrounding the use of chatgpt/LLMs in academic research comes down to process vs. product. a lot of us know that the process of doing research matters because that's where intellectual work is done; but some "academics" and lots of techies only care about a fast product