
Olivier Driessens

@odriessens.bsky.social

Media Sociologist; Associate Prof in Media and Communication, Centre for Tracking and Society, University of Copenhagen. Media, tech & social change, continuity, digital futures, sustainability.

248 Followers  |  300 Following  |  40 Posts  |  Joined: 14.11.2024

Latest posts by odriessens.bsky.social on Bluesky

Post image Post image

#Denmark politicians wage a #culture #war on #academia, with @roskildeuni.bsky.social as their go-to target.

These attacks are assaults on #freedom, #rights and #democracy. We spoke out:

@berlingske.bsky.social : www.berlingske.dk/synspunkter/...

@politiken.dk : politiken.dk/debat/debati...

08.10.2025 08:19 · 👍 8  🔁 10  💬 1  📌 0

📣 Germany's close to reversing its opposition to mass surveillance & private message scanning, & backing the Chat Control bill. This could end private comms, and Signal, in the EU.

Time's short and they're counting on obscurity: please let German politicians know how horrifying their reversal would be.

06.10.2025 06:46 · 👍 2271  🔁 1648  💬 31  📌 45
Cover page of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1

Table 1 Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1

Table 2 Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1

New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/

04.10.2025 05:33 · 👍 285  🔁 107  💬 9  📌 39
The UN's Global Dialogue on AI Must Give Citizens a Real Seat at the Table | TechPolicy.Press: Learning from decades of global convening on climate change, AI governance must place local lived experiences at its heart, write Tim Davies and Anna Colom.

The UN Independent International Scientific Panel on AI must include social, environmental, and public perspectives in its work and membership, and public voices must have a formal, ongoing role in the Global Dialogue on AI Governance, write Tim Davies and Anna Colom.

03.10.2025 11:13 · 👍 30  🔁 16  💬 0  📌 0
The Insatiable Energy Demands of Data Centers Could Increase Fossil Fuel Emissions in California: By 2030, the centers could consume the equivalent of adding another city the size of L.A. to the state's power grid.

NEW INVESTIGATION

California predicts data centers will consume as much power as adding another LA to grid by 2030

A utility anticipates additional emissions equal to 21 gas plants

Some environmentalists see reducing gas power as "a lot less likely" due to AI. capitalandmain.com/the-insatiab...

02.10.2025 15:25 · 👍 134  🔁 96  💬 4  📌 16
OpenAI's New Data Centers Will Draw More Power Than the Entirety of New York City, Sam Altman Says: OpenAI's planned AI data center projects would consume as much power as New York City and San Diego combined.

That sounds like a lot of power. I hope we're doing useful things with ChatGPT. 🙃

02.10.2025 10:03 · 👍 370  🔁 152  💬 33  📌 19
Open Media and Communication Research

Have added Global Perspectives in Communication (@gpccomm.bsky.social) to @moritzbuchi.bsky.social's and my list of open access journals in the field of Communication. #openscience #opencomm

01.10.2025 07:25 · 👍 19  🔁 8  💬 2  📌 1

Georges Bouchez, the pro-Israel leader of the French-speaking liberal party (MR) in Belgium, also recently suggested banning 'antifa'. His party is part of the governing coalitions at both the federal and regional levels.

30.09.2025 20:20 · 👍 1  🔁 0  💬 0  📌 0
AI Data Centers Use a Lot of Energy. You May Be Paying for It: AI data centers are pushing up energy costs all over the US. On today's Big Take podcast: an investigation into who's footing the bill.

Today, 33% of all the electricity used in Oregon is attributed to data centers.

In Virginia, itโ€™s 37%.

www.bloomberg.com/news/article...

30.09.2025 19:54 · 👍 864  🔁 448  💬 36  📌 81

Our new plan is to rob everyone unless you come to us and specifically ask not to be robbed. But if we don't hear from you, it's your fault for not telling us you don't want to be robbed!

29.09.2025 20:09 · 👍 74  🔁 25  💬 0  📌 1
Opinion | A.I.'s Environmental Impact Will Threaten Its Own Supply Chain

This video is really important.

www.nytimes.com/2025/09/26/o...

It connects the dots between A.I. and climate disasters and is just a perfectly crafted piece of investigation and exposition.

@katecrawford.bsky.social

26.09.2025 11:36 · 👍 88  🔁 27  💬 2  📌 1
ChatGPT is blind to bad science - Impact of Social Sciences: A new study finds ChatGPT fails to take into account retraction notices across a wide range of research.

Can LLMs distinguish between robust findings and research that has been retracted due to errors, fraud, or other serious concerns? QTWAIN blogs.lse.ac.uk/impactofsoci...

26.09.2025 12:16 · 👍 16  🔁 7  💬 2  📌 2
SUBMIT HERE: https://ajph.aphapublications.org/pb-assets/Supporting%20Documents/AJPH%20CFP%20AI%20Use_Full_Final-1758036159977.pdf

Submission Due Date: January 2nd, 2026.

The American Journal of Public Health (AJPH) issues this Call for Papers to invite AI researchers, public health practitioners, ethicists, and policymakers to articulate practical barriers and transformative possibilities pertaining to the use of AI technologies in public health. Papers that discuss research experiences, dissect operability and implementation challenges, and explore the ethical use of AI are desired. 

Our central question is:

How do we efficiently, effectively, and ethically integrate AI into public health practice?


I am *really* hoping some of my fav critical AI people will contribute to this AJPH call for contributions on "Responsible Artificial Intelligence Use for Advancing Public Health."

Please, please, please, someone write a paper that says "there is no responsible use" and here's why…!!?! Please?

26.09.2025 13:33 · 👍 12  🔁 4  💬 1  📌 0
Post image

Today is publication day!

EXTRACTION: The Frontiers of Green Capitalism is officially out with @wwnorton.com - find it at a bookstore near you or order online 💚📚 wwnorton.com/books/978132...

23.09.2025 12:17 · 👍 417  🔁 116  💬 22  📌 11
Post image

📊 New in Big Data & Society!

Lindsay Weinberg examines how Microsoft's Power BI is reshaping Danish higher ed governance: turning students into data points, linking programs to job metrics, and pushing new forms of accountability.

🔗 Read here: journals.sagepub.com/doi/10.1177/...

22.09.2025 10:23 · 👍 7  🔁 6  💬 0  📌 0
The Impact Of AI Tools On The Next Decade Of Education Innovation: Education technology is more of a commitment to shaping a future where every learner has the tools to succeed.

Claims of the novelty of AI and its potential for innovation in education always make me wince a bit because really it continues a bunch of long-running tendencies in the sector. It's an *intensifier* rather than an innovation. Some examples… www.forbes.com/councils/for...

20.09.2025 18:52 · 👍 88  🔁 46  💬 6  📌 8
Communications Volume 50 Issue 3: Volume 50, issue 3 of the journal Communications was published in 2025.

Great 50th anniversary issue of Communications. @goranbolin.bsky.social, @giovannamas.bsky.social, @blurky.bsky.social and others revisit and assess articles and reviews published in the journal in previous decades. Very stimulating discussions!

www.degruyterbrill.com/journal/key/...

19.09.2025 14:07 · 👍 3  🔁 2  💬 0  📌 0
Billionaires & Guillotines: A game for 2-5 aspiring plutocrats… and their enemies

here you go: www.kickstarter.com/projects/plu...

19.09.2025 08:56 · 👍 1  🔁 0  💬 0  📌 0
Parliamentary majority considers Antifa a terrorist organization: A majority in the Tweede Kamer wants the Netherlands, following the United States, to designate the far-left movement Antifa as a terrorist organization. A motion to that effect by Lidewij de V...

The far-right majority in the Dutch parliament (BBB-FvD-JA21-PVV-SGP-VVD) has just designated "Antifa" a terrorist organization.

This is a dark day for Dutch democracy and the final nail in the coffin of the VVD as a serious liberal democratic party.

18.09.2025 22:30 · 👍 749  🔁 396  💬 42  📌 72
Video thumbnail

Look who came with

18.09.2025 05:28 · 👍 7887  🔁 4275  💬 1392  📌 1391
Nvidia says Britain will have to burn gas to power technology revolution: The company chief executive Jensen Huang says the UK's costly electricity means new data centres will rely on fossil fuel as well as renewable energy

"Britain must burn more fossil fuels and bills must rise in order to power the robot that'll fire everyone. How else will I afford my next private island?"

18.09.2025 08:45 · 👍 710  🔁 307  💬 39  📌 52
Settings LinkedIn use of data for AI

LinkedIn is changing its terms of use and will automatically opt you in to having your data used for training its AI models...

Make sure you opt out in the settings.

I guess it's nice of them to warn people? 🥲

Great to see all those privacy-by-design regulations doing their magic 🫠

18.09.2025 11:07 · 👍 20  🔁 12  💬 1  📌 7

Andreessen Horowitz, which will be one of three firms to lead the acquisition of TikTok, is headed by Marc Andreessen, a Silicon Valley tech titan who considered himself "an unpaid intern" of Elon Musk's DOGE. But he's not the only major Trump ally involved with this deal.

16.09.2025 19:33 · 👍 30  🔁 17  💬 2  📌 6
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users: in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).
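For readers who find the overlapping categories easier to parse in code, here is a minimal, illustrative Python sketch of the set relations the caption describes; the memberships below are assumptions read off the caption, not an authoritative classification of any product.

```python
# Toy, set-theoretic encoding of the overlapping categories in Figure 1.
# Memberships are illustrative assumptions based on the caption, not a definitive taxonomy.
llms = {"BERT"}
anns = {"BERT", "AlexNet", "GAN", "BM"}
generative = {"GAN", "BM"}
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky", "ChatGPT", "Siri"}

# The superset "AI" (black outline in the figure) contains all of the above.
ai = llms | anns | generative | chatbots

# GAN and Boltzmann machine models land in the "purple subset": both generative and ANNs.
print(generative & anns)  # GAN and BM

# The terms are not orthogonal: BERT, for instance, is both an LLM and an ANN.
print("BERT" in llms and "BERT" in anns)  # True

# For closed-source products such as ChatGPT or Siri, the caption notes that the
# implementation cannot be verified, so any ANN/LLM membership would be a guess at best.
unverified = {"ChatGPT", "Siri"}
print(unverified <= chatbots)  # True
```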

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia: doi.org/10.5281/zeno...

We unpick the tech industry's marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.

1/n

06.09.2025 08:13 · 👍 3099  🔁 1571  💬 99  📌 249

"We are told that AI is inevitable, that we must adapt or be left behind. But universities are not tech companies. Our role is to foster critical thinking, not to follow industry trends uncritically." www.ru.nl/en/research/...

12.09.2025 10:45 · 👍 874  🔁 373  💬 11  📌 20
Revealed: Apple is teaching its AI to adapt to the Trump era: Apple policy documents enacted by a subcontractor show how the company shifted its approach to fine-tuning its AI in March, two months after the U.S. president was inaugurated.

"The memo, in the form of fresh guidelines on how to talk to and evaluate answers from Apple's upcoming new artificial intelligence model, appeared to have been retooled following Trump's return to the White House…"

The topics include "diversity," "elections," and "vaccines."

11.09.2025 11:12 · 👍 114  🔁 80  💬 6  📌 18
Post image

new podcast episode! Why does education keep falling for techno-solutionism, despite the fact that technology does not seem to drastically improve education? Listen to Dr. Ezechiel Thibaud talk through the perennial problem of 'techno-solutionism' in education ... www.buzzsprout.com/1301377/epis...

11.09.2025 05:25 · 👍 14  🔁 5  💬 0  📌 0
M365 Copilot fails to up productivity in UK government trial: AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex

UK government report based on an internal civil service trial finds Copilot doesn't increase productivity, and indeed makes Excel tasks take longer and produce more errors, and requires PowerPoint users to have 'corrective action' applied to their outputs. www.theregister.com/2025/09/04/m...

07.09.2025 21:51 · 👍 242  🔁 143  💬 10  📌 26

Something I didn't get to say yesterday:

We heard over and over during the event about "human-centered" approaches to "AI". But if refusal is not on the table (at every level: individual students and teachers right up through UNESCO) then we have in fact centered the technology, not the people.

03.09.2025 10:35 · 👍 688  🔁 240  💬 6  📌 15
