
Centre for Media, Technology & Democracy

@mediatechdemocracy.bsky.social

Our organization examines how media and emerging technologies shape democracy. We’re housed at McGill University’s Max Bell School of Public Policy, and led by Taylor Owen.

449 Followers  |  208 Following  |  379 Posts  |  Joined: 17.01.2025

Posts by Centre for Media, Technology & Democracy (@mediatechdemocracy.bsky.social)

Scoping AI Chatbots into a revised Online Harms Act: The Case for Immediate Action — Centre for Media, Technology and Democracy February 24, 2026 - The Centre’s Founding Director, Taylor Owen, and Helen Hayes, Associate Director of Policy, are calling for immediate action to scope AI chatbots into a revised Online Harms Act.

You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey

27.02.2026 20:30 — 👍 0    🔁 0    💬 0    📌 0

While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.

27.02.2026 20:30 — 👍 0    🔁 0    💬 1    📌 0

In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.

27.02.2026 20:30 — 👍 0    🔁 0    💬 1    📌 0

Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.

27.02.2026 20:30 — 👍 0    🔁 0    💬 1    📌 0

In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵

27.02.2026 20:30 — 👍 0    🔁 0    💬 1    📌 0

@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).

26.02.2026 15:01 — 👍 2    🔁 2    💬 0    📌 0

This event is part of the Securing Canada's Digital Sovereignty series, presented by the Centre for Media, Technology and Democracy, MASS LBP, Ronald S. Roadburg Foundation and The Waltons Trust.

25.02.2026 21:33 — 👍 1    🔁 0    💬 0    📌 0
Securing Canada’s Digital Sovereignty: A New Playbook - Youth Online Safety Youth online safety is at a turning point. Learn from experts about AI, platforms, and the state of Canada’s online safety policy.

Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.

tinyurl.com/yzkknsus

25.02.2026 21:33 — 👍 1    🔁 0    💬 1    📌 1

If you're interested in youth online safety, we've got the perfect event for you! Join us on March 11th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵

25.02.2026 21:33 — 👍 1    🔁 0    💬 1    📌 0
A ‘Tiny Minority’ of Social Media Accounts Drive Canadian Conspiracy Content | The Tyee Researchers found conspiracy claims spread widely, but only some people believe them.

A new study looking at how conspiracy claims spread on social media found that people who use X, formerly Twitter, are much more likely to both be aware of and believe conspiracy theories. @jenstden.bsky.social reports.

25.02.2026 14:31 — 👍 72    🔁 41    💬 4    📌 2

Thank you to everyone who contributed to this brief: Mika Desblancs-Patel, Esli Chan, @mathieulavigne.bsky.social, Ph.D., @chrispyross.bsky.social, @dhobso.bsky.social, Ben Steel, and Helen A. Hayes.

23.02.2026 20:08 — 👍 0    🔁 0    💬 0    📌 0
Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem — Media Ecosystem Observatory This research brief examines the rise and spread of anti-institutional conspiracy claims in Canada’s information ecosystem and their impact on democratic trust, public health compliance and confidence...

Read the full brief on anti-institutional conspiratorial claims in the Canadian information ecosystem here: meo.ca/work/conspir...

23.02.2026 20:08 — 👍 0    🔁 1    💬 1    📌 0

🔍 Platform dynamics shape exposure and belief. Frequent X users are significantly more likely to report awareness of and belief in these claims compared to infrequent social media users.

23.02.2026 20:08 — 👍 0    🔁 0    💬 1    📌 0

🔍 A small number of accounts drive most visibility. The top 100 accounts are responsible for 68% of conspiratorial posts and capture nearly 90% of views.
🔍 Influencers drive production and amplification of conspiratorial claims online.

23.02.2026 20:08 — 👍 0    🔁 0    💬 1    📌 0

🔍 Dominant conspiratorial claims challenge the legitimacy of democratic institutions.
🔍 Although awareness is widespread, belief remains limited. Between 29% and 63% of Canadians report hearing about the conspiracies studied, but only a minority endorse them.

23.02.2026 20:08 — 👍 0    🔁 0    💬 1    📌 0

In a new national study drawing on social media and survey data, the Centre's Media Ecosystem Observatory (MEO) finds limited belief in conspiracy theories, but outsized visibility driven by a small number of highly active online accounts.

Here are the key findings:

23.02.2026 20:08 — 👍 0    🔁 0    💬 1    📌 0

Our newest brief, “Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem,” examines how anti-institutional conspiracy theories circulate online and how widely they resonate with Canadians.🧵 #cdnpoli

23.02.2026 20:08 — 👍 3    🔁 2    💬 1    📌 0
Canadians’ Perspectives on Governing AI Chatbots A Policy Brief — Centre for Media, Technology and Democracy This report adds to an evidence-based mandate for AI chatbot regulation in Canada, drawing on nationally representative survey data from 1,454 Canadians and mapping public preferences onto established...

Read the full brief here: tinyurl.com/p4spweew

Special thanks to the contributors: @mathieulavigne.bsky.social, Helen Hayes, Esli Chan, @chrispyross.bsky.social

03.02.2026 16:27 — 👍 0    🔁 0    💬 0    📌 0

Key findings of the brief:
1. Canadians understand the risks that AI chatbots pose to young people.
2. Canadians assign clear responsibility to AI companies.
3. Canadians support specific, operationalizable interventions mapping onto proven regulatory frameworks.

03.02.2026 16:27 — 👍 1    🔁 1    💬 1    📌 0

How do Canadians feel about governing AI chatbots?

Our new policy brief draws on nationally representative survey data from 1,454 Canadians, demonstrating overwhelming public concern across all surveyed risk categories and clear attribution of risk to AI companies.🧵 #cdnpoli

03.02.2026 16:27 — 👍 0    🔁 1    💬 1    📌 1
Incident Debrief︱Ripple Effects of the Charlie Kirk Assassination in the Canadian Information Ecosystem — Canadian Digital Media Research Network The Charlie Kirk assassination resulted in an apparent surge of highly charged and polarized online conversations, disinformation and calls for political violence and retribution in the American but a...

While a relatively minor information incident, the event’s visibility highlights both democratic vulnerabilities and the need for stronger preparedness and response.

Read the full incident debrief here: tinyurl.com/4c7nm8wp

28.01.2026 20:31 — 👍 0    🔁 0    💬 0    📌 0

Our final debrief presents 5 lessons on how political violence spreads through the information ecosystem, revealing gaps between online & offline sentiment, lasting democratic impacts, outsized youth exposure, predictable misinformation cycles, and continued foreign exploitation.

28.01.2026 20:31 — 👍 0    🔁 0    💬 1    📌 0

Charlie Kirk's shooting drove widespread awareness, transforming him into a household name in Canada. As polarized online discussions spread, the CDMRN launched a joint investigation with MEO, @disinfowatch.bsky.social, @dfrlab.bsky.social & @pearl-uoft.bsky.social 🧵 #cdnpoli

28.01.2026 20:31 — 👍 1    🔁 2    💬 1    📌 0

🚨 MEO is moving 🚨 Follow @mediatechdemocracy.bsky.social for updates on our information ecosystem research! 🧵

26.01.2026 20:07 — 👍 1    🔁 2    💬 1    📌 0
5 AI Trends to Watch in 2026 by AI Expert Taylor Owen AI expert Taylor Owen identifies 5 critical AI trends for 2026, from politicization to governance gaps. Learn what's shaping AI's future in society, democracy, and policy.

Read more about @taylorowen.bsky.social's 2026 AI predictions here: tinyurl.com/mr2mmysn

09.01.2026 20:45 — 👍 0    🔁 0    💬 0    📌 0

3. Parents will turn their attention from social media to AI.
4. AI will be both increasingly capable and structurally unreliable.
5. New social norms around appropriate uses of AI will begin to take shape.

09.01.2026 20:45 — 👍 0    🔁 0    💬 1    📌 0

1. AI will get politicized, forcing institutions to confront political questions about distribution, accountability and public consent.
2. Faced with geopolitical competition, low public trust and safety risks, governments will re-enter AI governance.

09.01.2026 20:45 — 👍 0    🔁 0    💬 1    📌 0

2025 was a pivotal year for AI. As attention turns to 2026, the Centre's Founding Director, @taylorowen.bsky.social, says we're approaching a turning point.

A leading expert on how AI and digital tech are reshaping democracy, he shares his top 5 AI predictions for 2026. 🧵

09.01.2026 20:45 — 👍 0    🔁 0    💬 1    📌 0

Across regions, people are calling for stronger regulation and transparent AI governance, reflecting a broader conclusion that public trust and people-centred design must be central to AI’s future.

Watch Prof. Fen Osler Hampson's full talk here: tinyurl.com/fepuwmpp

05.01.2026 21:17 — 👍 0    🔁 0    💬 0    📌 0

At #AttnGovernOrBeGoverned, Prof. Fen Osler Hampson (@carleton.ca) shared new insights from the Carleton–CIGI–Ipsos Trust in the Internet Global Survey. The data show growing global concern about online privacy and sharply divided trust in AI across regions and types of AI. 🧵

05.01.2026 21:17 — 👍 0    🔁 0    💬 1    📌 0