@mediatechdemocracy.bsky.social
Our organization examines how media and emerging technologies shape democracy. We’re housed at McGill University’s Max Bell School of Public Policy, and led by Taylor Owen.
You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey
27.02.2026 20:30 — 👍 0 🔁 0 💬 0 📌 0

While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.
27.02.2026 20:30 — 👍 0 🔁 0 💬 1 📌 0

In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.
27.02.2026 20:30 — 👍 0 🔁 0 💬 1 📌 0

Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.
27.02.2026 20:30 — 👍 0 🔁 0 💬 1 📌 0

In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵
27.02.2026 20:30 — 👍 0 🔁 0 💬 1 📌 0

@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).
26.02.2026 15:01 — 👍 2 🔁 2 💬 0 📌 0

This event is part of the Securing Canada's Digital Sovereignty series, presented by the Centre for Media, Technology and Democracy, MASS LBP, Ronald S. Roadburg Foundation and The Waltons Trust.
25.02.2026 21:33 — 👍 1 🔁 0 💬 0 📌 0
Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.
tinyurl.com/yzkknsus
If you're interested in youth online safety, we've got the perfect event for you! Join us on March 11th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵
25.02.2026 21:33 — 👍 1 🔁 0 💬 1 📌 0

A new study looking at how conspiracy claims spread on social media found that people who use X, formerly Twitter, are much more likely to both be aware of and believe conspiracy theories. @jenstden.bsky.social reports.
25.02.2026 14:31 — 👍 72 🔁 41 💬 4 📌 2

Thank you to everyone who contributed to this brief: Mika Desblancs-Patel, Esli Chan, @mathieulavigne.bsky.social, Ph.D., @chrispyross.bsky.social, @dhobso.bsky.social, Ben Steel, and Helen A. Hayes.
23.02.2026 20:08 — 👍 0 🔁 0 💬 0 📌 0

Read the full brief on anti-institutional conspiratorial claims in the Canadian information ecosystem here: meo.ca/work/conspir...
23.02.2026 20:08 — 👍 0 🔁 1 💬 1 📌 0

🔍 Platform dynamics shape exposure and belief. Frequent X users are significantly more likely to report awareness of and belief in these claims compared to infrequent social media users.
23.02.2026 20:08 — 👍 0 🔁 0 💬 1 📌 0
🔍 A small number of accounts drive most visibility. The top 100 accounts are responsible for 68% of conspiratorial posts and capture nearly 90% of views.
🔍 Influencers drive production and amplification of conspiratorial claims online.
🔍 Dominant conspiratorial claims challenge the legitimacy of democratic institutions.
🔍 Although awareness is widespread, belief remains limited. Between 29% and 63% of Canadians report hearing about the conspiracies studied, but only a minority endorse them.
In a new national study drawing on social media and survey data, the Centre's Media Ecosystem Observatory (MEO) finds limited belief in conspiracy theories, but outsized visibility driven by a small number of highly active online accounts.
Here are the key findings:
Our newest brief, “Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem,” examines how anti-institutional conspiracy theories circulate online and how widely they resonate with Canadians.🧵 #cdnpoli
23.02.2026 20:08 — 👍 3 🔁 2 💬 1 📌 0
Read the full brief here: tinyurl.com/p4spweew
Special thanks to the contributors: @mathieulavigne.bsky.social, Helen Hayes, Esli Chan, @chrispyross.bsky.social
Key findings of the brief:
1. Canadians understand the risks that AI chatbots pose to young people.
2. Canadians assign clear responsibility to AI companies.
3. Canadians support specific, operationalizable interventions that map onto proven regulatory frameworks.
How do Canadians feel about governing AI chatbots?
Our new policy brief draws on nationally representative survey data from 1,454 Canadians, demonstrating overwhelming public concern across all surveyed risk categories and clear attribution of risk to AI companies.🧵 #cdnpoli
While the event was a relatively minor information incident, its visibility highlights both democratic vulnerabilities and the need for stronger preparedness and response.
Read the full incident debrief here: tinyurl.com/4c7nm8wp
Our final debrief presents 5 lessons on how political violence spreads through the information ecosystem, revealing gaps between online & offline sentiment, lasting democratic impacts, outsized youth exposure, predictable misinformation cycles, and continued foreign exploitation.
28.01.2026 20:31 — 👍 0 🔁 0 💬 1 📌 0

Charlie Kirk's shooting drove widespread awareness, transforming him into a household name in Canada. As polarized online discussions spread, the CDMRN launched a joint investigation with MEO, @disinfowatch.bsky.social, @dfrlab.bsky.social & @pearl-uoft.bsky.social 🧵 #cdnpoli
28.01.2026 20:31 — 👍 1 🔁 2 💬 1 📌 0

🚨 MEO is moving 🚨 Follow @mediatechdemocracy.bsky.social for updates on our information ecosystem research! 🧵
26.01.2026 20:07 — 👍 1 🔁 2 💬 1 📌 0

Read more about @taylorowen.bsky.social's 2026 AI predictions here: tinyurl.com/mr2mmysn
09.01.2026 20:45 — 👍 0 🔁 0 💬 0 📌 0
1. AI will get politicized, forcing institutions to confront political questions about distribution, accountability and public consent.
2. Faced with geopolitical competition, low public trust and safety risks, governments will re-enter AI governance.
3. Parents will turn their attention from social media to AI.
4. AI will be both increasingly capable and structurally unreliable.
5. New social norms around appropriate uses of AI will begin to take shape.
2025 was a pivotal year for AI. As attention turns to 2026, Founding Director at the Centre, @taylorowen.bsky.social, says we're approaching a turning point.
As a leading expert on how AI and digital tech are reshaping democracy, here are his top 5 AI predictions for 2026. 🧵
Across regions, people are calling for stronger regulation and transparent AI governance, reflecting a broader conclusion that public trust and people-centred design must be central to AI’s future.
Watch Prof. Fen Osler Hampson's full talk here: tinyurl.com/fepuwmpp
At #AttnGovernOrBeGoverned, Prof. Fen Osler Hampson (@carleton.ca) shared new insights from the Carleton–CIGI–Ipsos Trust in the Internet Global Survey. The data show growing global concern about online privacy and sharply divided trust in AI across regions and types of AI. 🧵
05.01.2026 21:17 — 👍 0 🔁 0 💬 1 📌 0