Centre for Media, Technology & Democracy

@mediatechdemocracy.bsky.social

Our organization examines how media and emerging technologies shape democracy. We’re housed at McGill University’s Max Bell School of Public Policy, and led by Taylor Owen.

455 Followers · 208 Following · 389 Posts · Joined Jan 2025
15 hours ago

According to Esli Chan (PhD Candidate, Expert on Extremism & Gender), "Normalization of the underlying ideology is particularly harmful for youth who are viewing Clav's content because it can affirm rigid notions of how masculinity should be performed, reinforcing toxic ideals."

15 hours ago

Although many social media users engage with looksmaxxing content ironically, memes can be a pathway toward more extremist or radical subcultures by normalizing this type of discourse as part of everyday online culture.

15 hours ago
Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians

Term variants show how this terminology is being repurposed into other forms of discourse, demonstrating its growing presence online.

15 hours ago
Source: Media Ecosystem Observatory data from posts shared by Canadian influencers, news outlets and politicians

Posts featuring terms like 'maxxing' and 'mogging' have increased substantially in 2026, suggesting growing adoption of language associated with harmful behaviour.

But these terms aren't new — they originated in incel/manosphere online subcultures in the mid-2000s.

15 hours ago

Looksmaxxers like Clavicular recommend extreme practices to optimize their appearance, such as 'bonesmashing,' jaw surgery, and steroid use. Bonesmashing refers to striking one's face with a hammer to reshape its bone structure.

15 hours ago

Data from the Centre's Media Ecosystem Observatory shows that 'looksmaxxing' is on the rise in Canada's online ecosystem, peaking in February following a viral video of Kick streamer Clavicular "getting brutally frame mogged by an ASU frat leader." 🧵

1 day ago

The safety of our speakers and guests is our top priority. We are actively working to reschedule the convening and will share a new date as soon as possible. Thank you to everyone who planned to join us. We look forward to bringing this important conversation together very soon.

1 day ago

🚨 Due to the severe ice storm forecast for tomorrow and expected travel disruptions, we’ve made the difficult decision to cancel Securing Canada’s Digital Sovereignty: A New Playbook for Youth Online Safety, scheduled for March 11 in Ottawa.

5 days ago
React and contribute to the collective reflection: AI and Privacy. Share your point of view.

Are you a Gen Z Canadian (17–23)? We want to hear your thoughts on AI & data privacy!

We just hosted our third #GenZAI forum, where 100 young Canadians drafted policy recommendations on AI data collection. Thanks to Make.Org, you can join the conversation here: tinyurl.com/yv6jz3rt

1 week ago
The Andrew Carter Morning Show (Monday March 2, 2026) - The Andrew Carter Podcast | iHeart: Trudie Mason, Esli Chan, John Moore, Dr. Mitch Shulman, Alicia Monette, Dr. Japji Anna Bas

MEO researcher Esli Chan spoke about our latest conspiracy brief on iHeart Radio CA's The Andrew Carter Morning Show! Have a listen here: www.iheart.com/podcast/962-...

1 week ago
Scoping AI Chatbots into a revised Online Harms Act: The Case for Immediate Action — Centre for Media, Technology and Democracy February 24, 2026 - The Centre’s Founding Director, Taylor Owen, and Helen Hayes, Associate Director of Policy, are calling for immediate action to scope AI chatbots into a revised Online Harms Act.

You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey

1 week ago

While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.

1 week ago

In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.

1 week ago

Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.

1 week ago

In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵

1 week ago

@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).

2 weeks ago

This event is part of the Securing Canada's Digital Sovereignty series, presented by the Centre for Media, Technology and Democracy, MASS LBP, Ronald S. Roadburg Foundation and The Waltons Trust.

2 weeks ago
Securing Canada’s Digital Sovereignty: A New Playbook - Youth Online Safety Youth online safety is at a turning point. Learn from experts about AI, platforms, and the state of Canada’s online safety policy.

Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.

tinyurl.com/yzkknsus

2 weeks ago

If you're interested in youth online safety, we've got the perfect event for you! Join us on March 11th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵

2 weeks ago
A ‘Tiny Minority’ of Social Media Accounts Drive Canadian Conspiracy Content | The Tyee Researchers found conspiracy claims spread widely, but only some people believe them.

A new study looking at how conspiracy claims spread on social media found that people who use X, formerly Twitter, are much more likely to both be aware of and believe conspiracy theories. @jenstden.bsky.social reports.

2 weeks ago

Thank you to everyone who contributed to this brief: Mika Desblancs-Patel, Esli Chan, @mathieulavigne.bsky.social, Ph.D., @chrispyross.bsky.social, @dhobso.bsky.social, Ben Steel, and Helen A. Hayes.

2 weeks ago
Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem — Media Ecosystem Observatory This research brief examines the rise and spread of anti-institutional conspiracy claims in Canada’s information ecosystem and their impact on democratic trust, public health compliance and confidence...

Read the full brief on anti-institutional conspiratorial claims in the Canadian information ecosystem here: meo.ca/work/conspir...

2 weeks ago

🔍 Platform dynamics shape exposure and belief. Frequent X users are significantly more likely to report awareness of and belief in these claims compared to infrequent social media users.

2 weeks ago

🔍 A small number of accounts drive most visibility. The top 100 accounts are responsible for 68% of conspiratorial posts and capture nearly 90% of views.
🔍 Influencers drive production and amplification of conspiratorial claims online.

2 weeks ago

🔍 Dominant conspiratorial claims challenge the legitimacy of democratic institutions.
🔍 Although awareness is widespread, belief remains limited. Between 29% and 63% of Canadians report hearing about the conspiracies studied, but only a minority endorse them.

2 weeks ago

In a new national study drawing on social media and survey data, the Centre's Media Ecosystem Observatory (MEO) finds limited belief in conspiracy theories, but outsized visibility driven by a small number of highly active online accounts.

Here are the key findings:

2 weeks ago

Our newest brief, “Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem,” examines how anti-institutional conspiracy theories circulate online and how widely they resonate with Canadians. 🧵 #cdnpoli

1 month ago
Canadians’ Perspectives on Governing AI Chatbots A Policy Brief — Centre for Media, Technology and Democracy This report adds to an evidence-based mandate for AI chatbot regulation in Canada, drawing on nationally representative survey data from 1,454 Canadians and mapping public preferences onto established...

Read the full brief here: tinyurl.com/p4spweew

Special thanks to the contributors: @mathieulavigne.bsky.social, Helen Hayes, Esli Chan, @chrispyross.bsky.social

1 month ago

Key findings of the brief:
1. Canadians understand the risks that AI chatbots pose to young people.
2. Canadians assign clear responsibility to AI companies.
3. Canadians support specific, operationalizable interventions mapping onto proven regulatory frameworks.

1 month ago

How do Canadians feel about governing AI chatbots?

Our new policy brief draws on nationally representative survey data from 1,454 Canadians, demonstrating overwhelming public concern across all surveyed risk categories and clear attribution of risk to AI companies. 🧵 #cdnpoli
