According to Esli Chan (PhD Candidate, Expert on Extremism & Gender), "Normalization of the underlying ideology is particularly harmful for youth who are viewing Clav's content because it can affirm rigid notions of how masculinity should be performed, reinforcing toxic ideals."
Although many social media users engage with looksmaxxing content ironically, memes can be a pathway toward more extremist or radical subcultures by normalizing this type of discourse as part of everyday online culture.
Variants of these terms show how looksmaxxing terminology is being repurposed across other forms of online discourse, signalling its growing reach.
Posts featuring terms like 'maxxing' and 'mogging' have increased substantially in 2026, suggesting growing adoption of language associated with harmful behaviour.
But these terms aren't new — they originated in incel/manosphere online subcultures in the mid-2000s.
Looksmaxxers like Clavicular recommend extreme practices to optimize their appearance, such as 'bonesmashing,' jaw surgery, and steroid use. Bonesmashing refers to striking one's face with a hammer to reshape its bone structure.
Data from the Centre's Media Ecosystem Observatory shows that 'looksmaxxing' is on the rise in Canada's online ecosystem, peaking in February following a viral video of Kick streamer Clavicular "getting brutally frame mogged by an ASU frat leader." 🧵
The safety of our speakers and guests is our top priority. We are actively working to reschedule the convening and will share a new date as soon as possible. Thank you to everyone who planned to join us, we look forward to bringing this important conversation together very soon.
🚨 Due to the severe ice storm forecast for tomorrow and expected travel disruptions, we’ve made the difficult decision to cancel Securing Canada’s Digital Sovereignty: A New Playbook for Youth Online Safety, scheduled for March 11 in Ottawa.
Are you a Gen Z Canadian (17–23)? We want to hear your thoughts on AI & data privacy!
We just hosted our third #GenZAI forum, where 100 young Canadians drafted policy recommendations on AI data collection. Thanks to Make.Org, you can join the conversation here: tinyurl.com/yv6jz3rt
MEO researcher Esli Chan spoke about our latest conspiracy brief on iHeart Radio CA's The Andrew Carter Morning Show! Have a listen here: www.iheart.com/podcast/962-...
You can read @taylorowen.bsky.social and Helen Hayes' policy memo on scoping AI chatbots into a revised Online Harms Act and their response to OpenAI's letter to Minister Solomon here: tinyurl.com/39zve3ey
While OpenAI's voluntary commitments are a good start, they are no substitute for legislation establishing an independent regulator with authority to require risk assessments, set age-appropriate design standards, ensure compliance and enforce consequences when systems fail.
In a Feb. 26 letter to Minister Solomon, OpenAI disclosed that the Tumbler Ridge shooter created a second ChatGPT account that its detection systems missed, and that under its updated referral protocol it would now report the first banned account to law enforcement.
Owen and Hayes argue that OpenAI's decision not to contact Canadian law enforcement after the shooter's ChatGPT account was flagged and suspended in June 2025 is another example of real-world harms caused by AI systems.
In the wake of the Tumbler Ridge mass shooting, the Centre's Founding Director @taylorowen.bsky.social and Associate Director of Policy Helen Hayes published a policy memo calling on the Canadian government to scope AI chatbots into a revised Online Harms Act. 🧵
@mathieulavigne.bsky.social spoke with @rorywh.bsky.social from @nationalobserver.com about our latest brief on online conspiracy theories and institutional distrust in Canada, from the Centre's Media Ecosystem Observatory (MEO).
This event is part of the Securing Canada's Digital Sovereignty series, presented by the Centre for Media, Technology and Democracy, MASS LBP, Ronald S. Roadburg Foundation and The Waltons Trust.
Register for free to hear from leading voices including: @abridgman.bsky.social, Sally Guy, Helen A. Hayes, Emily Laidlaw, @petermacleod.bsky.social, @taylorowen.bsky.social, Ava Smithing, Tracy Vaillancourt and @ethanz.bsky.social.
tinyurl.com/yzkknsus
If you're interested in youth online safety, we've got the perfect event for you! Join us on March 11th in Ottawa to hear from youth advocates, policy experts and leading researchers about the current online harms policy landscape and explore potential solutions. 🧵
A new study looking at how conspiracy claims spread on social media found that people who use X, formerly Twitter, are much more likely both to be aware of and to believe conspiracy theories. @jenstden.bsky.social reports.
Thank you to everyone who contributed to this brief: Mika Desblancs-Patel, Esli Chan, @mathieulavigne.bsky.social, Ph.D., @chrispyross.bsky.social, @dhobso.bsky.social, Ben Steel, and Helen A. Hayes.
Read the full brief on anti-institutional conspiratorial claims in the Canadian information ecosystem here: meo.ca/work/conspir...
🔍 Platform dynamics shape exposure and belief. Frequent X users are significantly more likely to report awareness of and belief in these claims compared to infrequent social media users.
🔍 A small number of accounts drive most visibility. The top 100 accounts are responsible for 68% of conspiratorial posts and capture nearly 90% of views.
🔍 Influencers drive production and amplification of conspiratorial claims online.
🔍 Dominant conspiratorial claims challenge the legitimacy of democratic institutions.
🔍 Although awareness is widespread, belief remains limited. Between 29% and 63% of Canadians report hearing about the conspiracies studied, but only a minority endorse them.
In a new national study drawing on social media and survey data, the Centre's Media Ecosystem Observatory (MEO) finds limited belief in conspiracy theories, but outsized visibility driven by a small number of highly active online accounts.
Here are the key findings:
Our newest brief, “Conspiratorial Claims and Institutional Distrust in Canada’s Online Ecosystem,” examines how anti-institutional conspiracy theories circulate online and how widely they resonate with Canadians.🧵 #cdnpoli
Read the full brief here: tinyurl.com/p4spweew
Special thanks to the contributors: @mathieulavigne.bsky.social, Helen Hayes, Esli Chan, @chrispyross.bsky.social
Key findings of the brief:
1. Canadians understand the risks that AI chatbots pose to young people.
2. Canadians assign clear responsibility to AI companies.
3. Canadians support specific, operationalizable interventions that map onto proven regulatory frameworks.
How do Canadians feel about governing AI chatbots?
Our new policy brief draws on nationally representative survey data from 1,454 Canadians, demonstrating overwhelming public concern across all surveyed risk categories and clear attribution of risk to AI companies.🧵 #cdnpoli