Social Media Ban for Youth and Children: Protection or Punishment?
Canada is edging toward a social media ban for youth and children, but the cure might come with its own complications.
Canada’s Prime Minister Mark Carney recently admitted “our legislation ... with respect to online harms, the exploitation of children, is lagging,” and signalled that the question of a social media ban for children would be part of the upcoming Online Harms Bill — though he added the issue "merits an open and considered debate."
That debate is long overdue. Canada has been stuck in a false dichotomy for years: online safety framed as either protecting children or protecting Canadians' rights and privacy — as if we can't have both. Now, with AI-era harms accelerating and a global wave of social media bans making the pressure feel urgent, our government is being pushed toward a decision.
The harms are real, urgent, and multiplying. But the most tempting solution — banning children from social media — comes with serious strings attached to the rest of our rights. In this article, we'll unpack what those harms actually look like today, why the global rush to "fix it" with bans and age checks creates problems of its own, and what a better path forward for Canadians might look like.
Online harms are real, and they’re getting worse
If you needed a wake-up call about what AI-era online harms look like in practice, look no further than the recent Grok AI sexual deepfake scandal. Over the past couple of months, Elon Musk's AI chatbot was updated to let users request edits of photos of real people, including women, children, and other X users, placing them in sexualized poses or removing their clothing, without those people's consent.
Over just nine days, the chatbot flooded X, Musk’s social media platform, with 4.4 million images, at least 41% of which were sexualized images of women, generated and published on X itself. Following the global backlash, his company turned the feature off in countries that outlaw deepfakes and limited image generation to paying users, while still permitting its use to generate nonconsensual sexualized content on X. Users have been leaving the platform in droves.
Regulators around the world have taken notice
The UK launched an investigation and is considering a ban on X for failing to comply with its online safety laws. Australia's online safety watchdog is probing the deepfakes. France's Paris Prosecutor's Office expanded its investigation into X to include Grok. The EU ordered X to preserve all Grok-related documents until the end of the year. Indonesia and Malaysia temporarily blocked access to Grok altogether.
In Canada? Our AI minister ruled out banning X, while our privacy watchdog expanded its probe into whether X even follows our decades-old federal privacy law in the first place. Our laws weren't written for a world where AI can generate abuse at scale in seconds, and the gap is showing.
This is just one incident in a complex and accelerating AI future. Humanoid robot technologies are on the horizon, and AI agents are increasingly being deployed across industries with minimal oversight, sometimes generating harmful content, sometimes ‘bullying’ people. As AI systems become more independent and agentic, the question of accountability becomes increasingly urgent.
A global trend with complicated results
There's a widespread and growing sense that children, in particular, are being harmed by their time on social media, and many governments are reaching for a familiar blunt tool: bans.
Australia became the world's first country to implement a social media ban for children under 16. Since then, the ripple has spread fast. Spain is looking at requiring age verification systems for under-16s. Norway has been working since last summer on regulation to raise the age limit for social media use to 15. And last December, Denmark announced a plan to restrict social media for under-15s.
France's National Assembly approved a bill to ban social media for under-15s and phones in high schools; if the Senate approves it, France will become the second country to impose such a ban after Australia. The UK opened a consultation in January 2026 on banning social media for under-16s. Though MPs rejected the proposed ban in March, one could still follow: the Commons backed a government bid to give the secretary of state extra powers to impose restrictions later.
Let’s be real about the challenges
A social media ban is not the same as giving young people a childhood free of social media pressures. It's a blunt instrument, and we should be honest about that. Canada is watching all of this unfold, and the federal government is now exploring whether a social media ban for children under 14 should appear in the upcoming Online Harms Bill.
The instinct to protect children is more than understandable. The challenges young people face online are real and serious: exposure to harmful content, cyberbullying, algorithmic manipulation designed to keep them scrolling. But a ban isn't a silver bullet.
Cyberbullying doesn't disappear; it migrates to private messaging and platforms like Discord. Many determined teenagers will find workarounds. And there's a real risk that teens who run into trouble while circumventing a ban become less likely to reach out to adults for help.
Age checks on platforms
To actually enforce a social media ban for children, you need to know who is and isn't a child. And that means age verification. Checking that users are the age they claim to be before they can access platforms might sound like a simple technical fix. It isn’t.
Look at what's happening right now at the corporate level. Discord recently announced it would roll out global age verification for all users, using a mix of existing account data, facial age estimation, and government ID submission. The company claims sensitive data will be deleted promptly, and that the "vast majority" of users won't need to submit a face scan or ID, because they're either not accessing age-restricted content or AI interpretation of their account data predicts they're old enough. Discord has since delayed the rollout until the second half of 2026 over privacy concerns.
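To make that tiered pattern concrete, here's a minimal sketch in Python. The function name, thresholds, and escalation order are entirely hypothetical, not Discord's actual logic; the point is the shape of the flow: cheap, less invasive signals first, with a face scan or ID upload only as a fallback.

```python
# A hypothetical sketch of a tiered age check -- not Discord's actual code.
# Each tier is consulted only if the previous one was inconclusive.

def tiered_age_check(account_estimate: int | None,
                     face_estimate: int | None,
                     id_birth_year: int | None,
                     cutoff: int = 18,
                     this_year: int = 2026) -> str:
    # Tier 1: infer age from data the platform already holds
    # (account history, past self-declarations).
    if account_estimate is not None:
        return "allow" if account_estimate >= cutoff else "restrict"
    # Tier 2: facial age estimation, requested only when tier 1 fails.
    if face_estimate is not None:
        return "allow" if face_estimate >= cutoff else "restrict"
    # Tier 3: government ID upload, the most invasive fallback.
    if id_birth_year is not None:
        return "allow" if this_year - id_birth_year >= cutoff else "restrict"
    # No usable signal at all: default to the restricted experience.
    return "restrict"

# Most users would be resolved at tier 1 -- which is the "vast majority" claim.
print(tiered_age_check(account_estimate=25, face_estimate=None, id_birth_year=None))
```

Notice the trade-off baked into each tier: tier 1 runs on inference you can't see or contest, while tiers 2 and 3 require handing over biometrics or identity documents.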
Roblox has already launched mandatory one-time facial age checks to access age-based chats, which limit conversations to people in similar age groups. The rollout demonstrated the problems almost immediately: some users' ages were misidentified, because these systems guesstimate age from very indirect habits, like where you put your emojis in a sentence. And eBay listings began appearing that sold age-verified Roblox accounts for minors as young as nine, which predators could theoretically use to access youth spaces.
YouTube has been testing AI-powered systems to identify minors based on their viewing habits. Meta is preparing to roll out passkey-based "AgeKeys" on its platforms. This is the direction the industry is heading — and with it, some serious questions.
Age verification is a leaky sieve
Many age verification systems create a database of sensitive identity information: faces, government IDs, behavioural data. These databases become high-value targets for hackers and identity thieves.
Government IDs aren't accessible to everyone. Estimation systems misidentify people. And even with the best encryption and privacy practices, mandating that people trade their sensitive personal data for access to basic online services is a significant ask, especially when there's no guarantee it actually keeps kids safer.
Most systems also don't recheck age each time a user logs in. All a verified account demonstrates is that, at some point, someone provided some evidence of someone's age, not that the current user is who they say they are. That means that in many cases, young people who are motivated to get online will get online, and some adults will game the system by posing as minors. Age verification doesn't guarantee safety, and it introduces significant risks to personal data, even with the best encryption. The question is whether we're actually making children safer, or just making everyone less private.
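To see how little a one-time check proves, here's a minimal sketch in Python with hypothetical names. It isn't any real platform's code, but it captures how most age gates work today: the verification lives on the account, not the person.

```python
# A hypothetical sketch of a one-time age gate -- not any real platform's code.
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    age_verified: bool  # set once, when someone first passed an age check

def can_access_age_restricted(account: Account) -> bool:
    # All this proves is that *someone* once passed a check on this account.
    # It says nothing about who is behind the keyboard right now.
    return account.age_verified

# A verified account that is sold, shared, or borrowed keeps its flag,
# so the gate still opens for whoever holds the login.
resold_account = Account(username="bought_secondhand", age_verified=True)
print(can_access_age_restricted(resold_account))  # True
```

That's the gap the age-verified Roblox account listings exploit: the flag travels with the account, whoever ends up holding it.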
So what’s the way forward?
Canada has a genuine opportunity to do better than the countries rushing to copy each other's bans without clear evidence any of this works. The core problem isn't really that children are on social media. It's that social media platforms have been designed, deliberately and profitably, to maximize engagement at the expense of everyone’s wellbeing, for children and adults alike. These are design choices. And if companies made them, companies can be held accountable for them.
That accountability has a name: platform governance. Making companies legally accountable for how their products are designed and what systemic harms that design enables is a stronger and more rights-respecting approach than forcing every user to prove their age with their face. It means requiring platforms to take responsibility for any intentional facilitation of harmful illegal content, like the Grok deepfakes. It means demanding transparency about how AI systems are trained and moderated. It means giving us as users more control over our feeds, our data, and our experience on every platform.
If Canada pursues age verification or social media bans despite the existence of a better path forward, the ‘how’ will matter enormously. Privacy-preserving approaches exist, but they require political will to demand them rather than accepting whatever Big Tech builds first. Weak age verification designed by companies whose real interest is retaining users doesn't protect children. It just creates the appearance of action while putting everyone at greater risk.
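What might a privacy-preserving approach look like? Here's a minimal sketch in Python, with hypothetical names, of a minimal-disclosure age attestation. A real system would use public-key signatures or zero-knowledge proofs; the shared demo key here just keeps the sketch self-contained. The idea: a trusted issuer checks your ID once and signs a single yes/no claim, and the platform verifies that claim without ever seeing, or storing, the ID behind it.

```python
# A hypothetical sketch of a minimal-disclosure age attestation.
# Real systems would use public-key signatures or zero-knowledge proofs;
# an HMAC with a shared demo key keeps this sketch self-contained.
import hashlib
import hmac
import json

DEMO_KEY = b"demo-signing-key"  # stand-in for the issuer's real signing key

def issuer_signs_claim(over_16: bool) -> dict:
    # The issuer sees the government ID, but the token it hands back
    # carries only one boolean: no name, no birthdate, no photo.
    claim = json.dumps({"over_16": over_16})
    sig = hmac.new(DEMO_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def platform_verifies(token: dict) -> bool:
    # The platform checks the signature and reads the boolean.
    # There is no identity database here to leak or breach.
    expected = hmac.new(DEMO_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["sig"])
            and json.loads(token["claim"])["over_16"])

print(platform_verifies(issuer_signs_claim(over_16=True)))   # True
print(platform_verifies(issuer_signs_claim(over_16=False)))  # False
```

Even this design has trade-offs: the issuer still has to see an ID at some point, and tokens, like verified accounts, can be shared. But the platform never accumulates a honeypot of faces and documents, and that's the kind of design political will can insist on.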
And there are bigger structural questions too. Canada's current privacy laws are nearly 30 years old. Our AI regulations are largely nonexistent in enforceable form. And our largest trade agreement, CUSMA, imposes real limitations on our ability to set sovereign digital laws. We're dealing with 2026 harms using 1990s legal frameworks. That's not sustainable.
What you can do right now
Canada doesn't have to follow the global herd blindly on banning social media for children. In the age of AI, shielding children from every ‘danger’ until they turn 16 and then throwing them straight into the water is impractical. Instead, we can push for laws that take online harms seriously and protect everyone’s rights.
OpenMedia is currently running campaigns on exactly these issues. Our Privacy4Canadians campaign is pushing MPs to modernize Canada's privacy laws so tech giants can't operate with a free pass. Our ‘Stop Canada’s “Buy Now, Regret Later” AI Future’ campaign is calling for real, people-first AI rules, not industry-friendly voluntarism that leaves workers and consumers exposed. And our latest Online Harms campaign focuses on getting this right: accountability for platforms, protection for users, and no sacrificing of civil liberties in the name of safety.
The decisions being made right now will shape what the internet looks like for Canadians for the next decade. The only way to make sure your rights are part of that picture is to make sure your voice is heard.