Beyond Copyright: Is Canada Ready to Protect Our Faces and Voices in the Age of Generative AI?

AI can clone your face and voice in seconds. From deepfake porn to voice fraud, the harms are already here. So why does Canada have almost no law to protect you?

I never consented to this. My voice was cloned and used in a YouTube video spreading misinformation about a war I have nothing to do with.

– Voice actor Paul Skye Lehrman, whose voice was cloned without permission for a geopolitical disinformation video

If you have ever posted a video online, recorded a podcast, or simply appeared in a friend’s photo on social media, there is a real chance that your face or voice has already been harvested to train a Generative AI (GenAI) model, without your knowledge, without your consent, and without any meaningful legal protection in Canada.

Hollywood actress Scarlett Johansson made headlines when OpenAI released a voice assistant eerily similar to her own, despite her explicit refusal to license her voice. Paul Skye Lehrman found his voice cloned and weaponized for a YouTube disinformation video. These are not isolated anomalies. They are a preview of a systemic crisis that Canada’s legal system is woefully unprepared for.

So, what exactly is going wrong and what needs to change?

Your Face And Voice Are Being Treated As Raw Data

GenAI companies build their foundational models by methodically scraping the internet for publicly available audio, video, and images. Billions of personal photos and vocal samples are ingested to train deepfake algorithms and voice-cloning systems, all at negligible marginal cost, and with no authorization from the people whose biometrics are being used.

Research in this field emphasizes that vocal and facial features are not just data; they are inherently biometric identifiers. Unlike a password, you cannot change your face or your voice. When tech companies commodify these characteristics into training data, they convert a person’s most intimate physical identity into a tradeable corporate asset. Scholars have highlighted this through the PRAC framework (Privacy, Reputation, Accountability, Consent, Credit, and Compensation), which maps how the AI data economy strips people, especially creative workers like voice actors, of any control over their own biometric signatures.

The outputs of this unregulated data pipeline are what we know as “deepfakes”: AI-generated audio and video that are practically indistinguishable from the real thing. As Emilio Ferrara has warned, the unchecked proliferation of synthetic media is producing a “GenAI Paradox”, a world where deepfakes become so common that people begin to distrust all digital evidence, including genuine recordings. When fake reality can be manufactured at zero cost, truth itself becomes prohibitively expensive to verify.

The Harms Are Real, Severe, And Already Here

The consequences of Canada’s inaction are not abstract.

  • Surging financial fraud: Cybercriminals have used AI voice cloning to impersonate the CEO of a UK energy firm’s parent company, successfully tricking a subordinate into transferring funds to a fraudulent account. Scammers clone children’s voices to convince families that a loved one is in danger and extort payment. As researchers Bao Kham Chau and George He document, audio deepfakes are particularly insidious because they lack the visual cues that might otherwise reveal a forgery.
  • Non-consensual deepfake pornography: Widely accessible software allows perpetrators to map a scraped facial geometry onto an adult film actor’s body. As legal scholars Vasileia Karasavva and Aalia Noorbhai have documented in their review of Canadian policy, victims, overwhelmingly women, face severe psychological trauma while finding almost no effective legal avenues for content removal or accountability.
  • Disinformation: Deepfakes are increasingly used to fabricate statements by politicians, defame private citizens, and manufacture artificial consensus on sensitive issues.

Canada’s Legal System Is Not Built For This

Canada currently has no standalone, statutory right to personality at the federal level. Victims are instead forced to rely on the common law tort of misappropriation of personality, a doctrine built on cases like Krouse v. Chrysler Canada Ltd. (1973) and Athans v. Canadian Adventure Camps Ltd. (1977).

The problem? To succeed, a victim must prove that their identity was used to generate direct commercial gain for the defendant. As legal scholar David Collins has pointed out, this “commercial gain” threshold renders the tort nearly useless against the most common AI harms. When a tech company scrapes your face to train its models internally, there is no single, traceable commercial transaction. When a malicious actor creates deepfake pornography out of revenge or malice rather than financial profit, there is no commercial gain to prove.

The Canadian Copyright Act offers no better shelter. Copyright protects original human-authored works, not biological facts like your facial geometry or vocal signature. If an AI developer scrapes a photo of you, the copyright belongs to whoever took the photo, not to you. And since a deepfake is an algorithmically generated output rather than an exact copy of any single image, the perpetrator could even attempt to claim copyright authorship over the fake.

The anticipated federal fix was Canada’s Artificial Intelligence and Data Act (AIDA) under Bill C-27, which is now effectively dead after the bill stalled in Parliament. Canada’s only available fallback is the existing PIPEDA, which offers privacy protection against corporate data misuse; but privacy law prevents unlawful data processing, it does not grant you any positive ownership over your own likeness. Provincial Privacy Acts in British Columbia, Manitoba, Saskatchewan, and Newfoundland restrict use of likeness only in narrow trade and advertising contexts.

The result: Canadians currently have no proprietary control over their own faces and voices.

What Other Countries Are Already Doing

It is important to understand that Canada is not starting from scratch, nor is it fighting alone. Several jurisdictions are pioneering models worth learning from.

  • The United States, while fragmented, has shown what targeted legislation can look like. Tennessee’s ELVIS Act (2024) explicitly adds a person’s voice, real or simulated, to the list of protected publicity rights, giving artists and ordinary citizens a direct legal claim against AI voice cloning. The landmark 1988 case Midler v. Ford Motor Co. established that deliberately imitating a distinctive voice to sell a product is actionable identity theft, a precedent that legal scholars argue applies directly to AI voice cloning today.
  • The European Union takes a data protection approach. The GDPR treats facial images and voice recordings as sensitive biometric data requiring explicit, informed consent before processing. The EU AI Act mandates mandatory watermarking and disclosure labels on all AI-generated content. The Digital Services Act (DSA) forces host platforms to swiftly remove illegal deepfakes upon notification.
  • China has enacted the most stringent consent rules: under its Deep Synthesis Provisions, any provider that edits a person’s face or voice must directly notify the targeted individual and obtain their separate, explicit consent before generating the synthetic output.
  • Denmark is pioneering the boldest model. It is proposing an amendment to its Copyright Act, Section 73a, that would explicitly recognize an individual’s face, voice, and bodily gestures as intellectual property. Legal researchers have highlighted how this approach empowers every person, not just celebrities, to demand the immediate takedown of unauthorized digital imitations. The Danish model also proposes a 50-year post-mortem right, allowing heirs to control posthumous use of a deceased person’s likeness. As researcher Endang Prihatin has documented, the burden of proof is reversed: the party distributing the deepfake must actively demonstrate that it obtained valid consent.

What Canada Must Do

A growing body of legal scholarship, including work by Pooja Chopra, Reeta Sony AL, and Shruti Chopra on comparative personality rights, and Sofia Karttunen’s analysis of the Danish model, points toward a clear roadmap for Canada. The most viable immediate path is a hybrid legislative model that combines Denmark’s copyright expansion with the EU’s transparency mandates.

Here is what I argue that roadmap should look like:

  • Amend the Copyright Act to explicitly recognize faces and voices as protected intellectual property. This would give victims a direct legal mechanism to issue takedown notices for unauthorized deepfakes, bypassing the impossible “commercial gain” threshold of the misappropriation tort, and reverse the burden of proof onto those distributing synthetic content.
  • Enact a 50-year post-mortem right for digital likeness, following Denmark’s model, giving heirs legal authority over posthumous exploitation of a loved one’s face or voice.
  • Reform PIPEDA to explicitly ban untargeted scraping of biometric data for AI training and to prohibit ‘implied consent’ based on public availability, since biometric data’s immutability demands a higher legal standard: granular, explicit, actively affirmed opt-in consent.
  • Hold AI “landlords of creativity” liable. The hardware, compute, and foundational models that power deepfakes are owned by a concentrated handful of companies (OpenAI, Microsoft, Google) that lease access via APIs. Canada should place liability for illicit synthetic media firmly on these infrastructure providers as the “cheapest cost avoiders,” requiring them to maintain secure audit trails linking AI-generated outputs to specific user inputs, and to restrict anonymous users from accessing mass-generation capabilities.
  • Mandate EU-style transparency obligations incorporating standardized digital watermarking and content-origin metadata on all AI-generated audio and video, ensuring the public can immediately identify synthetic media.
  • Carve out clear exceptions for imitation, parody, satire, and social commentary, provided the content does not constitute defamation or deliberate disinformation, to protect Charter-guaranteed freedom of expression.

The Bottom Line

When reality can be manufactured at zero marginal cost, the burden of truth becomes unbearable for individuals, institutions, and democracy alike.

– Emilio Ferrara, “The Generative AI Paradox” (2025)

Canada is at a crossroads, and GenAI is not slowing down. Every day that passes without meaningful legislative action is another day that Canadians’ faces and voices remain unprotected, open to exploitation, biometric cloning, and abuse by anyone with access to the right tools.

This is more than an intellectual property debate. It is a crisis of personal dignity, public trust, and democratic integrity. The common law is not built for this problem, PIPEDA alone is not enough, and waiting for a future federal AI bill that may never pass is not a strategy.

Canadian lawmakers, NGOs, and advocacy groups must act now. The comparative analysis is clear: the legal tools exist. The policy blueprint, modelled on Denmark, fortified by the EU, is ready to be adapted. What is missing is the political will to treat our faces and voices as something worth protecting.

To learn more about this topic, read the full research paper here.

Key Sources Referenced

  • Emilio Ferrara
  • Adam Eric Berkowitz & Miriam Sweeney
  • Vasileia Karasavva & Aalia Noorbhai
  • Bao Kham Chau & George He
  • Pooja Chopra, Reeta Sony AL & Shruti Chopra
  • Sofia Karttunen
  • Gabriel Ernesto Melian Pérez & Laura Herrerías Castro
  • Endang Prihatin
  • Tanusree Sharma, Yihao Zhou & Visar Berisha
  • Lambert Hogenhout & Rinzin Wangmo
  • David Collins
 


