OpenMedia Survey: Canadians Want Proactive AI Rules for Rights, Privacy, and Sustainability
Canadians are far more worried about AI’s risks than its benefits, with overwhelming concern about misinformation, fraud, and surveillance, yet regulation remains lacking. OpenMedia’s recent community survey found that Canadians want proactive, people-first AI rules.
Foreword
Since ChatGPT’s release in 2022, generative AI tools have become more accessible to everyday people and have quietly crept into almost every part of our daily lives. They are changing the way we work, learn, and even stay in touch with each other. They are also raising big questions about ethics, about who is responsible for the choices AI makes, and about the threats and risks it poses.
Countries like China and the US are taking an “innovation-first” approach and have essentially said no to meaningful regulation of AI for consumers. This leaves countries like Canada, currently home only to relatively small AI developers, with no choice but to act in their own interest while being affected by decisions made in Silicon Valley, Washington, and Beijing.
As of September 2025, Canada has yet to adopt any new laws regulating this technology. Canada’s first major attempt, the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, died when Parliament was prorogued and later dissolved in early 2025. Although Canada welcomed its first AI minister, Evan Solomon, in June 2025, Prime Minister Carney’s “economy first” agenda does not seem to include any clear plan for addressing key problems posed by AI, like its impact on privacy, oversight, and equity. In fact, the new Liberal government has made clear that its priority is AI adoption, innovation, and industry growth; Minister Solomon has said Canada should shift away from “over-indexing on warnings and regulation” and focus more on economic benefits.
Now, Solomon is forming a task force with representatives from industry, academia and civil society. Judging by the lineup, it looks like industry will be steering the ship. The government belatedly says it’ll take public feedback to “refresh” the federal AI strategy. But how much will everyday Canadians actually be heard?
Regulating AI is a complex challenge, and Canadians’ perspectives must be central to the process. At OpenMedia, we are committed to making sure their voices are included in these decisions and not left out again, as happened with Bill C-27, which was shaped behind closed doors with zero public consultation. This article highlights how our community views AI and what kind of AI policy they want for Canada.
Our Community Survey on AI
In August 2025, OpenMedia conducted a community survey on artificial intelligence (English) (French), asking supporters across Canada: What kind of AI future should Canada build? What priorities are missing from the government’s agenda? How would you shape these priorities? What should the government protect, promote, or prevent?
Our survey, with 3,020 responses, shows that Canadians are deeply concerned about AI’s impact on creators, the sovereignty of our data, and the environmental footprint of AI infrastructure. Many raised alarm over how AI is fundamentally altering the way knowledge and truth flow in society: through misinformation and deepfakes, and by changing the way people learn, think, and access information.
Our community believes that AI cannot continue to go unregulated, and that it should not be governed solely to encourage innovation and adoption. Our community members insist that Canada’s first AI regulation bill pay attention to critical areas like rights, privacy, sustainability, and public accountability.
Key Findings
*This section highlights the main numbers and trends from the questionnaire responses.
AI in Everyday Life
- Our community members are careful and cautious adopters of this technology. Close to half (49%) of respondents do not use AI on an everyday basis, and over 70% use AI only occasionally. Among respondents who do use AI tools, 31% use them as an alternative to a search engine, 23% for work, and 15% for education.
- Close to 60% of respondents are more worried about AI’s risks than excited about its potential benefits, while only 5% are more hopeful about its potential benefits than worried about its risks.
- Our community is concerned about many areas of AI, but the top worries are AI-generated misinformation, deepfakes, and fake content (89%); criminal uses such as scams, fraud, cyberattacks, stalking, and identity theft (76%); and the use of AI to expand government or corporate surveillance (71%).
Governance and Accountability: What AI Future Do You Trust?
- Our community wants proactive AI regulation: 64% want EU-style regulation (a risk-based tiered system, with different AI models regulated to a greater or lesser degree depending on the uses they’re being put to), and 61% want ex-ante regulation (precautionary legal safeguards before new AI models are released) and regulatory sandboxes (legally safe test zones for proof-of-concept innovation, followed by significant regulation of commercial products).
- 95% of respondents believe that high-risk industries should face stricter rules than others on how they develop or use AI. While our community believes that some regulation is needed across all industries, the survey results show that news and media (84%), government and law enforcement (83%), and healthcare (71%) top the list of sectors that should face stricter regulation of AI tool usage.
- 77% of respondents believe AI systems should not be allowed to train on copyrighted materials, with many expressing concerns about the impact on creative industries. Fewer than 20% think AI should be allowed to use copyrighted works – 12% only if fair compensation is provided, and 2% only if sensitive cultural knowledge is excluded.
- 84% believe that “innovation” is not a valid reason to let AI train on copyrighted material. Even though limiting models’ access to rights-holders’ material could slow AI development in Canada, a strong majority still hold this stance.
- Respondents generally favour tighter rules for commercially used AI content than for AI content used for other purposes. 57% believe AI-generated content created for commercial purposes should be subject to stricter regulations and copyright rules than content created for non-commercial purposes, while 43% disagree.
- There is strong support for treating the following acts as criminal offences, consistent with the former AIDA framework: developing AI to defraud (97%), using stolen data (96%), using AI to cause serious harm (95%), and creating deepfakes (90%).
- Our community strongly agrees with only one of Minister Solomon’s AI priorities: Maintaining Canadian sovereignty over AI (81%). The other priorities, growing the sector (23%), adoption (10%), and public trust (34%), received much lower support.
Key Concerns from Ordinary Canadians’ Comments
*This section captures some of the most common insights shared in the written comments.
Control and rights for creators and users
Our community believes that artists and creators should have control over whether their work is used to train AI, with fair credit and compensation for commercial use. Individuals should be able to opt out of having personal data used for AI, remove non-consensual deepfakes, and limit identifiable data collection.
“... The normalization of AI in creative fields is an existential threat to the career I worked hard toward building at a time when the job opportunities are extremely low, not to mention the fact that genAI models such as chatGPT are already built off of stolen materials --- not just copyrighted materials, but social media posts by creatives sharing their work online…”
– C. Olivera (Ontario)
“Les artistes ont passé leur vie à se former et développer leur art, souvent avec de très faibles contreparties monétaires. Le vol de l’IA de leurs œuvres par de grosses compagnies qui génèreront par la suite de très gros profits est littéralement du vol à grande échelle des plus pauvres par les plus riches. Non seulement ils devraient pouvoir donner leur consentement, mais aussi devraient pouvoir générer un revenu s’ils acceptent. Et même des compensations pour des pertes potentielles de marchés futurs causés par l’utilisation de l’IA.”
“Artists spend their lives training and developing their craft, often for very little pay. When AI steals their work for use by big companies that then generate massive profits, it’s literally large-scale theft—taking from the poor to enrich the wealthy. Artists should not only be able to give or withhold consent, but also receive income if they agree, and even compensation for potential future market losses caused by AI use.” (AI-assisted Translation)
– C. Dallaire-Dupont (Quebec)
Privacy and data protection
Our community expects AI companies to be accountable and transparent about how they use data and develop their AI models. Any sensitive information used by AI, especially in areas like health and finance, must be protected, stored temporarily, and used only with user consent.
“[The] government needs transparency from AI developers… They know everything about their users - but share nothing. We can do nothing meaningful about regulating these types of industries without understanding what they know about their users and the harms and risks they are facing… Without the data we are blind and any attempt at regulation will be shooting in the dark.”
– C. Murphy (British Columbia)
“... I don't see anything in the minister's list involving respect for personal privacy, informed consent. It is farcical for deep-pocketed (the amount of speculative funds devoted to AI projects is astronomical) AI corporations to require the Government 'to grow' the AI industry. Having transparency (when and how AI is used for various interactions, if such interactions can be controlled by a user, can people opt out of not using AI for services, if material is AI generated, etc. ) is also a priority.”
– B. Brenken (Alberta)
Responsible governance and regulation
One-size-fits-all government approaches do not earn our community’s trust. AI policies should distinguish between types of AI and types of AI use, and handle their distinct ethical, legal, and societal impacts differently. Our community wants a collaborative and participatory process in shaping the national strategy for AI adoption, innovation, and regulation, ensuring that privacy, rights, and democracy are protected. Clear accountability is needed for AI misuse, including laws governing AI behaviour and copyright use.
“In a world of monopolies, and greed. The rich may often be motivated by profits at the detriment of the people. And there isn't always a 1 for 1 loss vs gain.. And we need limits to protect against AI influence over people in this regard. We need to shield our children from AI. And balance who gets how much influence of AI. Like an internet search [engine] optimization, it is often the rich who get their influence across. Does that sound like a democracy?"
– E. Daoust (Alberta)
“I completely distrust [the] government's inclination of one size fits all solution to AI. I want to see [the] government discern between automation, robotics, narrow AI and general AGI developments, and the risks and rewards of each in terms of their application to various commercial and non-commercial goals…”
– G. Dutch (Quebec)
“... The future of AI governance must use a multifaceted and adaptive strategy. Research is needed to understand the social, ethical, and legal ramifications and to develop effective strategies. The approach must be collaborative, bringing together experts from diverse fields. Focusing our regulatory efforts on high-risk AI applications can help maximize the benefits of AI while minimizing potential harms. Canadian governance frameworks need to allow ongoing experimentation, evaluation, and adaptation and prioritize human-centred values, such as fairness, transparency, accountability, and respect for human rights. Education about AI is key. We need an informed public so continuous learning is necessary…”
– K. Hamilton (Ontario)
Social and environmental impact
Many community members are concerned about job loss, workforce deskilling, and corporate responsibility when AI replaces workers, as well as AI’s impact on children. They are also concerned about how AI is reshaping education, learning, and the flow of information in society. Our community believes that AI adoption should account for environmental costs such as energy and water use, consider its impact on education and information, and examine how Canada’s economy can adapt to AI’s effects on the workforce.
“The government must focus on job loss as a result of AI integration. If people lose their jobs this will only increase the need for government assistance… as those same people will be unable to purchase good[s] or pay into the system (infrastructure, education and healthcare)... This cannot be ignored. Large companies need to be punished or pay more (taxes) when they let go of workers or outsource jobs based on AI integration.”
– R. Cote (Ontario)
“... mass use of AI is woefully unsustainable and does nothing but accelerate the rate of climate change for the sake of a bit of convenience (if it even manages to accomplish that), which was already a huge problem even before this technology started taking off… The issue should be approached with careful consideration and a lot of involved discussion, not a bandwagon trend to jump on just because everyone seems to be using it.”
– M. Kwan (Alberta)
“[I] am worried about the increasing harm to our political landscape thru deepfakes and the spread of misinformation and disinformation, the harm to students from their own abuse of AI-generated content to pass exams without doing the work and learning how to research properly… the consequent loss of skill/research skills… the reliance on it for news, health info especially, that may put people at risk…”
– M. Dixon (Nova Scotia)
Public education and Canadian sovereignty
Our community believes that the public must be informed about AI’s capabilities, costs, and risks, and that Canada should not rely on foreign-controlled AI for our most sensitive data and purposes. Our community input showed a strong preference for self-hosted, locally developed AI, and for AI governance that aligns with Canadian laws, human rights, and values. Many community members wished to see Canada work more closely with Europe and follow a European-style regulatory model, rather than adopting the U.S.’s laissez-faire approach.
“I think the most important thing we need to do is ensure whatever AI we use is self-hosted, whether the models themselves are home-grown or not (though prefer if they are of course!)”
– E. Moeller (British Columbia)
“In order to have power and sovereignty in this matter, I believe that Canada will need to partner/alliances with other countries… It seems like it will be challenging to keep cyber security intact without alliances and agreements spanning the globe… My belief is that the Canadian government should continue to make alliances with European cyber-crime specialists in this matter…”
– S. Kelly (British Columbia)
Epilogue
The decisions being made in Ottawa today about the adoption of AI will shape our rights, jobs, economy, and culture for decades to come. We hope these survey findings help amplify the voices of ordinary Canadians to those in power.
Meaningful regulation means understanding how Canadians want to balance innovation with responsible AI adoption and ensuring those priorities are reflected in future laws. Right now, many of our community’s priorities are not on the government’s agenda, and that needs to change.
Canadians do not want AI to grow unchecked and unregulated in ways that threaten our democracy and everyday lives. To give Canada real leverage instead of simply following the U.S., we need to carefully design a people-first approach to AI, and we need to do it now, before AI systems are embedded throughout our daily lives.
*These survey results have also informed our asks and recommendations for the Canada-EU Digital Trade Agreement consultation. Our community urges urgent action to protect accountability and healthy public debate by aligning our regulatory frameworks and approaches to innovation more closely and strategically with the EU. For more information about our submission, please visit our previous article.