By Jenna Fung

Canada’s Consultation for AI Strategy Sidelines Public Interest: OpenMedia’s Recommendations

Canada needs a strong, accountable, and competitive AI ecosystem to protect Canadians’ choices and digital autonomy. But our leaders are so caught up in encouraging innovation and adoption that they’re leaving rights, privacy, sustainability, and public accountability behind.

Canada's last attempt to regulate Artificial Intelligence (AI) was the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which died on the order paper when the Trudeau government dissolved in early 2025. That left Canada as one of the few G7 countries without a comprehensive national AI regulatory framework in force.

The new government is now trying to play catch-up on both the AI innovation and regulation fronts. Since the new Liberal government took office, Prime Minister Mark Carney has appointed Canada's first AI Minister, Evan Solomon, and Canada's new leaders have made "AI adoption" their constant refrain. They have been clear that they are prioritizing economic benefits over regulation, all in the name of creating an environment to scale up the AI industry in Canada.

A national AI strategy is needed, but Canada’s been doing it wrong so far.

At OpenMedia, we believe a refreshed strategy is both necessary and timely. Developing a robust, competitive, and accountable AI ecosystem is critical to ensuring meaningful choices for Canadians and to advancing our individual digital autonomy as part of a broader vision for Canada’s digital sovereignty in the era of AI.

However, the government's approach to developing a national AI strategy is far from what we expect of the democratic country Canada is supposed to be. From the announcement of a 26-member AI Strategy Task Force to the 30-Day National Sprint on AI Strategy, the process has been framed primarily through an industry lens, and the government has ignored the core AI issues Canadians most want addressed: human rights, privacy, environmental impacts, and democratic integrity. The condensed timeline and the limited format and consultation materials have made public participation feel like a checkbox exercise rather than a meaningful dialogue.

Any national AI strategy should start by understanding how Canadians are actually using and experiencing AI, not treat it as an afterthought. Yet the government's 30-Day National Sprint missed an important opportunity to hear directly from Canadians about their top concerns, asking almost nothing about the harms and risks side of the AI equation.

Along with over 160 civil society and human rights organizations, academics, experts, and advocates across Canada, OpenMedia signed an open letter opposing this hasty 30-day national consultation. Yet a meaningful conversation with stakeholders beyond industry remains, at best, a "TBD."

OpenMedia’s community has clearly voiced its AI concerns and policy priorities.

In August 2025, OpenMedia surveyed our national community, asking: What kind of AI future should Canada build? What priorities are missing from the government’s agenda? What should be protected, promoted, or prevented?

From 3,020 responses, we heard that Canadians are deeply concerned about AI’s impact on creators, the sovereignty of our data, and the environmental footprint of AI infrastructure. Many raised alarm over how AI is fundamentally altering how knowledge and truth flow in society, because of misinformation, deepfakes, and how AI is changing the way people learn, think, and access information.

We found that Canadians are more concerned about AI’s risks than its benefits, and most want proactive, people-first AI rules. Our community believes AI cannot remain unregulated or be governed solely to drive innovation and adoption. Canadians want a framework where rights, privacy, sustainability, and public accountability are not afterthoughts, but central pillars of our government’s strategy.

Our takeaways and recommendations, inspired by our community survey

On November 5, 2025, we shared written input with Minister of Industry Mélanie Joly, AI Minister Evan Solomon, and the AI Strategy Task Force to ensure that public-interest perspectives, not just industry priorities, are considered in shaping Canada’s AI strategy.

In our submission, we highlighted Canadians' sentiments towards AI, raising the deep public unease about AI's risks to the integrity of information, personal privacy, and fundamental rights in Canada. We emphasized that this is not a rejection of innovation; Canadians are demanding AI that is responsible, transparent, and respectful of our rights.

We also shared key priorities and recommendations inspired by our community. We urged the government to regulate AI in ways Canadians would trust, like using ex-ante regulations (precautionary legal measures before new AI models are released) and EU-style risk-based rules that apply stricter oversight depending on how AI is used, rather than following the U.S.-style laissez-faire approach.

Here’s what we asked the government to do to address Canadians’ concerns and guide a human-centric, rights-respecting AI strategy, instead of one focused solely on industry and innovation:

  • Ensure Canada’s AI policy is proactive, inclusive, and adaptive, starting with a new broad, collaborative public consultation that fairly assesses harms and risks now, and using regulatory sandboxes to test and evaluate AI accountability measures;
  • Protect privacy and creative work by requiring meaningful consent, fair compensation for creators, and clear legal safeguards against misuse of data and AI-generated content;
  • Strengthen public trust by requiring clear labelling of AI-generated content and developing a permanent support system for independent, local, and fact-based journalism;
  • Make AI development environmentally sustainable by regulating energy and water use, conducting impact assessments before approving new data centres, and incentivizing low-impact technologies;
  • Safeguard workers by tracking AI adoption and funding retraining and upskilling programs;
  • Promote AI literacy through school curricula, community programs, and public campaigns that empower Canadians to understand and responsibly use AI, and help our education system adapt to its impacts.

Alongside these recommendations, which span governance, oversight, and international coordination; protection of privacy, creativity, and consent; misinformation, democratic integrity, and media accountability; workforce, employment, and economic stability; and education, digital literacy, and public awareness, we appended the full AI survey we conducted in August 2025 and charts of all survey results to the end of the submission.

You can read our full submission here, available in English.

What’s next for Canada and Canadians?

No one is 100% sure what Canada's next move in this AI chess game will be, but last month AI Minister Evan Solomon signaled plans to introduce rules on data privacy, deepfakes, and the handling of sensitive information, including data about children. The government says it will take a "light, tight, right" regulatory approach to balance innovation with accountability.

But it is a bad sign that the process for shaping a national AI strategy has been so heavily industry-focused, with so little good-faith attention paid to those raising AI's challenges. We know this fight won't be easy, but for the good of all Canadians, we have to keep speaking up and pushing for a human-centered, rights-respecting AI strategy.

OpenMedia will continue amplifying Canadians’ perspectives, holding leaders accountable, and advocating for the policies our community has called for. We’re also preparing to launch a campaign urging Canadian leaders to act on the recommendations we proposed—and to make that happen, we need your support. If you can, please help us keep this work going.



Take action now! Sign up to be in the loop, or donate to support our work.