
Five Questions Minister Evan Solomon Forgot to Ask About AI

Canadians were asked to weigh in on the country’s AI future, but the government’s consultation missed the mark. The questions focus on boosting innovation and industry, not protecting people, creativity, or the planet. Here are five big things missing from Canada’s AI conversation, and why they matter.

In 2023, the world was thrown into a frenzy when Dr. Geoffrey Hinton, Canada’s own “godfather of AI”, resigned from Google to warn about the very technology he helped create. He cautioned that AI could flood the world with misinformation, disrupt entire industries, and, if left unchecked, slip beyond our ability to control. His message was clear: innovation must be balanced with accountability and oversight.

Canadians echoed those fears in OpenMedia’s recent national AI survey, which received over 3,000 responses. The majority expressed concerns about copyright and creative rights, data sovereignty, environmental sustainability, and AI’s impact on education. Their message was unmistakable: without regulation, there is no innovation.

And yet, in Minister Evan Solomon’s new “30-Day National Sprint” to define Canada’s next AI chapter, the questions expose a deep disconnect between what matters to everyday Canadians and what the government chooses to focus on. Nearly every prompt in the national sprint centres on accelerating AI adoption, commercializing research, and scaling industry capacity. Only privacy advocates are recognized as having a stake in this rushed consultation, while NGOs that could speak to human rights, environmental impacts, and other crucial issues are sidelined. The process was not designed to be inclusive or representative of the broader public.

This is a blatant disregard for nearly every concern Canadians have urgently and repeatedly voiced, and it reveals a deeper misconception within government thinking: the false assumption that innovation takes precedence over regulation. This “innovate first, regulate later” approach is like putting cars on the road without brakes or seatbelts and calling the wreckage progress. Yet this is precisely how Canada is approaching AI: building speed without safety, mistaking motion for direction. The reality is that we will not benefit from AI unless we protect against its shortcomings.

This blog is a call to action for the federal government and Minister Solomon to face these realities head-on. Canada’s AI roadmap has five critical blind spots that cannot be ignored any longer:

  1. Environmental sustainability, where “progress” is coming at a steep ecological cost;
  2. Copyright and artist rights, where creative ownership and livelihood are under threat;
  3. Employment and economic stability, where automation threatens jobs with no alternative;
  4. Education and digital literacy, where the next generation is relying increasingly on these tools without fully understanding their limits or consequences; 
  5. Misinformation and democratic integrity, where AI-driven content is eroding trust in public institutions and truth itself. 

Each section draws on evidence and public voices to expose what Canada’s AI plan is missing and what our leaders need to address now.

What’s the Environmental Cost of AI?

“I'm worried about the environmental impact of AI...” – Raymonde B. (Manitoba)

Behind every model trained and every chatbot query lies a material cost: land, water, and electricity drawn from real people’s lives.

Researchers estimate that a single generative AI query can use roughly 10 times more energy than a standard Google search. Behind every chatbot conversation and image generator lies an infrastructure often powered by fossil fuels, cooled by millions of litres of freshwater, and built from critical minerals extracted at significant environmental cost. By 2030, some projections suggest, AI systems could account for nearly 20% of global electricity consumption and use six times more water than the entire country of Denmark.
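To put that “10 times” figure in perspective, here is a rough back-of-envelope sketch. The per-query numbers below are widely cited outside estimates, not figures from this article, and the query volume is purely illustrative:

```python
# Back-of-envelope sketch of the "10x energy per query" claim.
# Assumed inputs (commonly cited estimates, not from this article):
#   ~0.3 Wh for a conventional web search, ~3 Wh for a generative-AI query.

SEARCH_WH = 0.3                  # assumed energy per conventional search (watt-hours)
AI_QUERY_WH = 3.0                # assumed energy per generative-AI query (watt-hours)
QUERIES_PER_DAY = 1_000_000_000  # illustrative volume: one billion queries per day

extra_wh = (AI_QUERY_WH - SEARCH_WH) * QUERIES_PER_DAY
extra_mwh_per_day = extra_wh / 1_000_000  # watt-hours -> megawatt-hours

print(f"Per-query ratio: {AI_QUERY_WH / SEARCH_WH:.0f}x")
print(f"Extra demand at one billion queries/day: {extra_mwh_per_day:,.0f} MWh")
```

Even under these modest assumptions, shifting everyday searches to generative AI adds thousands of megawatt-hours of demand every single day, which is why the siting, efficiency, and water use of data centres matter so much.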

These costs are largely driven by the rapid expansion of “cloud” and AI-dedicated data centres. These facilities host the servers and high-performance processors required for training and running AI models, and their electricity and water demands are a major driver of the environmental burden. But not all data centres are created equal.

For example, Google’s facility in Finland runs on roughly 97% carbon-free energy, compared to averages of 4–18% for data centres across parts of Asia. This uneven geography exposes an uncomfortable truth: the environmental cost of AI is not shared evenly. Rather, it reflects the same patterns of environmental racism seen across other industries, where pollution and resource extraction disproportionately affect communities with the least ability to resist them. Left unchecked, these imbalances will compound, turning already overburdened regions into environmental sacrifice zones for the digital economy.

To make matters worse, AI systems inherit these same inequalities. When trained on biased data, AI often misattributes environmental degradation, blaming local governments or populations, while overlooking systemic factors such as corporate pollution or foreign infrastructure investment, like oil extraction, mining projects, and data centre expansion. In this sense, AI does not just contribute to environmental harm, but also replicates and amplifies existing social inequalities, shaping who bears the impact and who is blamed for it.

Can Canada’s Environment Afford AI’s Growth?

In Newton County, Georgia, residents’ taps ran dry soon after Meta bulldozed an oak forest and began building a $750-million data centre approximately 300 metres from their homes. Sediment clogged their wells, appliances broke, and brown water filled their sinks. Residents have spent thousands of dollars on repairs, yet Meta denies responsibility.

Local officials warn the county could face a water deficit by 2030. Meta’s facility alone uses about 500,000 gallons of water a day, and new AI-driven centres are slated to use millions more. What’s happening in Georgia is a small-scale glimpse of a much larger problem. 

With the federal government planning to expand data centre construction across Canada, the same concerns are now surfacing here at home.

In drought-stricken northern Alberta, Kevin O’Leary, the Shark Tank and Dragon’s Den multi-millionaire, has proposed building Wonder Valley – a $70-billion data centre campus more than 32 times larger than the biggest in the world. The project, which would be located on Treaty 8 territory, has moved forward without consultation with Sturgeon Lake Cree Nation, whose Chief learned of the plan through a press release. Experts warn that the facility could require millions of litres of water a day in a region already under agricultural disaster declarations due to worsening drought. It’s a striking echo of the Georgia story, only this time, it’s happening on Canadian soil.

Canadians deserve full transparency and accountability in every AI project, whether public or private. We need answers: Where will these data centres be built? How efficient will they be? How much water will they consume? And who will bear the cost when local ecosystems reach their limits?

Who Protects Artists and Audiences in the Age of AI?

“We shouldn't let fear of constant competition in capitalism compromise the rights of content creators and existing copyright laws. People, actual humans, have worked hard and intelligently to create products that people pay for to consume. Why should AI be able to learn off of that content without some sort of payment or explicit permission?” – Donald P. (New Brunswick)

"Art is one of the things that makes us uniquely human - how will we protect creative expression?” – Kathryn K. (Ontario)

Art has always been how we express what it means to be human. Now, machines are learning to do it faster, cheaper and without asking permission. 

In Nova Scotia, independent musician Ian James opened his Spotify profile to find an entire album released under his name. James never wrote, recorded, or approved the album. The tracks were entirely AI-generated, uploaded by an unknown distributor that was collecting royalties in his place. After James requested that the fake album be removed, it reappeared under a new profile with the same name.

Cases like this reveal the growing tension between creativity and automation. With a single prompt, generative AI can produce songs, images, and even full-length books in minutes, tailored to match a desired style.

These systems work by training on massive datasets of human-made content, learning the patterns, styles, and relationships that define artistic expression. While the contribution of any individual work to an AI’s output may be small, models are often trained on many millions of copyrighted works whose authors expected to earn revenue from them.

Under current copyright laws, AI developers are generally permitted to train on lawfully accessed copyrighted material for purposes such as research, development, or what’s often framed as “fair dealing” or “authorized use.” In practice, this means that creative works can be copied and analyzed to improve commercial AI systems without the artist’s knowledge, consent or compensation.

What began as a legal exception intended for limited research and ordinary everyday use has quietly become a loophole for large-scale data extraction. AI companies are treating human culture as free training data and creativity as a raw material for profit. And it's not just AI companies cashing in. Spotify, for example, has been accused of publishing AI-generated songs under the names of deceased artists without permission.

An artist’s livelihood depends on their ability to create and be recognized for their work. But that livelihood is being siphoned away by machines that can now mimic, replace, and profit from human creativity with little to no accountability. Even consumers are often unaware that the songs, images, or videos they enjoy may have been generated by machines rather than people, believing their money supports real artists when it does not. For artists, this is not innovation – it’s large-scale theft.

Can Canadian Copyright Law Catch Up to AI?

Existing copyright laws were not designed for machines that can ingest millions of creative works and reproduce them without consent or compensation. Artists’ names are being hijacked, their work diluted, and income from their works and likeness siphoned off by AI systems that depend on their labour to function. 

Canadian artists are now asking what it means to own their work in a world where algorithms can reproduce it without consent or credit. Can they continue to make a livelihood from their art? Can they be compensated when commercial AI is trained on their data? Can they opt out of having their material used to train AI? Canadian policymakers remain silent on how these harms will be prevented or redressed. As Scott B. from Ontario asks, is “theft a crime unless an AI company does it”?

Who’s Left Behind in the Age of AI Automation?

“...The loss of jobs to AI is going to be harmful to many Canadians.” – Lesley T. (Alberta)

Nearly everyone has, at some point, asked the same uneasy question: will AI take my job? That uncertainty demands answers. 

Earlier this year, Prime Minister Mark Carney announced a three-year plan to reduce federal operating budgets by 15%, describing it as a part of a broader effort to modernize government and improve efficiency. In practice, that “efficiency” is expected to come from increased automation and AI adoption across departments. These changes are likely to come at the expense of public service jobs. Already, the Canada Revenue Agency is piloting a generative AI chatbot to handle routine tax and account inquiries. And Immigration, Refugees and Citizenship Canada currently uses data analytics and automated tools to route and pre-screen applications, freeing officers to focus on more complex cases. 

Many Canadians are voicing concern that AI adoption will not make work more efficient, but will instead be used to justify cuts, heavier workloads, and shrinking human oversight in the workforce. Others worry that automation may be introduced before its benefits are proven, leaving staff to manage new systems they never asked for, without clear oversight or recourse when errors occur.

This shift reflects a growing misconception that raw efficiency and output are the same as human skill and the quality it produces. Companies and governments alike treat AI as a cost-saving measure – something that can do more work, faster and without breaks. But what’s lost in that equation is the human contribution that cannot easily be quantified: judgement, creativity, and adaptability. In this new economic order, originality and quality risk being traded away for optimization on paper.

So, are we really improving productivity, or just replacing people with machines regardless of whether outcomes improve? If AI is filling those roles, which systems are being used, who oversees them, and who will be held accountable when they fail? What roles should remain in human hands, and where is AI appropriate? How will oversight, accountability, and worker protections evolve as automation spreads?

If Ottawa truly wants modernization, it must show that it is studying these questions openly and proactively, rather than letting the market decide and trying to pick up the pieces at enormous social cost later on.

What Does Critical Thinking Mean in the Age of Chatbots?

“The arrival of AI as an infinite information source (hopefully true and accurate) totally changes the meaning of education and the development of adult reasoning and decision making… In a world swarming with clever disinformation, how can we be sure AI will not lead an entire generation into intellectual lethargy and then total mental paralysis?” – Donald C. (Ontario)

If we want education to keep pace with technology, we need to rethink what learning actually means.

In Ontario, a 16-year-old student named Marissa was accused of using AI to cheat on her school assignment after Turnitin’s detection software flagged her work as 98% AI-generated. She insisted the essay was entirely her own, but received a zero anyway. The case sparked outrage across Canada, igniting a debate over the reliability of AI detection tools and the ethics of punishing students based on opaque algorithms. 

Marissa’s story illustrates how schools are struggling to adapt to a world transformed by AI. Nearly 60% of students now use AI to help with their schoolwork, mainly for brainstorming, researching and editing assignments. Yet, many say they feel guilty about it. This guilt suggests that students are not turning to AI to cheat, but because they do not feel safe making mistakes, asking questions or admitting when they don’t know something. 

We have built an education system that rewards perfection over curiosity. Being on the honour roll has become a measure of worth, while struggling in one subject can feel like failure. In that environment, AI becomes a coping mechanism – a quiet, judgement-free space where students can ask anything. Unlike the classroom, it doesn't grade, interrupt, or embarrass. It simply responds. 

The question is not whether students should use AI; it’s why they feel they need to rely on it to do their work. What’s happening in our education system that makes students feel dependent on AI rather than empowered by it? Is it the pressure to produce perfect work? The lack of time to finish assignments? The fear of not being “smart enough” on their own?

If we want AI to strengthen education rather than erode it, we must bring these tools into classrooms intentionally in ways that support, not replace, critical thinking. How can teachers integrate AI in a way that deepens curiosity rather than shortcuts it? How do we teach students to question AI’s answers, challenge its logic, and recognize its mistakes? How will schools, universities, and policymakers define academic integrity in an age of collaboration with machines?

Who Decides What’s True in the Age of AI?

“In order to trust AI, AI must be trustworthy. There are still too many AI hallucinations in the data it provides.” – Ruth A. (British Columbia)

What’s “true” in one AI system may be false, censored or rewritten in another. 

In March 2024, a deepfake video of then-Prime Minister Justin Trudeau appeared on YouTube, showing him endorsing a fake “robot trader” investment scheme that promised viewers $10,000 a month in passive income. The ad used AI-generated visuals and fabricated audio of Trudeau’s voice, claiming, “I am confident in the robot trader and guarantee financial results to every investor.” Within hours, the video had spread widely before Google removed it and suspended the advertiser’s account.

Deepfakes threaten public discourse, journalism, and even basic evidence. AI systems can now generate misinformation faster than humans can fact-check it. They can create fake political ads, fabricate news articles, and impersonate public figures within seconds, spreading their content across social media in minutes. During elections, these tools are capable of manipulating public opinion at scale, making it increasingly difficult for Canadians to know what is real and what is not. 

And that’s only part of the story. When Elon Musk’s Grok was asked what the greatest threat to Western civilization was, it initially answered misinformation and disinformation, before its owner had the model edited to say low fertility rates. Meanwhile, Google’s Gemini, in its early image-generation rollout, produced multi-racial Nazi soldiers in response to neutral prompts.

But the risks go beyond politics. Deepfakes and generative models are already being used in scams, identity theft, and non-consensual sexual content. In our survey, nearly half of Canadians said they are not sure whether they can tell real news from AI-generated content.

As more and more people, students included, turn to AI to get their facts, the inclinations and biases of these models matter a great deal. AI hallucinations are finding their way into expensive textbooks, court filings, company reports, and even medical records. And of course, these are only the places where they have been caught. Even simple factual tasks can go wrong: AI sometimes miscalculates percentages or generates made-up yet convincing numbers.

The more trust we place in these systems to inform us about the world, the more we face a fundamental dilemma: when the truth itself becomes automated, who gets to decide what’s credible?

When Chatbots Shape Reality

AI companies can alter a chatbot’s worldview through system prompts. These prompts are neither code nor sophisticated programming; they are simple commands like “be politically incorrect” or “don’t trust mainstream media.” Because these edits can be made instantly, they allow executives or engineers to quietly steer what a chatbot says, amplifying certain perspectives while muting others. 
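To make that concrete, here is a minimal, hypothetical sketch in Python of how a system prompt sits invisibly ahead of every user question. The prompt texts, the example question, and the build_request helper are illustrative assumptions, not the configuration of any real chatbot:

```python
# Hypothetical sketch: a "system prompt" is just text the company prepends to
# every conversation. The user never sees it, but the model reads it first.

DEFAULT_SYSTEM_PROMPT = (
    "You are a helpful assistant. Cite reputable sources and note uncertainty."
)

# One quiet edit by an executive or engineer changes every answer from then on.
EDITED_SYSTEM_PROMPT = (
    "You are a helpful assistant. Be politically incorrect. "
    "Do not trust mainstream media."
)


def build_request(user_question: str, system_prompt: str) -> list[dict]:
    """Assemble the messages a chat model would receive for one question."""
    return [
        {"role": "system", "content": system_prompt},  # hidden instructions
        {"role": "user", "content": user_question},    # what the user typed
    ]


if __name__ == "__main__":
    question = "What is the greatest threat to Western civilization?"
    # Same question, two different hidden instruction sets.
    for prompt in (DEFAULT_SYSTEM_PROMPT, EDITED_SYSTEM_PROMPT):
        print(build_request(question, prompt)[0]["content"])
```

The point of the sketch is how little is involved: the change is plain text, made server-side, invisible to the person asking the question, and reversible in seconds, which is exactly why a chatbot’s answers can be steered without anyone outside the company noticing.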

Grok stands as a particularly cautionary example. Elon Musk’s efforts to engineer the model to be “less woke” have resulted in a product that appears to mirror his own political leanings rather than objective truth. To make matters more complex, the system also faces internal vulnerabilities, with reported incidents of rogue employees making unauthorized prompt changes that produced biased outputs.

If AI systems are increasingly used to answer questions, summarize news and explain world events, whose version of the truth will they reflect and who is accountable when they get it wrong?

The Fragility of Truth

Once false information spreads, the truth rarely catches up. Yet addressing this problem is extraordinarily difficult. Misinformation can be in the eye of the beholder, with some seeing attacks on truth where others see comedy, political satire, or simple expression. Government attempts to define misinformation can easily backfire, being misapplied to borderline content and degrading trust in our information ecosystem as a whole.

Canadians want answers. How will the government ensure that future political ads are checked for deepfake content before they go live? How will citizens know when an image, video or article has been generated or altered by AI? Who will be held accountable when these tools are used to deceive or harm? Who controls what AI systems can and cannot say?

Conclusion

Each year, nearly 8,000 wildfires burn across Canada, consuming an area one and a half times the size of Prince Edward Island. Entire communities are displaced, wildlife habitats vanish overnight, and the air we breathe turns thick with smoke. For those on the frontlines, every minute counts.

That’s where new technologies are beginning to make a difference. AI, trained on decades of fire data, weather patterns, and human behaviour, can now predict where a fire is most likely to start and spread. These tools help firefighters plan more strategically, move resources where they are needed most, and save precious time, often the difference between a contained blaze and a community lost. In moments like these, AI can be a lifeline.

But lifelines require trust, and trust requires oversight. AI is a tool, much like the printing press or the internet once were. It’s powerful and transformative. Yet without safety, fairness, and accountability built in from the start, the very technology designed to help us could just as easily harm us.

Minister Solomon and the Government of Canada must act now to close the widening gap between innovation and regulation. Canadians have made it clear that we want progress that protects people, not profits; systems that serve the public good, not private interests. The time for underfunded pilot projects and empty policy promises has passed.

Canada’s next AI chapter must be one that puts human well-being, creativity, and environmental sustainability at its core. Anything less is not innovation – it’s neglect.


