Source: The Conversation – Canada – By Abbas Yazdinejad, Postdoctoral Research Fellow, Artificial Intelligence, University of Toronto
Imagine receiving a robocall, but instead of a real person, it’s the voice of a political leader telling you not to vote. You share it with your friends, your family — only to find out it was a hyper-realistic AI voice clone. This is not a hypothetical.
In January 2024, a fake Joe Biden robocall reached New Hampshire Democrats urging them to “stay home” ahead of the state primary. The voice may have been synthetic, but the panic was real — and it’s a preview of the threats facing democracies around the world as elections become the most valuable targets for AI‑driven disinformation.
AI‑generated content — whether deepfakes, synthetic voices or artificial images — is becoming shockingly simple to create and near‑impossible to detect.
Left unchecked, the harms posed by this new disinformation threat are myriad, with the potential to erode public trust in our political system, depress voter turnout and destabilize our democratic institutions. Canada is not immune.
The danger is already here
Deepfakes are artificially generated media — video, audio or images — that use AI to realistically impersonate real people. The benign applications (movies, education) are well understood, but the malicious applications are quickly catching up.
Generative AI tools like ElevenLabs and OpenAI's Voice Engine can produce high-quality cloned voices from just a few seconds of audio, while apps like Synthesia and open-source software such as DeepFaceLab put video manipulation in the hands of anyone with a laptop.
These tools have already been weaponized. Beyond the Biden robocall, Trump’s campaign shared an AI‑generated image of Taylor Swift endorsing him — an obvious hoax, but one that nonetheless circulated widely.
Meanwhile, state‑backed entities have deployed deepfakes in co-ordinated disinformation campaigns targeting democracies, according to the Knight First Amendment Institute, a free speech advocacy organization.
Why it matters for Canada
Canada recently concluded its 2025 federal election — conducted without robust legal safeguards against AI‑enabled disinformation.
Unlike the European Union, which has enacted the AI Act mandating clear labelling of AI-generated text, images and video, Canada has no binding regulations requiring transparency in political advertising or synthetic media.
Instead, it relies on voluntary codes of conduct and platform‑based moderation, both of which have proven inconsistent. This regulatory gap leaves the Canadian information ecosystem vulnerable to manipulation, particularly in a minority‑government situation where another election could be called at any time.
Alarm is mounting around the world. A September 2024 Pew Research Center survey found 57 per cent of Americans were “very” or “extremely” worried that AI would be used to generate fake election information; Canadian polls show a similar level of concern.
Closer to home, researchers recently discovered deepfake clips — some mimicking CBC and CTV bulletins — circulating in the run-up to Canada’s 2025 vote, including one purported news item that quoted Mark Carney, showing how fast AI‑powered scams can show up in our feeds.
What we can do
No single measure is a panacea, but Canada could take the following key steps:
- Content-labelling laws: Emulate the European Union, whose AI Act already requires creators to label manufactured content, and mandate labels for AI-generated political media in Canada.
- Detection tools: Invest in Canadian deepfake detection research and development. Some Canadian researchers are already advancing this work, and the resulting tools should be integrated into platforms, newsrooms and fact-checking systems (a simplified sketch of such a pipeline follows this list).
- Media literacy: Expand public programs that teach AI literacy and how to spot deepfakes.
- Election safeguards: Equip Elections Canada with rapid-response guidance for AI-driven disinformation.
- Platform accountability: Hold platforms responsible for failing to act on verified deepfakes, and require transparent reporting on removals and detection methods for AI-generated content.
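To make the detection-tools recommendation concrete, here is a deliberately simplified Python sketch of what such a pipeline looks like in outline: ingest a clip, compute a feature, score it against a threshold and emit a flag for human review. The spectral-energy feature, the threshold and the file name are illustrative assumptions, not a published detector; real systems rely on trained neural models.

```python
# Toy outline of a detection pipeline, NOT a real deepfake detector.
# Assumes a 16-bit mono WAV file; feature and threshold are illustrative.
import wave

import numpy as np


def high_band_energy_ratio(path: str, cutoff_hz: float = 4000.0) -> float:
    """Fraction of spectral energy above cutoff_hz (a naive stand-in feature)."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    total = spectrum.sum() or 1.0  # guard against silent clips
    return float(spectrum[freqs > cutoff_hz].sum() / total)


def flag_clip(path: str, threshold: float = 0.05) -> dict:
    """Emit a structured flag a newsroom dashboard or fact-checker could ingest."""
    score = high_band_energy_ratio(path)
    return {"file": path, "feature": score, "needs_review": score < threshold}


if __name__ == "__main__":
    print(flag_clip("suspect_robocall.wav"))  # hypothetical file name
```

The point is the shape of the workflow rather than the feature itself: whatever model sits in the middle, the output should be machine-readable so platforms, newsrooms and fact-checkers can act on the same signal.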
Empowering voters in the AI age
Democracies are built on trust: in elected officials, in institutions and in the information voters consume. If voters can't trust what they read, see or hear, that trust erodes and the very fabric of civil society begins to unravel.
AI can also be part of the solution. Researchers are working on digital-watermarking schemes to trace manufactured content (a minimal sketch of the idea follows below), and media outlets are deploying real-time, machine-learning-powered fact checks. Staying ahead of AI-powered disinformation will take both smart regulation and an alert public.
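Since the article leans on watermarking as a technical fix, the sketch below shows the core idea with the simplest possible scheme: hiding a short tag in an image's least-significant bits. This is a toy for intuition only; the tag string is invented, and real provenance efforts (such as the C2PA standard and learned, tamper-resistant watermarks) are far more robust.

```python
# Minimal least-significant-bit (LSB) watermark, for illustration only.
import numpy as np

TAG = "AI-GENERATED"  # illustrative tag, not a real labelling standard


def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the least-significant bits of the pixels."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the input stays intact
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)


def extract(pixels: np.ndarray, length: int = len(TAG)) -> str:
    """Read length*8 least-significant bits back into a string."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")


image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
assert extract(embed(image)) == TAG
```

A watermark this naive would not survive re-compression or cropping, which is precisely why the research mentioned above focuses on schemes that do.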
The political future of Canada’s minority government is uncertain. We cannot wait for a crisis to act. Taking action now by modernizing legislation and building proactive infrastructure will help ensure democracy isn’t another casualty of the AI era.
Abbas Yazdinejad is a postdoctoral research fellow in artificial intelligence and cybersecurity in the Artificial Intelligence and Mathematical Modelling Lab (AIMMLab) at the University of Toronto.
Jude Kong receives funding from NSERC, NFRF, IDRC, FCDO and SIDA. He is affiliated with Artificial Intelligence and Mathematical Modelling Lab (AIMMLab), Africa-Canada Artificial Intelligence and Data Innovation Consortium (ACADIC), Global South Artificial Intelligence for Pandemic and Epidemic Preparedness and Response Network (AI4PEP), Canadian Black Scientist Network (CBSN).
– ref. Battling deepfakes: How AI threatens democracy and what we can do about it – https://theconversation.com/battling-deepfakes-how-ai-threatens-democracy-and-what-we-can-do-about-it-262262