Canada needs a secure-coding policy — and AI is making that more urgent
Software development is changing, and the government needs to respond
This article originally appeared in Digital Journal.
The federal government spends approximately $6.8 billion on information and communications technology every year.
It contracts extensively with the private sector for software development, database administration, cybersecurity, and more, handling core Canadian services that include sensitive financial and health information. Despite the scale of that investment and the criticality of those systems, Canada does not have a secure-coding policy. That gap is getting harder to ignore.
Secure coding refers to a set of practices designed to instill security into software development from the start. Security educator Tanya Janca describes it as “fostering a proactive, security-minded culture in software development teams”. The goal is to eliminate bugs and exploits that expose sensitive data or allow threat actors into an application or network.
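One of the most common secure-coding practices is parameterized database queries, which prevent SQL injection, a long-standing route for exactly the kind of data exposure described above. The sketch below is purely illustrative (the table, column names, and attacker input are invented for the example) and contrasts an unsafe query built by string concatenation with a parameterized one:

```python
import sqlite3

# Illustrative only: a tiny in-memory database with made-up user records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice@example.ca"), (2, "bob@example.ca")])

user_input = "1 OR 1=1"  # attacker-controlled value

# Insecure: concatenating input into the SQL lets it rewrite the query,
# so the WHERE clause becomes "id = 1 OR 1=1" and every row leaks.
unsafe = conn.execute(
    "SELECT email FROM users WHERE id = " + user_input).fetchall()

# Secure: a parameterized query treats the input strictly as data,
# so the malicious string matches no row and nothing leaks.
safe = conn.execute(
    "SELECT email FROM users WHERE id = ?", (user_input,)).fetchall()

print(len(unsafe), len(safe))  # 2 0
```

The secure version is no harder to write; the point of a secure-coding policy is to make patterns like this the default rather than an afterthought.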
The stakes are real. On average, Canadian businesses lose nearly $7 million per data breach. Total recovery costs from cybersecurity incidents exceeded $1.2 billion in 2023. Secure coding is not yet standard practice across the industry, but the case for it is becoming more difficult to dismiss.
AI is a big reason why.
PwC has found that AI is already automating tasks previously performed by developers, driving labour reductions, and enabling smaller teams to deliver software under software-as-a-service models. The Information and Communications Technology Council finds that many junior-level tasks, including programming, are increasingly automated. As AI adoption accelerates across the industry, the need for a clear market signal around secure development is growing.
That signal has not come.
AI is increasingly used in programming and operations despite ongoing debate about its reliability. Anthropic, the creator of the Claude family of AI models, has acknowledged that the model “frequently overstated findings and occasionally fabricated data during autonomous operations.” AI can be productive and transformative, but it is not infallible. In some cases, poorly developed models can obscure their own errors. Human-in-the-loop oversight is not optional; it is a necessary condition for responsible deployment.
The Government of Canada is the largest ICT client in the country. Adopting a secure-coding policy would be a significant market lever, establishing strict requirements for secure software development across all government contracts, not just IT contracts.
That matters not just for security, but for digital sovereignty. A secure-coding policy can help ensure that Canadian data used in software development is handled in accordance with Canadian law without cross-border data transfers that could compromise sovereignty when US infrastructure is involved.
This is not about constraining AI or slowing innovation. It is about ensuring that adoption meets a security and safety standard, one that allows the federal government to tell Canadians their data is protected.
Such a policy also fits squarely within Canada’s National Cyber Security Strategy. Pillar 2 seeks to make Canada a global cybersecurity industry leader by prioritizing trusted innovation and building a foundational workforce. Fostering secure-coding and secure-AI practitioners advances all three of those aims: industry leadership, trusted innovation, and workforce development.
Janca, a Canadian information security leader and secure-coding advocate, has initiated a petition to the Government of Canada calling on the federal government to adopt a secure-coding policy for all custom software systems. It is one of the clearest signals yet that the practitioner community sees this as urgent. Whether Ottawa is paying attention is another question.


