AI on the Horizon: March 16, 2026

Innovation & Technology     

Canada’s latest AI news on the economy, society, and policy. From the creation of guardrails to concerns about protecting youth, the ethical implications of AI and its regulation have dominated recent media coverage. In this issue, we run down the headlines and review the guidelines in motion around AI safety, ethics, and regulation.

In Canada, educators continue to balance the use of AI among students at all levels. Teachers are developing their own skills to build learning plans, assignments, and lessons with AI. Since outright classroom bans have not worked, deciding when and where AI is used has become a growing part of the learning process. A recent research report suggests that AI tools that teach, rather than tell, are necessary to enrich student learning.

While a recent US survey reports that half of teens use AI for homework, the bigger shift is how they use it outside the classroom: 12 per cent of US teens report seeking emotional support or personal advice from AI. This trend is gaining attention among mental health professionals, who are concerned that chatbots may increase social isolation. Canadian research recommends clear governance and transparency for the AI systems that young people are exposed to. Canada’s public consultation on AI strategy suggests youth protections will be a pillar of future regulations.

Research into the mental health impacts of AI use is still in its early stages, but the topic is a top priority for Canada’s Mila Institute, which is developing metrics and guardrails to better contend with cases of AI-driven psychosis. Yoshua Bengio, one of Canada’s leading AI safety experts, has called the impact on adolescents a particularly hot-button issue for policymakers. AI companionship apps have reached 10 million users globally, with early evidence suggesting they may do more mental health harm than good.

Protecting ourselves from AI

Chatbot interactions linked to real-life tragedies are on the rise, but who decides when it’s time to step in? Legal and policy researchers have spotlighted that current regulatory frameworks were not designed for AI systems that engage users autonomously and conversationally. Canadian officials are investigating whether AI firms have the safeguards and procedures in place to properly identify harmful conversations, including those that could incite violent behaviour.

Even OpenAI is now partnering with mental health and behavioural experts to establish criteria for when to involve law enforcement, committing to stronger protocols going forward. Some experts argue that stronger guardrails are necessary, while critics counter that increased surveillance undermines personal privacy rights.

A US lawsuit alleges that a chatbot contributed to fatal consequences, reinforcing how quickly these technologies can inflict harm on society. Yet the challenge remains: who is at fault, the user, the firm, or the regulation that allows it?

Terms of Military Service

Switching to institutional uses of AI, we pick up where we left off in our last issue, with Anthropic in a standoff with the Pentagon. Ultimately, Anthropic refused the Pentagon’s request for unrestricted access to its systems (i.e. removing all guardrails). In response, the Pentagon designated Anthropic a supply-chain risk, and Anthropic has since announced plans to challenge the designation in court.

Despite initial support for Anthropic and internal dissent from its own employees, OpenAI negotiated an agreement allowing the Pentagon to use its AI models, later stating that the company does not control how the Pentagon uses its services in military operations.

On the Horizon: Everything AI at Once

Recently, AI has posed another kind of existential threat: the ability to identify, surveil, and act on military commands. While news swirled around the AI-assisted capture of the Venezuelan president, substantial US Defence contracts were put into use despite pushback from the firm involved.

The use of AI in military operations is not new. What is new is the orchestrated, end-to-end operation of AI in warfare, which raises questions about the regulatory guardrails within which AI can be ethically deployed.

AI’s influence on national security and democracy remains at the forefront of AI ethics amid international conflicts and debates over national sovereignty. Canada has long played a leading role in AI innovation, and our leadership in safety is now being tested as we develop policy recommendations for responsible use. Canadian research suggests AI systems have already reshaped public discourse through social media. AI-generated content, the spread of misinformation, and biased political messaging are key concerns among legislators, yet no legislation addressing them has been passed in Canada.

In case you missed our last issue and want to learn more about how Canada’s AI strategy is shaping up, check out our special issue on the Defence Industrial Strategy.