Why Canada’s Cultural Sector Is the Perfect Playground to Figure Out the Country’s Emerging Technology Infrastructure Strategy

Between Microsoft, Alphabet, Amazon, and Meta, 2026 “AI infrastructure” spending is tracking toward $875 billion, roughly 3% of US GDP. That’s not a startup bet. That’s the spending signature of companies becoming regulated utilities. It’s the same arc that turned AT&T from a growth story into a telephone company, and before that turned railroads from speculative frontier expansion into regulated infrastructure. History shows that the economics for the winner are grand, with one fatal flaw hanging over them: it had better work. The market is betting that usage will go big the way the world wide web delivered. If that bet doesn’t pay off, the failure will be devastating.
Right on cue, the market is beginning to surface a fundamental truth that counters this play, one nobody wants to acknowledge. The canary in that particular coal mine is already on life support: it’s getting harder to ignore that the market is moving in direct opposition. Small. The prevailing winds have recently shifted, and the top models in the world are now the leaner Chinese ones, whose key advantage is doing a whole lot with literally a fraction of the compute.
Canada is about to spend $2 billion on a Sovereign AI Compute Strategy. The Globe and Mail, the stalwart mouthpiece of Corporate Canada, is running op-eds on the need for a nationalized public AI. The AI Minister seems to be taking the bait while simultaneously taking meetings with Sam Altman, who himself has been busy of late. He’s lobbying Ottawa for a piece of that $2 billion while negotiating guardrails with the Canadian AI Minister on the product his company failed to govern when it mattered. All while lifting those same guardrails entirely for his new partner, the US Department of Defence.
But that’s apparently not relevant to the sovereignty conversation.
And somehow the entire national discourse around artificial intelligence has been swallowed whole by a tragedy that, if you look at it honestly, has almost nothing to do with artificial intelligence.
Eight people were killed in Tumbler Ridge. A young woman was in enough pain to tell a chatbot what she was going to do. The company’s reaction was to ban her account. Nobody called a health specialist. Nobody called the authorities. She made a second account and kept going. Banning someone from software doesn’t treat the underlying condition.
The national response was to blame a technology that was about 55 seconds old at the time and to haul its American CEO to Ottawa and demand better surveillance from the same centralized architecture that already failed. Better detection. Better flagging. More monitoring. How do you take a tragedy rooted in decades of chronic underfunding of mental health, community services, and the humanities for the sake of building pipelines, mines and convention centres, and turn it into a technology regulation problem?
The same way you’ve always done it. By continuing to leave human beings out of the technology conversation entirely. Given government’s ability to build literally nothing over the last several decades, the collective disinterest among those actually doing the building in the trenches is palpable. That’s the pattern, and it’s been this way since the beginning. Technology gets built for scale, extraction, and efficiency. Humans get managed as a variable. A necessary bug that will always be there. And when the consequences of these decisions arrive, as they always do, the response is to regulate the tool instead of rethinking the relationship between the tool and the people it was supposed to serve.
A Way Through?
A recently published position paper from Scarborough-based cultural design foundry Sprnova PopWorks offers food for thought. Built to Forget: The Importance of Consent Infrastructure for the Post-Keyboard Era lays out a structural argument that extends well beyond interface design into an effort to reclaim our humanity within the technology that serves us. What started as an exploration of voice eventually replacing the keyboard as the primary interface layer evolved into a blueprint for mapping our way back to humanity using technology for its betterment. Not the other way around.
The concern is prescient. Voice carries diagnostic information that text never will. Depression, cardiac stress, cognitive load, and early-stage neurological conditions all leave measurable acoustic signatures, and the clinical research is more advanced than most people outside the field realize. But to accommodate this expansion of data collection for LLM usage, the applications that will matter require something the current platforms are architecturally incapable of providing: consent. Consent where each person’s data is explicitly traceable. Not layered deep in a Terms of Service. Not mass authorization bundled into a signup flow. Actual consent at the structural level, allowing a new paradigm to take hold: one with an embedded fidelity of data.
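To make “consent at the structural level” concrete, here is a minimal sketch of what a traceable, per-use consent record could look like. The type names, the list of uses, and the check itself are this article’s illustrative assumptions, not a schema from the paper.

```typescript
// A minimal sketch of structural, per-use consent for voice data.
// Names and uses are illustrative assumptions, not drawn from the paper.

type VoiceUse = "transcription" | "acoustic-health-screening" | "model-training";

interface ConsentGrant {
  personId: string;   // whose data this is
  use: VoiceUse;      // the one specific use being authorized
  grantedAt: string;  // ISO timestamp, so the grant is traceable
  expiresAt?: string; // consent can lapse instead of living forever
  revoked?: boolean;  // and it can be withdrawn after the fact
}

// An application may act only when it can point to an explicit, unexpired,
// unrevoked grant for that person and that exact use. There is no
// "agreed to the Terms of Service" fallback.
function mayUse(grants: ConsentGrant[], personId: string, use: VoiceUse, now = new Date()): boolean {
  return grants.some(
    (g) =>
      g.personId === personId &&
      g.use === use &&
      !g.revoked &&
      (!g.expiresAt || new Date(g.expiresAt).getTime() > now.getTime())
  );
}
```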
And a generation is waiting.
Fifteen to twenty million pandemic-era students didn’t fall behind in text-based communication. They opted out of it. Voice memos instead of texts. Screen recordings instead of email. They aren’t stranded because they can’t adapt to keyboards. They’re stranded because keyboards refuse to adapt to them.
Which brings the conversation back to the $2 billion. Canada’s Sovereign AI Compute Strategy is investing in supercomputing infrastructure, compute access funds, and private sector capacity. The commitment is profound and generational.
The question nobody is asking: Compute for what exactly?
Building a CanCon version of OpenAI is not a strategy. It’s a reflex. It’s the same institutional muscle memory that produced decades of content regulation favouring corporate incumbents and middlemen disguised as Canadian corporations, recycling Manifest Destiny content, over the artists and creators the system claimed to protect. The country that produced Marshall McLuhan should be capable of a more interesting answer than “build the American thing, but smaller and with government funding.”
Looking Past Our Collective Noses
Meanwhile, the sector that actually has the operational DNA for what comes next keeps getting positioned as the thing that needs protecting. From algorithms. From American platforms. From the LLM technology itself. Canada’s cultural institutions have been building consent infrastructure for fifty years without using that language. Rights management that tracks to actual usage. Licensing agreements with real accountability mechanisms. Compensation structures that distinguish between “we can legally use this” and “the creator explicitly authorized this specific use.” That distinction is the entire game for the next generation of LLM applications. The tech platforms have never had to learn it, and they’re about to need to. The learning curve is not a software update.
The cultural sector runs on constraints that forced a kind of operational discipline the Canadian technology industry has never developed. TIFF turns a $30 million operating budget into over $200 million in economic impact. The Canada Council generates documented multiples on every dollar distributed. Per federal dollar invested, the return on Canadian cultural infrastructure is approximately $29. These aren’t charities being sustained by goodwill. These are institutions that figured out how to do more with less because the alternative was not existing.
That discipline is the substrate. Not the content it produces but the operational knowledge underneath it. How meaning works. How context determines value. How you build systems that respect the humans passing through them. The technology industry has never needed to know any of this. Truth be told, it didn’t care.
It’s about to.
And here’s the part of the conversation that isn’t happening at all, the part the paper not only makes the case for but provides the roadmap to. The $2 billion sovereign compute discussion assumes the current web is the one we’re building for. It isn’t.
The current webpage, a document designed for human eyes, arranged in pixels, navigated by scrolling, is zombie tech waiting patiently for what’s coming to put it out of its misery. When an LLM agent acts on behalf of a user, ninety percent of a modern webpage is noise. The hero image, the navigation bar, the tracking pixel? An agent doesn’t consume any of it. It needs to evaluate content against user intent and take action.
The entire visual web is collapsing into structured documents optimized for machine comprehension.
The metadata standards powering every recommendation engine and search result you’ve ever seen were built to answer one question: “what is this thing?” Artist, title, year, genre. Enough for a human to find an item on a shelf. That’s the good old Dewey Decimal system in ones and zeroes: the library card era digitized but not reimagined.
The agentic web needs metadata that answers fundamentally different questions. How does this thing relate to other things? When should it be surfaced? What does choosing it communicate about the person choosing it? What are the consent parameters around an agent acting on it? Who gets paid when a transaction happens? That’s not a search problem. That’s a semantic infrastructure problem. And nobody is building for it on this side of the pond.
The Built to Forget paper lays out what this agent-native infrastructure actually looks like in code: structured manifests with no HTML, no CSS, no JavaScript, only meaning, consent graphs, and action endpoints.
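As a rough illustration of that idea, a manifest for a single cultural work might carry nothing but meaning, consent, and actions. The structure, field names, and endpoints below are a hypothetical sketch, not the schema published in the paper.

```typescript
// Hypothetical sketch of an agent-native manifest: no HTML, no CSS,
// no JavaScript, only meaning, consent, and action endpoints.

interface WorkManifest {
  id: string;
  meaning: {
    summary: string;       // what the work is, stated for machine evaluation
    relatesTo: string[];   // how it connects to other works and topics
    surfaceWhen: string[]; // contexts in which an agent should offer it
  };
  consent: {
    creatorAuthorizedUses: string[]; // explicit authorization, not "we can legally use this"
    agentActionsAllowed: string[];   // what an agent may do without asking again
  };
  actions: {
    license: string; // endpoint an agent calls to license the work
    pay: string;     // endpoint that routes payment to the creator
  };
}

const example: WorkManifest = {
  id: "work:short-film-042",
  meaning: {
    summary: "A 12-minute documentary on Prairie grain elevators.",
    relatesTo: ["work:photo-essay-017", "topic:rural-architecture"],
    surfaceWhen: ["user researching Canadian regional history"],
  },
  consent: {
    creatorAuthorizedUses: ["streaming", "classroom-screening"],
    agentActionsAllowed: ["quote-up-to-30-seconds", "license-for-streaming"],
  },
  actions: {
    license: "https://example.org/api/license/work:short-film-042",
    pay: "https://example.org/api/pay/work:short-film-042",
  },
};
```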
The front-end doesn’t get redesigned. It gets replaced by the metadata itself. The institutions that understand how culture functions as meaning rather than content are the only ones positioned to build this layer correctly.
That’s the nerd version of why cultural infrastructure matters for AI. Not because artists need protection. Because the agentic web needs a semantic layer that only people who understand meaning can build.
A Hoser Play Already In Action
It’s worth noting that a full-scale answer to Small already exists within Canadian borders. Toronto-based Two Small Fish Ventures, led by Eva Lau and Allen Lau, has built a portfolio of companies that reads like the hardware manifest for everything this paper predicts. Zinite is building true 3D chip architecture that solves the distance tax between memory and compute. Applied Brain Research just shipped a chip that runs full-vocabulary voice AI on-device at under 30 milliwatts, no cloud required.
The hardware for consent-native, on-device voice infrastructure is being built in Canada right now. What it needs is the application layer: the semantic intelligence, the consent architecture, and the understanding of how meaning actually works between humans, to turn those chips into something that can finally ‘speak’ to people. That knowledge lives in the cultural sector. It always has. The $2 billion should be introducing them.
The instinct to protect culture from technology is understandable. It’s also the wrong move at the wrong moment. The technology transition is creating a void that utilities cannot fill. The consent expertise, semantic depth, and operational discipline required to fill it correctly already exists in institutions that learned to serve the souls of humans under constraints the technology industry has never faced.
Canada doesn’t need a smaller, government-funded version of Silicon Valley’s stack. It needs to recognize that the most valuable layer of the coming infrastructure is something this country accidentally spent fifty years building.
The $2 billion is real. The question is whether it gets spent on someone else’s plumbing or our own.
Built to Forget: The Importance of Consent Infrastructure for the Post-Keyboard Era is available here: