The global race to govern artificial intelligence has already produced its first significant failure, and it has done so quietly. The European Union, the US and China are each constructing competing governance frameworks that reflect their own economic and strategic interests, while the technology these frameworks are designed to regulate continues to advance faster than any of them were built to handle. Meanwhile, the majority of the world’s population remains excluded from a conversation whose consequences they will disproportionately bear.
AI systems are already embedded in the infrastructure of daily life, determining who receives a loan, which job applications are reviewed, which patients are prioritised for care and which content shapes how people understand political events. The absence of coherent, inclusive governance for these systems is not a temporary gap. It reflects a structural failure to treat AI governance as a global public good rather than a competitive instrument.
AI governance is the architecture of rules, institutions, and accountability mechanisms that determines how AI systems are built, by whom, under what constraints, and with what consequences for those affected by their decisions. Left to market incentives alone, technology companies optimise for what their business models reward.
The history of social media platforms illustrates this: deployed globally without adequate accountability mechanisms, they produced harm to democratic discourse, to mental health and to the information environment, not because their designers intended it, but because the incentive structure rewarded scale and engagement over social wellbeing. Advanced AI is being built under the same structural conditions. The consequences are likely to be larger and harder to reverse.
What makes the current moment particularly pressing is the speed of the technology itself. Researchers and engineers at the most prominent AI laboratories, including those who have built these systems, have stated publicly that AI capable of surpassing human cognitive ability across most domains could arrive within years, not decades. The governance frameworks being finalised in Brussels, Washington and Beijing were designed to manage the social impact of AI as it existed in 2022. They will govern systems that are already several generations beyond that.
The EU AI Act is the most comprehensive of the three frameworks. It establishes risk-based compliance requirements for any company serving European clients, regardless of where it operates, thereby extending its regulatory reach well beyond European borders. Carnegie Europe has documented the geopolitical dimension embedded in this design. The Brussels Effect describes the tendency for European standards to become global standards, because the cost of maintaining separate compliance systems for European and non-European markets exceeds the cost of simply adopting EU rules everywhere. European standards therefore spread irrespective of whether non-European governments or companies have any say in designing them.
The US has pursued a more explicitly strategic posture. The Trump administration’s AI Action Plan, published in July 2025, states that it is the policy of the US to export its full AI technology stack to the world, encompassing hardware, cloud infrastructure and the governance norms that accompany them. The Stargate initiative commits $500 billion to American AI infrastructure over four years. Governance, in this framework, is bundled with market access.
China operates through different but parallel means. Through algorithmic regulations that prioritise state oversight, and through digital infrastructure extended across Asia and Africa under the Belt and Road Initiative, Beijing embeds its governance norms into the networks of countries that adopt its technology. In effect, the governance model arrives before the recipient country has developed its own.
Three competing frameworks yield a single consistent outcome: the countries with the least influence over the rules bear the heaviest consequences. The Stanford HAI 2025 AI Index reports that 75 per cent of global AI legislation has been produced by a small number of advanced economies. Countries representing the majority of the world’s population have produced almost none.
When the International Network of AI Safety Institutes was inaugurated in November 2024, its membership was almost entirely drawn from the Global North. The Global South is not a participant in this governance conversation. It is the population upon which the outcomes will be imposed, whether in the form of compliance costs exported by the Brussels Effect, infrastructure dependencies created through Belt and Road digital agreements, or automated decision-making systems designed without reference to the conditions of the people they affect.
A global governance model with genuine inclusive participation from the countries most exposed to AI’s social consequences is the only proportionate response. The UN Global Digital Compact provides existing institutional scaffolding. The AI Safety Institute network could be meaningfully expanded. Existing multilateral institutions have governed cross-border technology with significant economic consequences before. The three powers currently dominating the conversation, however, each gains more from maintaining its own framework than from building a shared one. That is why no shared one exists.
That calculation, however, is not permanent. AI systems are arriving faster than any regional framework was designed to anticipate, and the people building those systems are among the first to acknowledge it. When the next generation of AI arrives, on a timeline shorter than most governments have understood, the question of who was at the table when the rules were written will carry consequences that no regional compliance framework was built to manage. The window is narrowing by the day.
The writer is a graduate in political science and tech management.
He can be reached at: [email protected]