Table of Contents
- What You Need to Know
- The Economic Barrier: AI Has Become an Industrial-Scale Enterprise
- The Security Barrier: Export Controls and Geopolitical Weaponization
- The AI Middle Powers Trap
- AI Safety Regulation: Transparency Without Enforcement
- Latest Developments (2025 to 2026)
- Key Implications and Outlook
What You Need to Know
Access to frontier artificial intelligence is increasingly concentrated among a small number of actors — primarily the United States and China — driven by two compounding forces: soaring economic costs (compute, energy, talent, infrastructure) and tightening security constraints (export controls, safety regulations, geopolitical weaponization). The result is a growing "compute divide" that threatens to lock most of the world out of shaping the AI systems that will reshape their economies, militaries, and societies.
The Economic Barrier: AI Has Become an Industrial-Scale Enterprise
The Cost of Frontier Training
Training a frontier AI model now requires:
- Tens of thousands of advanced GPUs (Nvidia H100/H200/B200, each costing $25,000–$50,000+)
- Years of secured hardware supply — clusters are pre-allocated far in advance
- Gigawatt-scale datacenters with dedicated power plants (OpenAI's Stargate project in Texas: ~$60 billion, a site larger than Central Park, with its own natural gas plant)
- Hundreds of millions to billions of dollars per training run; projections suggest costs could reach several billion within a few years
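The scale of these figures is easy to sanity-check with back-of-envelope arithmetic. The sketch below is purely illustrative: the GPU count, unit price, and infrastructure overhead multiplier are assumptions for demonstration, not reported numbers from any specific lab.

```python
# Back-of-envelope cluster capital cost (illustrative assumptions only).
def cluster_capex(num_gpus: int, gpu_price: float, overhead: float = 2.0) -> float:
    """GPU spend times an overhead multiplier covering networking,
    power, cooling, and buildings (roughly doubling hardware cost)."""
    return num_gpus * gpu_price * overhead

# e.g. a hypothetical 50,000-GPU H100-class cluster at $35,000/unit
capex = cluster_capex(50_000, 35_000)
print(f"Estimated cluster capex: ${capex / 1e9:.1f}B")  # prints $3.5B
```

Even this conservative sketch lands in the billions before a single training run begins, which is why hardware supply is secured years in advance.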
OpenAI's latest funding round (2026): $122 billion at an $852 billion valuation. The company is generating $2 billion/month in revenue and has more than 900 million weekly active users.
Global Investment Concentration
- US tech giants pledged over $300 billion on AI infrastructure in 2025 alone
- Chinese companies approaching $100 billion
- Just 100 companies (mostly US and China) account for 40% of global AI investment (UNCTAD 2025)
- Only 32 countries (16% of all nations) host AI-specialized datacenters (Oxford University)
- US + China operate 90%+ of the world's AI compute (US: ~75%, China: ~15%, Europe: ~5%)
The Compute Divide Deepens
| Region | AI Compute Share | Frontier Model Training Capability |
|---|---|---|
| United States | ~75% | Yes, multiple labs |
| China | ~15% | Yes, rapidly scaling |
| Europe | ~5% | Limited (Mistral in France, some national efforts) |
| Rest of World | ~5% | Near-zero sovereign capability |
The structural dynamic: Scaling laws mean that those who can afford to scale pull away steadily. Compute becomes akin to oil or steel in earlier industrial eras — a foundational resource that compounds advantage.
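The compounding dynamic can be made concrete with a toy calculation. The growth rates below are hypothetical assumptions chosen for illustration, not measured figures; the point is only that a modest growth-rate gap widens the absolute gap relentlessly.

```python
# Illustrative compounding of a compute advantage (growth rates are
# hypothetical assumptions, not measured figures).
def compute_after(initial: float, annual_growth: float, years: int) -> float:
    """Compute stock after compounding growth for a number of years."""
    return initial * (1 + annual_growth) ** years

leader = compute_after(100.0, 0.60, 5)    # leader grows 60%/yr
follower = compute_after(50.0, 0.30, 5)   # follower grows 30%/yr
print(f"Initial ratio: 2.0x; after 5 years: {leader / follower:.1f}x")
```

Under these assumed rates, a 2x head start becomes roughly a 5.6x lead within five years — the "quiet accrual" of power the document's closing quote describes.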
The Security Barrier: Export Controls and Geopolitical Weaponization
The US "AI Diffusion" Framework (January 2025)
The Biden administration's Framework for Artificial Intelligence Diffusion (BIS, Jan 13, 2025) created a three-tier global licensing system:
| Tier | Countries | Access Level |
|---|---|---|
| Tier 1 | US + 18 allies (Australia, Japan, UK, Germany, France, etc.) | Near-unrestricted access to chips, model weights, compute |
| Tier 2 | Most other countries (incl. NATO allies not in Tier 1, Israel, Singapore, Saudi Arabia, UAE) | Capped compute access (up to 1,700 H100-equiv without license); additional access requires security certifications |
| Tier 3 | China, Russia, Iran, and other Country Group D:5 arms-embargoed countries | Effectively embargoed |
Key mechanisms:
- Universal VEU (UVEU) for Tier-1-headquartered companies — allows unlimited chip exports, but US-headquartered UVEUs must keep at least 50% of their AI compute in the US and no more than 7% in any single Tier 2 country
- National VEU (NVEU) for Tier 2 — per-country, per-company caps (≈320,000 advanced chips over 2 years, ≈one generation behind frontier)
- New ECCN 4E091 — controls on closed-weight AI model weights trained with ≥10²⁶ FLOP
- Foreign Direct Product Rule (FDP) — claims jurisdiction over closed-weight AI models trained anywhere using US-origin chips
- Open-weight models explicitly excluded — though powerful open models (DeepSeek R1) increasingly challenge this distinction
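The ≥10²⁶ FLOP threshold for controlled model weights can be related to model scale using the common training-compute heuristic FLOPs ≈ 6 × N × D (N = parameters, D = training tokens). The heuristic and the model sizes below are illustrative assumptions, not part of the rule itself or figures for any specific lab's models.

```python
# Rough check of whether a hypothetical training run crosses the
# 1e26 FLOP control threshold, using the FLOPs ~= 6 * N * D heuristic
# (N = parameters, D = training tokens). All sizes are illustrative.
THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

for n, d in [(70e9, 15e12), (400e9, 15e12), (1.2e12, 15e12)]:
    f = training_flops(n, d)
    status = "controlled" if f >= THRESHOLD else "below threshold"
    print(f"{n / 1e9:.0f}B params x {d / 1e12:.0f}T tokens -> {f:.1e} FLOP ({status})")
```

Under this heuristic, only runs in the trillion-parameter, tens-of-trillions-of-tokens regime clear the 10²⁶ FLOP bar, which is why the control targets a narrow frontier band rather than AI models broadly.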
The Trump Administration's Pivot: Export Promotion + Aggressive Enforcement
President Trump's Executive Order 14320 (July 23, 2025) established the American AI Exports Program, promoting full-stack US AI exports (chips, datacenters, models, applications) abroad.
The dual-track approach:
- Promotion: Full-stack AI export packages incentivized through federal financing (EXIM Bank, DFC loans, political risk insurance)
- Enforcement: BIS budget increased by $44M; workforce set to double to 1,077 positions by FY2027
Smuggling and Enforcement Failures (May 2026)
Despite strict controls, smuggling remains rampant:
- Supermicro co-founder arrest (March 2026): Alleged $2.5B scheme routing servers to China via Southeast Asian shell companies
- Applied Materials: $252M civil penalty for shipping $126M of semiconductor equipment to China via Korean subsidiary
- Cadence Design Systems: $95M penalty for transferring chip design tech to a Chinese military-tied university
- Florida GPU smuggling ring: 400 Nvidia A100 GPUs shipped to China via fake realty company
- BIS total penalties in past 12 months: ~$420M
"The money is just too good." — Greg Thomas, ChainSentry CEO
The AI "Middle Powers" Trap
Foreign Affairs (2026) identifies a structural trap for "AI middle powers" — countries such as France, Germany, India, South Korea, and the UK:
- Access dependency: Every query to a frontier AI system depends on real-time access to US- or China-controlled infrastructure
- Exposure without benefit: AI harms (cybercrime, labor displacement, social disruption) are borderless; benefits are concentrated
- No leverage: Middle powers lack policy tools to shape AI development decisions
Three Strategic Responses:
| Strategy | Example | Risk |
|---|---|---|
| Bandwagoning | UK aligning closely with US AI ecosystem | Deepens one-sided reliance |
| Hedging | Malaysia/Southeast Asia courting both US and China investment | Vulnerable to bloc formation |
| Sovereignty | France (Mistral), Canada (Cohere) — domestic frontier efforts | Underfunded efforts strand countries in unprofitable second tier |
OECD/Oxford blueprint (2025) proposes a third path: multinational AI cooperation among mid-sized economies, pooling compute, talent, and training costs while maintaining independent inference and data sovereignty. Models include CERN-like structures, federated learning, and shared infrastructure frameworks.
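The federated-learning route mentioned above can be sketched with a minimal federated averaging (FedAvg) step: each participant trains locally and contributes only parameter updates, never raw data. The member counts and parameter values below are purely illustrative assumptions.

```python
# Minimal federated averaging (FedAvg) sketch: members pool parameter
# updates without sharing raw data. All values below are illustrative.
def fed_avg(local_params: list[list[float]], num_examples: list[int]) -> list[float]:
    """Average local parameter vectors, weighted by each member's dataset size."""
    total = sum(num_examples)
    dim = len(local_params[0])
    return [
        sum(p[i] * n / total for p, n in zip(local_params, num_examples))
        for i in range(dim)
    ]

# Three hypothetical members contribute locally trained parameters
members = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
counts = [100, 200, 100]
print(fed_avg(members, counts))  # result pulled toward the largest dataset
```

The design point matters for sovereignty: each member keeps its data and its inference capacity at home, and only the aggregated model benefits from pooled scale.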
AI Safety Regulation: Transparency Without Enforcement
California SB 53 (Effective Jan 1, 2026)
The first US state-level frontier AI law requires:
- Mandatory safety frameworks for "large developers" of "frontier models"
- Public disclosure of risk thresholds and mitigation plans
- Notification of the California Office of Emergency Services upon detection of specified harms
Critical critique (Economy Research, April 2026): SB 53 creates transparency without enforcement — a principal-agent problem where regulated firms set their own risk thresholds, evaluation criteria, and compliance definitions.
"SB 53 is best characterized as a symbol of intent, like the Statue of Liberty for AI safety, rather than a workable mechanism for the lasting creation of safety."
Corporate Safety Frameworks Under Scrutiny
Analysis of published safety frameworks from OpenAI, Google DeepMind, xAI, and Anthropic reveals:
| Company | Framework | Key Structural Issue |
|---|---|---|
| OpenAI | Preparedness Framework v2 | "Severe harm" threshold set at "death or grave injury of thousands" — far above SB 53's statutory definition (50+ deaths) |
| Google DeepMind | Frontier Safety Framework 3.0 | "Material" or "meaningful" capability thresholds determined internally |
| xAI | Risk Management Framework | Allows 50% dishonesty rate on MASK benchmark; thresholds labeled "provisional" |
| Anthropic | RSP → Compliance Framework | Voluntary safety commitments legally separated from compliance minimums, enabling retreat under pressure |
CAISI Testing Agreements (May 2026)
Microsoft, xAI, and Google DeepMind signed voluntary safety testing agreements with the California AI Standards Institute (CAISI), closely resembling earlier Biden-era deals with OpenAI and Anthropic; those earlier agreements have reportedly been "renegotiated" to reflect Trump administration priorities.
Latest Developments (2025 to 2026)
Timeline
| Date | Event |
|---|---|
| Jan 2025 | BIS publishes AI Diffusion Framework (3-tier system) |
| Mar 2025 | 12 companies now have published frontier AI safety policies (METR) |
| Jun 2025 | NYT maps the global compute divide — only 32 nations have AI datacenters |
| Jul 2025 | Trump EO 14320: American AI Exports Program |
| Oct 2025 | US controls ~75% of global AI compute (Epoch AI) |
| Dec 2025 | METR update: Common elements of frontier safety policies |
| Jan 2026 | California SB 53 takes effect |
| Feb 2026 | Microsoft pledges $50B for Global South AI (India AI Summit) |
| Mar 2026 | US considers new AI chip export rules requiring foreign investments; Supermicro co-founder arrested for $2.5B smuggling scheme |
| Apr 2026 | WEF: "Frontier tech enters its geopolitical era"; Economy Research: SB 53 structural critique |
| May 2026 | Fortune: surge in chip smuggling cases; Politico: CAISI testing deals with Microsoft/xAI/Google; White House considering EO for formal government AI model review |
Key Implications and Outlook
For the Global Majority (150+ countries with no AI compute):
- Dependency on foreign AI services becomes structural — intelligence becomes an imported service
- FDI from US/Chinese tech giants carries sovereignty risks (data control, geopolitical leverage)
- Microsoft's $50B pledge and initiatives like LINGUA Africa, Project Gecko, and Elevate skilling offer partial mitigation but do not close the sovereign capability gap
For AI Middle Powers:
- The window for multinational cooperation (OECD blueprint) may close as costs escalate
- Specialization in upstream (ASML-like) or downstream (robotics, manufacturing, pharma) niches may offer durable leverage
- Hard decisions between bandwagoning, hedging, and sovereignty are imminent
For the US:
- The dual-track strategy (export promotion + enforcement) attempts to maximize US market share while denying adversaries access
- Key tension: Export controls create artificial scarcity that incentivizes smuggling; enforcement actions are increasing but diversion persists
- The Trump administration's reliability as a partner is questioned by allies (UK technology deal reportedly suspended over food standards)
For AI Safety Governance:
- Self-regulatory models face structural credibility problems — transparency without enforcement is insufficient
- Proposed solutions: Institutional separation of standard-setting, evaluation, and deployment; regulatory markets; independent verification capacity
- The discrepancy between corporate safety thresholds and statutory definitions creates a governance gap
The Core Dynamic:
"In a world governed by scaling laws, power accrues quietly and compounds relentlessly. The question is not whether AI will shape the future, but whether most countries will have any meaningful role in shaping the intelligence that increasingly governs their choices."