International Governance

Treaties, agreements, and cross-border institutions for frontier AI — what's tractable and what's necessary.

Governance · Exploring · Last reviewed May 1, 2026

This page is a stub. I’ve marked out the territory but haven’t written my views here yet. The headings below are placeholders; the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.

Where I currently stand

<Headline view on international AI governance: which institutional models (IAEA-style, ICAO-style, voluntary clubs) plausibly transfer to AI, and which fail. The realistic short-term ceiling is probably narrow agreements on specific risks rather than a comprehensive regime.>

Current beliefs

  • Narrow, risk-specific agreements (e.g., on bio uplift or autonomous-systems incidents) are tractable on a 5-year horizon; comprehensive frontier-AI treaties are not. ~XX%<why>.
  • Compute governance is the most credible enforcement mechanism for any near-term international agreement. ~XX%<why>.
  • <Claim about which existing institution should host frontier-AI work.> ~XX%<why>.

Uncertainties

  • Will the US–China relationship permit any meaningful cooperation on frontier-AI risk in the 2026–2028 window? Why it matters: existential to most international-governance designs.
  • Can private-sector commitments substitute for treaty-level agreements in the short term? Why it matters: changes where energy should be spent.

What would update me

  • A binding bilateral or multilateral agreement on a narrow frontier-AI risk would reset the tractability picture.
  • A serious incident that drove convergent national responses would strengthen the case that crisis is the path to coordination.

Recent reading

  • <date><title><takeaway>.

Related writing

No essays tagged with this topic yet.

Related regions