Power Concentration and Democratic Governance

The risk that frontier-AI capability becomes concentrated in a few actors, and what democratic legitimacy looks like for transformative technology.

Governance · Exploring · Last reviewed May 1, 2026

This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.

Where I currently stand

<Headline view: power-concentration risk is structurally important and underweighted in the safety conversation; it cuts across both "misuse by a small group" and "decisive AI capability shifting the bargaining position of states or labs". The interesting questions are about institutional design rather than technical mitigation.>

Current beliefs

  • Power-concentration risk is at least comparable in expected harm to misalignment risk on a 10-year horizon. ~XX%<why>.
  • Most current safety governance reduces misalignment risk while modestly increasing concentration risk. ~XX%<why>.
  • <Claim about democratic legitimacy mechanisms for frontier development.> ~XX%<why>.

Uncertainties

  • Do RSP-style commitments structurally favour incumbents in a way that increases concentration risk? Why it matters: it bears on which governance designs are net-positive.
  • What concrete institutional designs would meaningfully reduce concentration risk without weakening safety governance? Why it matters: this is the under-developed edge of the field.

What would update me

  • A clean institutional-design proposal that demonstrably reduces both misalignment and concentration risk would shift the field.
  • A real-world case of safety regulation entrenching incumbents would sharpen the empirical picture.

Recent reading

  • <date><title><takeaway>.

Related writing

No essays tagged with this topic yet.

Related regions