Funding Ecosystem

Who funds AI safety, what they fund, and how dependent the field is on a small number of philanthropic actors.

AI Safety Strategy · Exploring · Last reviewed May 1, 2026

This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.

Where I currently stand

<Headline view: how I read the funding landscape today — how concentrated it is, where government money is or isn't showing up, and whether the field's research agenda is being shaped by funder preferences in ways worth flagging. 3–4 sentences.>

Current beliefs

  • <e.g. The field is uncomfortably concentrated on a small number of philanthropic funders, and this distorts research agendas more than people acknowledge.> ~XX%. <One-line why.>
  • <Claim about whether government safety funding (UK AISI, US AISI, EU programmes) is large enough to meaningfully diversify the base.> ~XX%. <Why.>
  • <Claim about whether the field has too much money chasing too few good projects, or the reverse.> ~XX%. <Why.>

Uncertainties

  • What does a healthy, diversified safety-funding base look like? Why it matters: hard to advocate for diversification without a target picture.
  • Are research agendas materially shaped by what funders find legible, and how would we tell? Why it matters: this is the steelman of the "philanthropic capture" worry.

What would update me

  • Sustained government safety funding at >$1B/yr in a single jurisdiction would push me toward thinking concentration risk is on its way to resolving.
  • Documented cases of funder preferences killing or quietly retargeting otherwise-promising agendas would push me toward stronger views on capture.

Recent reading

  • <date>: <title>. <takeaway>.

Related writing

No essays tagged with this topic yet.
