Field Building

How fast AI safety is growing as a field, who's joining it, and whether the bottlenecks are people, money, or research taste.

AI Safety Strategy · Exploring · Last reviewed May 1, 2026

This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.

Where I currently stand

<Headline view: where I think the field is on its growth curve, what's actually scarce, and what the next bottleneck looks like. Likely 3–4 sentences.>

Current beliefs

  • <e.g. The binding constraint on AI safety is mentorship from senior researchers with research taste, not entry-level talent or money.> ~XX%. <One-line why.>
  • <Claim about the rate at which the field can absorb new people without diluting research quality.> ~XX%. <Why.>
  • <Claim about whether structured programmes (MATS, ERA, ASB) are doing the right kind of pipeline work.> ~XX%. <Why.>

Uncertainties

  • What does a 10x larger AI safety field actually look like, and is that the right target? Why it matters: framing the goal changes which interventions look high-leverage.
  • Are we training people for problems that will still exist in 2 years, or for problems that scaling will dissolve? Why it matters: changes how durable any pipeline investment is.

What would update me

  • A clean demonstration that fellowship alumni produce qualitatively different research after the programme (not just more papers) would push me toward higher confidence in structured pipelines.
  • Evidence that mid-career transitions are dominating new senior hires would push me away from undergrad-focused pipeline strategies.

Recent reading

  • <date>: <title>. <takeaway>.

Related writing

No essays tagged with this topic yet.

Related regions