AI Safety Strategy·Exploring·Last reviewed May 1, 2026
This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.
Where I currently stand
<Headline view: how I see the division of labour between frontier labs, academia, and independent/third-party orgs (METR, Apollo, AISI, etc.) — what each is actually best positioned to do, and where the gaps are. 3–4 sentences.>
Current beliefs
- <e.g. Frontier labs do the highest-impact safety research today because access to frontier models is the binding constraint, and academia structurally cannot keep up.> ~XX% — <one-line why>.
- <Claim about whether structured access (model APIs, AISI access) is closing the gap fast enough.> ~XX% — <why>.
- <Claim about which kinds of safety questions academia is uniquely positioned to answer that labs systematically can't or won't.> ~XX% — <why>.
Uncertainties
- Does the work that "needs frontier access" actually need it, or is that a story labs tell? Why it matters: a wrong answer here distorts where to put marginal funding and effort.
- What's the right mix of intra-lab, third-party, and academic safety effort, and is the current mix anywhere near it? Why it matters: implies different recommendations for governments and funders.
What would update me
- A clear case study of academic-led safety work that materially changed lab practice would push me toward higher confidence in the leverage of independent research.
- Evidence that structured-access agreements are producing public artefacts of quality comparable to internal work would push me toward thinking access can substitute for employment.
Recent reading
- <date> — <title> — <takeaway>.
Related writing
No essays tagged with this topic yet.