AI Safety Strategy · Exploring · Last reviewed May 1, 2026
This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.
Where I currently stand
<Headline view: my current take on how the field talks to the public, what's working, what's miscalibrated, and where the responsibility sits between researchers, labs, and journalists. 3–4 sentences.>
Current beliefs
- <e.g. The "doom" frame has done more to mobilise resources than to build durable public understanding, and the trade-off is now negative on the margin.> ~XX% — <one-line why>.
- <Claim about whether technical accuracy and public legibility are actually in tension or whether that's a self-serving story researchers tell.> ~XX% — <why>.
- <Claim about the role of demos and capability surprises in shifting public views relative to careful argument.> ~XX% — <why>.
Uncertainties
- What does public understanding of AI risk look like when it goes well, concretely? Why it matters: without a target picture, communication strategy is reactive.
- How much of current public attitudes is downstream of media incentives versus actual capability progress? Why it matters: this changes whether better communication can move the needle at all.
What would update me
- A clean natural experiment where two comparable populations were exposed to different framings and tracked over time would push me toward firmer views on which frames travel.
- Sustained evidence that researchers' public statements measurably shift policy outcomes would push me toward thinking communication is high-leverage rather than mostly noise.
Recent reading
- <date> — <title> — <takeaway>.
Related writing
No essays tagged with this topic yet.