Governance · Exploring · Last reviewed May 1, 2026
This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.
Where I currently stand
Headline view: liability is one of the few governance levers that doesn't require a regulator to keep up with the technology; insurance pricing is one of the few signals that can plausibly aggregate diffuse harm. The interesting question is whether these tools can be made to work for low-frequency / high-severity AI risks at all.
Current beliefs
- Strict liability for frontier-model harms is more enforceable than negligence-based liability. ~XX% — <why>.
- Insurance markets can price ordinary AI deployment risk; they cannot meaningfully price catastrophic risk. ~XX% — <why>.
- <Claim about joint-and-several liability across the supply chain.> ~XX% — <why>.
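The pricing asymmetry in the second belief can be made concrete with a toy simulation. An underwriter's "pure premium" is essentially the mean of observed annual losses; that estimator works when events are frequent, and breaks down when the same expected loss comes from a rare, severe event that a realistic observation window never captures. All numbers below are hypothetical, chosen only so both risks have the same true expected annual loss:

```python
import random
import statistics

def simulate_annual_losses(p_event, severity, years, rng):
    """Toy loss model: each year an event occurs with probability
    p_event and costs `severity`; otherwise the year's loss is zero."""
    return [severity if rng.random() < p_event else 0.0 for _ in range(years)]

def premium_estimate(losses):
    """Actuarial 'pure premium': the mean of observed annual losses."""
    return statistics.mean(losses)

rng = random.Random(0)

# Ordinary deployment risk: frequent, small claims.
# True expected annual loss = 0.30 * 1_000 = 300.
ordinary = simulate_annual_losses(p_event=0.30, severity=1_000.0,
                                  years=50, rng=rng)

# Catastrophic risk: identical expected annual loss
# (0.0003 * 1_000_000 = 300), but most 50-year samples
# contain zero events, so the empirical premium is usually 0.
catastrophic = simulate_annual_losses(p_event=0.0003, severity=1_000_000.0,
                                      years=50, rng=rng)

print(premium_estimate(ordinary))     # typically lands near 300
print(premium_estimate(catastrophic)) # usually 0.0, occasionally 20_000.0
```

The point of the sketch: the two risks have the same expected loss, yet only the first produces a loss history an insurer can price from, which is the mechanism behind the claim that insurance markets can handle ordinary deployment risk but not catastrophic risk.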
Uncertainties
- Does liability move developer behaviour earlier in the pipeline, or does it just price risk after the fact? Why it matters: determines whether liability is a safety lever or only a redress lever.
- What does proof of causation look like for diffuse algorithmic harms? Why it matters: liability without provable causation is symbolic.
What would update me
- A successful AI-harm tort case establishing a workable causation standard would change the picture.
- A clear demonstration that insurance underwriting requirements drove safer engineering practice would strengthen the case for insurance as a lever.
Recent reading
- <date> — <title> — <takeaway>.
Related writing
No essays tagged with this topic yet.