This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.
Where I currently stand
<Headline view: technical standards are the most under-invested governance layer relative to their importance; the work is mostly translation between policy intent and verifiable engineering practice; the failure mode is standards that are either too vague to bind or too prescriptive to survive. Current US/EU/UK efforts are early but moving in the right direction.>
Current beliefs
- Standards have to be co-developed with the labs and the AISIs together, not handed down by either side; otherwise they won't be followed. ~XX% — <why>.
- Process-based standards (do these activities) work better than outcome-based standards (achieve these properties) for current-frontier AI. ~XX% — <why>.
- <Claim about ISO 42001 / NIST AI RMF specifically.> ~XX% — <why>.
Uncertainties
- Will standards bodies move fast enough to produce binding standards before the technology shifts under them? Why it matters: it determines whether this layer can be load-bearing.
- Should there be a single international standards forum for frontier AI, or should national ones compete? Why it matters: shapes the institutional design conversation.
What would update me
- A high-quality standard adopted by both labs and a regulator would be evidence that this model can work.
- A standard that is adopted but ignored in practice and never enforced would push me toward pessimism about this lever.
Recent reading
- <date> — <title> — <takeaway>.
Related writing
No essays tagged with this topic yet.