Real-Time Monitoring

Standing infrastructure for monitoring AI behaviour and incidents in deployment — the Loss of Control Observatory and adjacent ideas.

Governance · Developing · Last reviewed May 1, 2026

This page is a stub. I’ve marked the territory but haven’t written my views here yet. The headings below are placeholders — the actual beliefs, uncertainties, and evidence are still in my notes. If you want my current take on this topic before it lands here, get in touch.

Where I currently stand

<Headline view, drawing on my CLTR work: the field has a rich theoretical literature on what AI risks look like and almost no standing infrastructure to observe them in deployment; the Loss of Control Observatory is an attempt to fix that. The core argument is that you cannot govern what you cannot see.>

Current beliefs

  • There is currently no standing public-good monitoring infrastructure for frontier AI deployment, and there should be. ~XX% — direct rationale for the LoCO project.
  • Real-time monitoring infrastructure is a prerequisite for credible incident response, not just a "nice-to-have" reporting layer. ~XX% — <why>.
  • <Claim about who should host such infrastructure: AISI, a third-party non-profit, or an intergovernmental body.> ~XX% — <why>.

Uncertainties

  • What is the minimum viable signal set that a useful real-time monitor needs? Why it matters: tractability of the whole project depends on this being small.
  • Can monitoring infrastructure be built without privileged lab access, or does it require structured access agreements first? Why it matters: changes the order of governance work.

What would update me

  • A successful pilot of real-time monitoring deployed against a single lab would meaningfully advance the policy case.
  • A serious incident going undetected by all monitoring channels would strengthen the urgency case.

Recent reading

  • <date><title><takeaway>.

Related writing

No essays tagged with this topic yet.

Related regions