A recent SiliconANGLE headline stopped me in my tracks - provocative, yes, but also uncomfortably accurate in capturing a pressure point AI governance teams have been wrestling with for years.
In “Human in the loop has hit the wall. It’s time for AI to oversee AI” by Emre Kazim (linked), the core claim is straightforward: traditional human‑in‑the‑loop governance simply can’t keep pace with modern AI. That model was built for an era when algorithms made discrete, high‑stakes decisions, the kind a person could review with time, context, and a manageable amount of information. That world is gone.
Today’s systems are continuous and high‑velocity. Fraud models evaluate millions of transactions an hour. Recommendation engines shape billions of interactions a day. Agentic systems chain tools, models, and APIs together without waiting for human prompts or checkpoints.
Per the article, much of our oversight remains manual, slow, and retrospective. Governance frameworks often gesture toward “a mix of human and automated oversight,” but rarely explain how that mix is supposed to function at machine speed and scale.
The article is blunt about the implication: the idea that humans can meaningfully supervise AI one decision at a time is a comforting fiction. For technology leaders, that forces a hard question: if humans can’t track AI at operational velocity, should AI govern AI?
I unpack all of this in the video - it’s a fascinating topic and well worth the dive.