Last week, I asked a room of 25 senior tech leaders a simple question: what would happen if you cut your AI governance in half?

The objections came before the answers did. Not because the idea was absurd (well, maybe a little), but because nobody could immediately say which half they'd keep and why. That told me more than any governance maturity assessment ever could.

The question isn't about cutting

Let me be clear: I'm not arguing for gutting your governance framework. The point of the exercise isn't the answer. It's the reasoning you're forced to do to get there.

Try to answer "which half would you remove?" and the real question surfaces underneath: can you explain why each piece of your governance exists? Not what it does. Why it's there. What specific risk it mitigates, what specific value it unlocks, what specific bottleneck it clears.

Most governance frameworks can't survive that question. They grew by accumulation. Someone raised a concern, so a control was added. A regulator published guidance, so a new process appeared. An industry framework arrived, so every requirement was adopted line by line, regardless of fit. Each addition made sense in isolation. Nobody ever went back to ask whether the whole still made sense together.

Complexity is multiplicative, not additive

This is where the exercise reveals something most governance teams underestimate. Every requirement you add doesn't just create one unit of work. It interacts with every other requirement already in place.
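
A back-of-the-envelope way to see the multiplicative effect (a sketch of my own; the counts are illustrative, not data from the workshop): count the potential pairwise interactions between requirements instead of the requirements themselves.

    # Illustrative arithmetic only: if each requirement can interact with
    # every other one (conflict, overlap, dependency), the interaction
    # count grows as n * (n - 1) / 2, not as n.

    def pairwise_interactions(n: int) -> int:
        """Potential pairwise interactions among n requirements."""
        return n * (n - 1) // 2

    for n in (10, 20, 40):
        print(f"{n} requirements -> {pairwise_interactions(n)} potential interactions")

    # 10 requirements -> 45 potential interactions
    # 20 requirements -> 190 potential interactions
    # 40 requirements -> 780 potential interactions

Halving 40 requirements to 20 removes roughly three quarters of the potential interactions, not half. That asymmetry is exactly what the "cut in half" framing exposes.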

I opened the workshop with an example that makes this concrete. Some organizations regulate AI emissions under their AI governance framework, setting strict deployment limits for sustainability reasons. Sounds responsible. But when that rule prevents deploying a compute-intensive AI system for logistics route optimization, where the CO2 savings from optimized routes far exceed the AI system's carbon footprint, the sustainability rule produces worse sustainability outcomes. That's not a failure of the individual rule. It's what Donella Meadows observed about systems: the behaviour comes from the relationships between the parts, not from the parts themselves. Add one requirement and you don't just add one effect. You change how all the existing parts relate to one another.
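
To put rough numbers on that logistics example (every figure below is a hypothetical assumption, purely for illustration): the test of the rule is the net effect, not the footprint in isolation.

    # Hypothetical figures, purely illustrative: does the AI system's compute
    # footprint exceed the CO2 its optimized routes save?

    compute_footprint_t = 120     # assumed annual footprint of the AI system, tonnes CO2
    fleet_emissions_t = 50_000    # assumed annual baseline fleet emissions, tonnes CO2
    routing_savings_rate = 0.04   # assumed 4% emissions saved through optimized routes

    saved_t = fleet_emissions_t * routing_savings_rate
    net_t = saved_t - compute_footprint_t
    print(f"saved {saved_t:.0f} t, compute cost {compute_footprint_t} t, net {net_t:+.0f} t per year")
    # saved 2000 t, compute cost 120 t, net +1880 t per year

Under these assumptions the system is net-positive by more than a factor of fifteen, and a blanket deployment limit blocks it anyway.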

Governance frameworks are systems. I wrote previously about how organizations respond to data and AI problems by adding more structure, and how that approach consistently backfires. The "cut in half" question makes that dynamic visible. And the interaction isn't limited to policies. Governance includes change management, literacy, cultural alignment, maturity development. A chain that starts with a risk assessment and ends with a training rollout six weeks later isn't one requirement. It's five teams managing the consequences of one decision.

That's where innovation quietly suffocates. Not in the policy document. In the accumulated friction of requirements managing other requirements.

What the exercise actually reveals

The first objections came quickly. Some pointed out they don't have AI governance in place yet. Others said their frameworks are too new to evaluate. Several described the challenge of governing a moving target, where the technology outpaces the rules before the ink is dry.

Each of those felt like a reason not to engage with the question. To me, they were exactly the reason to engage with it now.

These are the moments in the governance lifecycle where the risk of adding requirements for completeness, or without real scrutiny, is highest. When you're building from scratch, the pressure is to cover everything. When you're early, the instinct is to adopt an industry framework wholesale. When you're chasing the pace of AI, the temptation is to add controls reactively, just to keep up.

The problem is that removing governance later is rarely as clean as adding it. From experience, cutting requirements that don't work is painful and expensive. It risks unintended consequences, and it erodes the authority and trust of the team that introduced them in the first place. The organization is better served by never adding that complexity than by stripping it out twelve months later.

That's what makes the question powerful at every stage. Instead of continuously adapting governance to the latest capabilities, it forces you to identify the fundamentals that hold regardless of what the technology can do tomorrow. Clear ownership. Defined decision authority. Controls tied to a specific, significant risk. Those don't change when the next model drops.
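
One way to make those fundamentals concrete (a sketch of my own, not a prescribed framework; every name below is an assumption for illustration): treat each control as a record that cannot exist without its why.

    # A sketch, not a framework: a control record that refuses to exist
    # without an owner, a decision authority, and the specific risk it mitigates.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Control:
        name: str
        owner: str               # who is accountable for this control
        decision_authority: str  # who can grant exceptions or retire it
        specific_risk: str       # the concrete risk this control mitigates

        def __post_init__(self):
            # Reject controls that cannot explain themselves.
            for field in ("owner", "decision_authority", "specific_risk"):
                if not getattr(self, field).strip():
                    raise ValueError(f"Control '{self.name}' has no '{field}'")

    # Passes: it can explain itself.
    Control("model inventory", owner="Head of Data",
            decision_authority="AI review board",
            specific_risk="untracked models reaching production")

    # Fails loudly, which is the point:
    # Control("quarterly ethics memo", owner="", decision_authority="", specific_risk="")

A control that cannot fill in those three fields has already answered the question of which half it belongs to.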

The pattern was consistent: the things people wanted to keep were the things they could explain. The things they couldn't explain were the things already quietly slowing them down.

Governance that can explain itself is governance that scales. Everything else just slows you down while looking responsible.

Cut Your AI Governance in Half. Then Ask Why.