Does your data & AI governance make a measurable impact, or is it creating an expensive illusion?
After nearly twenty years working with data, I've seen the same pattern play out dozens of times. A company discovers data quality issues: inconsistent customer records, conflicting revenue figures, or teams using different definitions for the same metric. Leaders respond with what feels responsible: standardization initiatives, new policies, approval workflows, stricter controls.
It never works. The problems don't get better. They just get hidden under bureaucracy.
Here's what actually causes most data issues: unclear ownership, inconsistent processes, and the absence of shared definitions. These are organizational problems, not technical ones. And when we respond by adding more rules and requirements, we don't solve the root causes. We just increase complexity and overhead while the dysfunction continues underneath.
Yves Morieux and Peter Tollman demonstrated this in "Six Simple Rules: How to Manage Complexity without Getting Complicated". Organizations facing complexity typically add more structure: new rules, procedures, coordination mechanisms. This approach consistently backfires. More rules create more overhead without addressing why people behave the way they do in the first place.
What This Looks Like in Practice
Sales and finance define "customer" differently, creating reporting chaos. The response? Mandate a standard definition, create approval workflows, establish a data stewardship committee. Now both teams attend alignment meetings, submit exception requests, and navigate new governance processes. Meanwhile, the actual issue (nobody clarified who owns customer data or why the definitions diverged) remains untouched. We've only made the divergence more expensive to work around, and the forces that created it still push for workarounds.
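To make the divergence concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the records, the field names, and both definitions are invented for illustration, not taken from any real system.

    # Hypothetical illustration: the same records, two definitions of "customer".
    accounts = [
        {"id": 1, "signed_up": True, "invoiced": True},
        {"id": 2, "signed_up": True, "invoiced": False},  # trial account
        {"id": 3, "signed_up": True, "invoiced": False},  # churned before first invoice
    ]

    # Sales counts anyone who signed up.
    sales_customers = [a for a in accounts if a["signed_up"]]

    # Finance counts only accounts that have been invoiced.
    finance_customers = [a for a in accounts if a["invoiced"]]

    print(len(sales_customers))    # 3
    print(len(finance_customers))  # 1 -- same data, two conflicting reports

Neither definition is wrong; each encodes a legitimate view of the business. That's exactly why a mandated standard doesn't dissolve the conflict: the needs that produced the two definitions don't go away.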
The same pattern appears with data quality. Organizations implement validation checkpoints at every pipeline stage. Now teams need approval to modify processes, data flows slower, exceptions pile up. But if you never address why quality suffered in the first place (perhaps data entry teams don't understand downstream usage, or there's no clear accountability), your validation rules just slow everything down without improving quality.
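Here's a minimal sketch of that anti-pattern, again with invented names: a checkpoint that parks bad records for manual review. The exception queue grows, but nothing in this code ever reaches the upstream process that produced the errors.

    # Hypothetical validation gate; record fields are invented for illustration.
    def validation_checkpoint(records):
        passed, exceptions = [], []
        for record in records:
            if record.get("customer_id") and record.get("amount", 0) > 0:
                passed.append(record)
            else:
                # Parked for manual review. The upstream team is never told
                # why the record failed, so the same errors arrive tomorrow.
                exceptions.append(record)
        return passed, exceptions

    records = [
        {"customer_id": "C1", "amount": 120.0},
        {"customer_id": None, "amount": 80.0},
        {"customer_id": "C3", "amount": 0.0},
    ]
    passed, exceptions = validation_checkpoint(records)
    print(len(passed), len(exceptions))  # 1 2 -- flow slows, root cause untouched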
This extends to policy frameworks. Yes, some standardization is necessary; legislative requirements demand compliance controls. But does each additional rule, control, governing body, or requirement serve a clear organizational objective? When you create hoops for the business to jump through just to mitigate minor or unlikely risks, you've crossed into over-regulation.
And we're about to do exactly this with AI.
The Same Mistake, Higher Stakes
We're repeating this pattern with AI, and the consequences are more severe.
Organizations are already creating restrictive AI policies disconnected from broader objectives. Take sustainability as an example: AI systems consume energy, so some organizations tightly regulate AI energy use. But in isolation, this creates perverse outcomes. Strictly limiting AI deployment for logistics route optimization because of energy concerns, even when the CO2 savings from optimized routes dwarf the AI system's footprint, means using a sustainability rule to create worse sustainability outcomes.
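A back-of-envelope sketch shows the shape of that trade-off. Every number below is invented purely for illustration (except the diesel combustion factor, which is a commonly cited figure); none of them describe a real deployment.

    # Illustrative only: assumed figures, not measurements from any real system.
    ai_energy_kwh_per_year = 50_000        # assumed energy use of the AI system
    grid_kg_co2_per_kwh = 0.4              # assumed grid carbon intensity
    ai_footprint_kg = ai_energy_kwh_per_year * grid_kg_co2_per_kwh           # 20,000 kg

    fuel_saved_liters_per_year = 400_000   # assumed diesel saved by optimized routes
    kg_co2_per_liter_diesel = 2.68         # commonly cited combustion factor
    route_savings_kg = fuel_saved_liters_per_year * kg_co2_per_liter_diesel  # 1,072,000 kg

    net_kg = route_savings_kg - ai_footprint_kg
    print(f"Net CO2 reduction: {net_kg:,.0f} kg/year")  # savings dwarf the footprint

Under these assumed numbers, the optimization saves roughly fifty times what the AI system emits. An isolated energy cap on AI would block exactly this kind of net-positive deployment.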
This happens when we treat AI (or data) as special categories requiring isolated rulebooks. AI sustainability belongs in sustainability frameworks where trade-offs can be evaluated properly. AI bias concerns should integrate with discrimination and ethics policies. AI security should align with information security standards.
The principle: add rules only where they serve a clear business purpose, whether mitigating genuine risk or unlocking real value. Rules must fit their context, not be imposed categorically because something involves data or AI.
The Cost of Getting This Wrong
Over-regulation creates a predictable pattern: people stop genuinely following rules. They tick boxes for audits while finding workarounds for actual work. Governance becomes theater, performed for compliance reviews but ignored in daily operations.
I've watched data teams spend more time documenting compliance with quality procedures than improving quality. I've seen AI projects structured to navigate approval gates rather than solve business problems. The bureaucratic burden prevents real fixes. When teams spend energy on governance theater, they're not building relationships, clarifying ownership, or having conversations that would resolve the underlying confusion.
This is a balancing act between risk mitigation and bureaucratic burden. Get it wrong, and you don't just waste resources; you lose visibility into the very risks you're trying to manage.
What Actually Works
Fix organizational issues first. Clarify who owns what. Establish whether and why different groups need different approaches. Create direct accountability rather than complex coordination mechanisms. Only then consider whether additional standardization or controls are needed.
This is simple to say but not easy to do. It requires resisting the urge to "do something" by adding structure. It means uncomfortable conversations about responsibility and decision rights. It demands that leaders understand root causes rather than look for quick procedural fixes.
But it's the only approach that works. You can't standardize your way out of organizational dysfunction. You have two choices: fix the root causes or bury them under policy, compliance requirements, and approval workflows.
The dangerous part? The second option often looks like success. Boxes get ticked. Audits pass. Leadership sees documented controls and feels reassured. Meanwhile, the actual dysfunction continues underneath, just hidden from view and more expensive to address. You haven't solved anything. You've just made the problem harder to see while giving everyone an incentive to pretend it doesn't exist.
Fix what's actually broken, or accept that you're building an expensive illusion around it.