Data management has a predictable failure mode. I have watched it play out across organizations for years. Lack of urgency at the levels that set strategy. Lack of clarity at the levels that coordinate execution. Lack of mandate for the people actually trying to fix things. None of that is new. The inability to fix it across the field is not new either. But taking that track record into the AI era is a different proposition entirely.

Every model, every automated decision, every agentic workflow depends on data being available, trustworthy, and fit for purpose at a level most organizations have never had to deliver consistently. When a human analyst encounters questionable data, they apply judgment. When an AI agent encounters it, it scales the error.

That alone would be enough to rethink how we manage data. But there is a second pressure that changes the equation entirely. Agentic AI does not just consume data. It creates new use cases for data continuously. An agent optimizing supply chain logistics today may need customer sentiment data tomorrow and supplier contract terms the day after. Each new use case raises questions that current governance models expect a human to answer. Is this data available? Is it fit for this purpose? Is it allowed for this purpose? Does the quality meet the threshold for this decision?

When new use cases emerge every hour, governance that depends on human validation becomes the bottleneck. Not because the people are slow. Because the model was never designed for this speed.
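To make the contrast concrete: the four questions above are exactly the kind of checks that can, in principle, run without a human in the loop. Here is a minimal sketch of such a gate. Every name, field, and threshold is hypothetical, invented for illustration, not a real governance framework or tool:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetPolicy:
    """Hypothetical machine-readable governance metadata for one dataset."""
    available: bool                                       # is the data published and reachable?
    approved_purposes: set = field(default_factory=set)   # which uses are allowed
    quality_score: float = 0.0                            # e.g. 0.0-1.0 from automated profiling

def governance_gate(policy: DatasetPolicy, purpose: str, min_quality: float) -> tuple[bool, list]:
    """Answer the four governance questions in code instead of a review meeting.

    Fitness for purpose is modeled here as a per-decision quality threshold;
    a real system would encode it in richer terms.
    """
    failures = []
    if not policy.available:
        failures.append("not available")
    if purpose not in policy.approved_purposes:
        failures.append(f"not approved for purpose '{purpose}'")
    if policy.quality_score < min_quality:
        failures.append(f"quality {policy.quality_score:.2f} below threshold {min_quality:.2f}")
    return (not failures, failures)

# An agent picking up a new use case asks the gate, not a committee.
sentiment = DatasetPolicy(available=True,
                          approved_purposes={"churn-analysis"},
                          quality_score=0.91)
ok, reasons = governance_gate(sentiment, purpose="supply-chain-optimization", min_quality=0.8)
# The new purpose was never approved, so the gate refuses and says why.
```

The point is not this particular data model. It is that each question has a yes-or-no answer derivable from metadata, which means the bottleneck is the absence of that metadata and the decision rights behind it, not the checking itself.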

The instinct will be to add capacity. More data stewards, more review boards, more approval workflows. That instinct is wrong. Rereading some specific parts of Piethein Strengholt's Data Management at Scale made it click. The answer is not merely making data management more technical; everyone already agrees on that much. The answer is fully committing to it: automating away the entire middle layer that organizations have built between the decision and the execution—the steering committees, the roles assigned to review and approve, the status meetings that exist only to report on other meetings. But that only works if you first fix who decides what matters. Automation without direction is just faster overhead.

Focus comes from pushing decision authority fully to the business. Not shared accountability with data teams. Not ownership titles assigned to senior leaders who delegate everything back down. Actual decision rights: which data is strategically important, what level of investment it warrants, and where the organization accepts imperfection versus where it demands precision. Most organizations define this authority clearly for financial decisions. For data decisions of equivalent business impact, it often depends on who happens to care most. That is not governance. That is informal structure mistaken for it.

I learned this the hard way. When I first designed data ownership following the DAMA DM-BoK framework, I placed the Data Owner role at senior level to ensure mandate. That role combined everything: the strategic decisions about what data matters and why, and the operational accountability for quality, documentation, and access. One role carrying two fundamentally different responsibilities. Senior leaders accepted the title but had neither the time nor the inclination to drive the operational side. So they delegated. Not just the tasks, but the accountability. The result was teams accountable for everything and authorized to decide nothing. They could manage data. They could not prioritize it. They could identify quality issues but not mandate the solution. Assigning the role at the very level that was supposed to ensure mandate had produced the opposite: accountability delegated to teams lacking the power to force a fix.

This has always created friction in data management. Organizations have lived with it because the cost of the dysfunction was tolerable and its effects mostly invisible. But as I argued last week, data needs to drive value, not just be a well-managed product. Agentic AI makes that shift urgent.

In a world where AI demands governance at machine speed, that structural confusion is fatal. The business must own the decision directly. No delegation through data stewards acting on behalf of leaders who are too senior to be involved. No accountability assigned to technology teams who lack the organizational authority to set priorities. The person closest to the business outcome needs to decide.

Speed comes from pushing the act of managing data to technology and steering hard toward automation. Quality monitoring, cataloging, lineage tracking, consent validation, access provisioning: these are technical problems with technical solutions. When they sit with business teams or governance offices, they become manual processes. When they sit with technology, they become engineering challenges. And engineering challenges get automated.
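As one illustration of what "engineering challenge" means here: a quality rule that a governance office would track in a spreadsheet and escalate through meetings becomes a check that runs on every pipeline execution. A minimal sketch, with hypothetical rule names and thresholds, not tied to any specific tool:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical declarative quality rules. In practice these would live in
# version control next to the pipeline code, not in a review board's minutes.
RULES = {
    "max_null_fraction": 0.05,              # at most 5% missing values per column
    "max_staleness": timedelta(hours=24),   # data must have been refreshed within a day
}

def check_quality(rows: list, last_refresh: datetime) -> list:
    """Evaluate the declared rules and return violations instead of scheduling a meeting."""
    violations = []
    staleness = datetime.now(timezone.utc) - last_refresh
    if staleness > RULES["max_staleness"]:
        violations.append(f"data stale by {staleness - RULES['max_staleness']}")
    if rows:
        for col in rows[0].keys():
            nulls = sum(1 for r in rows if r.get(col) is None)
            frac = nulls / len(rows)
            if frac > RULES["max_null_fraction"]:
                violations.append(f"column '{col}' is {frac:.0%} null")
    return violations
```

The same shift applies to cataloging, lineage, consent, and access: once the rule is expressed as code, enforcing it costs nothing per use case, which is exactly what machine-speed governance requires.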

But speed without strategic connection is just efficient waste. This is where the incentive structure works in your favor. Every governance task that technology automates reduces its operational cost and frees capacity. That saving funds the next round of automation. It is a flywheel: the more you push to tech, the cheaper and faster governance becomes, which justifies pushing more, and in the process unlocks more and more use cases that require governance at machine speed. The business gets speed. Technology gets efficiency. Both get what they need without asking the other to do work they are not incentivized for.

Focus in who decides. Speed in how it gets done. That is the redesign.

Most organizations will spend the next two years adding governance to keep up with AI. The ones that pull ahead will be the ones that realize the problem was never too little governance. It was too many people in the loop.

The Governance Bottleneck AI Is About to Expose