A few years ago, I was leading an assortment rationalization project for online grocery delivery. The goal was to improve profitability through operational efficiency: fewer SKUs meant less waste, more room for the highest-grossing items, and fewer inefficiencies in order picking. We also felt that the sheer amount of choice wasn't really helping customers, something we validated through rigorous A/B testing.

When I asked leadership what we should optimize for, there was an easy answer. Everything. More margin, better fulfillment efficiency, higher customer satisfaction. All at once.

The problem was that these objectives didn't just sit in tension. Some of them directly contradicted each other. Removing a low-margin product might improve profitability within that product group, but that same product might be small, letting more items fit into a delivery crate and improving route efficiency. And customer behavior added another layer entirely. A customer who can't find the specific ingredient for the rich curry she makes every week doesn't just pick a substitute. She might rethink the whole dinner, switch to a simple pasta, and put fewer items in her basket. One SKU removed, and total order value drops.

We managed most of the trade-offs in a data-driven way by building a substitution model alongside the rationalization model: how likely is a given customer to switch to an alternative when a product disappears? But some trade-offs couldn't be resolved with data alone. Those I took back to leadership and pushed back on, not on the ambition, but on the assumption that improving everything simultaneously was an option at all. Together we identified what mattered most and where the boundaries were, so the team had a clear direction to work from, and we reported back on every trade-off we navigated along the way.
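
To make the mechanics concrete, here is a minimal sketch of how a substitution estimate can gate a delisting decision. Every SKU, number, and the carrying-cost and basket-margin assumptions are invented for illustration; it only shows the shape of the reasoning, not the actual models we built.

```python
# Hypothetical delisting candidates: own margin, estimated probability that a
# buyer switches to a substitute, and the basket value flowing through orders
# that contain the SKU. All figures are made up.
candidates = [
    {"sku": "thai-curry-paste", "margin": 4_000, "p_substitute": 0.35, "basket_value": 180_000},
    {"sku": "own-brand-penne", "margin": 2_500, "p_substitute": 0.92, "basket_value": 90_000},
]

CARRYING_COST = 6_000   # assumed annual cost of keeping a slow SKU: waste, slotting, picking
BASKET_MARGIN = 0.08    # assumed blended margin rate on basket value

def delist_impact(item: dict) -> float:
    """Expected annual margin impact of removing one SKU.

    Customers who substitute keep roughly the same basket; customers who
    don't may shrink or abandon the order, so that share of basket margin
    is lost on top of the SKU's own contribution.
    """
    lost_basket_margin = (1 - item["p_substitute"]) * item["basket_value"] * BASKET_MARGIN
    return CARRYING_COST - item["margin"] - lost_basket_margin

# Rank candidates: positive impact suggests delisting, negative suggests keeping.
for item in sorted(candidates, key=delist_impact, reverse=True):
    print(f"{item['sku']:18s} expected impact: {delist_impact(item):+,.0f}")
```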

I've written before about the difference between technical problems and adaptive challenges. This was the boundary. The substitution model was a technical solution. The question of what mattered most was an adaptive one. It required people to make a choice, not run a better model.

This wasn't a data problem. It was a strategy problem. The data team couldn't decide which trade-offs to make because the right answer depended on where the business wanted to go. That's a strategic question, not an analytical one.

At the time, this pattern was manageable. Humans were in the loop: analysts, data scientists, category managers, people who could sense when an optimization was heading somewhere the organization didn't intend, and raise a flag. The ambiguity was absorbed by judgment, conversation, and experience.

An AI agent can't do that. It doesn't pause to question whether the outcome feels right. It optimizes toward whatever objective it was given, at a speed that leaves no room for second thoughts. Just last week, an AI coding agent deleted a company's entire production database and all backups in nine seconds, not out of malice, but because that was the most efficient path to the task it was given. Fun fact: the writers of Silicon Valley predicted this years ago, when their fictional AI decided the fastest way to eliminate all bugs was to delete all the code.

Now consider what happens when that same logic meets your unresolved trade-offs.

Another example I used extensively to make this tension tangible is delivery route optimization. An AI system identifies that certain neighborhoods have high customer density, so each extra stop there improves overall route efficiency. Task an AI to optimize for efficiency, and it will likely start offering discounts on delivery slots to attract more customers in those areas. Smart logistics, good for the bottom line.

But look at who lives in those high-density areas. Often, higher-income segments. The flip side is hard to avoid: customers in less dense areas, frequently lower-income segments, don't get that discount. They pay more, simply because of where they live.
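
A toy sketch of that dynamic, with invented neighborhoods, densities, and fees. The optimizer only ever sees density, yet what comes out is a price list split along income lines.

```python
# Hypothetical neighborhoods: (name, customers per km2, predominant income band).
neighborhoods = [
    ("City Centre", 950, "high"),
    ("Riverside",   780, "high"),
    ("Old Town",    610, "mid"),
    ("Greenfields", 220, "mid"),
    ("Millbrook",   140, "low"),
    ("Far Acres",    90, "low"),
]

DISCOUNT_BUDGET = 3  # the agent can fund discounted delivery slots in three areas

# "Optimize for route efficiency": denser areas mean more drops per hour,
# so the discounts go to the densest neighborhoods first.
ranked = sorted(neighborhoods, key=lambda n: n[1], reverse=True)
discounted = {name for name, _, _ in ranked[:DISCOUNT_BUDGET]}

for name, density, income in neighborhoods:
    fee = 2.95 if name in discounted else 4.95
    print(f"{name:12s} density {density:3d}/km2  income {income:4s}  delivery fee {fee:.2f}")
```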

No one would intentionally design that outcome. But without explicit boundaries, that's where the optimization leads. The algorithm wasn't asked to consider fairness. Nobody told it to, because nobody made the trade-off explicit.

This is where agentic AI changes the stakes. These systems don't just recommend. They act. They make decisions, adjust pricing, allocate resources, and engage customers without someone reviewing every output. That means the trade-offs your organization never resolved don't just stay unresolved. They get executed. At speed, at scale, with no one in between to ask whether this is really what we want.

Before handing agency to an AI system, organizations need clarity on what they're actually optimizing for. Not just technically, at the level of definitions, data quality, and the semantic layer, but strategically. When your model optimizes for "customer value," what does that mean? Lifetime revenue? Margin contribution? Frequency? Each definition leads to different actions and different outcomes for different people.
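
A toy example, with three invented customers, shows how much hangs on that single phrase. Each reading of "customer value" puts a different customer at the top of the agent's priority list.

```python
# Hypothetical customers and three plausible readings of "customer value".
customers = [
    {"id": "A", "lifetime_revenue": 4_200, "margin_contribution": 180, "orders_per_month": 1.0},
    {"id": "B", "lifetime_revenue": 1_500, "margin_contribution": 310, "orders_per_month": 2.5},
    {"id": "C", "lifetime_revenue": 2_600, "margin_contribution": 240, "orders_per_month": 6.0},
]

definitions = {
    "lifetime revenue":    lambda c: c["lifetime_revenue"],
    "margin contribution": lambda c: c["margin_contribution"],
    "order frequency":     lambda c: c["orders_per_month"],
}

for name, value in definitions.items():
    top = max(customers, key=value)
    print(f"Optimizing for {name:20s} -> the agent prioritizes customer {top['id']}")
```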

And beyond definitions: what are the boundaries? Not just legal compliance, but values. How should this system behave when efficiency and fairness pull in opposite directions? What trade-offs are acceptable, and which ones aren't, regardless of what the numbers say?
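
Deciding the boundary is the strategy work. Once it's decided, encoding it is comparatively easy; here is the same toy discount allocation as before, again with invented data, but with one explicit rule added: every income band gets at least one discounted area before efficiency spends the rest of the budget.

```python
# Same toy setting as the route example, but the fairness boundary comes first
# and the efficiency objective only runs inside it. Purely illustrative.
def allocate_discounts(neighborhoods, budget):
    """neighborhoods: list of (name, density, income_band) tuples."""
    chosen = []
    # Boundary first: every income band gets at least one discounted area.
    for band in ("low", "mid", "high"):
        in_band = [n for n in neighborhoods if n[2] == band]
        if in_band and len(chosen) < budget:
            chosen.append(max(in_band, key=lambda n: n[1]))
    # Only then does efficiency take over for whatever budget remains.
    leftovers = [n for n in neighborhoods if n not in chosen]
    chosen += sorted(leftovers, key=lambda n: n[1], reverse=True)[: budget - len(chosen)]
    return {name for name, _, _ in chosen}

print(allocate_discounts(
    [("City Centre", 950, "high"), ("Old Town", 610, "mid"),
     ("Millbrook", 140, "low"), ("Far Acres", 90, "low")],
    budget=3,
))
```

The constraint itself is trivial to write. What isn't trivial is deciding, and owning, that it belongs there.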

These aren't questions your data team can answer. They're definitely not questions your AI agent can answer. They're strategy questions. The same kind of questions I was asking leadership about assortment rationalization years ago, just with far higher stakes.

No one set out to make lower-income customers pay more for delivery. But making no decision is also a choice. And with agentic AI, that choice gets made for you. At speed, at scale, and without anyone asking whether this is really what you stand for.

What Are You Actually Optimizing For?