Every conversation about AI seems to be dominated by prediction. What's the killer app? Who will win in agentic commerce? Which protocol will dominate? Most thought leadership falls into two camps: hype ("AI will transform everything") or prediction ("here's who wins"). I want to offer a third frame, one I believe is more relevant: the economics of viability.
Those prediction questions aren't irrelevant. But the honest answer is: we don't know. And predictions turn out to be very difficult, especially when they're about the future.* Even if we could predict the winners accurately, what would we learn? At best, which stock to buy: more Nvidia, maybe. But it wouldn't tell us what to do differently in our own organizations.
A more useful question is: what's currently too expensive to do in your industry that becomes a real option when AI changes the cost equation?
This connects to the Jevons paradox. When something becomes dramatically cheaper, you rarely just keep doing things the way you did. You might achieve the same results at lower cost. You might do more for the same money. Or you might do more for more money, because the ROI has fundamentally changed. The answer depends on the sector and the specific economics. But in every case, the shift creates possibilities that didn't exist before. Benedict Evans recently explored this in the context of AI tokens, and it sparked the line of thinking behind this essay.
I've seen this play out in retail. About eight years ago, I was discussing electronic shelf labels for our grocery stores. The case was straightforward: the amount of manual labor needed to update paper price labels throughout a store was significant and added no real value. But the unit economics of ESLs were still roughly double what a food retailer with razor-thin margins could justify. The technology was ready. The business case wasn't.
Years later, a different team had built a prediction engine that could identify which products had a high likelihood of not selling before their expiry date. The insight was clear: if you could offer more gradual, targeted discounts throughout the day instead of applying a generic 35% markdown sticker, you could dramatically reduce food waste, and a more gradual approach to discounting was also better for the bottom line. The technology to predict was ready. But without a way to update prices dynamically in-store, there was no way to act on it. That's when ESLs came back into the conversation, not as a cost-saving measure, but as the missing piece that made dynamic markdown possible.
The real impact wasn't the efficiency gain from replacing paper labels with ones that no longer needed manual updates. It was the opportunity ESLs unlocked once they were in place. Experimentation showed the dramatic effect of matching supply and demand by updating discounts gradually throughout the day. The ESL technology hadn't meaningfully changed. What changed was what it enabled us to do.
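To make the contrast concrete, here is a toy sketch of the idea, not the retailer's actual model. It replaces a flat 35% sticker with a discount that deepens as expiry approaches and as the predicted chance of selling at full price drops. The function name, the 24-hour window, and the 50% cap are all invented for illustration.

```python
def markdown_pct(p_sell_full_price: float, hours_to_expiry: float) -> int:
    """Toy gradual-markdown rule (hypothetical thresholds).

    p_sell_full_price: predicted probability the item sells at full price.
    hours_to_expiry: remaining shelf life in hours.
    Returns a discount percentage between 0 and 50.
    """
    # Urgency rises from 0 (a day out) to 1 (at expiry).
    urgency = max(0.0, 1.0 - hours_to_expiry / 24.0)
    # Risk is the chance the item would otherwise go to waste.
    risk = 1.0 - p_sell_full_price
    # Deeper discounts only when both risk and urgency are high.
    return round(50 * risk * urgency)

# A flat sticker applies 35% regardless of context; the gradual
# schedule adapts as the day goes on for an at-risk item:
for hours in (24, 12, 6, 2):
    print(hours, markdown_pct(0.4, hours))
```

The point of the sketch is the shape of the policy, not the numbers: without in-store dynamic pricing hardware, even a perfect prediction of `p_sell_full_price` has no way to act.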
Another example is Nutella, which used generative AI to create seven million unique jar designs for its Nutella Unica campaign. Previously impossible, not because the idea was new, but because the scale required to design seven million variations simply didn't work before. And AI alone didn't make it happen. It also took decades of advances in digital printing. I know, because one of my first side jobs was at a printing company that installed one of the first presses capable of printing individually different items one after another at scale. That was over twenty-five years ago. Neither technology alone was enough. Together, they crossed a threshold that turned a previously impossible idea into seven million unique products on shelves.
The ESL case and the Nutella campaign are very different examples, but they share the same pattern. And that pattern points to a different kind of strategic question than the one most organizations are asking.
Not "what's the killer app." Not "which agentic commerce standard will win." Those questions feel strategic but they're actually passive. You're waiting for someone else to define the future and then reacting.
The economic threshold question is proactive. It forces you to look at your own cost structures, to understand how value actually moves through your organization, and to identify where a shift in economics would unlock something genuinely new.
These thresholds won't show up in a conference talk or a vendor pitch. They're specific to your operations, your cost structures, your value chain. Finding them takes real work that blanket predictions about the future of AI can't replace.
The strategic value of AI isn't in what it makes cheaper. It's in the next question: what does that unlock that nobody is discussing yet?
* The earliest known written instance of the quote "It's difficult to make predictions, especially about the future" dates back to 1948. The author remains unknown.