Agentic AI is positioned to transform supply chain management. But the biggest challenge, says UVA Darden professor Timothy Laseter, is moving from pilots to production across complex, real-world supply chains. His new white paper offers practical guidance for business leaders aiming to scale agentic AI in supply chain operations.
Agentic AI is positioned to transform supply chain management — not merely by automating tasks, but by building intelligent networks capable of learning from disruption, adapting in real time and reconfiguring operations autonomously. The business case is compelling. The execution, however, is where most organizations stumble.
A recent survey of 180 senior supply chain executives, conducted by Tim Laseter, a professor of practice at the University of Virginia Darden School of Business and an authority on operations strategy, and Mike DuVall, global head of strategy at GEP, reveals a striking gap between ambition and results. Of all respondents, fewer than a dozen had broadly scaled AI across their supply chain operations.
“In procurement and sourcing, arguably the most AI-ready function,” says DuVall, “just 4 percent reported operating at scale, despite nearly 90 percent having pilots or plans underway.”
The conclusion is unambiguous: interest has crossed the chasm, but execution has not. “The biggest challenge,” says Laseter, “is moving from pilots to production across complex, real-world supply chains.”
Laseter and DuVall provide practical guidance for business leaders aiming to scale agentic AI in supply chain operations in their recent publication, “The Supply Chain AI Readiness Report: Why Operational Discipline Determines Agentic AI Success.” Below are key highlights.
Why Most AI Efforts Stall
The pattern is familiar. Executive mandates push organizations to “embrace AI,” triggering a wave of disconnected experiments in procurement, forecasting, risk planning and warehouse management. While pilots proliferate, results do not.
Without governance and operational redesign, pilots remain isolated showcases rather than engines of value. Organizations that do scale follow a different sequence: they clarify strategic intent, define decision rights and re-architect workflows before delegating judgment to machines.
What the Performance Elite Do Differently
Laseter and DuVall identified a small cohort they call the Performance Elite — organizations that have moved beyond pilot purgatory to measurable results: doubled productivity, lower error rates and compressed response times. Their advantage does not come from better algorithms alone. It comes from redesigning work itself.
- Redesign before automating
Leading organizations apply classic Lean logic: document the process, identify waste, then automate. As one executive in the study put it bluntly: “Don’t use AI to fix a problem. Fix the process — then use AI to make it efficient.” This often means rethinking objectives entirely: shifting from monthly to real-time planning cycles, moving demand planning from prediction to influence, or converting exception management from reactive to preventive. Amazon’s shift from two-day to one-day delivery illustrates the principle — the operational promise changed first, forcing a full redesign of inventory logic. Only then did automation scale.
- Govern AI like a portfolio
Organizations that scale are far more likely to have a formal AI steering committee operating with a portfolio and agile approach. Rather than chasing hype-driven use cases, they source ideas bottom-up from frontline experts, align initiatives with business strategy and tie funding to demonstrated operational value. This hybrid model — bottom-up intelligence with top-down governance — prevents fragmentation. Scaling AI is less about experimentation volume and more about disciplined prioritization.
- Treat data as infrastructure
In a human system, performance improves through coaching. In an agentic system, it improves through data refinement. Organizations that have successfully scaled AI are three times more likely to use automated data cleaning, four times more likely to deploy real-time dashboards, and nearly seven times more likely to maintain digital audit trails documenting AI logic. Leading organizations understand that data discipline is not hygiene. It is infrastructure.
Rethinking the Human-in-the-Loop
A recurring trap in AI deployment is “accuracy obsession” — a preference for more accurate models even when the real question is asymmetric cost. The right framing is not “Is the model accurate?” but “What is the cost of being wrong?” In fraud detection, false positives are tolerable. In labor planning, unnecessary staffing is expensive. In demand forecasting, missed signals erode margin.
Effective organizations adjust intervention thresholds based on risk asymmetry. Early in deployment, experienced humans monitor for edge cases much like mentoring a new employee. Over time, as confidence grows and models improve, guardrails evolve. While static governance fails, notes Laseter, adaptive governance scales.
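The “cost of being wrong” framing can be made concrete with a standard break-even calculation from decision theory. The sketch below is illustrative only: the cost figures are assumptions, not numbers from the report, and the function name is hypothetical.

```python
# Hypothetical sketch: deriving a human-intervention threshold from
# asymmetric error costs. All cost figures are illustrative assumptions.

def intervention_threshold(cost_false_alarm: float, cost_missed_case: float) -> float:
    """Return the predicted-risk threshold above which intervening pays off.

    At the break-even probability p, expected costs are equal:
    p * cost_missed_case = (1 - p) * cost_false_alarm,
    so intervention is worthwhile once predicted risk exceeds
    cost_false_alarm / (cost_false_alarm + cost_missed_case).
    """
    return cost_false_alarm / (cost_false_alarm + cost_missed_case)

# Fraud detection: false positives are tolerable, missed fraud is costly,
# so the system flags aggressively (low threshold).
fraud = intervention_threshold(cost_false_alarm=1.0, cost_missed_case=50.0)

# Labor planning: unnecessary staffing is expensive, so the bar is higher.
staffing = intervention_threshold(cost_false_alarm=40.0, cost_missed_case=10.0)

print(round(fraud, 3), round(staffing, 3))
```

The same model supports adaptive governance: as confidence in the agent grows, the assumed error costs are revisited and the thresholds move with them.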
Designing Intelligent Value Streams
The most powerful practical insight from leading organizations is segmentation. Rather than automating entire workflows uniformly, they divide transactions by complexity and risk. One global electronics manufacturer managing 60,000 purchasing transactions structured its work into four tiers: a zero-touch autopilot tier for low-value transactions; an “agentic sweet spot” for standardized mid-range cases; a complexity tier for high-spend or multi-document situations earmarked for iterative AI expansion; and a human exception tier for true edge cases requiring root-cause analysis.
This approach sidesteps two common failure modes: oversimplifying complex work and overengineering edge cases prematurely. Laseter and DuVall coined the term “intelligent value streams” for the deliberate matching of autonomy levels to risk profiles in agentic process flows.
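The four-tier segmentation can be pictured as a simple routing rule. The sketch below is a minimal illustration in the spirit of that example; the thresholds, field names and tier labels are assumptions, not details from the manufacturer in the report.

```python
# Hypothetical sketch of tiered transaction routing. Thresholds and
# field names are illustrative assumptions, not taken from the report.

from dataclasses import dataclass

@dataclass
class Transaction:
    value_usd: float
    standardized: bool   # follows a templated, single-document flow
    documents: int       # supporting documents attached
    flagged: bool        # anomaly requiring root-cause analysis

def route(tx: Transaction) -> str:
    if tx.flagged:
        return "human-exception"       # true edge cases stay with people
    if tx.value_usd < 1_000:
        return "zero-touch"            # low-value autopilot
    if tx.standardized and tx.documents <= 1 and tx.value_usd < 50_000:
        return "agentic-sweet-spot"    # standardized mid-range cases
    return "complexity-tier"           # high-spend / multi-document work

print(route(Transaction(250, True, 1, False)))
print(route(Transaction(12_000, True, 1, False)))
print(route(Transaction(80_000, False, 4, False)))
```

The point of the rule is not the specific cutoffs but the shape: autonomy is granted tier by tier, and the boundaries move outward as the agent proves itself.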
The Hidden Friction: Stakeholders and Talent
Laseter and DuVall’s study revealed that even leading companies underinvest in two areas. The first is stakeholder engagement. Many AI initiatives fail not because of technical flaws, but because affected employees are unaware or unsupportive. Cross-functional engagement improves solution quality and accelerates adoption — and converted skeptics often become the strongest advocates.
The second area is talent redesign. Agentic AI shifts work from routine analysis toward judgment, orchestration and exception management. Spreadsheet preparation can be automated. Running a sales and operations planning (S&OP) meeting cannot. In some environments, 100 traditional inventory analysts may become 20 AI-enabled analysts. Arriving at the right configuration — and evolving it over time — is a strategic decision, not an HR afterthought.
A Practical Path Forward
Organizations that successfully scale agentic AI follow a repeatable sequence: stabilize and clarify processes, invest in data fidelity and governance, define decision rights and guardrails, segment workflows by complexity and risk, pilot within a structured portfolio framework, and evolve talent models alongside automation.
They do not launch AI into chaos. They engineer the environment for autonomy. The difference between pilots and performance is rarely technical, notes Laseter. It is operational readiness: clear workflows, disciplined governance, reliable data and thoughtful human integration.
The Leadership Imperative
Agentic AI will matter not because it generates better answers, but because it can take better actions — repeatedly, safely and at scale — inside real workflows. Organizations that chase algorithms will remain stuck in pilot mode. Those that build intelligent value streams will redesign how work flows end-to-end and earn durable competitive advantage.
The winners will not be those with the flashiest models. They will be the ones with the discipline to fix the process first — and the courage to redesign it for an autonomous future.
These insights are based on the 2026 white paper "The Supply Chain AI Readiness Report: Why Operational Discipline Determines Agentic AI Success" by Michael DuVall and Timothy Laseter. The unabridged paper is available here.
Laseter’s purview includes operations strategy, innovation, emerging technology and internet retailing. In addition to teaching at Darden, he serves as a managing director at PwC’s global strategy consulting firm, Strategy&, and contributing editor for management magazine strategy+business. He is co-author of four books, papers in leading academic journals and nearly 50 articles in strategy+business.
Prior to joining the Darden faculty, Laseter was a partner at Booz Allen Hamilton, helping global businesses with supply chain management, strategic sourcing and operations strategy. He has also taught at a number of business schools, including Dartmouth’s Tuck School of Business, IESE Business School, NYU Stern School of Business and London Business School.
B.S., Georgia Institute of Technology; MBA, Ph.D., University of Virginia
From Pilot Purgatory to Autonomous Supply Chains: The Path to Scaling Agentic AI