Insights

Agentic AI Will Reshape Everything. Human Accountability Will Determine Success.

AI Is the Railroad of the 21st Century. The Track We Lay Now Determines Where It Leads.

Jeffrey McMillan  ·  Founder & CEO, McMillanAI  ·  March 2026

In the early days of the American railroad system, expansion was chaotic.

Regional railroad companies built tracks with different gauges. Safety standards were inconsistent. Interoperability was limited. Trains could not move seamlessly across networks. Accidents were frequent. Capital was fragmented. Growth was real — but unstable.

What followed was not simply a matter of standardizing track widths. As Alfred Chandler documented in Strategy and Structure, the railroads required entirely new ways of organizing and managing enterprises. Standard gauges were necessary, but so were new management hierarchies, tight adherence to scheduling, new approaches to coordination across geography, and fundamentally different organizational structures. The technology demanded that institutions reinvent themselves — not just their infrastructure.

We are at a similar moment with agentic AI, and the analogy extends further than most people realize.

Why Agentic AI Is Different

For the past decade, AI functioned primarily as a tool — predicting, summarizing, optimizing. Generative AI expanded those capabilities dramatically. But we are now entering a phase where AI systems plan, execute, coordinate, and act with increasing autonomy. This shift from tool to actor is consequential in ways that go beyond any single capability improvement.

As the former Head of Firmwide Artificial Intelligence at Morgan Stanley, where I led the deployment of hundreds of AI use cases into production, and now as an instructor in AI strategy and governance at Columbia Business School, I have spent years building AI infrastructure at scale and advising organizations navigating this transition.

Since starting my own firm focused on the effective and responsible deployment of AI, I have talked to hundreds of business leaders around the globe, and the pattern I see is consistent: the technology is accelerating. Organizational readiness is not.

But let me be specific about what excites me, because predictions that AI will change everything are a dime a dozen.

Agentic AI matters because it can orchestrate entire workflows — not just individual tasks. A single agent can retrieve documents, analyze their contents, generate recommendations, execute actions across systems, and iterate based on feedback. Chains of agents can coordinate across business functions the way departments do today. This is not about a better chatbot. It is about fundamentally reimagining how work gets done.

The potential is enormous. Organizations that deploy agentic AI well can dramatically reduce cycle times for complex processes, free skilled professionals from repetitive cognitive work, improve decision quality by synthesizing information at a scale no human team can match, and create entirely new services and business models that were not previously possible. These are not theoretical benefits. I see them emerging in practice across industries — from financial services to healthcare to professional services.

But realizing that potential requires something most organizations have not yet built: the governance structures, organizational capabilities, and leadership capacity to manage autonomous systems responsibly.

The Structural Risks

The risks of agentic AI are real and structural. They stem from two root causes that amplify every other problem.

The first root cause is diffused accountability. When AI recommends, humans decide. When AI acts, responsibility blurs. Without defined ownership, "the model decided" becomes an institutional gap. Every organization I have worked with struggles with accountability long before AI enters the picture — AI simply makes the consequences of ambiguity faster and more severe.

The second root cause is organizational lag. Technology evolves exponentially; organizations evolve incrementally. Our management structures, oversight frameworks, and decision-rights models were designed for a world where humans executed workflows and software was static. Agentic AI operates at machine speed, across systems, in ways those frameworks were never built to govern.

These two root causes — unclear ownership and outdated organizational structures — drive the concrete risks that keep me up at night:

Workforce disruption.

Agentic AI does more than automate individual tasks — it can execute multi-step workflows once performed by skilled professionals. Some roles will be augmented. Others will be redefined. Some will be displaced. Railroads reshaped labor markets as profoundly as they reshaped transportation. New industries emerged. Others faded. The difference between prosperity and unrest was preparation — and preparation requires specific action by specific leaders, not vague calls for "reskilling."

Systemic and security risk.

Autonomous systems introduce new threat vectors. They can probe vulnerabilities continuously, adapt in real time, and interact with other systems at speeds that outpace human monitoring. The difference from prior technology risks is not just probability — it is the compounding speed at which cascading failures can propagate when autonomous agents interact with one another across networks.

And the one I believe is most underestimated — the gradual erosion of human judgment.

I want to be careful here, because people have worried about the erosion of human judgment for millennia: the ancient Greeks decried writing for weakening memory, and London taxi drivers derided GPS. Those worries proved largely unfounded. We adapted, and the tools made us more capable.

But agentic AI is qualitatively different. A GPS gives you a recommendation. You can look out the window, consult your experience, and override it. An agentic system executes multi-step workflows, interacts with other systems, and produces outcomes that may be difficult to reverse by the time a human reviews them. If the leaders responsible for oversight no longer understand the logic chain well enough to challenge it, the "human-in-the-loop" becomes ceremonial. The signature is there. The judgment is not.

I concede the counterargument: human judgment is not infallible. Self-driving cars appear safer than human drivers. Companies are already remarkably good at diffusing accountability without AI's help. And in domains requiring millisecond decisions, human judgment is not just slow — it is irrelevant.

But here is what concerns me. It is not that AI will seize authority. It is that leaders will cede it — one comfortable delegation at a time. Not because the technology fails, but because it succeeds so consistently that oversight feels unnecessary. Until it isn't.

The solution is not to slow adoption. It is to build organizational muscle alongside the technology — training leaders not just to use AI, but to maintain the capacity for informed oversight. Organizations that fail to do this will retain the appearance of human governance while hollowing out its substance.

None of these risks is inevitable. But none resolves itself. And history shows that standards built after a crisis are far more painful than standards built before it.

A fair challenge to that claim: Can we actually prepare in advance? Or do standards only emerge after cargo has to be unloaded and reloaded, after trains crash, after very public disasters force action? As a species, we do seem to wait until something goes wrong. But the cost of waiting — when the systems in question operate autonomously, at scale, across critical infrastructure — is materially higher than it was in the railroad era. The speed at which agentic failures can cascade makes proactive governance not just preferable but necessary.

Five Actions — And Who Should Take Them

Agentic AI offers the potential to improve organizational performance dramatically — accelerating complex workflows, enhancing decision quality, and enabling new business models. But realizing those benefits requires deliberate action by specific leaders, not vague aspirations.

1. CEOs and Boards: Mandate Clear Accountability

Organizations deploying agentic systems in high-impact domains should maintain a documented inventory of autonomous systems, assign named executive ownership for each, and establish board-level oversight with regular reporting. Specifically: the CEO should designate an accountable executive (not a committee), and the board should require periodic disclosure of what autonomous systems are deployed, what decisions they make, and who is responsible when they fail. Accountability must be visible and enforceable.

2. CROs and CTOs: Implement Agentic Stress Testing

Before scaling deployment in finance, healthcare, infrastructure, or public services, chief risk officers and chief technology officers should jointly conduct structured scenario simulations — specifically testing what happens when agents interact with other agents in unexpected ways, when data inputs degrade, and when edge cases compound. Independent evaluation should be engaged where systemic risk exists. This is the agentic equivalent of a bank stress test: What breaks when conditions change?

3. Government, Corporate, and University Leaders: Coordinate Workforce Transition

Workforce transition requires coordinated action across sectors. Corporate CHROs should map which roles face task displacement and fund credible reskilling pathways — not generic training, but role-specific programs embedded in actual workflows. University presidents and deans should embed AI literacy into core curricula across disciplines, not just computer science departments. And policymakers should create incentive structures that reward retraining over reactive layoffs. Agentic AI will create new industries. But transitions do not manage themselves.

4. Industry Consortia and Policymakers: Develop Interoperability Standards

When multiple agents interact — across vendors, across companies, across borders — they need common protocols, the same way the internet works because of TCP/IP and the telecom industry works because of interface standards. Industry consortia should accelerate technical standards for how agents communicate with one another, how audit trails are maintained across autonomous workflows, and how governance frameworks translate across jurisdictions. Left to their own devices, AI systems will develop communication patterns optimized for efficiency, not transparency. Interface standards that preserve human interpretability should be established before that opacity becomes entrenched. Who should lead this? The same kinds of standards bodies that built the internet — but with representation from regulators, civil society, and the companies deploying these systems.

5. Every Leader Deploying AI: Preserve Human Authority Where It Matters

In financial stability, healthcare, legal systems, and national security, final authority over consequential decisions must remain clearly human. This requires defined sign-off thresholds, escalation protocols, and — critically — ongoing leadership training that builds and maintains the knowledge required for informed oversight. Not just the right to override. The capacity to know when to. And beyond approving individual decisions, leaders should monitor how well their agentic systems are performing overall — not just whether each output is correct, but whether the system's behavior patterns are aligned with organizational values and objectives.

Human-in-the-loop is not friction. It is legitimacy.

That said, I recognize that calls for human-in-the-loop can become lazy thinking — a blanket requirement applied without regard for context. The goal is not to insert humans into every decision. It is to ensure that humans remain capable of governing the systems that make decisions on their behalf. That is a harder problem, and a more important one.

A Leadership Choice

Railroads became transformative not because tracks were laid quickly — but because new organizational structures, standards, and management capabilities made the network reliable, scalable, and trusted. As Chandler showed, the technology demanded that institutions reinvent how they operated. The companies that resisted organizational change were left behind. The ones that embraced it built the modern corporation.

Agentic AI presents the same inflection point. Standards are necessary, but not sufficient. Organizations will need new structures, new roles, new decision-rights frameworks, and new cultural norms around human-AI collaboration. I do not think most companies can succeed with AI using today's org charts.

We can allow fragmentation and reactive governance to shape the next decade. Or we can deliberately build accountability, workforce readiness, organizational capability, and technical standards that enable confident scaling.

This is not a call to slow innovation. It is a call to strengthen it. Stable systems attract capital. Trusted systems endure. Prepared societies prosper.

Agentic AI has the potential to define economic competitiveness in the coming decade — by accelerating how organizations operate, improving the quality of decisions at every level, and enabling entirely new approaches to problems we have struggled with for generations. But that potential is only realized when the organizational foundations are as strong as the technology. The future will not be determined solely by model capability. It will be determined by whether leaders choose to lay the right track.

We do not need alarmism. We need foresight, and we need to act now.

I invite you to join me in this call to action.

Our future depends on it.