Most organizations I advise are not lacking AI experiments. They are lacking AI strategy. The experiments are everywhere — a proof of concept in marketing, a chatbot in customer service, a coding assistant in engineering. What is missing is the connective tissue that links these efforts to the organization's actual strategic priorities. Without intentional strategy, AI becomes a tax on the organization rather than an accelerant.
The result is a collection of interesting projects that consume resources without delivering measurable business impact. I have seen organizations with dozens of AI experiments and zero strategic value to show for them — not because the technology failed, but because no one asked whether the experiments connected to anything that mattered.
Strategy Comes First
Every AI initiative should begin with a deceptively simple question: What are your organization's top strategic priorities? Growth? Efficiency? Risk reduction? Client experience? AI should serve those priorities — not the other way around.
The best AI strategy is not about AI. It is about your business objectives — and whether AI is the right tool to advance them.
Once strategic priorities are clear, the next step is identifying which specific initiatives AI can meaningfully improve. Not every problem needs AI. And not every AI-solvable problem is worth solving. The ones that matter should be prioritized ruthlessly based on impact, feasibility, and alignment with what matters most to your organization.
This requires honest conversations across the executive team about where AI can drive incremental improvement to existing goals — and, equally important, whether AI could enable entirely new processes or business models that were not previously possible.
The Development Lifecycle
Successful GenAI projects follow a disciplined lifecycle. It starts with clearly scoping the project — what are the objectives, deliverables, and desired business impact? Then identifying risks, because GenAI risks may be different from what your organization has historically managed.
Next comes assessing your inputs. Is the data accessible? Is it accurate and fit for purpose? Do not proceed until quality issues are addressed. Then you build the solution, develop an evaluation framework, and deploy with ongoing monitoring.
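The lifecycle above can be sketched as a sequence of gated stages, where each stage must pass its check before the next begins. This is a minimal illustrative sketch, not a real framework; every name and gate condition here is a hypothetical stand-in for whatever checks your organization actually uses.

```python
# Hypothetical sketch: the GenAI lifecycle as sequential gates.
# Stage names and gate conditions are illustrative only.
LIFECYCLE = [
    ("scope",    lambda ctx: bool(ctx.get("objectives"))),       # objectives defined
    ("risk",     lambda ctx: "risk_review" in ctx),              # risks identified
    ("data",     lambda ctx: ctx.get("data_quality_ok", False)), # inputs fit for purpose
    ("build",    lambda ctx: "solution" in ctx),                 # solution built
    ("evaluate", lambda ctx: ctx.get("eval_passed", False)),     # evaluation framework passed
    ("deploy",   lambda ctx: ctx.get("monitoring", False)),      # monitoring in place
]

def run_lifecycle(ctx):
    """Return the stages cleared and the first unmet gate, if any."""
    cleared = []
    for stage, gate in LIFECYCLE:
        if not gate(ctx):
            return cleared, stage  # stop at the first unmet gate
        cleared.append(stage)
    return cleared, None

# A project that scoped and assessed risk but never fixed its data
# stalls at the "data" gate rather than proceeding to build.
cleared, blocked = run_lifecycle({"objectives": ["growth"], "risk_review": True})
print(cleared, blocked)  # ['scope', 'risk'] data
```

The point of the sketch is the ordering: a project cannot reach build, let alone deploy, until the earlier gates have genuinely passed.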
Each of these steps requires real work and real discipline. But this is not bureaucracy. It is how you avoid the pattern I see repeatedly: organizations that skip evaluation, deploy too quickly, and spend months cleaning up problems that could have been caught in weeks. The development lifecycle is not a barrier to speed. It is the reason some organizations scale while others stall.
The Evaluation Gap
Evaluation is the most overlooked and most important aspect of GenAI deployment. Without a structured way to assess whether a solution is fit for purpose, safe to deploy, and reliable at scale, you are making decisions based on demos, anecdotes, and vendor claims.
Strong evaluation frameworks use multiple approaches: golden-source comparisons, where AI outputs are checked against authoritative answers; subject matter expert review, where domain specialists judge accuracy and quality; A/B testing for side-by-side comparison; model-as-judge scoring, where alternative models grade outputs; adversarial red-teaming to probe safety and edge cases; and post-deployment impact analysis to measure real-world value.
The scope should match the risk — lightweight for low-risk internal use cases, rigorous for anything customer-facing, regulated, or high-stakes. But every use case deserves some form of evaluation. The projects that skip it are the ones that embarrass you later.
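The simplest of these approaches, golden-source comparison, can be sketched in a few lines: run each prompt from an authoritative answer set through the model and score the match. Everything here is a hypothetical illustration under simplifying assumptions (exact-match scoring, a toy stand-in model so the sketch runs without any API), not a real evaluation framework.

```python
# Illustrative sketch of a golden-source evaluation harness.
# All names (golden_set, evaluate, toy_model) are hypothetical.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences don't count as errors."""
    return " ".join(text.lower().split())

def evaluate(model, golden_set):
    """Compare model outputs to authoritative answers; return per-case results and a pass rate."""
    results = []
    for case in golden_set:
        output = model(case["prompt"])
        results.append({
            "prompt": case["prompt"],
            "passed": normalize(output) == normalize(case["expected"]),
        })
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

# Stand-in "model" so the sketch is runnable without any API.
def toy_model(prompt: str) -> str:
    return {"capital of France?": "Paris"}.get(prompt, "unknown")

golden_set = [
    {"prompt": "capital of France?", "expected": "Paris"},
    {"prompt": "capital of Spain?", "expected": "Madrid"},
]

results, pass_rate = evaluate(toy_model, golden_set)
print(f"pass rate: {pass_rate:.0%}")  # one of the two cases passes here
```

In practice, exact-match scoring would give way to semantic similarity or expert grading, but the structure — a curated answer set, an automated comparison, a tracked pass rate — is what distinguishes evaluation from demos and anecdotes.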
What Successful Projects Have in Common
After leading the deployment of hundreds of AI use cases, I see a consistent pattern. Successful projects share eight characteristics: alignment to business strategy, a well-understood problem, people who can prompt effectively, access to quality data, access to domain experts, a mature risk framework, supportive leadership, and measurable outcomes.
Miss any one of these, and the probability of failure rises sharply. Get them all right, and the technology almost takes care of itself. In my experience, the organizations that struggle with AI rarely have a technology problem. They have a readiness problem.
Lessons From the Field
Having led hundreds of GenAI deployments, I find the failure patterns remarkably consistent. Lack of senior-level support and understanding — executives who sign off but do not engage. Poorly defined processes that make it impossible to know what AI is actually improving. Data and content that is inaccessible or uncurated. Evaluation frameworks that are underfunded or nonexistent. And perhaps most common of all: trying to solve too many problems at once instead of going deep on a few that truly matter.
AI does not fail because the models are not good enough. It fails because the organizational foundations are not in place.
McMillanAI helps business leaders navigate AI with clarity and confidence.