March 10, 2026
by AI Expert Team

McKinsey Spent Millions on Agentic AI Deployment Lessons. Here's What UK SMEs Actually Need to Know


Agentic AI deployment lessons from McKinsey's 50+ enterprise implementations reveal a harsh reality: it's harder than it looks, and many companies are failing badly enough to rehire the people they replaced.

These lessons come from enterprise-scale deployments - think global banks, insurance companies and legal service providers with unlimited budgets and dedicated AI teams. But the core principles apply to businesses of any size and the mistakes are universal.

Here's what McKinsey learned spending millions on trial and error, translated into language UK SMEs can actually use - and more importantly, how to avoid making the same expensive mistakes yourself.

The Reality: Agentic AI Is Failing More Often Than It's Succeeding

Before getting stuck into the lessons, let's acknowledge what McKinsey found: many organisations are "retrenching - rehiring people where agents have failed". That's consultant-speak for "We screwed this up badly and had to undo it".

This isn't a minor issue. Companies invested in agents, deployed them, watched them underperform or create chaos and then had to bring humans back to fix the mess. That's not just failed technology, it's damaged operations, wasted budget and lost credibility with your team.

The pattern is familiar - impressive demos but terrible production performance. Agents that work brilliantly in controlled tests but frustrate actual users. Systems that automate one task while creating three new problems. This is "AI slop" – low-quality outputs that erode trust faster than they deliver value.

For businesses evaluating agentic AI, this should be a warning sign. Jumping in without understanding what actually works is expensive. Here's what McKinsey learned the hard way.

Agentic AI Deployment Lessons That Actually Matter for Business

Lesson 1: It's the Workflow, Not the Agent

McKinsey's first lesson was that companies focus too much on the agent itself and not enough on redesigning the entire workflow. You can't just drop an agent into an existing process and expect magic.

What this means in practice: before you deploy any agent, map the complete workflow - every step that involves people, processes and technology. Identify where agents can genuinely improve things and where they'll just add complexity. That's exactly what AI Expert does in our AI Workshop.

A legal service provider McKinsey worked with designed agents to learn within the workflow. Every edit a lawyer made was logged and used to improve the agent over time. The agent didn't replace the workflow; it became part of a redesigned process where humans and agents collaborated effectively.

The lesson here is don't buy agent technology and figure out how to use it later. Start with the workflow problem you're trying to solve, then determine if agents are actually the right solution for specific steps. Sometimes the answer is no.

Lesson 2: Agents Aren't Always the Answer

This is the lesson most companies ignore. Agents are powerful but not always necessary. McKinsey found that many tasks are better handled by simpler automation – rules-based systems, predictive analytics or basic LLM prompting.

The framework is straightforward:

• Repetitive, rule-based tasks with structured input? Use rule-based automation.

• Unstructured input (like documents) but extractive tasks? Use gen AI or NLP.

• Tasks requiring classification or forecasting? Use predictive analytics.

• Multistep decision-making with highly variable inputs? Now you need agents.

An investor onboarding workflow, for example, is tightly governed and predictable. Agents add complexity without adding value. By contrast, extracting complex financial information from varied documents benefits from agents because the task demands synthesis and judgment.
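For the technically minded, here's a rough sketch of that framework laid out as a simple decision function. The task attributes and category labels are our own shorthand for illustration - this is not a McKinsey tool, just one way of making the logic explicit:

```python
# A rough mapping from task characteristics to the automation approach suggested
# by the framework above. Attribute names and wording are illustrative only.

def suggest_approach(structured_input: bool, extractive_only: bool,
                     needs_prediction: bool, multistep_variable: bool) -> str:
    """Suggest an automation approach for a task, following the framework above."""
    if multistep_variable:
        return "agentic AI (multistep decision-making, highly variable inputs)"
    if needs_prediction:
        return "predictive analytics (classification or forecasting)"
    if not structured_input and extractive_only:
        return "gen AI / NLP (extract information from unstructured documents)"
    if structured_input:
        return "rule-based automation (repetitive, structured tasks)"
    return "unclear - map the workflow before choosing a tool"

# Investor onboarding: tightly governed and predictable, so no agents needed.
print(suggest_approach(structured_input=True, extractive_only=False,
                       needs_prediction=False, multistep_variable=False))
```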

The lesson here is that before committing to agents, ask whether a simpler tool would accomplish the same goal with less risk and lower cost. Most businesses don't need cutting-edge agentic AI; they need the right tool for the specific job.

Lesson 3: The Most Important of All Agentic AI Deployment Lessons – Invest Heavily in Evaluation

One of the most critical insights McKinsey emphasised: "Onboarding agents is more like hiring a new employee versus deploying software".

Agents need clear job descriptions, training and ongoing feedback. Companies that skip this step end up with agents producing "AI slop" – outputs that look impressive but don't actually work when users try to rely on them.

A global bank transformed its credit-risk-analysis process by testing agents rigorously. Whenever the agent's recommendation differed from human judgment, the team identified logic gaps, refined decision criteria and retested. They didn't just launch and hope; they treated agent development like employee training.
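If you want to picture what that evaluation loop looks like in practice, here's a minimal sketch. The function names and case format are illustrative assumptions, not the bank's actual system:

```python
# Illustrative evaluation harness: run the agent over cases that domain experts
# have already decided, measure agreement and collect disagreements for review.
# `agent_recommend` and the case format are hypothetical placeholders.

def evaluate_agent(agent_recommend, labelled_cases):
    """Return the agreement rate and the cases where agent and expert disagreed."""
    disagreements = []
    for case in labelled_cases:
        recommendation = agent_recommend(case["input"])
        if recommendation != case["expert_decision"]:
            disagreements.append({"case": case, "agent_said": recommendation})
    agreement_rate = 1 - len(disagreements) / len(labelled_cases)
    return agreement_rate, disagreements

# Typical loop: run before every change, review disagreements with domain experts,
# refine the agent's decision criteria, then re-run until agreement is acceptable.
```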

The lesson here is that businesses must budget time and resources for evaluation. You need domain experts to test agents, provide feedback and refine performance. This isn't a one-time deployment; it's ongoing development work. If you can't commit to that, you're not ready for agents.

Lesson 4: Build Observability Into Every Step

As companies scale from a few agents to hundreds, tracking performance becomes critical. McKinsey found that organisations tracking only outcomes struggle to diagnose problems when agents make mistakes.

The solution is to monitor and verify agent performance at each workflow step, not just the final output. When one legal service provider noticed accuracy drops, they quickly identified the issue because they'd built observability tools tracking every process step. Certain users were submitting low-quality data, which agents misinterpreted.

With that insight, the team improved data collection practices and adjusted parsing logic. Agent performance rebounded immediately.
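Step-level observability doesn't have to be elaborate. A rough sketch, assuming a simple Python workflow (the step names and logging setup are illustrative, not a specific product):

```python
# Illustrative step-level logging: record duration and rough input/output size
# for every workflow step, so a drop in accuracy can be traced to its source.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_workflow")

def observed_step(step_name):
    """Decorator that logs a summary record each time a workflow step runs."""
    def wrap(fn):
        def run(payload):
            start = time.time()
            result = fn(payload)
            logger.info(json.dumps({
                "step": step_name,
                "duration_s": round(time.time() - start, 3),
                "input_chars": len(str(payload)),
                "output_chars": len(str(result)),
            }))
            return result
        return run
    return wrap

@observed_step("extract_claim_details")
def extract_claim_details(document_text):
    # The agent or model call would go here; this stub just echoes a snippet.
    return {"claim_amount": None, "snippet": document_text[:100]}
```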

The lesson here is that if you can't track what your agents are doing at each step, you can't improve them. Build logging and monitoring from the start, or you'll be flying blind when things go wrong.

Lesson 5: Reuse Agents Wherever Possible

McKinsey's teams kept seeing companies create unique agents for every single task, leading to massive redundancy. Many different tasks share common actions – ingesting documents, extracting information, searching databases, analysing data, etc.

The smarter approach is to build reusable agent components and make them easy for developers to access. This eliminates 30–50% of nonessential work.
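As a hedged sketch of what "reusable components" means in practice, here are two hypothetical workflows sharing the same building blocks rather than each getting a bespoke agent:

```python
# Illustrative reuse: shared building blocks composed into different workflows,
# rather than a bespoke agent per task. Function names are hypothetical.

def ingest_document(path: str) -> str:
    """Shared component: load raw text from a file."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def extract_fields(text: str, fields: list) -> dict:
    """Shared component: pull named fields from unstructured text (stubbed here)."""
    return {field: None for field in fields}  # a real version would call a model

# Two different workflows reuse the same components with different field lists:
def onboarding_workflow(path):
    return extract_fields(ingest_document(path), ["name", "address", "id_number"])

def claims_workflow(path):
    return extract_fields(ingest_document(path), ["claimant", "amount", "date"])
```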

The lesson here is that if you're deploying multiple agents, think about what they have in common. Building modular, reusable components saves time and money compared to starting from scratch every time you identify a new use case.

Lesson 6: Humans Remain Essential (But Roles Change)

The final lesson addresses the question everyone's asking: what happens to jobs? McKinsey's answer: humans remain essential but the type of work changes.

People still need to oversee accuracy, ensure compliance, use judgment and handle edge cases. The number of people in a workflow will likely decrease but the work itself doesn't disappear – it transforms.

One legal service provider designed workflows where agents organised claims and amounts with high accuracy but lawyers still reviewed and approved them. Agents recommended case approaches but people adjusted the recommendations and signed off on final decisions.

Critically, the team designed simple visual interfaces making it easy for lawyers to interact with agents. When someone clicked an insight, the system scrolled to the correct page and highlighted relevant text. This focus on user experience led to 95% acceptance rates.
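A minimal sketch of that kind of approval checkpoint, assuming a simple Python workflow - the data shape and names are illustrative, not the provider's actual system:

```python
# Illustrative human-in-the-loop checkpoint: the agent drafts a recommendation,
# but nothing proceeds until a named reviewer approves or amends it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    proposed_action: str
    approved: bool = False
    reviewer: Optional[str] = None

def record_review(rec: Recommendation, reviewer: str, accept: bool,
                  amended_action: Optional[str] = None) -> Recommendation:
    """Record the human decision; downstream steps run only if approved."""
    rec.reviewer = reviewer
    rec.approved = accept
    if amended_action is not None:
        rec.proposed_action = amended_action
    return rec

draft = Recommendation(case_id="CLM-0042", proposed_action="settle at assessed amount")
final = record_review(draft, reviewer="j.smith", accept=True)
assert final.approved  # only approved recommendations move to the next step
```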

The lesson here is don't assume agents replace people entirely. Design workflows where humans and agents collaborate, with clear handoffs and approval points. If you ignore the human side of implementation, even the best agents will fail.

What This Actually Means for Your Business

McKinsey's lessons boil down to one core insight: agentic AI deployment requires as much focus on people, processes and workflows as it does on technology. Companies failing with agents aren't struggling because the technology doesn't work – they're struggling because they're treating agents like plug-and-play software instead of transformational change.

For UK SMEs, this has practical implications:

You need workflow expertise before agent expertise. Understanding your current processes, identifying bottlenecks and mapping pain points matters more than knowing which LLM to use. An AI Workshop can map your operations and identify where agents might genuinely help - versus where they'll just create expensive complexity.

You need evaluation capacity. If you can't dedicate resources to training and testing agents, you're not ready to deploy them. This requires domain experts who can provide feedback, refine performance and ensure agents actually deliver value.

You need observability infrastructure. Tracking agent performance at each workflow step isn't optional at scale. Without it, you can't diagnose problems or improve outcomes. This requires technical capability most SMEs don't have in-house.

You need a realistic assessment of whether agents are even necessary. Many workflow improvements come from simpler automation or better processes, not cutting-edge agentic AI. An AI Readiness Assessment helps determine what you actually need versus what's being hyped in the market.

How to Avoid Expensive Agentic AI Mistakes

McKinsey's clients could afford to learn by trial and error. Most businesses can't. Here's how to avoid their mistakes:

Start with workflow mapping, not technology selection. Use an AI Workshop to document current processes, identify bottlenecks and determine where agents could genuinely improve outcomes. If simpler tools accomplish the same goal, use those instead.

Evaluate your technical capacity honestly. Deploying agents requires ongoing evaluation, monitoring and refinement. If you don't have that capability in-house, you need a partner who does – or you need to stick with simpler solutions. An AI Readiness Assessment examines your current capabilities and identifies gaps before you commit resources.

Build implementation roadmaps that account for change management. Agents change how people work. Without proper training, communication and workflow redesign, even well-functioning agents will be rejected by users. An AI Roadmap ensures you're planning for the human side of implementation, not just the technical side.

Consider secure, managed solutions for production use. If you need agent capabilities without the complexity of building and maintaining infrastructure yourself, solutions like OpenClaw in a Box provide enterprise-grade security and compliance without requiring you to become an AI engineering shop.

The Real Lesson: Slow Down and Get It Right

The most valuable agentic AI deployment lesson from McKinsey's research isn't in their six-point framework. It's in their opening admission – many companies are failing badly enough to rehire people they replaced with agents.

That's what happens when organisations rush into agentic AI without understanding workflows, evaluation requirements or realistic implementation timelines. The technology works – when deployed thoughtfully by teams who understand both the business problem and the technical constraints.

For SMEs, the opportunity isn't in racing to deploy agents faster than competitors. It's in deploying them correctly – in workflows where they genuinely add value, with proper evaluation and monitoring and with realistic expectations about what agents can and can't accomplish.

McKinsey spent millions learning these lessons. You don't have to.

Want to evaluate agentic AI opportunities without expensive trial and error? Take our free 2-minute AI Readiness Assessment to understand your starting point, or book an AI Workshop to map your workflows and identify where agents might actually deliver ROI.
