Cyber Security Blog

40% of AI Projects Predicted to Fail

Written by Craig Pepper | May 12, 2026 6:45:00 AM

Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027. If that number feels high, the reasons why are even more concerning, because they point to a fundamental misalignment between enterprise hype and execution capability.

Let's unpack what's happening, why it matters, and how to avoid becoming a statistic.

The Scale of the Failure

When Gartner polled 3,412 webinar attendees in January 2025, the investment picture looked like this:

  • 19% had made significant investments in agentic AI
  • 42% had made conservative investments
  • 8% had made no investments
  • 31% were in a wait-and-see holding pattern

That means only 19% of organisations have committed significantly; the remaining 81% are investing cautiously, waiting, or staying out entirely. Yet Gartner still expects 40% of active projects to fail. That's not a funding problem; it's a capability problem.

Why Projects Fail: The Three Culprits

1. Escalating Costs Without Clear ROI

Enterprises are discovering, in real time, that scaling agentic AI from proof-of-concept to production is far more expensive than they anticipated. The compute costs alone are substantial. But the real drag is the human overhead: AI-assisted discovery generates enormous triage burdens. Security teams spend weeks validating findings. Business process teams discover that "autonomous" workflows still need constant human adjustment.

The initial business case, which often promised 80% automation with minimal oversight, doesn't survive contact with reality. By the time you've factored in the true cost of governance, monitoring, and incident response, many use cases no longer pencil out.

2. Inadequate Risk Controls (The Governance Gap)

This is the leading cause, and it's critical to understand why: agentic AI breaks traditional security models.

When you deploy a chatbot, you control the surface: what users can ask, what tools it accesses, and what it can output. The system is bounded by human intent.

When you deploy an agent that can autonomously plan, act, and iterate (integrating with APIs, databases, workflows, and external systems), the attack surface explodes. A single misconfigured permission can cascade across an entire infrastructure. A compromised low-risk tool in the workflow can inherit the agent's elevated privileges and pivot laterally.

Most organisations don't have a governance playbook for this. Their existing security frameworks (which assume human decision-makers and bounded interactions) don't scale to autonomous systems that can act faster than traditional incident response cycles. They're trying to fit agentic AI into cybersecurity models designed for software deployment—and it's not working.
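One concrete countermeasure to that privilege-inheritance problem is to mediate every tool call through a deny-by-default gateway, so a tool only ever gets the narrow scope it was declared with. The sketch below is illustrative only: the class names, tool names, and policy schema are hypothetical, not drawn from any specific agent framework or vendor API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    """Declares the narrow scope a tool is allowed (hypothetical schema)."""
    name: str
    allowed_actions: set = field(default_factory=set)

class ScopedAgentGateway:
    """Mediates every tool call so a compromised tool cannot
    inherit the agent's full privileges."""

    def __init__(self, policies):
        self._policies = {p.name: p for p in policies}

    def invoke(self, tool_name, action, payload):
        policy = self._policies.get(tool_name)
        if policy is None or action not in policy.allowed_actions:
            # Deny by default: anything not explicitly granted is blocked.
            return {"allowed": False,
                    "reason": f"{tool_name!r} may not perform {action!r}"}
        return {"allowed": True,
                "result": f"executed {action} via {tool_name}"}

# Each tool gets only the actions its job requires, nothing more.
gateway = ScopedAgentGateway([
    ToolPolicy("crm_lookup", {"read_contact"}),
    ToolPolicy("ticketing", {"create_ticket", "read_ticket"}),
])

print(gateway.invoke("crm_lookup", "read_contact", {}))    # permitted
print(gateway.invoke("crm_lookup", "delete_contact", {}))  # blocked: outside scope
```

The point is architectural rather than the code itself: if the agent's planner is tricked into calling a destructive action through a read-only tool, the gateway, not the model, is what stops it.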

3. Unclear Business Value (The Hype-Reality Gap)

Here's where "agent washing" becomes a problem. Vendors have spent the last 12 months rebranding old products (RPA, chatbots, workflow automation) as "agentic AI" without meaningful changes to the underlying capabilities.

Meanwhile, organisations are investing in pilots based on inflated expectations. They expect agents to autonomously achieve complex business goals. In practice, most agents are well-suited only for repetitive, well-defined, low-risk tasks—exactly the things RPA already handles.

When projects finally go to production and deliver limited business value relative to the complexity and cost of execution, stakeholders pull the plug. The vendor landscape makes this worse: Gartner estimates only about 130 of the thousands of agentic AI vendors are real, and many of those are still early.

Where the Cancellations Will Hit Hardest

The projects most at risk are those where:

  • Business justification is weak. Use cases that looked good in a demo but lack a clear ROI when measured against operational reality.
  • Governance wasn't built in upfront. Teams that treat security and compliance as an afterthought face delays and rework that kill project timelines.
  • Autonomy is treated as binary. Organisations that expect agents to operate without close monitoring discover they still need extensive human oversight—negating the efficiency promise.
  • Integration complexity was underestimated. Agents that need to touch multiple systems, databases, and workflows face integration overhead that exceeds initial estimates.

What Successful Projects Do Differently

The 60% that survive to 2027 will likely share these characteristics:

  1. Start narrow and low-risk. Successful deployments focus on specific, bounded, repetitive tasks where the value is measurable and the downside is limited.
  2. Governance-first architecture. They build scoped access, network controls, audit trails, and staged rollout into the design—not as add-ons after the fact.
  3. Realistic expectations. They frame agents as productivity tools, not autonomous decision-makers. Human oversight remains central to the control model.
  4. Measured ROI. They track hard metrics: time saved, accuracy, consistency, cost-per-transaction. When the math stops working, they iterate or exit cleanly.
  5. Vendor diversification. They avoid lock-in with unproven vendors and test against multiple models and platforms.
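The "measured ROI" discipline above can be reduced to a toy calculation: gross savings minus the true cost of running and governing the agent, including the human oversight hours that "autonomous" systems still consume. All figures and field names here are illustrative assumptions, not data from the article.

```python
from dataclasses import dataclass

@dataclass
class AgentRunMetrics:
    # Hypothetical per-month figures an evaluation team might track.
    transactions: int
    minutes_saved_per_transaction: float
    loaded_hourly_rate: float   # fully loaded cost of the humans being relieved
    platform_cost: float        # compute, licences, monitoring for the period
    oversight_hours: float      # human review time the agent still needs

def net_monthly_value(m: AgentRunMetrics) -> float:
    """Value created minus the true cost of running and governing the agent."""
    gross_savings = (m.transactions * m.minutes_saved_per_transaction / 60
                     * m.loaded_hourly_rate)
    true_cost = m.platform_cost + m.oversight_hours * m.loaded_hourly_rate
    return gross_savings - true_cost

# A pilot that looked good before governance costs were counted.
pilot = AgentRunMetrics(
    transactions=4000,
    minutes_saved_per_transaction=3.0,
    loaded_hourly_rate=50.0,
    platform_cost=6000.0,
    oversight_hours=100.0,
)
print(f"Net monthly value: £{net_monthly_value(pilot):,.0f}")  # negative
```

With these invented numbers the pilot saves 200 staff-hours (£10,000) but costs £11,000 once oversight is priced in, which is exactly the "doesn't pencil out" failure mode described earlier. Teams that track this continuously can iterate or exit cleanly instead of discovering the gap at cancellation time.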

The Real Message

The 40% projection isn't anti-innovation. It's a reality check.

Agentic AI is powerful. It can automate complex workflows, improve security posture (as Firefox's Mythos results demonstrated), and unlock new business models. But it requires a fundamentally different approach to governance, risk, and deployment than previous waves of automation.

Organisations that treat it as a quick productivity hack will fail. Organisations that invest in the governance infrastructure, start with realistic scopes, and measure relentlessly will succeed.

The next 18 months will separate the strategists from the hype-followers. If you're planning an agentic AI project, now is the time to ask hard questions: What exactly are we trying to automate? What are the real risks? What governance framework do we need in place? What's the actual ROI?

Answer those questions before you build. The 40% that get cancelled will be the ones that didn't.