What AI Really Means for Cybersecurity — An Architect’s Honest View

[Image: a dark geometric mesh human face with glowing blue eyes, the unsettling intersection of artificial intelligence and human identity. Photo by A Chosen Soul on Unsplash]

CYBERSECURITY · ARTIFICIAL INTELLIGENCE · OPINION

Everyone has an opinion on AI and cybersecurity. Most of them haven’t had to actually defend critical infrastructure with it.

Ujjwal Sharma · Cybersecurity Architect, SLB · 8 min read

Last year, a vendor walked into our office and told me their AI solution would reduce our false positives to zero.

Zero. Not “significantly reduced.” Not “near elimination.” Zero.

I’ve been a cybersecurity architect for 13 years, protecting critical infrastructure for one of the world’s largest energy companies. I’ve sat through a lot of vendor pitches. I’ve never seen zero false positives — from any technology, at any maturity level, in any environment.

This is the AI and cybersecurity conversation we need to have. Not the vendor version. The real one.

Where AI Is Actually Delivering Value Today

Let me start with what’s real. AI is not hype everywhere — there are two areas where it has genuinely matured and is delivering measurable value in cybersecurity today.

  • Threat Intelligence Platforms. This is where AI has earned its place without question. The volume of threat data — indicators of compromise, attack patterns, dark web signals, vulnerability disclosures — long ago exceeded what any human team could process manually. AI doesn’t just speed up threat intelligence. It makes the entire function viable at modern scale. The numbers reflect this: AI systems now detect breaches 108 days faster than traditional approaches, reducing associated costs by 43% (IBM Cost of a Data Breach Report 2024). It’s no surprise that the AI-driven threat intelligence market is projected to nearly triple from $6.3 billion in 2024 to $18.7 billion by 2029.
  • Vulnerability Management Platforms. In 2024 alone, 40,289 CVEs were published — a 39% increase from the previous year. Organizations still take an average of 55 days to patch half of their critical vulnerabilities. Against that volume and velocity, AI-powered prioritization — moving beyond CVSS scores to assess actual exploitability in your specific environment — is not a nice-to-have. It’s the only way to make rational triage decisions at scale, as the sketch below illustrates.
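To make that concrete, here is a minimal sketch of what prioritization beyond CVSS can look like. Everything in it (the Finding fields, the weights, the sample numbers) is a hypothetical illustration, not any vendor's actual model; real platforms learn these signals from exploit telemetry rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0-10.0 severity from the advisory
    exploited_in_wild: bool   # e.g. listed in CISA KEV
    internet_facing: bool     # asset exposure in YOUR environment
    asset_criticality: float  # 0.0-1.0, from your asset inventory / CMDB

def triage_score(f: Finding) -> float:
    """Blend generic severity with environment-specific exploitability.

    Hypothetical weighting: a medium CVSS on an internet-facing,
    actively exploited, business-critical asset should outrank a
    critical CVSS on an isolated lab box.
    """
    score = f.cvss_base / 10.0                            # normalize severity
    score *= 2.0 if f.exploited_in_wild else 1.0          # double if weaponized
    score *= 1.5 if f.internet_facing else 1.0            # boost exposure
    score *= 0.5 + f.asset_criticality                    # scale by business impact
    return round(score, 3)

findings = [
    Finding("CVE-2024-0001", 9.8, False, False, 0.2),  # critical, but isolated
    Finding("CVE-2024-0002", 6.5, True,  True,  0.9),  # medium, but hot
]
for f in sorted(findings, key=triage_score, reverse=True):
    print(f.cve_id, triage_score(f))
```

The design point is the multiplication: environment context can move a medium-severity CVE above a critical one, which is exactly the triage decision a static CVSS sort gets wrong.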

These two use cases share a common characteristic: AI is processing and prioritizing data at a scale and speed that humans structurally cannot match. That’s the right job for AI. And it’s genuinely working.

The Honest Practitioner Reality

Now let me tell you where we actually are in practice — because the gap between the vendor pitch and the operational reality is significant.

In my environment, AI is not yet fully integrated into our SIEM stack. Alert fatigue reduction, false positive elimination, automated response — these are capabilities we are actively working toward, not results we are currently measuring. We are investing in AI-driven automation precisely because we know the potential is real. But “potential” and “deployed and validated” are different things, and I won’t pretend otherwise.

Where we do use AI — in investigation workflows and root cause analysis — we treat its output with deliberate caution. AI can summarize faster, collect context more efficiently, and surface relevant data points in seconds that would take an analyst hours to compile. That is genuinely useful.

But for the actual analysis — the conclusion about what happened, why, and what to do about it — we check every AI recommendation. Every single one. Because in critical infrastructure, a hallucinated remediation step isn’t an inconvenience. It can take systems offline, create new vulnerabilities, or mask the original threat. The consequences are too significant to outsource that judgment to a model that cannot understand the operational context of what it’s recommending.

I’m not alone in this caution. 57% of SOC analysts report that traditional threat intelligence approaches are already insufficient against AI-accelerated attacks — yet the same analysts are being asked to trust AI outputs in their most consequential decisions. That tension is real and unresolved.

The AI Trust Boundary

After 13 years of working in security architecture, I’ve come to think about AI deployment in cybersecurity through a framework I call the AI Trust Boundary. It’s a simple but important distinction.

Below the boundary — SAFE ZONE: Collecting, sorting, aggregating, correlating, and prioritizing data.
  • AI excels here. Volume and speed are the challenge. Human judgment is not the bottleneck. Let AI do what AI does best.
Above the boundary — DANGER ZONE: Analyzing, concluding, recommending, and deciding.
  • This is where hallucinations become consequential. Where organizational context matters. Where the cost of being wrong is real. Human judgment is not optional here; it is the control.
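One way to make the boundary operational is a default-deny policy gate in front of every AI-initiated action. The sketch below is illustrative; the action names and policy table are hypothetical, and the point is the shape of the control: safe-zone actions run unattended, danger-zone actions always route to an analyst, and anything unmapped is treated as danger zone by default.

```python
from enum import Enum

class Zone(Enum):
    SAFE = "collect/sort/aggregate/correlate/prioritize"
    DANGER = "analyze/conclude/recommend/decide"

# Hypothetical mapping of AI capabilities to trust zones.
# In a real deployment this comes from an explicit, reviewed policy.
ZONE_POLICY = {
    "enrich_alert_with_threat_intel": Zone.SAFE,
    "rank_vulnerabilities": Zone.SAFE,
    "correlate_log_events": Zone.SAFE,
    "recommend_remediation": Zone.DANGER,
    "isolate_host": Zone.DANGER,
    "close_incident": Zone.DANGER,
}

def dispatch(action: str) -> str:
    """Route an AI-proposed action across the trust boundary."""
    zone = ZONE_POLICY.get(action, Zone.DANGER)  # default-deny: unknown == danger
    if zone is Zone.SAFE:
        return f"AUTO-EXECUTE: {action}"         # AI runs unattended below the boundary
    return f"QUEUE FOR ANALYST: {action}"        # human approval required above it

print(dispatch("correlate_log_events"))
print(dispatch("recommend_remediation"))
print(dispatch("delete_user_account"))           # unmapped -> treated as danger zone
```

The default matters most: a new AI capability should have to argue its way into the safe zone, not be grandfathered in.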

The problem is that most AI security tools are being deployed across both zones simultaneously, often without any explicit acknowledgment that the boundary exists. A tool that is excellent at aggregating threat intelligence gets extended to recommend remediation steps. A platform that prioritizes vulnerabilities brilliantly gets trusted to prescribe the fix. The safe-zone capability is used to justify trust in the danger-zone capability — and that’s where things go wrong.

The Wave Nobody Is Talking About

Here is what concerns me most about where this is heading.

AI will be embedded in virtually every security product in the next two to three years. SIEM platforms, EDR tools, vulnerability scanners, identity management systems, network monitoring solutions — all of them will have AI capabilities built in, often enabled by default, often with marketing language that obscures exactly where the AI Trust Boundary sits within that product.

Many organizations will deploy these products without asking the questions that matter: Where exactly is this AI making autonomous decisions? What happens when it’s wrong? Has anyone validated its output in our specific environment? What are the consequences of a hallucination here?

We are not at a point where we can fully trust AI for security decisions. I want to be precise about that statement — I am not saying AI is not good. It is a genuine game changer. But trust in any security control must be earned through validation, not assumed through marketing. And right now, the industry is moving faster than the validation.

The attack side has already noticed. AI-assisted attacks increased 72% in 2024, phishing surged 1,265% due to generative tools, and the average cost of an AI-powered breach has reached $5.72 million. Attackers have no procurement cycles, no change management processes, no security reviews before deploying new AI capabilities. They move at the speed the technology allows. Defenders do not.

The AI arms race is real. And right now, the offense has the structural advantage.

[Image: a human finger touching a robotic hand, the partnership between human judgment and AI capability. Photo by Katja Ano on Unsplash]

What Smart Organizations Are Actually Doing

The organizations navigating this well aren’t the ones with the most AI. They’re the ones with the most clarity about where AI can and cannot be trusted.

They define their AI Trust Boundary explicitly. Before deploying any AI security capability, they ask: is this tool operating in the safe zone or the danger zone? Data aggregation and prioritization — deploy with confidence. Analysis and autonomous remediation — deploy with oversight, validation, and clear human escalation paths.

They measure before they trust. Every AI tool gets a baseline and a validation framework before it influences decisions. If you cannot show the impact in numbers — time saved, accuracy rate, false positive reduction — you cannot justify trusting its output. IBM’s data shows organizations using security AI extensively save an average of $2.2 million in breach costs compared to those without. But that result requires knowing your AI is actually working, not just assuming it is.
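A validation framework does not have to be elaborate to be honest. Here is a minimal sketch of the measurement step, assuming analysts label the disposition of every AI alert during a pilot; the numbers and field names are illustrative, not drawn from any real deployment.

```python
def validation_report(dispositions: list[bool], baseline_fp_rate: float) -> dict:
    """Summarize analyst-labeled AI alert dispositions against a pre-AI baseline.

    dispositions: True = analyst confirmed the AI alert/recommendation
                  was correct, False = false positive.
    baseline_fp_rate: false-positive rate measured BEFORE the AI tool
                      influenced any decision (the 'measure first' step).
    """
    total = len(dispositions)
    true_pos = sum(dispositions)
    fp_rate = (total - true_pos) / total
    return {
        "alerts_reviewed": total,
        "precision": round(true_pos / total, 3),
        "fp_rate": round(fp_rate, 3),
        "fp_reduction_vs_baseline": round(baseline_fp_rate - fp_rate, 3),
    }

# Illustrative numbers only: 200 AI alerts reviewed during a pilot,
# 154 confirmed correct, against a 40% pre-deployment false-positive rate.
print(validation_report([True] * 154 + [False] * 46, baseline_fp_rate=0.40))
```

If a tool cannot produce the last line of that report, it has not yet earned a seat in the decision path.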

They train their teams to interrogate AI output. Not to distrust it — to evaluate it. A security analyst who knows how to pressure-test an AI recommendation is more valuable than one who simply executes it. That skill needs to be deliberately built, not assumed.

They keep the human layer strong. As I wrote in my previous article, the human attack surface is the one no technology fully protects. As AI-powered attacks grow more sophisticated, the organizations that invest equally in human resilience — not just AI tools — will be the ones still standing.

AI is not your cybersecurity savior. It is a powerful, double-edged capability that is reshaping both sides of a conflict that was already asymmetric. The organizations that treat it with clear eyes — deploying it confidently below the AI Trust Boundary, and with rigorous human oversight above it — are the ones building defenses that will actually hold.

The vendor who promised me zero false positives, by the way, is no longer in our procurement pipeline.

I write weekly about cybersecurity, AI, and the human psychology that connects them. If this resonated with you, follow me here on Medium and connect with me on LinkedIn.

Where do you draw your AI Trust Boundary? I’d love to hear how your organization is navigating this in the comments.


