In this innovative approach, AI models don't merely provide insights—they act as decision-makers. They analyze, react, and occasionally operate independently. This transition necessitates a complete reevaluation of how we perceive risk, establish trust, and shield digital infrastructures.
From Logic to Learning: The Framework Has Shifted
Traditionally, enterprise software was constructed in layers: infrastructure, data, logic, and presentation. Now, a new layer has emerged in the architecture—the model layer. It's dynamic, probabilistic, and progressively crucial to how applications perform.
Jeetu Patel, EVP and GM of security and collaboration at Cisco, explained this transformation during a recent discussion: “We are striving to create highly predictable enterprise applications atop a stack that is inherently unpredictable.”
This unpredictability isn't a defect—it's a characteristic of large language models and generative AI. However, it complicates conventional security presumptions. Models don't consistently yield identical outputs from the same inputs. Their actions can vary with new data, fine-tuning, or environmental stimuli. This variability makes them more challenging to protect.
AI Is the New Vulnerability
As AI assumes a greater role in application processes, it also becomes a more appealing target. Threat actors are already exploiting weaknesses via prompt injection, jailbreaks, and system prompt extraction. With models being trained, shared, and fine-tuned at unprecedented speeds, security measures find it hard to keep pace.
Patel highlighted that most enterprises require six to nine months to verify a model, yet models themselves might only be applicable for three to six months. The calculations don't add up.
Moreover, the proliferation of models results in increased inconsistency—each with differing safety thresholds, behaviors, and safeguards. This quilt of protections leaves gaps. Patel contended that the sole path forward is “a unified substrate for security and safety across all models, agents, applications, and clouds.”
Runtime Constraints and Algorithmic Validation
Considering the velocity and complexity of contemporary threats, traditional QA practices fall short. Patel stressed that red teaming must evolve into something automated and algorithmic. Security must transition from sporadic evaluations to ongoing behavioral verification.
He depicted one such method as “the game of 1,000 questions”—an automated inquiry technique that scrutinizes a model's responses for indications of compromise. This type of adaptive red teaming exposes how models might be manipulated into unsafe conduct through indirect or deceptive prompts. “We successfully breached DeepSeek 100% of the time with the top 50 benchmark prompts,” he mentioned, “while OpenAI managed to break 26% of the time.”
Such differential risk underscores the necessity for a standardized, cross-model framework for runtime enforcement. Models cannot be treated as black boxes—they must be observed, validated, and directed in real time.
Agent AI: When Models Operate Alone
The hazard extends beyond outputs. With the emergence of agent AI—where models autonomously complete tasks, call APIs, and interact with other agents—the intricacy amplifies. Security must now consider autonomous systems that make choices, communicate, and execute code without human oversight.
Patel cautions that inter-agent communication introduces new threat pathways, as models exchange data and instructions among themselves. Without supervision, these exchanges could escalate vulnerabilities or mask malicious activities.
This progression is accelerating. By next year, we may witness widespread deployment of agents executing multi-step workflows with minimal human input. Securing these systems will demand a mix of visibility, behavioral heuristics, and real-time enforcement—at a scale the sector has never encountered before.
“As AI grows more intelligent and self-reliant, the stakes for maintaining its security rise significantly. We must alter our perspective on risks and act more swiftly than before,” warned Russell Fishman, senior director, global head of solutions product management for AI and modern workloads at NetApp. “This involves focusing closely on data lineage—ensuring we have clarity into, protection of, and faith in the data used to fine-tune and retrain models, as well as the information driving real-time inference. By tracing and securing this entire ‘chain of trust,’ we can reduce the risks linked to subpar agent responses and shield against increasingly sophisticated attack vectors."
A Case for Unified Infrastructure and Open Collaboration
Patel warns that if each model, platform, and business deploys its distinct security framework, we're heading toward disorder. What's required is a unified infrastructure—a neutral, interoperable base for AI security that spans clouds, vendors, and models.
Acknowledging this, Cisco unveiled the introduction of Foundation AI at RSAC 2025—a significant move toward democratizing AI security.
Foundation AI is marketed as the first open-source reasoning model explicitly crafted to boost security applications. By making this model publicly accessible, Cisco intends to cultivate a community-driven strategy for securing AI systems, inspiring cooperation across the industry to tackle the intricate challenges posed by AI integration.
The launch of Foundation AI symbolizes a broader industry tendency toward open collaboration in AI security. By participating in the open-source community, Cisco is not only addressing the immediate security concerns tied to AI but also establishing a precedent for other entities to follow in fostering transparency and joint problem-solving in the AI era.
The Human Element: Judgment Remains Vital
Even with AI's prowess, it doesn't supplant human intuition. Patel emphasized that even advanced models find it difficult to mimic instinct, subtlety, and non-verbal reasoning. “Most of the things you and I engage in,” he stated, “involve some degree of data—but then a lot of judgment.”
The finest systems will be those that complement human expertise, not replace it. We still require individuals to pose the right questions, interpret the right signals, and make the right decisions—especially when AI's recommendations venture into ambiguous territories.
Much like using GPS in a city you already know, humans must retain the capacity to confirm, override, and refine machine-generated suggestions. AI should be a co-pilot, not an autopilot.
Redefining Trust in the Era of Intelligence
As organizations integrate intelligence further into their systems, they must also instill trust. This entails constructing models that are accountable. It means validating behavior consistently, not just at launch. And it means collaborating—across businesses, disciplines, and platforms—to ensure that AI strengthens security without becoming its own liability.
Fishman concluded, "Real-time monitoring, smarter guardrails, and cross-sector collaboration—with transparency at every stage—are vital to building trust in AI and safeguarding our digital realm."
AI is already reshaping the cybersecurity landscape. The query is whether we can fortify that transformation in time. The intelligence layer is here. It's potent. And it's susceptible.
Now is the moment to reimagine what security looks like when intelligence is omnipresent.
The above is the detailed content of Why AI Is The New Cybersecurity Battleground. For more information, please follow other related articles on the PHP Chinese website!

Hot AI Tools

Undress AI Tool
Undress images for free

Undresser.AI Undress
AI-powered app for creating realistic nude photos

AI Clothes Remover
Online AI tool for removing clothes from photos.

Clothoff.io
AI clothes remover

Video Face Swap
Swap faces in any video effortlessly with our completely free AI face swap tool!

Hot Article

Hot Tools

Notepad++7.3.1
Easy-to-use and free code editor

SublimeText3 Chinese version
Chinese version, very easy to use

Zend Studio 13.0.1
Powerful PHP integrated development environment

Dreamweaver CS6
Visual web development tools

SublimeText3 Mac version
God-level code editing software (SublimeText3)

Hot Topics

Investing is booming, but capital alone isn’t enough. With valuations rising and distinctiveness fading, investors in AI-focused venture funds must make a key decision: Buy, build, or partner to gain an edge? Here’s how to evaluate each option—and pr

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Heading Toward AGI And

Remember the flood of open-source Chinese models that disrupted the GenAI industry earlier this year? While DeepSeek took most of the headlines, Kimi K1.5 was one of the prominent names in the list. And the model was quite cool.

Let’s talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who h

By mid-2025, the AI “arms race” is heating up, and xAI and Anthropic have both released their flagship models, Grok 4 and Claude 4. These two models are at opposite ends of the design philosophy and deployment platform, yet they

For example, if you ask a model a question like: “what does (X) person do at (X) company?” you may see a reasoning chain that looks something like this, assuming the system knows how to retrieve the necessary information:Locating details about the co

Clinical trials are an enormous bottleneck in drug development, and Kim and Reddy thought the AI-enabled software they’d been building at Pi Health could help do them faster and cheaper by expanding the pool of potentially eligible patients. But the

The Senate voted 99-1 Tuesday morning to kill the moratorium after a last-minute uproar from advocacy groups, lawmakers and tens of thousands of Americans who saw it as a dangerous overreach. They didn’t stay quiet. The Senate listened.States Keep Th
