Point of View

Cybersecurity for AI starts now because models are the new attack surface

Home » Research & Insights » Cybersecurity for AI starts now because models are the new attack surface

AI is no longer a lab experiment—it’s making real-world decisions in finance, healthcare, HR, and critical infrastructure. But while these systems scale in influence, their security posture hasn’t caught up. Enterprises continue to treat AI like an internal application rather than a frontline risk.

This concern is no longer just theoretical. From prompt injection to model inversion, attackers are already targeting AI systems to exfiltrate data, manipulate outcomes, or bypass controls. AI has become a new attack surface and must be secured as such, with the same urgency applied to your cloud, network, or endpoints.

This PoV reframes AI not as a technical asset to be optimized but as a digital surface that must be defended, governed, and monitored as tightly as your infrastructure.

AI expands the attack surface with new threat vectors

AI systems expand the attack surface far beyond traditional code. Threat actors exploit large language models (LLMs) with crafted prompts, feed adversarial inputs to perception models, and reverse-engineer deployed models to extract logic or sensitive data. Because AI relies on statistical reasoning and emergent behavior, validating and securing these systems is significantly harder than securing conventional software.

AI vulnerabilities span the full stack:

  • Model-level threats include hallucinations, policy evasion, adversarial inputs, and model inversion.
  • Data-level attacks involve poisoning training data, flipping labels, spoofing sensors, and leaking proprietary inputs.
  • Infrastructure risks range from insecure APIs to shadow AI, third-party model dependencies, and stolen models on edge devices.
  • Systemic risks such as excessive model autonomy and improper output handling compound technical exposure with operational volatility.
AI needs layered security built for emerging threats

Traditional security frameworks don’t fully map to the complexity of AI systems. To address this, enterprises need a focused approach that highlights where AI-specific risks emerge and where controls must evolve.

DEFEND: Securing AI at every layer of the stack

DEFEND is a structured model to help enterprises secure AI systems across their lifecycle — from data to deployment. It offers a focused lens to pinpoint the most critical vulnerabilities and prioritize protection before issues scale.

DEFEND breaks down the AI stack into six high-risk zones often missed by traditional security approaches — from input to interface. (see Exhibit 1):

Exhibit 1: DEFEND – the six pillars of technical security for AI systems

Source: HFS Research, 2025

  • Data integrity: Are models being trained on clean, accurate, and trusted data? Poisoned inputs, mislabeled records, or hidden manipulations in the data pipeline can compromise the model.
  • Execution control: If AI environments aren’t controlled, attackers can hijack compute, inject malicious code, or bypass checks — corrupting the system undetected.
  • Federated learning: If models train across devices or organizations, how to ensure bad actors don’t poison updates or leak sensitive data through model gradients?
  • Endpoint hardening: Are the APIs, model endpoints, and interfaces exposed? These are frequent entry points for prompt injections, unauthorized queries, or model exfiltration attacks.
  • Network vigilance: How can we find out what the AI system is doing across the network? From calling external services to leaking responses, real-time visibility and traffic inspection are critical.
  • Dynamic audit trails: How to prove what the AI system did, when, and why? Logging and traceability aren’t optional when things go wrong—they’re your only way to show accountability.

Each of these pillars represents a common exposure point across the AI lifecycle. By addressing them early, teams can reduce security blind spots and close off paths attackers are likely to exploit.

AI governance can’t be left to security alone

Most enterprises assume their security teams can extend existing controls to cover AI. However, that’s a dangerous oversimplification. AI introduces decisions with legal, ethical, and reputational consequences that can’t be managed with technical safeguards alone. It demands governance that defines who owns the risk and not just who owns the infrastructure.

That’s where GUARD comes in: a model to translate responsible AI principles into concrete oversight.

GUARD translates Responsible AI principles into oversight structures, ensuring AI systems are not only secured but used ethically. It helps leaders define risk ownership and operational governance as AI drives more decisions with legal and reputational consequences. (see Exhibit 2).

Exhibit 2: GUARD model – the five layers of operational governance for
responsible AI

Source: HFS Research, 2025

  • Governance structures: Clear policies and oversight to define how AI is governed and by whom.
  • User access control: Define and restrict who can build, deploy, or modify models based on role and accountability.
  • Adversarial robustness: Regular testing to expose and defend against prompt injection, jailbreaking, or adversarial attacks.
  • Responsibility mapping: Assign accountability across teams for model behavior, failures, and decisions.
  • Disclosure and compliance: Maintain transparent records, meet regulatory requirements, and explain AI decisions to auditors.

DEFEND and GUARD give enterprises a practical starting point for securing AI systems technical and governance perspectives. DEFEND focuses on securing the underlying systems, while GUARD ensures those systems are used responsibly and accountably.

While most security teams are focused on infrastructure, AI introduces new risk classes such as opaque decision logic and model misuse. Addressing these risks demand business and compliance oversight, not just technical safeguards.

AI governance needs structure, not assumptions

The Pulse Study, also states that 64% of enterprise leaders consider cybersecurity the most value-generating digital capability. This positions AI security not just as a risk management necessity but as a strategic differentiator.

AI systems must be secured not only through technical measures but also through enterprise-level governance. GRC frameworks provide the structure to manage AI-related risk responsibly and in compliance with emerging regulations.

This means going beyond internal safeguards by maintaining traceability across model behavior, data usage, and controls — including those in third-party AI environments.

According to the HFS Pulse Study, 2025, 39% of enterprise leaders rank data security and data reliability as the #1 most critical SLA for AI-driven service models.

Key areas include:

  • Conducting risk assessments tailored to AI architectures and use cases
  • Documenting model development, training data lineage, and inference pipelines
  • Aligning with industry frameworks such as NIST AI RMF and ISO/IEC 42001
  • Mapping models to risk tiers and maintaining audit trails for transparency
AI ethics is now a security risk

Ethical failures such as bias, opacity, and decision misuse can trigger regulatory scrutiny, reputational damage, and customer mistrust.

A clear lesson from the public domain is that AI-driven surveillance and predictive policing demonstrate how unchecked algorithms can reinforce systemic bias. Enterprises remain accountable when flawed AI influences hiring, access, or decisions they didn’t directly control.

For CISOs, CIOs, and risk leaders, this means governance isn’t just about compliance—it’s about control. If an AI system can’t explain its decisions, it can’t defend them. And when things go wrong, leadership must bear the impact.

To mitigate this, enterprises must act on three fronts:

  • Testing models for bias and fairness
  • Building explainability into high-impact AI decisions
  • Defining clear accountability for autonomous actions

As AI continues to shape decisions at scale, trust, transparency, and traceability must become pillars of your enterprise AI strategy. Ignoring these risks invites reputational collapse — especially when AI makes decisions no one can explain or justify.

The Bottom Line: If your AI makes decisions, it’s already part of your attack surface.

Securing AI isn’t about wrapping old controls around new systems. It requires embedded security, operational guardrails, and enterprise-wide governance—from data integrity to decision accountability.

Organizations that act now will build scalable trust into AI-driven operations. Those that delay will inherit systems they can’t fully explain, control, or defend and will pay the price when something breaks.

Sign in to view or download this research.

Login

Register

Insight. Inspiration. Impact.

Register now for immediate access of HFS' research, data and forward looking trends.

Get Started

Logo

confirm

Congratulations!

Your account has been created. You can continue exploring free AI insights while you verify your email. Please check your inbox for the verification link to activate full access.

Sign In

Insight. Inspiration. Impact.

Register now for immediate access of HFS' research, data and forward looking trends.

Get Started
ASK
HFS AI