Your organisation passed every audit. Met every compliance requirement. Implemented NIST CSF, ISO 27001, and CIS Controls. And you still got breached through your AI systems.
You're not alone.
According to IBM's Cost of a Data Breach Report 2025, 23.77 million secrets were leaked through AI systems in 2024—a 25% increase from the previous year. The compromised organisations had comprehensive security programs. They followed the frameworks. The frameworks just don't cover this.
Here's the uncomfortable truth: the security frameworks we've relied on for decades weren't designed for AI. They're comprehensive for traditional systems. But AI introduces attack surfaces that don't map to existing control categories—and security teams are finding out the hard way.

The Compliance Illusion:
When Passing Audits Doesn't Mean You're Secure
Let me be direct about something I've seen repeatedly across implementations in the African continent: compliance doesn't equal protection when it comes to AI systems.
The major frameworks organisations depend on—NIST Cybersecurity Framework 2.0, ISO 27001:2022, and CIS Controls v8—were developed when the threat landscape looked completely different. These frameworks focus on traditional asset protection, information security controls, and endpoint security. None of them provide specific guidance on AI attack vectors.
Why? Because these attack vectors didn't exist when the frameworks were written.
Consider this breakdown:
Framework
NIST CSF 2.0
ISO 27001:2022
CIS Controls v8
Release Year
2024
2022
2021
AI-Specific Controls
None
None
None
Primary Focus
Traditional asset protection
Information security management
Endpoint and access controls
From my experience implementing security frameworks for healthcare AI systems protecting million patient records, I discovered this gap firsthand. We could tick every box on the compliance checklist and still leave critical AI-specific vulnerabilities completely unaddressed. (Related: AI Risk Management Fundamentals)
The HiddenLayer AI Threat Landscape Report confirms this pattern: 74% of organizations in 2024 reported at least one AI-related breach. That's nearly three-quarters of companies—many of whom passed their security audits with flying colors.
What Makes AI Systems Fundamentally Different
Traditional software operates in predictable ways. You give it an input, it produces a defined output. Security controls assume this predictability—detect the bad input, block the malicious code, and log the unauthorized access.
AI systems don't work like that.
They learn from data. They adapt over time. They make probabilistic decisions rather than following deterministic logic. And they can be manipulated through entirely new attack methods that traditional controls weren't designed to catch.

Here's a practical comparison:
Traditional Application Security:
- Inputs are structured and validated
- Behaviour is deterministic
- Vulnerabilities are in the code
- Attacks exploit technical flaws
AI System Security:
- Inputs include natural language and unstructured data
- Behaviour is probabilistic and evolving
- Vulnerabilities exist in models, data, and training processes
- Attacks exploit reasoning and intent
This fundamental difference creates a massive blind spot in traditional security programs. For a deeper understanding of these risks, see our guide on Understanding AI Security Threats.
The Attack Vectors Your Frameworks Don't Cover
When security teams try to map AI threats to existing control families, they hit a wall. The attacks simply don't fit.

Prompt Injection: Bypassing Access Controls Entirely
Every major framework includes access control requirements. NIST CSF covers identity management. ISO 27001 addresses logical access controls. CIS Controls mandate strict access permissions.
But here's the problem: prompt injection attacks manipulate AI behavior through carefully crafted natural language input, bypassing authentication entirely.
The attacker doesn't need credentials. They don't need to breach your network. They craft language that causes the AI to execute unintended actions—and your access controls never trigger because no unauthorized "access" occurred.
In 2025, researchers documented a zero-click vulnerability in Microsoft 365 Copilot called EchoLeak. Attackers embedded hidden instructions inside ordinary emails. When recipients interacted with Copilot, the system retrieved and executed those instructions as part of its normal workflow. No exploit code. No network breach. The AI simply followed what it interpreted as a legitimate command.
Traditional frameworks offer zero guidance on defending against this. Learn more about prompt injection in our article on OWASP LLM Top 10 Vulnerabilities.
Model Poisoning: Corruption During "Authorized" Processes
System integrity controls are a fundamental component of every security framework. They focus on detecting malware and preventing unauthorised code execution.
But model poisoning happens during the authorised training process. An attacker doesn't need to breach your systems—they corrupt the training data, and the AI learns malicious behaviour as part of normal operation.
Your integrity controls never fire because nothing "unauthorized" happened. The training process worked exactly as designed. It just learned the wrong things.
According to Trend Micro's State of AI Security Report, organizations are increasingly vulnerable to these attacks as AI adoption accelerates without corresponding security measures.
Adversarial Attacks: When Inputs Look Normal But Aren't
Configuration management and input validation controls assume that malicious inputs can be identified and blocked. But adversarial attacks exploit mathematical properties of machine learning models using inputs that look completely normal to humans and traditional security tools yet cause models to produce incorrect or dangerous outputs.
An image that humans see as a stop sign, but the AI identifies as a speed limit sign. A financial transaction that appears legitimate but is specifically crafted to evade fraud detection. These attacks succeed not because they bypass controls, but because the controls can't detect what's happening.
MITRE ATLAS (Adversarial Threat Landscape for AI Systems) documents these attack techniques extensively—and most organisations haven't even heard of it.
AI Supply Chain Attacks: Risks Beyond Your Bill of Materials
Traditional supply chain risk management (the SR control family in NIST SP 800-53) focuses on vendor assessments, contract security requirements, and software bill of materials.
AI supply chains include additional components these controls don't address:
- Pre-trained models downloaded from public repositories
- Training datasets sourced from multiple providers
- ML frameworks with their own dependencies and risks
But how do you validate the integrity of model weights? How do you detect if a pretrained model has been backdoored? How do you assess whether a training dataset has been poisoned?
The frameworks don't provide guidance because these questions didn't exist when the frameworks were developed.
The December 2024 Ultralytics AI library compromise demonstrates this perfectly. Attackers didn't exploit a missing patch or weak password. They compromised the build environment itself, injecting malicious code after code review but before publication. Traditional software supply chain controls couldn't detect this type of manipulation.
For guidance on managing AI supply chain risks, see Data Privacy & AI Security.
The Detection Problem:
You Can't Find What You're Not Looking For
According to IBM's 2025 data, organisations take an average of 276 days to identify a breach and another 73 days to contain it.
For AI-specific attacks, detection times are potentially even longer. Why? Security teams lack established indicators of compromise for these novel attack types. Your SIEM isn't configured to alert on prompt injection attempts. Your vulnerability scanners don't check for model poisoning. Your threat intelligence feeds don't include AI-specific attack signatures.
Meanwhile, the attack surface keeps expanding. Sysdig's research shows a 500% surge in cloud workloads containing AI/ML packages in 2024. Most security teams can't even inventory the AI systems in their environment, let alone apply AI-specific controls.
From my experience implementing AI security across healthcare operations in multiple African countries, I've observed that organisations typically discover AI-related issues through operational failures—not security detection. A customer service chatbot starts giving incorrect answers. A recommendation engine produces obviously wrong suggestions. By then, the damage is often already done.
Related reading: AI Security Operations Best Practices.
Why Frameworks Haven't Caught Up (And When They Might)
Security frameworks evolve slowly by design. They require industry consensus, public comment periods, and careful vetting. This deliberate approach ensures stability and broad applicability.
But AI isn't waiting.
The Regulatory Landscape:
The EU AI Act, which took effect in 2025, imposes penalties up to €35 million or 7% of global revenue for serious violations. It addresses AI-specific risks that traditional frameworks ignore. For a complete breakdown, see our EU AI Act Compliance Guide.
NIST has been working to close the gap:
- The AI Risk Management Framework (released January 2023) provides guidance for trustworthy AI
- The Generative AI Profile (released July 2024) addresses specific GenAI risks
- The draft Cybersecurity Framework Profile for AI (released December 2025) maps AI considerations to CSF 2.0
But here's the critical issue: NIST AI RMF is not yet integrated into the primary security frameworks that drive organisational security programs. It's voluntary guidance, not required controls. Organisations that follow only NIST CSF, ISO 27001, or CIS Controls won't automatically implement AI-specific protections.
For a detailed walkthrough, read Understanding the NIST AI RMF.
What Security Teams Should Do Now
Waiting for frameworks to catch up means responding to breaches instead of preventing them. Based on my AI/ML, CISA, and CDPSE certification training combined with practical implementation experience, here are the steps that actually matter:

1. Conduct an AI-Specific Risk Assessment
Don't try to force AI risks into your existing risk assessment templates. Create a separate assessment that addresses:
- AI model inventory (including shadow AI and unauthorized deployments)
- Training data sources and integrity verification
- Model supply chain dependencies
- Prompt injection attack surfaces
- Integration points between AI systems and critical data
The IBM Cost of a Data Breach Report 2025 found that 97% of organizations that suffered AI-related breaches lacked proper AI access controls. You can't control what you don't know exists.
For a step-by-step guide, see How to Conduct an AI Security Assessment.
2. Implement AI-Specific Security Controls
Go beyond framework requirements. Add controls specifically designed for AI threats:
For Prompt Injection:
- Input sanitization for natural language inputs
- Output filtering and validation
- Behavioral monitoring for anomalous AI responses
For Model Integrity:
- Model versioning and provenance tracking
- Integrity verification before deployment
- Continuous monitoring for model drift or unexpected behavior changes
For AI Supply Chain:
- Create an AI Bill of Materials (AI-BOM) that includes models, datasets, and dependencies
- Verify model sources and implement allowlists for approved repositories
- Scan training data for poisoning indicators
Google's Secure AI Framework (SAIF) provides additional guidance on implementing these controls at scale.
3. Update Detection Capabilities
Your current security monitoring probably has AI blind spots. Address them:
- Add AI-specific indicators of compromise to your detection rules
- Monitor for prompt injection patterns in logs and telemetry
- Establish baselines for AI system behavior to detect anomalies
- Include AI systems in your vulnerability management program
Related: Building AI-Aware Security Operations.
4. Build AI Security Knowledge Within Your Team
The Cisco 2025 Cybersecurity Readiness Index found that 86% of business leaders with cyber responsibilities reported at least one AI-related incident over the past 12 months. Yet most security teams lack specialized AI security training.
Resources to build capability:
- NIST AI RMF and associated guidance documents
- MITRE ATLAS (adversarial threat landscape for AI systems)
- OWASP LLM Top 10 for large language model vulnerabilities
- Google's Secure AI Framework (SAIF)
5. Prepare for Regulatory Compliance
Even if traditional frameworks don't require AI security controls, regulations increasingly do. Prepare for:
- EU AI Act requirements (particularly for high-risk AI systems)
- Industry-specific regulations addressing AI (healthcare, finance, critical infrastructure)
- Emerging state and national AI governance requirements
For healthcare-specific guidance, see AI Security in Healthcare.
The Business Case for Acting Now
Organizations using AI and automation extensively in security operations saved an average of $1.9 million in breach costs and reduced the breach lifecycle by 80 days, according to IBM's 2025 report.
But there's a catch: AI adoption is outpacing AI governance. IBM found that 63% of organizations lack policies for AI use. Those lacking governance pay more when AI-related incidents happen.
The math is straightforward: invest in AI security now, or pay significantly more when breaches occur later.
Related: Building the Business Case for AI Security Investment.
A Framework Gap, Not a Framework Failure
Let me be clear: traditional security frameworks aren't wrong. They're incomplete.
NIST CSF, ISO 27001, and CIS Controls remain essential for protecting traditional systems. The controls they mandate work exactly as designed—for the systems they were designed to protect.
The problem is that AI systems have evolved beyond what those frameworks anticipated. Organizations that fully meet framework requirements remain fundamentally vulnerable to an entire category of threats the frameworks don't address.
Security approaches need to change with the threat landscape. Not because current frameworks are inadequate for what they were designed to protect, but because the systems being protected have changed.
Key Takeaways
- 23.77 million secrets leaked through AI systems in 2024—even from compliant organizations
- Traditional frameworks (NIST CSF, ISO 27001, CIS Controls) don't cover AI-specific attack vectors
- Prompt injection, model poisoning, adversarial attacks, and AI supply chain risks require controls beyond current framework requirements
- 97% of organisations with AI-related breaches lacked proper AI access controls
- Detection gaps mean AI-specific attacks often go unnoticed for extended periods
- Regulatory pressure is increasing, with EU AI Act penalties reaching €35 million or 7% of revenue
- Organisations should act now—conduct AI-specific risk assessments, implement AI security controls, and build team capabilities
The threat landscape has fundamentally changed. Your security program needs to change with it.
Frequently Asked Questions
Does ISO 27001 certification protect against AI-specific threats?
ISO 27001:2022 provides a comprehensive information security management system, but it doesn't include specific controls for AI attack vectors like prompt injection or model poisoning. Organisations can extend their ISMS to address AI risks, but the standard itself doesn't require it.
What's the difference between NIST CSF and NIST AI RMF?
NIST CSF (Cybersecurity Framework) addresses general cybersecurity risk management across organisations. NIST AI RMF (AI Risk Management Framework) specifically addresses risks associated with AI systems, including trustworthiness, bias, and AI-specific security concerns. They're complementary but serve different purposes. See our detailed comparison: NIST CSF vs NIST AI RMF.
How do I convince leadership to invest in AI security beyond compliance requirements?
Focus on the business case: IBM's 2025 data shows organizations with extensive AI security automation saved $1.9 million per breach and reduced response time by 80 days. Also highlight regulatory risks—EU AI Act penalties can reach €35 million or 7% of global revenue.
What's the most urgent AI security control to implement?
Start with visibility. Create an AI inventory that includes sanctioned systems, shadow AI deployments, and AI-powered features in existing tools. You can't protect what you don't know exists. IBM found that 97% of AI-breached organizations lacked proper AI access controls—which starts with knowing where AI is used.
Are there certifications for AI security professionals?
Several certifications address AI security aspects, including ISACA's CDPSE (Certified Data Privacy Solutions Engineer) which covers AI privacy considerations, and emerging certifications focused specifically on AI risk management. NIST AI RMF training is also becoming widely available.
About me
Patrick D. Dasoberi

Patrick D. Dasoberi is the founder of AI Security Info and a certified cybersecurity professional (CISA, CDPSE) specialising in AI risk management and compliance. As former CTO of CarePoint, he operated healthcare AI systems across multiple African countries. Patrick holds an MSc in Information Technology and has completed advanced training in AI/ML systems, bringing practical expertise to complex AI security challenges.