AI Compliance by Industry

Why Universal AI Frameworks Fail — And How to Build Industry-Specific Compliance That Actually Works

By Patrick Dasoberi, CISA, CDPSE | Former CTO, CarePoint | Subject Matter Expert, Ghana Ethical AI Framework

"Universal AI compliance frameworks are a myth. The organisations that succeed treat compliance as industry-specific engineering, not checkbox exercises."

I learned this lesson the hard way. As CTO of CarePoint, I operated healthcare AI systems across Ghana, Nigeria, Kenya, and Egypt — four countries, four different regulatory frameworks, and one brutal reality: the same AI system that worked in Accra had to be technically reconfigured in Nairobi and contractually restructured in Lagos. Egypt added encryption and reporting obligations that didn't exist anywhere else.

But here's what made it even more complex: our healthcare systems didn't just handle clinical data. A single telemedicine consultation could trigger clinical documentation, mobile money payment, insurance claims, and prescription fulfilment — each workflow invoking different regulatory regimes across health, finance, telecoms, and insurance.

This experience — combined with my work as a subject matter expert on Ghana's Ethical AI Framework with the Ministry of Communications and UN Global Pulse — taught me something that most compliance frameworks miss entirely: AI compliance isn't about mastering one industry's rules. It's about understanding why those rules exist, where they conflict, and how to translate between regulatory philosophies when your AI system crosses sector boundaries.

This guide is for compliance officers encountering AI governance for the first time, CTOs navigating multi-sector regulatory requirements, and founders expanding into regulated industries. Whether you're in healthcare, financial services, or government, or managing AI that touches multiple sectors, you'll find practical frameworks drawn from real deployments — not theoretical abstractions.

The Cross-Industry Compliance Problem Nobody Talks About

[Figure: three compliance conflicts: healthcare vs finance on data minimisation, government vs healthcare on explainability requirements, and telecom vs civil society on metadata sensitivity]

Most AI compliance guidance assumes you're operating in a single, clearly-defined industry. The reality is messier. Modern AI systems don't respect neat regulatory boundaries — they process data and make decisions that span multiple sectors simultaneously.

 
Consider a telemedicine platform in West Africa. When a patient books a consultation, the system handles:

  • Clinical data (highest sensitivity, governed by health data protection laws)
  • Financial data (extreme fraud-prevention scrutiny, central bank regulations)
  • Mobile money transactions (regulated under telecommunications and payment system frameworks)
  • Insurance claims (separate regulatory regime with its own data requirements)

Ensuring compliance across all four simultaneously is one of the hardest challenges in Africa's digital health ecosystem — and it's a challenge that's becoming universal as AI systems grow more interconnected.
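To make that overlap concrete, here is a minimal sketch in Python (purely illustrative, not CarePoint's actual code) of tagging each data element from a consultation booking with the regulatory regimes that govern it, so that retention, consent, and encryption controls can be applied per regime. The regime names and booking fields are assumptions for the example.

# A purely illustrative sketch of tagging consultation data elements with the
# regulatory regimes that govern them, so controls can be applied per regime.
from dataclasses import dataclass
from enum import Enum, auto

class Regime(Enum):
    HEALTH = auto()      # health data protection law
    FINANCE = auto()     # central bank / payment regulation
    TELECOM = auto()     # telecoms and mobile money frameworks
    INSURANCE = auto()   # insurance data requirements

@dataclass
class DataElement:
    name: str
    value: object
    regimes: set         # one element can fall under several regimes at once

def consultation_elements(booking: dict) -> list:
    """Decompose a booking payload into regime-tagged elements (field names assumed)."""
    return [
        DataElement("clinical_notes", booking.get("notes"), {Regime.HEALTH}),
        DataElement("mobile_money_ref", booking.get("payment_ref"),
                    {Regime.FINANCE, Regime.TELECOM}),
        DataElement("claim_reference", booking.get("claim_id"),
                    {Regime.INSURANCE, Regime.HEALTH}),
    ]

for element in consultation_elements({"notes": "...", "payment_ref": "MM-123", "claim_id": "C-9"}):
    print(element.name, sorted(r.name for r in element.regimes))

The useful property is that a single element, such as a mobile money reference, can carry more than one regime at once, which mirrors how these workflows actually behave.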

The Data Minimisation Paradox

Here's a concrete example of how industry compliance philosophies collide. Health data regulations typically require rich clinical detail for proper patient care. Financial regulators want minimal data sharing to reduce breach exposure. Meanwhile, insurers demand extensive data for fraud prevention and risk assessment.

These aren't just different rules — they're fundamentally different philosophies about what data should exist and who should access it. No single "AI compliance framework" can reconcile these tensions. You need industry-specific engineering that understands why each sector approaches data the way it does.

How Different Industries Approach AI Compliance

During my work on Ghana's Ethical AI Framework, I observed major differences in maturity, priorities, and risk culture across sectors. Understanding these differences is essential for anyone building AI systems that touch multiple industries — or advising organisations that do.

Healthcare: Safety and Privacy Above All

Healthcare approaches AI compliance through a patient safety lens. The priorities are clear: protect patients from harm, ensure clinical validity, prevent bias in diagnostic models, manage cross-border data flows appropriately, and obtain meaningful informed consent.

The sector tends to be risk-averse, slow-moving, and highly regulated — for good reason. When your AI makes a diagnostic recommendation, the stakes are life and death. This creates a compliance culture that prioritises extensive validation before deployment and continuous monitoring afterward.

During Ghana's AI framework discussions, healthcare representatives pushed strongly for strict purpose limitation — the principle that patient data should only be used for what it was originally collected for. This put them directly at odds with financial services stakeholders who wanted broader allowances for historical data analysis.

Financial Services: Accuracy, Auditability, and Adversarial Resistance

Financial services approaches AI compliance through an accuracy and auditability lens. The priorities are reliable risk scoring, fraud detection that withstands adversarial pressure, audit trails that let regulators investigate discrimination claims, and minimal data sharing to reduce breach exposure.

The sector operates under close central bank scrutiny but moves faster than healthcare. Because fraudsters actively probe models for weaknesses, the compliance culture emphasises continuous monitoring, adversarial resistance, and the ability to reconstruct exactly why a model made a given decision.

During Ghana's AI framework discussions, financial services stakeholders pushed for broader allowances for historical data analysis, which put them directly at odds with healthcare representatives' insistence on strict purpose limitation.

Government: Transparency and Public Accountability

Government AI compliance operates under intense public scrutiny. The priorities are fairness in automated decision-making, transparency about how systems work, maintaining citizen trust, meeting procurement requirements, and ethical deployment of surveillance and national systems.

During Ghana's framework development, government stakeholders wanted strict explainability requirements for all high-stakes AI — particularly for public services, automated administrative decisions, and surveillance technologies. Private sector representatives pushed back, warning this would slow innovation and arguing that some models (like deep learning for medical imaging) simply cannot be fully explained.

The outcome was tiered explainability requirements based on risk level rather than absolute mandates. This compromise reflects a broader truth: government AI compliance must balance transparency demands with practical limitations of modern AI systems.

Telecommunications: Scale, Security, and Reliability

Telecommunications approaches AI compliance through an infrastructure lens. The priorities are identity verification at massive scale, secure data storage, preventing service outages, and AI-enabled fraud detection.

One of the most contentious debates during Ghana's framework involved telco metadata. Telecommunications companies argued that metadata — call records, location data, usage patterns — was low sensitivity and should be flexibly available for AI model training in fraud detection and churn prediction.

Civil society groups strongly disagreed, pointing out that metadata reveals movement patterns, behavioural predictions, and household identity clusters. The framework ultimately classified metadata as regulated personal data, with specific obligations for model explainability when using telco-derived features. This was a significant shift from the industry's preferred position.

The Five Compliance Translation Failures I See Repeatedly

[Figure: five common compliance translation failures: treating healthcare AI like fintech, applying government controls to commercial settings, assuming GDPR applies everywhere, relying on ISO 27001 without AI controls, and forcing explainability everywhere]

When organisations try to apply one industry's AI framework to another, they make predictable mistakes. Here are the five most common failures I've encountered across deployments in multiple countries and sectors.

Failure 1: Treating Healthcare AI Like Fintech AI

The mistake: Over-prioritising fraud detection models while under-prioritising clinical safety and model validation. This happens when organisations bring fintech compliance thinking into healthcare environments without adaptation.

The outcome: Models perform well on risk scoring but poorly on diagnostic reliability. You end up with AI that's excellent at flagging suspicious billing patterns but dangerous when making clinical recommendations. In healthcare, a false negative on fraud is a financial problem. A false negative on diagnosis can be fatal.

Failure 2: Applying Government Procurement Controls to Commercial Environments

The mistake: Implementing heavy documentation requirements and rigid approval processes designed for government AI procurement in fast-moving commercial settings.


The outcome: Innovation slows to a crawl. AI deployment becomes impossible for SMEs. And here's the real danger — teams start bypassing controls entirely because compliance becomes an obstacle rather than a safeguard. Government-style controls work in government because the procurement cycle expects them. In commercial environments, they create workarounds and shadow AI.

Failure 3: Assuming GDPR-Style Requirements Apply Everywhere

The mistake: Assuming all African (or Asian, or Latin American) countries have identical privacy requirements because they've all enacted data protection laws.

The reality from my four-country experience: Ghana, Kenya, Nigeria, and Egypt differ significantly. Consent rules are not identical. Cross-border transfer requirements vary. Health data categories are defined differently. What qualifies as "sensitive" data changes between jurisdictions.


The outcome: Non-compliance through misalignment. Organisations either over-collect data (creating unnecessary risk) or under-collect (limiting AI functionality). I've seen systems designed for Nigerian NDPR requirements fail compliance audits in Kenya because the assumptions about cross-border data transfers didn't translate.

Failure 4: Applying ISO 27001 Without AI-Specific Controls

The mistake: Treating ISO 27001 certification as sufficient AI security without addressing model-specific risks.

ISO 27001 is excellent for traditional information security, but it wasn't designed for machine learning systems. It doesn't address adversarial attacks on models, model drift over time, dataset lineage and provenance, or AI-specific incident response.

The outcome: Attacks on ML models go undetected because traditional security monitoring isn't looking for them. Bias increases over time as models drift. Regulatory exposure increases because you can't demonstrate the controls regulators are starting to expect for AI systems.

Failure 5: Forcing Explainability Where Accuracy Matters More

The mistake: Mandating full explainability for every AI system, regardless of risk level or context, because one sector (typically government) demands it.

The reality: Some high-performing models, such as deep learning for medical imaging, simply cannot be fully explained. Blanket explainability mandates push teams toward weaker but interpretable models even where accuracy is the overriding concern.

The outcome: Performance suffers exactly where it matters most, and innovation slows without a corresponding gain in accountability. Ghana's framework avoided this trap by adopting tiered explainability requirements based on risk level rather than absolute mandates.

Multi-Jurisdictional AI Compliance: A Four-Country Case Study

[Figure: comparison cards for Ghana, Nigeria, Kenya, and Egypt showing each country's data protection requirements and the system modifications needed]

Let me walk you through exactly how compliance requirements for the same clinical decision support system differed across Ghana, Nigeria, Kenya, and Egypt. This isn't theoretical — these are modifications we actually implemented.

Ghana: Explicit Consent and Data Minimisation

Ghana's Data Protection Act required explicit patient consent and minimal data collection necessary for clinical use. Local storage was required unless specific conditions were met for cross-border transfer.

Our response: We configured the CDSS to collect fewer metadata fields in Ghana than in other deployments. The consent workflow was more granular, with separate permissions for different data uses. This meant some AI features available elsewhere weren't available in Ghana because we couldn't collect the training data they required.

Nigeria: NDPR Audits and Third-Party Vendor Requirements

Nigeria's NDPR required listing all data processors, completing NDPR-compliant Data Protection Impact Assessments, and providing evidence of mandatory staff training.

Our response: We rewrote vendor risk management controls specifically for Nigerian operations. Every third-party service touching patient data needed documented NDPR compliance. The DPIA process was more extensive than in other jurisdictions, requiring detailed analysis of each AI model's data flows.

Kenya: Data Localisation and Cross-Border Approvals

Kenya required prior approval for cross-border data transfers, stronger anonymisation for training datasets, and restrictions on cloud providers storing identifiable data externally.

Our response: We created a Kenya-only data pipeline with stricter pseudonymisation requirements and local data storage for raw clinical records. This meant maintaining separate infrastructure for Kenyan operations — adding cost and complexity, but necessary for compliance.

Egypt: Sensitive Data Registration and Encryption Requirements

Egypt's PDPL required registration as a "controller of sensitive health data," strong encryption obligations, and mandatory breach reporting within tight timelines.

 Our response: We implemented Egypt-specific encryption-at-rest policies that exceeded requirements elsewhere, plus a separate incident-reporting protocol with faster timelines than other jurisdictions. The registration process added months to our Egypt launch compared to other markets.
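Pulling the four profiles together, the simplified configuration sketch below shows how jurisdiction-specific requirements can be expressed as data rather than scattered through application code, with deployment refusing to run in a market that has no explicit profile. Field names and values are illustrative, not our production settings, and the Egyptian reporting window shown is a placeholder rather than the statutory figure.

# A simplified sketch: jurisdiction-specific requirements expressed as data.
JURISDICTION_PROFILES = {
    "GH": {  # Ghana: explicit consent, data minimisation, conditional cross-border transfer
        "consent_granularity": "per-purpose",
        "collect_optional_metadata": False,
        "cross_border_transfer": "conditional",
    },
    "NG": {  # Nigeria: NDPR DPIAs, documented processors, staff training evidence
        "dpia_required": True,
        "processor_register_required": True,
        "staff_training_evidence": True,
    },
    "KE": {  # Kenya: prior approval for transfers, local raw records, pseudonymised training data
        "cross_border_transfer": "prior-approval",
        "raw_record_storage": "local",
        "training_data": "pseudonymised",
    },
    "EG": {  # Egypt: sensitive-data controller registration, encryption at rest, fast breach reporting
        "controller_registration": "sensitive-health-data",
        "encryption_at_rest": "mandatory",
        "breach_reporting_hours": 72,  # placeholder value for illustration only
    },
}

def profile_for(country_code: str) -> dict:
    """Fail closed: refuse to deploy in a jurisdiction without an explicit compliance profile."""
    if country_code not in JURISDICTION_PROFILES:
        raise ValueError(f"No compliance profile for {country_code}; deployment blocked")
    return JURISDICTION_PROFILES[country_code]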

The key insight from this experience: "the same AI system that worked in Ghana had to be technically reconfigured in Kenya and contractually restructured in Nigeria — and Egypt added encryption and reporting obligations that didn't exist anywhere else." There is no shortcut to multi-jurisdictional compliance. You either do the work for each market, or you don't operate there.

Building AI-Specific Controls Beyond ISO 27001

[Figure: ISO 27001 extended with five AI-specific controls: dataset lineage tracking, model drift detection, adversarial hardening, AI incident response, and a clinical-to-technical approval workflow]

One of my biggest frustrations with standard security frameworks is their inadequacy for AI systems. ISO 27001 is excellent for what it covers, but it wasn't designed for machine learning. At MyClinicsOnline, we developed supplementary controls specifically for AI systems—controls that addressed the gaps traditional frameworks miss.

Dataset Lineage Tracking

We implemented logging for every dataset version, recording clinical validation notes and tracking data source, transformations, and quality checks. This addressed bias traceability, audit trails, and regulatory documentation requirements that ISO 27001 doesn't contemplate. When regulators ask "how did you train this model?" — and they will — you need documented answers.
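As a minimal sketch of what such a lineage record might look like, the snippet below assumes a simple append-only JSONL audit log; the field names are illustrative rather than the exact controls we used.

# A minimal lineage-record sketch, assuming an append-only JSONL audit log.
import json
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DatasetLineageRecord:
    dataset_name: str
    version: str
    source: str                      # where the data came from
    transformations: list            # ordered preprocessing steps
    quality_checks: dict             # named checks mapped to pass/fail
    clinical_validation_notes: str
    content_hash: str = ""
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_lineage(record: DatasetLineageRecord, raw_bytes: bytes, path: str = "lineage.jsonl") -> None:
    """Hash the dataset contents and append the lineage record to the audit log."""
    record.content_hash = hashlib.sha256(raw_bytes).hexdigest()
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")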

Model Drift Detection

We established scheduled model performance reviews, defined thresholds for "drift alerts," and created mandatory retraining cycles. This improved patient safety, predictive accuracy, and compliance transparency. Models that performed well at launch can degrade over time as patient populations change or data distributions shift. Without drift detection, you won't know until harm occurs.
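As a sketch of the mechanism, a drift alert can be as simple as comparing a recent performance window against the validation baseline; the metric (AUC) and threshold below are illustrative, not the figures we used in production.

# A sketch of a drift alert: flag when recent performance drops too far below baseline.
from statistics import mean

def drift_alert(baseline_auc: float, recent_aucs: list, max_drop: float = 0.05) -> bool:
    """Return True when recent AUC has fallen more than max_drop below the baseline."""
    if not recent_aucs:
        return False
    return (baseline_auc - mean(recent_aucs)) > max_drop

# Example: baseline 0.91, recent window averaging ~0.83 triggers review and retraining
if drift_alert(0.91, [0.84, 0.82, 0.83]):
    print("Drift alert: schedule clinical review and retraining cycle")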

Adversarial Attack Hardening

We introduced adversarial robustness testing, input perturbation tests, and edge-case clinical scenario testing. This protected models from manipulated medical images, out-of-distribution inputs, and prompt-injection vulnerabilities in RAG systems. Traditional penetration testing doesn't cover these attack vectors — you need ML-specific security testing.
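A simple input-perturbation check illustrates the idea: small random noise should not flip the model's decision. The predict callable below is a stand-in for whatever inference function the model exposes and is assumed to return a discrete label; this is a sketch of one test, not a full robustness suite.

# An illustrative input-perturbation test: small noise should not change the label.
import numpy as np

def perturbation_stable(predict, x: np.ndarray, epsilon: float = 0.01,
                        trials: int = 20, seed: int = 0) -> bool:
    """Return True if the predicted label survives small random perturbations of x."""
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    for _ in range(trials):
        noisy = x + rng.normal(0.0, epsilon, size=x.shape)
        if predict(noisy) != baseline:
            return False
    return True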

AI Incident Response Playbook

Standard incident response procedures weren't adequate for AI-specific failures. We added processes for model failure alerts, steps for rolling back deployments, mandatory clinical review before redeployment, and exploit testing remediation steps. When an AI model fails in production, you need a different playbook than when a database goes down.
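One way to keep such a playbook actionable is to encode the steps as an ordered checklist that an on-call engineer works through; the step wording below paraphrases the process above, and the structure itself is an assumption rather than our exact runbook.

# A sketch of the AI incident playbook as an ordered checklist.
from typing import Optional

AI_INCIDENT_PLAYBOOK = [
    "Acknowledge the model failure alert and capture the offending inputs and outputs",
    "Roll back the deployment to the last clinically validated model version",
    "Run exploit and adversarial testing against the failed version",
    "Complete mandatory clinical review of the remediated model",
    "Redeploy only after clinical, engineering, and compliance sign-off",
]

def next_step(completed_indices: set) -> Optional[str]:
    """Return the first step not yet completed, or None when the incident is closed."""
    for index, step in enumerate(AI_INCIDENT_PLAYBOOK):
        if index not in completed_indices:
            return step
    return None

print(next_step({0, 1}))  # -> the exploit/adversarial testing step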

Clinical-to-Technical Approval Workflow

We developed a unique framework where clinicians validated outputs, engineers validated stability, and compliance reviewed audit trails. This filled the governance gap that ISO 27001 doesn't cover — the intersection between clinical safety, technical reliability, and regulatory compliance. No single team has all three perspectives; you need structured handoffs between them.
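At its core the workflow is a three-way gate: a model version ships only when all three roles have signed off. The sketch below is a minimal illustration with assumed role names; a real implementation would also record reviewer identities and links to the evidence reviewed.

# A minimal sketch of the clinical / engineering / compliance sign-off gate.
REQUIRED_SIGNOFFS = {"clinical", "engineering", "compliance"}

def deployable(signoffs: dict) -> bool:
    """A model version ships only when every required role has explicitly approved it."""
    return all(signoffs.get(role, False) for role in REQUIRED_SIGNOFFS)

print(deployable({"engineering": True, "compliance": True}))                    # False: clinical pending
print(deployable({"clinical": True, "engineering": True, "compliance": True}))  # True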

Practical Framework: Becoming a Compliance Translator

[Figure: five-step compliance translator framework: map regulatory touchpoints, understand the why, identify conflicts early, engineer solutions, and build regulator relationships, with key questions for each step]

If there's one capability that separates organisations that succeed at cross-industry AI compliance from those that struggle, it's the ability to translate between regulatory philosophies. Here's how to develop that capability.

Step 1: Map Your AI System's Regulatory Touchpoints

Before you can translate between frameworks, you need to know which frameworks apply. For each AI system, identify every industry sector it touches. Don't think in terms of your company's primary industry — think in terms of what data flows through the system and what decisions it influences. A healthcare AI that processes payments touches financial regulation. An HR AI that screens candidates touches employment law. Map every touchpoint.

Step 2: Understand the "Why" Behind Each Sector's Rules

Compliance translation fails when you treat rules as arbitrary requirements to check off. Each sector's AI compliance approach reflects underlying values and risk tolerances. Healthcare prioritises patient safety because errors can kill. Finance prioritises auditability because regulators need to investigate discrimination claims. Government prioritises transparency because citizens have rights to understand decisions affecting them. When you understand why rules exist, you can find solutions that honour multiple sectors' concerns simultaneously.

Step 3: Identify Conflicts Early

The data minimisation paradox I described earlier is one example of genuine regulatory conflict. Identify these conflicts before you build, not after. Where do healthcare's data richness requirements conflict with finance's data minimisation expectations? Where do government's transparency demands conflict with commercial confidentiality? Where do one jurisdiction's localisation requirements conflict with another's cloud-first approach? Document these conflicts explicitly.

Step 4: Engineer Solutions, Don't Force Frameworks

Once you've identified conflicts, engineer solutions rather than forcing one framework onto all contexts. This might mean separate data pipelines for different jurisdictions, tiered access controls that satisfy multiple sectors' requirements, modular consent workflows that can adapt to different regulatory expectations, or architecture decisions that enable compliance flexibility. The goal is systems that can satisfy multiple regulatory philosophies simultaneously — not systems that fully embrace one and ignore others.

Step 5: Build Relationships With Regulators

AI regulation is evolving rapidly. Regulators are often figuring out requirements as they go, just like the organisations they regulate. Build relationships early. Participate in consultations. Engage with frameworks while they're being developed — as I did with Ghana's Ethical AI Framework. The organisations that shape regulation are better positioned to comply with it than those who wait for final rules and scramble to adapt.

The Bottom Line: Compliance as Competitive Advantage

I've watched organisations treat AI compliance as a cost centre — an obstacle to innovation that legal and compliance teams handle while the "real work" happens elsewhere. This is backwards.

The organisations that will dominate AI in regulated industries are those that treat compliance as industry-specific engineering. They're building systems that can expand into new markets because compliance flexibility is architected in from the start. They're earning customer trust because their AI governance is visible and robust. They're avoiding the regulatory crackdowns that will hit organisations that cut corners.

Universal AI compliance frameworks are a myth — but universal compliance principles exist. Understand why each industry regulates AI the way it does. Respect the genuine concerns behind regulatory requirements. Engineer solutions that honour multiple frameworks simultaneously. Build relationships with regulators while rules are still being written.

That's how you turn AI compliance from a constraint into a competitive advantage. That's how you build AI systems that can scale across industries and jurisdictions. And that's how you position your organisation for long-term success as AI regulation matures globally.

Ready to build industry-specific AI compliance that actually works?

Join my Foundation Training Program launching soon, where I'll teach the frameworks, tools, and approaches that come from actually operating AI systems across multiple countries and industries — not from reading about compliance in textbooks.
