Why Universal AI Frameworks Fail — And How to Build Industry-Specific Compliance That Actually Works
By Patrick Dasoberi, CISA, CDPSE | Former CTO, CarePoint | Subject Matter Expert, Ghana Ethical AI Framework
"Universal AI compliance frameworks are a myth. The organisations that succeed treat compliance as industry-specific engineering, not checkbox exercises."
I learned this lesson the hard way. As CTO of CarePoint, I operated healthcare AI systems across Ghana, Nigeria, Kenya, and Egypt — four countries, four different regulatory frameworks, and one brutal reality: the same AI system that worked in Accra had to be technically reconfigured in Nairobi and contractually restructured in Lagos. Egypt added encryption and reporting obligations that didn't exist anywhere else.
But here's what made it even more complex: our healthcare systems didn't just handle clinical data. A single telemedicine consultation could trigger clinical documentation, mobile money payment, insurance claims, and prescription fulfilment—each workflow invoking different regulatory regimes across health, finance, telecoms, and insurance.
This experience—combined with my work as a subject matter expert on Ghana's Ethical AI Framework with the Ministry of Communications and UN Global Pulse — taught me something that most compliance frameworks miss entirely: AI compliance isn't about mastering one industry's rules. It's about understanding why those rules exist, where they conflict, and how to translate between regulatory philosophies when your AI system crosses sector boundaries.
This guide is for compliance officers encountering AI governance for the first time, CTOs navigating multi-sector regulatory requirements, and founders expanding into regulated industries. Whether you're in healthcare, financial services, or government, or managing AI that touches multiple sectors, you'll find practical frameworks drawn from real deployments — not theoretical abstractions.

Most AI compliance guidance assumes you're operating in a single, clearly-defined industry. The reality is messier. Modern AI systems don't respect neat regulatory boundaries — they process data and make decisions that span multiple sectors simultaneously.
Consider a telemedicine platform in West Africa. When a patient books a consultation, the system handles clinical documentation, a mobile money payment, an insurance claim, and prescription fulfilment, and those workflows pull in four regulatory regimes at once: health, finance, telecoms, and insurance. Ensuring compliance across all four simultaneously is one of the hardest challenges in Africa's digital health ecosystem — and it's a challenge that's becoming universal as AI systems grow more interconnected.
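To make the overlap concrete, here is a minimal sketch in Python of how a platform might record which regimes a single consultation touches. Workflow and regime names are illustrative assumptions, not any country's legal taxonomy.

```python
# Illustrative only: names are assumptions for this sketch, not a legal taxonomy.
# Mobile money appears under both finance and telecoms because telcos operate it
# in several of these markets.
WORKFLOW_REGIMES = {
    "clinical_documentation": {"health"},
    "mobile_money_payment": {"finance", "telecoms"},
    "insurance_claim": {"insurance"},
    "prescription_fulfilment": {"health"},
}

def regimes_touched(workflows):
    """Union of every regulatory regime invoked by one consultation's workflows."""
    regimes = set()
    for workflow in workflows:
        regimes |= WORKFLOW_REGIMES.get(workflow, set())
    return regimes

# A single booking can invoke all four regimes at once.
print(sorted(regimes_touched([
    "clinical_documentation",
    "mobile_money_payment",
    "insurance_claim",
    "prescription_fulfilment",
])))  # ['finance', 'health', 'insurance', 'telecoms']
```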
Here's a concrete example of how industry compliance philosophies collide. Health data regulations typically require rich clinical detail for proper patient care. Financial regulators want minimal data sharing to reduce breach exposure. Meanwhile, insurers demand extensive data for fraud prevention and risk assessment.
These aren't just different rules — they're fundamentally different philosophies about what data should exist and who should access it. No single "AI compliance framework" can reconcile these tensions. You need industry-specific engineering that understands why each sector approaches data the way it does.
During my work on Ghana's Ethical AI Framework, I observed major differences in maturity, priorities, and risk culture across sectors. Understanding these differences is essential for anyone building AI systems that touch multiple industries — or advising organisations that do.
Healthcare approaches AI compliance through a patient safety lens. The priorities are clear: protect patients from harm, ensure clinical validity, prevent bias in diagnostic models, manage cross-border data flows appropriately, and obtain meaningful informed consent.
The sector tends to be risk-averse, slow-moving, and highly regulated — for good reason. When your AI makes a diagnostic recommendation, the stakes are life and death. This creates a compliance culture that prioritises extensive validation before deployment and continuous monitoring afterward.
During Ghana's AI framework discussions, healthcare representatives pushed strongly for strict purpose limitation — the principle that patient data should only be used for what it was originally collected for. This put them directly at odds with financial services stakeholders who wanted broader allowances for historical data analysis.
Government AI compliance operates under intense public scrutiny. The priorities are fairness in automated decision-making, transparency about how systems work, maintaining citizen trust, meeting procurement requirements, and ethical deployment of surveillance and national systems.
During Ghana's framework development, government stakeholders wanted strict explainability requirements for all high-stakes AI — particularly for public services, automated administrative decisions, and surveillance technologies. Private sector representatives pushed back, warning this would slow innovation and arguing that some models (like deep learning for medical imaging) simply cannot be fully explained.
The outcome was tiered explainability requirements based on risk level rather than absolute mandates. This compromise reflects a broader truth: government AI compliance must balance transparency demands with practical limitations of modern AI systems.
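One way to operationalise that compromise is to encode the tiers as configuration, so each deployment declares its risk level and inherits the matching obligations. The tier names and obligations below are assumptions for the sketch, not the framework's actual wording.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    ELEVATED = "elevated"
    HIGH_STAKES = "high_stakes"  # e.g. public services, automated administrative decisions

# Hypothetical obligations per tier; the real framework defines its own.
EXPLAINABILITY_OBLIGATIONS = {
    RiskTier.MINIMAL: ["internal model documentation"],
    RiskTier.ELEVATED: ["internal model documentation", "global feature-importance report"],
    RiskTier.HIGH_STAKES: [
        "internal model documentation",
        "per-decision explanation for the affected person",
        "human review channel for contested decisions",
    ],
}

def obligations_for(tier: RiskTier) -> list:
    return EXPLAINABILITY_OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH_STAKES))
```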
Telecoms approaches AI compliance through an infrastructure lens. The priorities are identity verification at a massive scale, secure data storage, preventing service outages, and AI-enabled fraud detection.
One of the most contentious debates during Ghana's framework involved telco metadata. Telecommunications companies argued that metadata — call records, location data, usage patterns — was low sensitivity and should be flexibly available for AI model training in fraud detection and churn prediction.
Civil society groups strongly disagreed, pointing out that metadata reveals movement patterns, behavioural predictions, and household identity clusters. The framework ultimately classified metadata as regulated personal data, with specific obligations for model explainability when using telco-derived features. This was a significant shift from the industry's preferred position.

When organisations try to apply one industry's AI framework to another, they make predictable mistakes. Here are the most common failures I've encountered across deployments in multiple countries and sectors.
The mistake: Over-prioritising fraud detection models while under-prioritising clinical safety and model validation. This happens when organisations bring fintech compliance thinking into healthcare environments without adaptation.
The outcome: Models perform well on risk scoring but poorly on diagnostic reliability. You end up with AI that's excellent at flagging suspicious billing patterns but dangerous when making clinical recommendations. In healthcare, a false negative on fraud is a financial problem. A false negative on diagnosis can be fatal.
The mistake: Implementing heavy documentation requirements and rigid approval processes designed for government AI procurement in fast-moving commercial settings.
The outcome: Innovation slows to a crawl. AI deployment becomes impossible for SMEs. And here's the real danger — teams start bypassing controls entirely because compliance becomes an obstacle rather than a safeguard. Government-style controls work in government because the procurement cycle expects them. In commercial environments, they create workarounds and shadow AI.
The mistake: Assuming all African (or Asian, or Latin American) countries have identical privacy requirements because they've all enacted data protection laws.
The reality from my four-country experience: Ghana, Kenya, Nigeria, and Egypt differ significantly. Consent rules are not identical. Cross-border transfer requirements vary. Health data categories are defined differently. What qualifies as "sensitive" data changes between jurisdictions.
The outcome: Non-compliance through misalignment. Organisations either over-collect data (creating unnecessary risk) or under-collect (limiting AI functionality). I've seen systems designed for Nigerian NDPR requirements fail compliance audits in Kenya because the assumptions about cross-border data transfers didn't translate.
The mistake: Treating ISO 27001 certification as sufficient AI security without addressing model-specific risks.
ISO 27001 is excellent for traditional information security, but it wasn't designed for machine learning systems. It doesn't address adversarial attacks on models, model drift over time, dataset lineage and provenance, or AI-specific incident response.
The outcome: Attacks on ML models go undetected because traditional security monitoring isn't looking for them. Bias increases over time as models drift. Regulatory exposure increases because you can't demonstrate the controls regulators are starting to expect for AI systems.

Let me walk you through exactly how compliance requirements for the same clinical decision support system differed across Ghana, Nigeria, Kenya, and Egypt. This isn't theoretical — these are modifications we actually implemented.
Ghana's Data Protection Act required explicit patient consent and collection of only the data necessary for clinical use. Local storage was required unless specific conditions were met for cross-border transfer.
Our response: We configured the CDSS to collect fewer metadata fields in Ghana than other deployments. The consent workflow was more granular, with separate permissions for different data uses. This meant some AI features available elsewhere weren't available in Ghana because we couldn't collect the training data they required.
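Here is a minimal sketch of what per-purpose consent gating can look like. The purpose names and feature mapping are hypothetical, but the effect matches what we saw in practice: a feature stays disabled when its training-data purpose was never granted.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    patient_id: str
    purposes: dict = field(default_factory=dict)  # purpose name -> granted?

def feature_enabled(consent: ConsentRecord, required_purposes: list) -> bool:
    """A feature is available only if every purpose it depends on was explicitly granted."""
    return all(consent.purposes.get(p, False) for p in required_purposes)

consent = ConsentRecord("patient-001", {"clinical_care": True, "model_training": False})
print(feature_enabled(consent, ["clinical_care"]))                    # True
print(feature_enabled(consent, ["clinical_care", "model_training"]))  # False -> feature stays off
```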
Nigeria's NDPR required listing all data processors, completing NDPR-compliant Data Protection Impact Assessments, and providing evidence of mandatory staff training.
Our response: We rewrote vendor risk management controls specifically for Nigerian operations. Every third-party service touching patient data needed documented NDPR compliance. The DPIA process was more extensive than other jurisdictions, requiring detailed analysis of each AI model's data flows.
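In practice that meant the DPIA could enumerate every processor touching patient data, with compliance evidence attached before any integration went live. A rough sketch of that register check, with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Processor:
    name: str
    ndpr_evidence_on_file: bool           # documented compliance from the vendor
    dpia_reference: Optional[str] = None  # reference to the impact assessment covering it

def processors_blocking_launch(register: list) -> list:
    """Return processors that cannot yet receive patient data."""
    return [p.name for p in register
            if not (p.ndpr_evidence_on_file and p.dpia_reference)]

register = [
    Processor("sms-gateway", True, "DPIA-2023-04"),
    Processor("analytics-vendor", False),
]
print(processors_blocking_launch(register))  # ['analytics-vendor']
```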
Kenya required prior approval for cross-border data transfers, stronger anonymisation for training datasets, and restrictions on cloud providers storing identifiable data externally.
Our response: We created a Kenya-only data pipeline with stricter pseudonymisation requirements and local data storage for raw clinical records. This meant maintaining separate infrastructure for Kenyan operations — adding cost and complexity, but necessary for compliance.
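A simplified sketch of the pseudonymisation step in that kind of pipeline: keyed hashing of identifiers before any record is used for training, with raw records staying in local storage. Key management is reduced to a constant here purely for illustration.

```python
import hashlib
import hmac

PSEUDONYM_KEY = b"replace-with-a-key-from-local-key-management"  # illustrative only

def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible reference."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

def training_record(raw: dict) -> dict:
    # Keep only the fields the model needs; drop direct identifiers entirely.
    return {
        "patient_ref": pseudonymise(raw["patient_id"]),
        "age_band": raw["age_band"],
        "diagnosis_code": raw["diagnosis_code"],
    }

print(training_record({"patient_id": "KE-0001", "age_band": "30-39", "diagnosis_code": "J45"}))
```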
Egypt's PDPL required registration as a "controller of sensitive health data," strong encryption obligations, and mandatory breach reporting within tight timelines.
Our response: We implemented Egypt-specific encryption-at-rest policies that exceeded requirements elsewhere, plus a separate incident-reporting protocol with faster timelines than other jurisdictions. The registration process added months to our Egypt launch compared to other markets.
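Tighter reporting timelines are easiest to manage when the deadline is computed the moment an incident is logged. The 72-hour window below is an assumption for the sketch, not a statement of the PDPL's actual deadline.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # assumed window, for illustration only

def reporting_deadline(detected_at):
    return detected_at + REPORTING_WINDOW

def hours_remaining(detected_at, now=None):
    now = now or datetime.now(timezone.utc)
    return (reporting_deadline(detected_at) - now).total_seconds() / 3600

detected = datetime.now(timezone.utc) - timedelta(hours=10)
print(f"{hours_remaining(detected):.1f} hours left to notify the regulator")
```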
The key insight from this experience: the same AI system that worked in Ghana had to be technically reconfigured in Kenya and contractually restructured in Nigeria, and Egypt added encryption and reporting obligations that didn't exist anywhere else. There is no shortcut to multi-jurisdictional compliance. You either do the work for each market, or you don't operate there.

One of my biggest frustrations with standard security frameworks is their inadequacy for AI systems. ISO 27001 is excellent for what it covers, but it wasn't designed for machine learning. At MyClinicsOnline, we developed supplementary controls specifically for AI systems—controls that addressed the gaps traditional frameworks miss.
We established scheduled model performance reviews, defined thresholds for "drift alerts," and created mandatory retraining cycles. This improved patient safety, predictive accuracy, and compliance transparency. Models that performed well at launch can degrade over time as patient populations change or data distributions shift. Without drift detection, you won't know until harm occurs.
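A minimal drift-alert check from that kind of scheduled review; the metric and threshold are placeholders for whatever the clinical and engineering teams agree on.

```python
def drift_alert(baseline_score: float, recent_score: float, max_drop: float = 0.05) -> bool:
    """True when recent performance has fallen more than the agreed tolerance,
    which triggers clinical review and a retraining cycle."""
    return (baseline_score - recent_score) > max_drop

# Example: model launched at AUC 0.91, last month's monitored AUC was 0.84.
if drift_alert(baseline_score=0.91, recent_score=0.84):
    print("Drift alert: open a review ticket and schedule retraining")
```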
We introduced adversarial robustness testing, input perturbation tests, and edge-case clinical scenario testing. This protected models from manipulated medical images, out-of-distribution inputs, and prompt-injection vulnerabilities in RAG systems. Traditional penetration testing doesn't cover these attack vectors — you need ML-specific security testing.
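One of the simplest perturbation tests adds small noise to inputs and measures how often predictions flip; a validated model should be stable under tiny perturbations. `model.predict` below stands in for whatever inference interface the deployment exposes.

```python
import numpy as np

def perturbation_flip_rate(model, X: np.ndarray, noise_scale: float = 0.01,
                           trials: int = 20, seed: int = 0) -> float:
    """Average fraction of predictions that change when inputs receive small Gaussian noise."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = []
    for _ in range(trials):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        flips.append(float(np.mean(model.predict(noisy) != baseline)))
    return float(np.mean(flips))

# Gate a release on stability, with a tolerance the clinical team signs off on:
# assert perturbation_flip_rate(model, validation_inputs) < 0.02
```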
Standard incident response procedures weren't adequate for AI-specific failures. We added processes for model failure alerts, steps for rolling back deployments, mandatory clinical review before redeployment, and exploit testing remediation steps. When an AI model fails in production, you need a different playbook than when a database goes down.
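The rollback step of that playbook can be as simple as serving the last clinically approved version and refusing to redeploy the failed one until review sign-off is recorded. Version fields and names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ModelVersion:
    version: str
    clinically_approved: bool

def rollback_target(current: ModelVersion, history: list) -> ModelVersion:
    """Pick the most recent clinically approved version other than the failed one."""
    for candidate in reversed(history):
        if candidate.clinically_approved and candidate.version != current.version:
            return candidate
    raise RuntimeError("No approved fallback: switch to the manual clinical workflow")

history = [ModelVersion("1.2", True), ModelVersion("1.3", True), ModelVersion("1.4", True)]
print(rollback_target(ModelVersion("1.4", True), history).version)  # 1.3
```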
We developed a unique framework where clinicians validated outputs, engineers validated stability, and compliance reviewed audit trails. This filled the governance gap that ISO 27001 doesn't cover — the intersection between clinical safety, technical reliability, and regulatory compliance. No single team has all three perspectives; you need structured handoffs between them.
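The handoff itself can be enforced mechanically: a deployment is blocked until all three roles have signed off. Role names are illustrative.

```python
REQUIRED_SIGNOFFS = {"clinical", "engineering", "compliance"}

def ready_to_deploy(signoffs: dict) -> bool:
    """Every perspective must approve before a model version ships."""
    return REQUIRED_SIGNOFFS.issubset({role for role, ok in signoffs.items() if ok})

print(ready_to_deploy({"clinical": True, "engineering": True, "compliance": False}))  # False
```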

If there's one capability that separates organisations that succeed at cross-industry AI compliance from those that struggle, it's the ability to translate between regulatory philosophies. Here's how to develop that capability.
I've watched organisations treat AI compliance as a cost centre — an obstacle to innovation that legal and compliance teams handle while the "real work" happens elsewhere. This is backwards.
The organisations that will dominate AI in regulated industries are those that treat compliance as industry-specific engineering. They're building systems that can expand into new markets because compliance flexibility is architected in from the start. They're earning customer trust because their AI governance is visible and robust. They're avoiding the regulatory crackdowns that will hit organisations that cut corners.
Universal AI compliance frameworks are a myth — but universal compliance principles exist. Understand why each industry regulates AI the way it does. Respect the genuine concerns behind regulatory requirements. Engineer solutions that honour multiple frameworks simultaneously. Build relationships with regulators while rules are still being written.
That's how you turn AI compliance from a constraint into a competitive advantage. That's how you build AI systems that can scale across industries and jurisdictions. And that's how you position your organisation for long-term success as AI regulation matures globally.
Join my Foundation Training Program launching soon, where I'll teach the frameworks, tools, and approaches that come from actually operating AI systems across multiple countries and industries — not from reading about compliance in textbooks.
