Navigating AI Compliance Across Multiple Jurisdictions: Building Compliance Before Laws Pass
By Patrick Dasoberi, CISA, CDPSE, MSc IT | Former CTO, CarePoint | Founder, AI Cybersecurity & Compliance Hub
Most organizations wait for laws to pass before addressing compliance. They watch regulatory developments, maybe track bills in parliament, and then—when legislation is enacted—they scramble to retrofit compliance into existing systems.
This reactive approach is expensive, disruptive, and risky.
As CTO of CarePoint, managing healthcare systems across Ghana, Nigeria, Kenya, and Egypt, I took a different approach: building compliance infrastructure ahead of legislative enactment.
While Nigeria's NDPR, Egypt's PDPL, and Kenya's DPA were all under parliamentary review during my tenure, we worked with legal counsel to interpret proposed requirements and built compliance into our systems proactively.
Ghana's Data Protection Act (2012) was the only fully enacted law we operated under. For the other three countries, we were preparing for regulations before they became legally binding.
This forward-thinking approach meant that when these laws eventually passed, we were ready—while competitors scrambled to retrofit compliance.
Here's what I learned: Regulatory compliance for AI isn't just about following enacted laws. It's about understanding regulatory intent, anticipating requirements, and building systems that respect fundamental rights even before legal obligations crystallise.
I hold the CISA certification with a focus on compliance and control frameworks. I understand traditional IT compliance. But AI introduces compliance challenges that existing regulations weren't designed to address.
AI System Compliance:
When we deployed AI models across our healthcare platforms, regulators in multiple countries asked: "Can you explain how the AI makes decisions?"
For traditional software, I could point to the code and logic flow. For AI models—especially complex neural networks—explainability wasn't that simple. We had to develop new approaches to demonstrating compliance with transparency requirements that weren't written with AI in mind.
This is why AI regulatory compliance requires specialised knowledge beyond traditional IT compliance expertise.

Operating Under Different Regulatory Maturity Levels
Managing compliance across four countries meant navigating vastly different regulatory environments:
Ghana (Enacted): Data Protection Act, in force since 2012.
Nigeria, Kenya, Egypt (Proposed/Developing Regulations): Data protection frameworks still under parliamentary review during my tenure.
This created a unique challenge: How do you build compliant AI systems when one of your operating jurisdictions has an established law and the others have proposed-but-not-enacted regulations?

When Regulatory Requirements Directly Conflict
One of the most significant challenges I faced was reconciling fundamentally different approaches to cross-border data transfers:
Nigeria & Ghana: Flexible Approach
Egypt: Strict Restrictions (Proposed PDPL)

The Proactive Approach
While most organizations waited for laws to pass, we built compliance infrastructure proactively for Nigeria, Egypt, and Kenya.
Why This Approach:
A) Avoid Expensive Retrofits
B) Reduce Deployment Risk
C) Demonstrate Good Faith
D) Future-Proof Systems
How We Did It:
Step 1: Track Legislative Developments
Step 2: Interpret Proposed Requirements with Legal Counsel
Step 3: Design Compliance Into Architecture
Step 4: Conduct Internal Audits
Step 5: Iterate as Legislation Evolved
Even before the PDPL was fully enacted, Egyptian authorities conducted structured reviews of data processing activities.
Our Egypt-based team went through the formal regulatory review process. This was transitional enforcement—not yet under fully codified law, but regulatory authorities asserting oversight as legislation progressed.
Key Lesson:
Don't assume you're safe from regulatory scrutiny just because laws haven't passed. Regulators can and do conduct pre-enforcement reviews, especially in sensitive sectors like healthcare.
What Prepared Us:
Because we had built compliance infrastructure proactively, we had documentation ready:
Organizations that waited for final enactment would have scrambled to produce this documentation during inspection.
Context: The Kenya Data Protection Act was passed in 2019, but we were building compliance infrastructure before enactment.
What the Review Focused On:
The Biometric Challenge:
Biometric data is particularly sensitive under most data protection frameworks. We had to demonstrate:
What We Learned:
Biometric data provisions in data protection laws often have stricter requirements than general personal data. For AI systems that process biometric information (facial recognition, voice authentication, and gait analysis), these heightened requirements significantly impact compliance obligations.
For Nigeria, where the NDPR was under development during my tenure, we conducted structured internal audits aligned with proposed NDPR requirements.
Focus Areas:
1. Vendor Compliance
Real Challenge:
Many AI vendors—especially international vendors—didn't understand Nigerian regulatory requirements or proposed NDPR provisions. We had to educate vendors about anticipated obligations and, in some cases, reject vendors who couldn't demonstrate readiness for Nigerian compliance.
2. Access Control
3. Cross-Border Transfers
The proposed NDPR had provisions about cross-border transfers. We proactively implemented transfer impact assessments and documentation even before they were legally required (a minimal sketch of such an assessment record follows after this list).
4. Data Minimisation
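For illustration, a transfer impact assessment can be captured in a simple structured record. The sketch below is an example under assumed field names and an assumed residency rule; it is not CarePoint's actual tooling.

    # Illustrative only: a minimal transfer impact assessment (TIA) record.
    # Field names and the residency rule are assumptions, not an actual schema.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TransferImpactAssessment:
        dataset: str                      # e.g. "patient_lab_results"
        origin_country: str               # where the data is collected
        destination_country: str          # where the vendor/processor stores it
        legal_basis: str                  # e.g. "explicit consent", "contract clause"
        safeguards: list[str] = field(default_factory=list)  # encryption, pseudonymisation, ...
        assessed_on: date = field(default_factory=date.today)

        def violates_residency(self, residency_required: set[str]) -> bool:
            """True if data from a residency-restricted country would leave that country."""
            return (self.origin_country in residency_required
                    and self.destination_country != self.origin_country)

    # Example: Egypt's proposed PDPL was expected to restrict cross-border transfers.
    RESIDENCY_REQUIRED = {"EG"}

    tia = TransferImpactAssessment(
        dataset="patient_lab_results",
        origin_country="EG",
        destination_country="IE",
        legal_basis="explicit consent",
        safeguards=["encryption in transit", "pseudonymisation"],
    )
    if tia.violates_residency(RESIDENCY_REQUIRED):
        print("Flag for review: transfer conflicts with anticipated residency requirements")

Keeping each assessment as a structured record, rather than prose in an email thread, makes it straightforward to produce documentation on demand during a regulatory review.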
The Value of Proactive Internal Audits:
These internal audits—conducted before NDPR enactment—significantly reduced our exposure to external regulatory scrutiny after the law passed. We had already identified and addressed compliance gaps.
Organisations that waited for enactment faced:

March–May 2022
One of the most significant recognitions of my AI compliance and governance expertise was being selected as a national and international expert resource person for the development of Ghana's Ethical AI Framework and Data Exchanges Roadmap.
Selection Process:
This wasn't open participation. I was specifically invited based on my operational experience managing AI systems in healthcare and my demonstrated expertise at the intersection of AI technology, security, and regulatory compliance.
My Contributions:
1. Expert Testimony at High-Level Workshop
I participated in an invitation-only workshop involving:
The workshop focused on:
My role was providing:
2. Expert Interview and Strategic Document Review
Before the workshop, I participated in:
This preparatory work helped shape the workshop agenda and focus discussions on areas where Ghana needed the strongest guidance.
3. Critical Terminology Shift: "Data Markets" → "Data Exchanges"
One of my most impactful contributions was guiding the terminology shift from "data markets" to "data exchanges."
Why This Mattered:
"Data Markets" Implied:
"Data Exchanges" Better Represented:
Ghana's Intent:
The government wasn't trying to create commercial data markets. They wanted frameworks for responsible data sharing that served public good—healthcare research, agricultural development, and educational improvement—while protecting individual rights and national sovereignty.
The "data markets" framing didn't represent this intent and could have led to unintended consequences.
Impact of the Shift:
My input directly influenced the conceptual foundations of Ghana's ethical AI and data governance efforts. The framework now:
What This Demonstrates:
This appointment demonstrates credibility as an AI regulatory and governance expert with influence at both national and international levels. It's one thing to manage compliance within a company. It's another to shape national AI governance frameworks.
This experience directly informs how I approach AI regulatory compliance—not just as technical implementation of legal requirements, but as participation in ongoing societal conversations about how AI should be governed.


Most data protection laws require transparency about automated decision-making. Data subjects have rights to understand how decisions affecting them are made.
The AI Challenge:
For complex AI models, especially deep neural networks, explaining specific decisions isn't straightforward. The model itself is effectively a black box with millions of parameters.
How We Addressed It:
1. Layered Explanations
High-level explanation: "The AI analyzes patient symptoms, medical history, and clinical indicators to assess disease risk."
Model-level explanation: "The model was trained on 50,000 patient cases and uses pattern recognition across 200 clinical features."
Instance-level explanation: "For this specific patient, the elevated risk score is primarily driven by [top 5 contributing factors]."
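To make the instance-level layer concrete, here is a minimal sketch. It assumes a hypothetical linear risk model so that per-feature contributions are trivial to compute; it is not the model or tooling we actually used, and the feature names and weights are invented.

    # Illustrative sketch only: instance-level explanation for a hypothetical linear risk model.
    # Feature names, weights, and baselines below are invented for the example.
    weights = {"hba1c": 0.9, "bmi": 0.4, "age": 0.2, "systolic_bp": 0.3, "smoker": 0.6}
    baseline = {"hba1c": 5.5, "bmi": 24.0, "age": 45, "systolic_bp": 120, "smoker": 0}

    def top_contributors(patient: dict, k: int = 5) -> list[tuple[str, float]]:
        """Rank features by how much they push this patient's score above the baseline."""
        contributions = {
            name: weights[name] * (patient[name] - baseline[name])
            for name in weights
        }
        return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

    patient = {"hba1c": 8.1, "bmi": 31.0, "age": 58, "systolic_bp": 145, "smoker": 1}
    for feature, delta in top_contributors(patient):
        print(f"{feature}: contributes {delta:+.2f} to the risk score")

For genuinely opaque models, the same "top contributing factors" presentation can be driven by a model-agnostic attribution method, but the point of the layered approach is the communication structure, not any specific technique.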
The Lesson:
Perfect explainability isn't always achievable with current AI technology. But you can still meet transparency requirements through documentation, layered explanations, and appropriate human oversight.
The Regulatory Requirement:
Data protection laws (GDPR, Ghana DPA, proposed NDPR, proposed Kenya DPA) grant data subjects the right to erasure—the "right to be forgotten."
The AI Challenge:
Once patient data is used to train an AI model, the model has "learned" from that data. The data itself might be deleted from databases, but the learned patterns remain in model weights.
Can you truly erase someone's data from a trained model?
How We Addressed It:
1. Data Minimization During Training
2. Model Retraining Protocols
3. Honest Communication with Regulators
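One way to make retraining protocols auditable is to track which data subjects contributed to each model version, so that an erasure request can be honoured at the next retraining cycle. The sketch below is illustrative only, with invented identifiers; it is not our production lineage system.

    # Illustrative sketch: track which subjects' records fed each model version,
    # so erasure requests can be honoured at the next scheduled retraining.
    from dataclasses import dataclass, field

    @dataclass
    class ModelLineage:
        version: str
        training_subject_ids: set[str] = field(default_factory=set)
        erasure_requests: set[str] = field(default_factory=set)

        def record_erasure(self, subject_id: str) -> None:
            """Note the request; the deployed weights still reflect the data until retraining."""
            if subject_id in self.training_subject_ids:
                self.erasure_requests.add(subject_id)

        def next_training_set(self, all_subject_ids: set[str]) -> set[str]:
            """Subjects eligible for the next retraining run, with erased subjects excluded."""
            return all_subject_ids - self.erasure_requests

    lineage = ModelLineage(version="risk-model-v3",
                           training_subject_ids={"p001", "p002", "p003"})
    lineage.record_erasure("p002")
    print(lineage.next_training_set({"p001", "p002", "p003", "p004"}))  # p002 excluded

A record like this also supports the honest conversation with regulators: you can show exactly which deployed model versions still reflect erased data and when the next retraining will remove it.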
The Lesson:
This remains an unsolved challenge in AI compliance. Regulators are still developing guidance. The key is demonstrating good faith effort, technical understanding, and honest communication about limitations.
The Regulatory Requirement:
Data protection laws require that data be collected for specified, explicit, and legitimate purposes. Using data for new purposes requires new legal basis.
The AI Challenge:
Is using patient data to improve AI models a "compatible purpose" with the original healthcare purpose, or is it a new purpose requiring new consent?
Real Scenario:
Is that the same purpose or a different purpose?
1. Don't assume original healthcare consent covers model training
2. Include explicit consent provisions: "Your anonymised health data may be used to improve AI diagnostic models."
3. Allow patients to opt out of model training while still receiving care
4. Document consent separately for model training purposes
a) Clearly document the purpose of each data processing activity
b) Distinguish between:
The Lesson:
When in doubt about purpose compatibility, obtain explicit consent. Regulatory authorities and courts are still developing guidance on AI model training as a purpose. Conservative interpretation protects both patients and organisations.
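As a minimal illustration of recording consent separately per purpose, here is a sketch under assumed field names; it is not our production consent system.

    # Illustrative sketch only: consent recorded separately per purpose, with opt-out.
    # Purpose names and the data model are assumptions for this example.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ConsentRecord:
        patient_id: str
        purposes: dict[str, bool] = field(default_factory=dict)   # purpose -> granted?
        history: list[tuple[str, bool, datetime]] = field(default_factory=list)

        def set_consent(self, purpose: str, granted: bool) -> None:
            self.purposes[purpose] = granted
            self.history.append((purpose, granted, datetime.utcnow()))

        def may_use_for(self, purpose: str) -> bool:
            """Default to 'no' for any purpose that was never explicitly consented to."""
            return self.purposes.get(purpose, False)

    record = ConsentRecord(patient_id="p001")
    record.set_consent("clinical_care", True)
    record.set_consent("ai_model_training", False)   # patient opts out; care is unaffected
    print(record.may_use_for("ai_model_training"))   # False

The design choice that matters is the default: absence of an explicit consent record for model training is treated as "no", which is the conservative interpretation described above.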
The Regulatory Requirement:
Some jurisdictions require:
Kenya's Proposed Provisions:
The Kenya DPA, enacted in 2019 before my CTO tenure ended, had explicit requirements for automated decision-making. Organizations using automated systems for decisions significantly affecting data subjects needed to:
How We Addressed It:
1. System Inventory
a) Fully automated with no human review
b) AI-assisted, with human approval required
c) Human decision with AI providing information only
2. Risk Classification
3. Data Protection Impact Assessments
Conducted DPIAs for high-risk AI systems
Documented:
a) What data is processed
b) How AI makes decisions
c) Risks to data subjects
d) Safeguards implemented
e) Human oversight mechanisms
4. Registration Preparation
The Lesson: Proactive impact assessments serve multiple purposes:
Don't wait for regulatory requirements—conduct impact assessments for high-risk AI systems as a matter of good governance.
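To make the inventory and risk-classification steps above concrete, here is a minimal sketch. The classification rule and field names are assumptions for illustration, not the registry we actually maintained.

    # Illustrative sketch: classify each AI system by automation level and risk,
    # and flag the ones that need a DPIA before deployment.
    from dataclasses import dataclass
    from enum import Enum

    class Automation(Enum):
        FULLY_AUTOMATED = "no human review"
        HUMAN_APPROVAL = "AI-assisted, human approval required"
        INFORMATION_ONLY = "human decision, AI provides information only"

    @dataclass
    class AISystemRecord:
        name: str
        automation: Automation
        processes_special_category_data: bool   # health, biometric, etc.
        significantly_affects_subjects: bool

        def needs_dpia(self) -> bool:
            """Conservative rule of thumb used here: high-risk if decisions significantly
            affect subjects, or special-category data is processed with limited human review."""
            return self.significantly_affects_subjects or (
                self.processes_special_category_data
                and self.automation is not Automation.INFORMATION_ONLY
            )

    inventory = [
        AISystemRecord("triage-risk-score", Automation.HUMAN_APPROVAL, True, True),
        AISystemRecord("appointment-reminder", Automation.FULLY_AUTOMATED, False, False),
    ]
    for system in inventory:
        print(system.name, "-> DPIA required" if system.needs_dpia() else "-> standard review")

Whatever rule you adopt, writing it down as executable logic forces the classification criteria to be explicit and consistently applied across systems.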
Mistake 1: Initially Underestimating Documentation Requirements
What Happened: Early in my CTO tenure, I focused heavily on technical compliance—building the right security controls, implementing proper access management, and ensuring data isolation.
But I underestimated the importance of documentation.
Regulators don't just want you to be compliant—they want you to demonstrate and document compliance.
The Gap:
When the Egypt inspection happened:
We scrambled to create documentation that should have existed from day one.
The Fix: Implemented a comprehensive documentation framework:
The Lesson:
In compliance, if it's not documented, it didn't happen. Build documentation discipline from day one, not after regulatory inquiry.
Mistake 2: Vendor Compliance Assumptions
What Happened:
We engaged AI vendors who made strong compliance claims:
I initially took these claims at face value, especially from well-known vendors with impressive client lists.
The Problem:
When we conducted detailed vendor assessments for Nigerian NDPR preparation:
Real Incident:
One AI vendor processed data through servers in multiple countries without clear documentation of data flows. When asked about compliance with Egypt's anticipated data residency requirements, they had no mechanism to ensure Egyptian data stayed in Egypt.
We had to terminate the relationship and rebuild functionality with a compliant infrastructure.
The Fix: Implemented a rigorous vendor assessment process:
The Lesson: Vendor compliance is your compliance. You can't outsource compliance responsibility even if you outsource technical operations. Assess vendors thoroughly before engagement and monitor them continuously.
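For illustration, a vendor assessment record can force the questions we learned to ask, such as documented data flows and the ability to enforce residency. This is a sketch with invented field names, not the actual assessment tooling.

    # Illustrative sketch only: a vendor assessment record that surfaces compliance gaps
    # such as undocumented data flows. Field names are invented for this example.
    from dataclasses import dataclass, field

    @dataclass
    class VendorAssessment:
        vendor: str
        claims: list[str] = field(default_factory=list)                 # e.g. "GDPR compliant"
        documented_data_flows: list[str] = field(default_factory=list)  # country codes
        can_enforce_residency: dict[str, bool] = field(default_factory=dict)

        def gaps(self, required_residency: set[str]) -> list[str]:
            """Residency requirements the vendor cannot demonstrably meet."""
            issues = []
            if not self.documented_data_flows:
                issues.append("no documented data flows")
            for country in sorted(required_residency):
                if not self.can_enforce_residency.get(country, False):
                    issues.append(f"cannot guarantee residency for {country}")
            return issues

    assessment = VendorAssessment(
        vendor="ExampleAIVendor",                 # hypothetical vendor
        claims=["GDPR compliant"],
        documented_data_flows=[],                 # red flag found during assessment
        can_enforce_residency={"EG": False},
    )
    print(assessment.gaps({"EG"}))

An empty claims-versus-evidence gap list becomes a precondition for engagement rather than something discovered after integration.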
Mistake 3: Treating Compliance as One-Time Activity
What Happened: After the initial compliance build-out, I made the mistake of thinking "we're compliant now" and shifting focus to other priorities.
The Reality: Compliance is continuous.
When I Learned This: A proposed amendment to Ghana's Data Protection Act would have required additional reporting for automated decision-making systems. We almost missed the consultation period because we weren't actively monitoring regulatory developments.
The Fix: Established continuous compliance processes:
The Lesson:
Budget time and resources for ongoing compliance activities, not just initial compliance build-out. Compliance is a program, not a project.
Mistake 4: Insufficient Legal Expertise for Multi-Jurisdictional Complexity
What Happened: Initially, I attempted to manage compliance with limited involvement from legal counsel. I'm technically knowledgeable, hold CISA and CDPSE certifications, and understand compliance frameworks.
But I'm not a lawyer.
The Gap:
Real Example: For cross-border data transfers, I understood the technical options (data isolation, encryption, and anonymization). But determining which technical approaches satisfied which legal requirements in which jurisdictions required legal expertise I didn't have.
The Fix:
The Cost: Legal expertise is expensive. But it's far less expensive than regulatory enforcement actions, fines, or having to redesign noncompliant systems.
The Lesson: Technical expertise and legal expertise are both essential for AI compliance. Neither alone is sufficient. Build strong collaboration between technical and legal functions.
Challenge: Speed vs. Compliance
The Tension: Healthcare is a fast-moving field. Clinical needs are urgent. Competitive pressure demands quick deployment of AI innovations.
But compliance takes time:
Real Scenario: We developed an AI diagnostic tool with promising clinical validation results. Medical staff wanted it deployed immediately to improve patient outcomes.
But Kenya's proposed automated decision-making requirements meant we first had to:
This added 6-8 weeks to the deployment timeline.
How I Balanced It:
1. Build Compliance into the Development Lifecycle
2. Prioritise Based on Risk and Impact
3. Staged Deployment
4. Set Realistic Expectations
The Lesson: Compliance doesn't have to kill innovation speed, but it does require planning. Organisations that integrate compliance into development processes move faster than those that treat compliance as an afterthought.
Challenge: Compliance in Resource-Constrained Environments
The Reality: Comprehensive compliance programs are expensive.
Large enterprises can afford dedicated compliance teams. Smaller organisations and startups struggle.
How We Did Compliance Without Huge Budgets:
1. Leveraged In-House Expertise
2. Focused on High-Impact Activities
3. Used Open-Source and Free Resources
4. Strategic External Expertise
5. Automation Where Possible
The Lesson: You don't need unlimited budgets for good compliance. You need:
Small organisations can achieve strong compliance through smart approaches, not just large budgets.
Challenge: Getting Good Local Legal Advice
The Problem:
AI regulatory compliance in African markets requires local legal expertise. But:
AI is relatively new in many African legal markets
Few lawyers deeply understand both AI technology and data protection law
Local expertise availability varies significantly across countries
Our Experience:
Ghana:
More mature legal market for data protection (DPA since 2012)
Easier to find lawyers with data protection expertise
More precedents and regulatory guidance available
Nigeria, Kenya, Egypt:
Fewer lawyers with deep AI and data protection expertise
More challenging to get clear answers on novel AI compliance questions
Had to educate legal advisors about AI technology and implications
How We Navigated This:
1. In-House Legal Counsel as Foundation
Built in-house expertise that understood our AI systems
In-house counsel coordinated with external local experts
Reduced reliance on expensive external counsel for routine matters
2. Relationships with Regulatory Authorities
Engaged directly with data protection authorities
Asked clarifying questions about regulatory expectations
Participated in industry consultations
Built relationships that facilitated compliance guidance
3. International Expertise for Novel Issues
For cutting-edge AI compliance questions without local precedent, consulted international AI law experts
Adapted international best practices to local regulatory context
Used global guidance (GDPR, EU AI Act) as reference points while respecting local requirements
4. Peer Learning
Participated in industry associations
Shared compliance challenges and solutions with peers (within legal constraints)
Learned from how other healthcare technology companies approached similar issues
The Lesson:
In emerging regulatory environments, you sometimes need to build the expertise you can't easily buy. Invest in education (yours, your team's, sometimes even your legal advisors') about AI regulatory compliance.


Pre-Deployment Compliance Checklist
After managing compliance across four jurisdictions, I developed a systematic checklist used before any AI system deployment:
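For illustration only, such a checklist can be enforced as a hard deployment gate. The items and code below are a sketch, not the full checklist we used.

    # Illustrative sketch: a pre-deployment gate that refuses deployment until every
    # checklist item is signed off. The items shown are examples, not an exhaustive list.
    CHECKLIST_ITEMS = [
        "legal basis documented for each data category",
        "DPIA completed (if high-risk)",
        "cross-border transfer assessment completed",
        "vendor assessments on file",
        "explainability documentation prepared",
        "human oversight mechanism defined",
        "audit logging enabled",
    ]

    def deployment_approved(signoffs: dict[str, bool]) -> bool:
        """All checklist items must be explicitly signed off before deployment."""
        missing = [item for item in CHECKLIST_ITEMS if not signoffs.get(item, False)]
        for item in missing:
            print(f"BLOCKED: missing sign-off for '{item}'")
        return not missing

    signoffs = {item: True for item in CHECKLIST_ITEMS}
    signoffs["DPIA completed (if high-risk)"] = False
    print("Deploy?", deployment_approved(signoffs))

Treating the checklist as a gate, rather than guidance, is what prevents the "we'll document it after launch" pattern described earlier.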
Effective AI compliance requires cross-functional collaboration:

Executive Leadership:
Key Principle: No single function owns AI compliance. It requires collaboration across technical, legal, clinical, and business functions.
The Challenge
Managing compliance across four jurisdictions meant tracking regulatory developments in multiple countries simultaneously:
1. Direct Regulatory Monitoring
- Subscribed to official updates from:
- Monitored parliamentary proceedings for legislative changes
- Reviewed proposed bills and amendments
2. Legal Counsel Network
3. Industry Associations
4. International Developments
5. Regulatory Relationship Building
The System:
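For illustration, a lightweight register of regulatory developments with consultation deadlines helps avoid missed windows, as nearly happened with the Ghana amendment consultation. The sketch below is hypothetical, with invented entries, and is not the tracking system we actually operated.

    # Illustrative sketch only: a simple register of regulatory developments with
    # consultation deadlines, so responses are not missed. Entries are invented.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class RegulatoryItem:
        jurisdiction: str
        instrument: str                 # e.g. "Data Protection (Amendment) Bill"
        stage: str                      # "proposed", "consultation", "enacted"
        consultation_deadline: Optional[date] = None

    def due_soon(items: list[RegulatoryItem], today: date,
                 within_days: int = 30) -> list[RegulatoryItem]:
        """Items whose consultation window closes within the next `within_days` days."""
        return [
            item for item in items
            if item.consultation_deadline is not None
            and 0 <= (item.consultation_deadline - today).days <= within_days
        ]

    register = [  # hypothetical entries for illustration only
        RegulatoryItem("Jurisdiction A", "Data Protection (Amendment) Bill",
                       "consultation", date(2022, 6, 30)),
        RegulatoryItem("Jurisdiction B", "Data protection implementation guidance",
                       "proposed", None),
    ]
    for item in due_soon(register, today=date(2022, 6, 10)):
        print(f"Respond to {item.jurisdiction}: {item.instrument} by {item.consultation_deadline}")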
My involvement in developing Ghana's Ethical AI Framework taught me that compliance isn't just reactive—it can be proactive and contributory.
Why Regulatory Engagement Matters:
For Organizations New to AI Compliance
Start by clearly identifying:
Don't assume:
Document:
Classification:
For each AI system, assess:
Prioritize:
Build foundational compliance infrastructure:
Policies:
Processes:
Documentation:
Technical controls:
Organisational controls:
Ongoing compliance activities:
Don't try to do everything at once. Build systematically, starting with highest-risk systems and highest-priority requirements.
If you're coming from traditional IT security and compliance:
If you're coming from traditional compliance or privacy:
What Transfers:
Recommended Path:
Recommended Path:
Recommended Path:
Key Skill Development:
If you're building or operating AI systems in Ghana, Nigeria, South Africa, Kenya, or elsewhere in Africa:
Pay Attention To:
1. Regulatory Maturity Stages
Strategy: Build compliance infrastructure proactively, even before laws are enacted. This positions you ahead of competitors and reduces risk when regulations pass.
2. Multiple Jurisdictional Requirements
Strategy: Design for the most stringent requirements as baseline. Easier to have strong controls everywhere than to have different compliance levels in different markets.
4. Resource Constraints
Strategy: Use risk-based prioritisation, leverage free resources, focus on documentation, and build internal capability.
5. Infrastructure Realities
Strategy: Design compliance controls that work within your infrastructure realities, not theoretical optimal conditions.
Recommended Path:
Common Compliance Pitfalls to Avoid
Pitfall 1: Assuming GDPR Compliance = Global Compliance
The Mistake:
Many AI vendors claim "GDPR compliant" and assume this satisfies all data protection requirements globally.
The Reality:
What to Do Instead:
Pitfall 2: Compliance as Final Gate Before Deployment
The Mistake:
Treating compliance as something to address after AI system development is complete, right before deployment.
Why This Fails:
What to Do Instead:
Pitfall 3: Over-Reliance on Vendor Compliance Claims
The Mistake: Accepting vendor compliance representations without verification.
Why This Fails:
What to Do Instead:
Pitfall 4: Ignoring Regulatory Developments Until Laws Pass
Why This Fails:
What to Do Instead:
Pitfall 5: Insufficient Documentation
The Mistake: Focusing on technical compliance implementation without adequate documentation.
Why This Fails:
What to Do Instead:
AI regulatory compliance connects deeply with other pillars:
Start Your AI Compliance Journey
AI regulatory compliance is complex, rapidly evolving, and critically important. Organisations that master compliance gain a competitive advantage, reduce risk, and build trust with customers and regulators.
This isn't just about avoiding penalties. It's about building AI systems that respect individual rights, operate transparently, and contribute to responsible AI adoption.
The regulatory landscape will continue evolving. The organisations that thrive will be those that:
Start building your AI compliance capabilities today. The regulatory future is coming—be ready for it.

Patrick Dasoberi, CISA, CDPSE, MSc IT, BA Admin, AI/ML Engineer
Former CTO, CarePoint | Founder, AI Cybersecurity & Compliance Hub
Patrick Dasoberi brings executive healthcare technology leadership, regulatory expertise, and hands-on multi-jurisdictional compliance experience to AI regulatory education.
Executive Healthcare Technology Leadership
Until recently, Patrick served as Chief Technology Officer of CarePoint (formerly African Health Holding), responsible for healthcare systems across Ghana, Nigeria, Kenya, and Egypt. In this role, he navigated compliance across four different regulatory frameworks—including proactive compliance building for Nigeria's NDPR, Egypt's PDPL, and Kenya's DPA before these laws were fully enacted.
National AI Governance Expertise
Patrick was selected as Subject Matter Expert and Resource Person for Ghana's Ethical AI Framework and Data Exchanges Roadmap (Ministry of Communications & Digitalisation, Ghana & UN Global Pulse, March-May 2022). His contributions directly influenced the conceptual foundations of Ghana's national AI governance efforts, including guiding the critical terminology shift from "data markets" to "data exchanges" to better represent non-commercial, humanitarian data sharing models.
Technical Education Background
Before healthcare technology leadership, Patrick taught web development and Java programming in Ghana for seven years, developing deep expertise in making complex technical and regulatory concepts accessible.
Current Operations & Focus
Patrick currently operates AI-powered healthcare platforms (DiabetesCare.Today, MyClinicsOnline, BlackSkinAcne.com) across Ghana, Nigeria, and South Africa. Through AI Security Info and the AI Cybersecurity & Compliance Hub, he shares practical compliance insights from operating real-world AI systems under multiple regulatory frameworks.
Professional Certifications & Education:
Executive & Operational Experience: