Managing AI Risk in Production: Lessons from Operating Healthcare Systems Across Four Countries
Most AI risk management frameworks look impressive on paper. Comprehensive matrices. Sophisticated taxonomies. Multi-layered governance structures.
Then you try to deploy an AI diagnostic tool across Ghana, Nigeria, Kenya, and Egypt simultaneously—each with different data protection laws, varying infrastructure reliability, inconsistent data quality, and populations underrepresented in your training data.
Reality hits hard.
As CTO of CarePoint, I was responsible for healthcare systems operating across four African countries. I didn't have the luxury of theoretical risk management. I had to manage actual AI risks affecting real patients, navigate four different regulatory regimes, and build systems that worked despite infrastructure challenges that Western frameworks never anticipated.
Here's what I learned: AI risk management isn't about perfect frameworks. It's about identifying what can actually go wrong in your specific context, implementing controls that work with your constraints, and monitoring continuously because AI systems don't stay static.
This pillar is about real AI risk management—the kind you need when you're actually responsible for AI systems in production, not just presenting to a board.
I hold the CISA certification. I understand traditional IT risk management. But AI introduces fundamentally different risk categories that caught me off guard as CTO:
1. Systems learn behaviours from data
2. Failures can be probabilistic and subtle
3. Testing can't cover all possibilities
4. Risks evolve continuously (model drift, distribution shift)



Through operating AI healthcare systems across four countries, I encountered AI risks that fell into four main categories:
1. Data Quality & Fragmentation Risk
The Problem:
Health data was inconsistent across facilities and countries. Poor data quality directly impacted model performance, especially for diagnostic-support models.
What This Looked Like:
The Impact:
Data quality issues weren't just technical problems—they directly affected model accuracy, which affected clinical decisions.
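To make this concrete: a lightweight validation gate that runs before records reach a model is one way to surface these issues early. Here's a minimal sketch in Python; the field names and the plausible range are hypothetical, not our production schema:

```python
# Minimal data-quality gate for clinical records.
# Field names and the plausible range are hypothetical.
REQUIRED_FIELDS = {"patient_id", "facility_id", "hba1c", "recorded_at"}
HBA1C_RANGE = (3.0, 20.0)  # plausible clinical range, in percent

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record passes."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    hba1c = record.get("hba1c")
    if hba1c is not None and not HBA1C_RANGE[0] <= hba1c <= HBA1C_RANGE[1]:
        issues.append(f"hba1c out of plausible range: {hba1c}")
    return issues

# A unit inconsistency between facilities shows up as an out-of-range value:
print(validate_record({"patient_id": "P2", "facility_id": "NG-04",
                       "hba1c": 72.0, "recorded_at": "2024-03-01"}))
```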
How I Managed It:
2. Regulatory & Compliance Risk
The Problem:
Each country had different data protection laws, creating compliance complexity for AI systems that used shared or aggregated data.
What This Meant:
A single AI model couldn't share identical data pipelines across all markets. Compliance in Nigeria looked different from compliance in Egypt.
Real Incident:
During a cross-border data review for Egypt, we discovered a regulatory compliance gap in how we were processing certain clinical metadata. It required re-engineering our entire data processing pipeline for that market.
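A pattern that helps here is making each market's data-handling rules explicit and machine-readable rather than burying them in pipeline code. A minimal sketch of that idea; the per-country rules shown are illustrative placeholders, not legal guidance:

```python
# Per-country data-handling policy, resolved before any record enters a
# shared pipeline. The specific rules below are illustrative placeholders,
# not legal guidance for these jurisdictions.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPolicy:
    residency_region: str     # where raw data must be processed and stored
    allow_cross_border: bool  # may de-identified data leave the country?
    strip_metadata: tuple     # clinical metadata fields that must be removed

POLICIES = {
    "GH": DataPolicy("af-west",  True,  ("device_serial",)),
    "NG": DataPolicy("af-west",  False, ("device_serial", "facility_gps")),
    "KE": DataPolicy("af-east",  False, ("device_serial",)),
    "EG": DataPolicy("af-north", False, ("device_serial", "facility_gps", "clinician_id")),
}

def prepare_record(record: dict, country: str) -> dict:
    """Apply the country's policy to one record before it joins a training set."""
    policy = POLICIES[country]
    cleaned = {k: v for k, v in record.items() if k not in policy.strip_metadata}
    cleaned["_process_in_region"] = policy.residency_region
    return cleaned
```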
How I Managed It:
3. Infrastructure Reliability Risk
The Problem:
Latency, downtime, and variable compute availability affected AI model deployment—especially clinical decision support tools.
Country-Specific Challenges:
Real Incident:
We underestimated third-party API dependency risks. When one cloud provider had an outage, it cascaded into delayed clinical reporting across multiple facilities.
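One concrete mitigation for this failure mode is building provider fallback and retries into every external inference call, so a single outage degrades gracefully instead of blocking clinical reporting. A minimal sketch, with hypothetical endpoints:

```python
# Retry-with-fallback for a third-party inference API, so one provider
# outage degrades gracefully instead of cascading into blocked reports.
# The endpoint URLs are hypothetical.
import time
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://primary-provider.example/v1/score",
    "https://fallback-provider.example/v1/score",
]

def call_with_fallback(payload: bytes, timeout: float = 3.0, retries: int = 2) -> bytes:
    last_error = None
    for url in ENDPOINTS:
        for attempt in range(retries):
            try:
                req = urllib.request.Request(
                    url, data=payload, headers={"Content-Type": "application/json"})
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, TimeoutError) as err:
                last_error = err
                time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError(f"All providers failed; last error: {last_error}")
```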
How I Managed It:
4. Security & Privacy Risk
The Problem:
Sensitive healthcare data processed by AI increased exposure to privacy violations, unauthorised access, and model inversion risk.
Specific Concerns:
Real Incident:
A vendor-provided de-identification engine failed to remove certain quasi-identifiers from a dataset we were preparing for model training. We caught it during our internal review, but it highlighted that vendor promises don't equal vendor performance.
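That internal review was, in essence, an independent re-identification check run on the vendor's output. One simple version of such a check is a basic k-anonymity test over quasi-identifier columns; a sketch with illustrative column names:

```python
# Independent check on vendor de-identification output: verify that no
# combination of quasi-identifiers is shared by fewer than k patients
# (a basic k-anonymity test). Column names are illustrative.
from collections import Counter

QUASI_IDENTIFIERS = ("birth_year", "sex", "district")

def k_anonymity_violations(rows, k=5):
    counts = Counter(tuple(row[q] for q in QUASI_IDENTIFIERS) for row in rows)
    return [combo for combo, n in counts.items() if n < k]

rows = [
    {"birth_year": 1988, "sex": "F", "district": "Accra"},
    {"birth_year": 1988, "sex": "F", "district": "Accra"},
    {"birth_year": 1951, "sex": "M", "district": "Tamale"},  # unique, re-identifiable
]
print(k_anonymity_violations(rows, k=2))  # -> [(1951, 'M', 'Tamale')]
```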
How I Managed It:
I expected traditional security risks. I planned for compliance complexity. What I was less prepared for were the bias and fairness risks of deploying AI models across diverse African populations.
Real Example: Skin Condition Classifier
We tested a dermatology AI model for BlackSkinAcne.com. The model performed significantly worse for darker skin tones.
Why? The training dataset was Eurocentric—predominantly lighter skin tones. The model had learned patterns that didn't generalize to African populations.
This wasn't just a technical failure. It was an equity and patient safety issue.
Real Example: Diabetes Risk Prediction
A diabetes risk prediction model overestimated risk in one region due to data skew between urban and rural populations. The model had learned patterns from predominantly urban patients and wasn't calibrated for rural health indicators.
How I Managed It:
1. Dataset Representation Audits
2. Fairness Metrics During Evaluation (see the sketch after this list)
3. Post-Deployment Bias Monitoring
4. Retraining and Recalibration
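For item 2, the core idea is that aggregate accuracy hides subgroup failures, so metrics must be computed per group. A minimal sketch of per-group recall (sensitivity), with illustrative group labels and toy data:

```python
# Per-group evaluation: compute recall (sensitivity) for each skin-tone
# group and surface gaps that aggregate accuracy would hide.
# Group labels and the toy data are illustrative.
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    tp, fn = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            (tp if pred == 1 else fn)[group] += 1
    return {g: tp[g] / (tp[g] + fn[g])
            for g in set(groups) if tp[g] + fn[g] > 0}

y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 1]
groups = ["lighter", "lighter", "lighter", "darker", "darker", "darker", "darker", "lighter"]

print(recall_by_group(y_true, y_pred, groups))
# e.g. {'lighter': 1.0, 'darker': 0.33...}: a gap this large is an equity
# and patient-safety signal, not just a model-quality metric.
```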

After managing AI risk across four countries, I developed a hybrid approach combining established frameworks with custom extensions for healthcare AI in African markets:
NIST AI Risk Management Framework
Not all AI risks are equal. I use five factors to prioritize:
This produces a High/Medium/Low classification with documented control requirements for each risk level.
Example:
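As a hedged illustration of the classification logic, here's a small scoring sketch; the five factor names below are placeholders I've chosen for this sketch, not the exact factors:

```python
# Illustrative High/Medium/Low tiering. The five factor names here are
# placeholders chosen for this sketch, not the exact factors used.
FACTORS = ("patient_safety_impact", "data_sensitivity", "model_autonomy",
           "population_coverage", "regulatory_exposure")

def risk_tier(scores: dict) -> str:
    """Each factor is scored 1 (low) to 5 (high)."""
    total = sum(scores[f] for f in FACTORS)
    if total >= 18 or scores["patient_safety_impact"] == 5:
        return "High"    # e.g. mandatory human oversight and clinical review
    if total >= 11:
        return "Medium"  # standard controls plus enhanced monitoring
    return "Low"         # baseline controls

print(risk_tier({"patient_safety_impact": 4, "data_sensitivity": 5,
                 "model_autonomy": 2, "population_coverage": 4,
                 "regulatory_exposure": 4}))  # -> High
```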

The biggest lesson from managing production AI systems: Risk assessment isn't a one-time activity.
AI systems change even when code doesn't:
Real Example:
We implemented automated drift detection for our AI models in our SIEM application, which alerted us when our diabetes risk model's prediction distribution shifted in one region. The investigation revealed the urban/rural data skew I mentioned earlier. Without continuous monitoring, we wouldn't have caught it until clinical outcomes showed the problem—too late.
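If you want to implement something similar, a minimal drift check doesn't require a vendor platform. Here's a sketch of the Population Stability Index (PSI) computed over prediction score distributions; the thresholds in the final comment are common rules of thumb, not hard standards:

```python
# Population Stability Index (PSI) between a baseline prediction
# distribution and a recent window. Scores are assumed to lie in [0, 1].
import math

def psi(baseline, recent, bins=10):
    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[min(int(v * bins), bins - 1)] += 1
        # floor at a tiny value so the log term is always defined
        return [max(c / len(values), 1e-6) for c in counts]
    b, r = proportions(baseline), proportions(recent)
    return sum((rp - bp) * math.log(rp / bp) for bp, rp in zip(b, r))

# Rules of thumb: PSI < 0.1 stable; 0.1-0.25 investigate; > 0.25 significant
# shift worth an alert, tracked per region and per model version.
```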
As CTO, I had to enable innovation while ensuring patient safety and compliance. The tension is real.
Move too slowly, and you can't compete or deliver value. Move too fast, and you deploy unsafe systems or violate regulations.

Stage 1: Technical Validation
1. Model performance validation
2. Stress testing
3. Security testing
Stage 2: Clinical Validation
1. Clinical expert review
2. Real-world testing with human oversight
3. Safety threshold validation
Stage 3: Security Review
1. Threat modeling
2. Penetration testing
3. Access control validation
Stage 4: Regulatory & Compliance Review
1. Country-specific compliance review
2. Documentation requirements
3. Approval processes
This enables innovation without compromising patient safety or compliance.
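To make the sequencing concrete, here's a minimal sketch of a gate runner: a model version must clear every stage in order before release. The stage and check names mirror the lists above; the results keys are hypothetical:

```python
# Staged release gates: a model version must clear every stage, in order,
# before it can ship. Stage and check names mirror the lists above;
# the results dict keys are hypothetical.
STAGES = [
    ("technical_validation", ["performance_validation", "stress_testing", "security_testing"]),
    ("clinical_validation",  ["expert_review", "supervised_real_world_testing", "safety_thresholds"]),
    ("security_review",      ["threat_modeling", "penetration_testing", "access_control_validation"]),
    ("compliance_review",    ["country_compliance", "documentation", "approvals"]),
]

def release_allowed(results: dict) -> bool:
    """results maps '<stage>.<check>' to True/False as each review completes."""
    for stage, checks in STAGES:
        failed = [c for c in checks if not results.get(f"{stage}.{c}", False)]
        if failed:
            print(f"Blocked at {stage}: incomplete or failed checks: {failed}")
            return False
    return True
```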
Through reviewing AI systems and evaluating vendor solutions, I see risks that vendors consistently underestimate or ignore:
1. Population Representativeness Risk
The Problem:
Vendors train models on datasets that don't represent African populations, then sell these models to African healthcare providers.
The Risk:
Poor performance on darker skin tones, different disease presentations, and different health indicators.
What to Ask Vendors:
2. Infrastructure Assumption Risk
The Problem:
Vendors assume reliable high-speed internet, consistent power, and abundant compute resources.
The Risk:
Solutions that work in Silicon Valley fail in African healthcare facilities with intermittent connectivity.
What to Ask Vendors:
3. Regulatory Localization Risk
The Problem:
Vendors build for GDPR or HIPAA without understanding local African regulations.
The Risk:
Compliance gaps, inability to meet local data residency requirements, unclear legal responsibilities.
What to Ask Vendors:
AI risk profiles vary dramatically based on use case:
Required Controls:
Risk Profile:
Required Controls:
Risk Profile:
Required Controls:
For Organizations New to AI Risk Management
Recommended Path:
→ AI Risk Management (this pillar)
→ Data Privacy & AI with country-specific focus
→ AI Regulatory Compliance for African markets
I'm building comprehensive knowledge at the intersection of AI technology, risk management, and regulatory compliance.
What's Available Now:
I'm committed to quality over speed. Each article is grounded in practical experience and includes real-world examples, implementation guidance, and lessons learned.
AI Risk Management connects to all other pillars:
AI risk management isn't optional. If you're building, deploying, or operating AI systems, you're managing AI risk whether you realise it or not.
The question is: Are you managing it well, or are you waiting for an incident to expose gaps?
The professionals and organizations that thrive will be those who:
AI risk management isn't about achieving zero risk—that's impossible. It's about understanding your risks, implementing appropriate controls, and making informed decisions about acceptable risk levels.
Ready to build proper AI risk management capabilities?
Browse the comprehensive topic areas below, or start with the recommended learning path for your role.

Patrick Dasoberi brings executive healthcare technology leadership, technical depth, and hands-on teaching experience to AI risk management education.
Executive Healthcare Technology Leadership
Until recently, Patrick served as Chief Technology Officer of CarePoint (formerly African Health Holding), where he was responsible for healthcare systems operating across four countries: Ghana, Nigeria, Kenya, and Egypt. In this role, he managed AI risk across multiple regulatory jurisdictions, navigated infrastructure constraints, addressed model bias for African populations, and made strategic decisions affecting real patient care.
He dealt with actual AI risk incidents, including model drift, vendor control failures, regulatory compliance gaps, and infrastructure dependency cascades—bringing practical, battle-tested insights to AI risk management education.
Technical Education Background
Before moving into healthcare technology leadership, Patrick taught web development and Java programming in Ghana for seven years. This extensive teaching experience shapes his approach to content creation—he knows how to break down complex risk management concepts into understandable, actionable knowledge.
Current Operations & Focus
Patrick currently operates AI-powered healthcare platforms, including DiabetesCare. Today, MyClinicsOnline and BlackSkinAcne.com operate across Ghana, Nigeria, and South Africa. Through AI Security Info and the AI Cybersecurity & Compliance Hub, he shares practical insights from building, securing, and managing AI risk in real-world systems.
His unique expertise combines AI technology understanding, risk management frameworks (NIST AI RMF, ISO 31000, ISO 27001), and regulatory compliance across multiple African jurisdictions—bringing an executive-level perspective to AI risk management education.
Professional Certifications & Education:
Executive & Operational Experience:
Below you'll find the major topic areas within AI risk management. Content is being developed systematically based on real-world experience managing AI risk in production environments.
Understanding what makes AI risk different from traditional IT risk, core risk categories, and why AI systems require specialised risk management approaches.
Key topics: AI risk taxonomy, risk vs. traditional IT risk, risk assessment basics, governance foundations
Skill level: Beginner
Content status: Building comprehensive coverage
Methodologies for assessing AI-specific risks, including model risks, data risks, deployment risks, and operational risks.
Key topics: Risk identification, risk analysis, likelihood and impact assessment, risk matrices
Skill level: Intermediate
Content status: Coming soon
Practical approaches to mitigating identified risks through technical controls, process controls, and governance mechanisms.
Key topics: Control selection, defence in depth, compensating controls, control effectiveness
Skill level: Intermediate
Content status: Coming soon
Approaches to continuous monitoring including model drift detection, bias monitoring, security monitoring, and compliance monitoring.
Key topics: Drift detection, performance monitoring, fairness audits, security monitoring
Skill level: Advanced
Content status: Coming soon
Strategies for managing AI risk when operating across multiple countries with different regulations, especially in African markets.
Key topics: Regulatory mapping, country-specific controls, cross-border compliance, data residency
Skill level: Advanced
Content status: Coming soon
Managing risks unique to healthcare AI, including patient safety, clinical validation, HIPAA compliance, data protection regulations (country-specific), and medical device regulations.
Key topics: Clinical risk, patient safety, medical AI regulations, healthcare data privacy
Skill level: Advanced
Content status: Coming soon
Start exploring topic areas based on your needs, or follow the recommended learning paths above.
Content updated: November 2025
Pillar 2 of 7 in the AI Security Info comprehensive framework