AI security risks in African healthcare — digital threat landscape
African healthcare organisations face 3,575 cyberattacks per week in 2025. AI is expanding both the opportunity and the attack surface.

Executive Summary

African healthcare organisations face 3,575 cyberattacks per week in 2025 — a 38% year-on-year surge. AI is making health systems smarter, but it’s also creating new attack surfaces that most security teams aren’t prepared for.

This article covers the seven most critical AI security risks facing African healthcare, draws on real 2025 attack incidents across South Africa, Kenya, and Morocco, and provides a practical four-layer security framework you can start implementing today.

Time to read: 15 minutes  |  Audience: CISOs, CTOs, Healthcare Compliance Officers, Board-Level Executives

The Silent Emergency in African Healthcare

I spent three years as CTO of CarePoint — one of Africa’s largest health networks, operating across Ghana, Nigeria, Kenya, and Egypt. We were protecting over 25 million patient records across four different regulatory frameworks, none of which were designed with AI in mind.

Every time we deployed a diagnostic model or integrated a machine learning tool into our clinical workflows, I asked the same question our vendors couldn’t answer: Who is responsible when this system is compromised?

That question still doesn’t have a clean answer in most of Africa. And the stakes have never been higher.

AI security risks in African healthcare are no longer theoretical. In 2025 alone, Mediclinic Southern Africa suffered a cyber extortion attack compromising sensitive HR data. A ransomware strike on South Africa’s National Health Laboratory Service disrupted blood test processing nationwide, delaying critical care for millions. M-Tiba, Kenya’s CarePay-backed digital health platform, was hit by a significant cyberattack. Lancet Laboratories received a regulatory penalty for failing to notify patients about data breaches under South Africa’s POPIA law.

These aren’t edge cases. They are the new normal — and AI adoption is accelerating the risk on every front.

This guide breaks down the seven specific AI security risks that African healthcare leaders must understand, why Africa faces compounding vulnerabilities that global frameworks don’t address, and what you can do about it.

Why African Healthcare Is a Prime Target

Before examining the specific risks, it’s worth understanding why this sector is so attractive to threat actors.

Health data is the most valuable personal data category on the dark web — worth up to ten times more per record than financial data. Globally, healthcare data breaches cost an average of $7.42 million per incident — the most expensive of any industry for 14 consecutive years, according to research by HIPAA Journal. In Africa, the problem compounds in ways that don’t show up in global statistics.

Four structural factors make African healthcare uniquely exposed:

  • Rapid AI adoption without security foundations. Diagnostic AI tools, triage chatbots, and predictive analytics platforms are being deployed at a pace that outstrips security infrastructure — often by international vendors with no understanding of Africa’s regulatory landscape.
  • Legacy systems sitting alongside cutting-edge AI. Many public hospital networks run on infrastructure from the 1990s, now interfacing with AI-powered electronic health records. That gap — legacy meets AI — is exactly where attackers operate.
  • Fragmented governance across 54 jurisdictions. When I navigated data protection compliance across Ghana, Nigeria, Kenya, and Egypt simultaneously, I was working with four different legal frameworks and two countries with almost no AI-specific guidance at all. That fragmentation creates exploitable grey zones.
  • Underfunded, understaffed security teams. Africa faces a projected shortage of 4.3 million doctors. The security skills gap is equally severe — and gets far less attention. Most African hospital IT teams have no dedicated cybersecurity personnel at all.

10% of Africa's GDP is lost to cyberattacks annually, according to the UN Economic Commission for Africa. Healthcare data sits at the centre of that loss.

The 7 Critical AI Security Risks in African Healthcare

[Image: Africa map with threat indicators over Cairo, Accra, Lagos, Nairobi, and Johannesburg, with key statistics.]
From model poisoning to regulatory grey zones — seven risks every African healthcare CISO must address before deploying AI.

1. AI Model Poisoning and Training Data Attacks

AI model poisoning is an attack where a threat actor deliberately injects corrupted data into a machine learning model’s training pipeline. In a healthcare context, the consequences aren’t just a compromised IT system. They’re wrong diagnoses. Incorrect drug dosages. Misclassified patient risk scores.

Africa faces a specific structural vulnerability here. Most AI models deployed in African healthcare are trained on datasets from Europe, North America, or Asia — not African patient populations. When African health systems fine-tune these models using local data, they’re often working with small, poorly labelled datasets that are far easier to poison without detection.

A 2025 analysis published on Preprints.org confirmed this risk directly, noting that AI models trained on non-African patient data “risk perpetuating biases and inaccuracies when applied in African contexts.” Attackers who understand the fine-tuning process can exploit this to introduce subtle but clinically dangerous errors — errors that may not surface for months.

What to do: Implement strict data provenance controls on every dataset used to train or fine-tune AI models. Externally sourced datasets must go through formal validation before entering your model pipeline. The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach for documenting and managing these controls.
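A minimal sketch of that provenance gate, assuming datasets arrive as files and approved digests live in a reviewed manifest. The function names and manifest shape here are illustrative, not drawn from any specific framework:

```python
import hashlib


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a dataset file in streaming chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def validate_provenance(path: str, manifest: dict) -> bool:
    """Allow a dataset into the training pipeline only if its digest
    matches an entry in the approved provenance manifest."""
    entry = manifest.get(path)
    return entry is not None and entry.get("sha256") == sha256_of_file(path)
```

The point of the gate is not the hash itself but the workflow: no dataset enters fine-tuning until someone has formally reviewed it and recorded its digest, so a silently swapped or poisoned file fails the check.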

2. Patient Data Privacy Breaches in AI Pipelines

AI systems in healthcare are data-hungry. Diagnostic models need imaging data. Predictive tools need longitudinal patient records. Risk stratification engines need claims history, lab results, and prescription data. Every pipeline that moves patient data is a potential breach point — and in Africa, those pipelines cross multiple legal jurisdictions simultaneously.

The data protection landscape in Africa is fragmented by design. Ghana’s Data Protection Act 2012, Kenya’s Data Protection Act 2019, Nigeria’s NDPR (2019), and Egypt’s Personal Data Protection Law No. 151 of 2020 each carry different definitions of sensitive data, different breach notification timelines, and different enforcement postures.

An AI vendor operating across multiple African countries may be fully compliant in one jurisdiction while creating serious legal exposure in another — and not know it. When managing data flows across four countries at CarePoint, we mapped every AI-processed data element against each country’s framework separately. It added weeks to each deployment cycle. But it was the only defensible approach.

What to do: Before deploying any AI system that processes patient data across borders, conduct a cross-jurisdictional data flow audit. Map where data is collected, where it’s processed, where it’s stored, and which law governs each stage. This is the compliance baseline — not the ceiling.
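As an illustration, a cross-jurisdictional flow audit can start as a per-element mapping of collection, processing, and storage locations to governing law, with cross-border transfers flagged for legal review. The country-to-law table below is a sketch covering the four frameworks named above — it is an assumption for illustration, not legal advice:

```python
# Illustrative mapping of country codes to the national frameworks
# discussed in this article. Extend per your operating footprint.
LAWS = {
    "GH": "Ghana Data Protection Act 2012",
    "KE": "Kenya Data Protection Act 2019",
    "NG": "Nigeria NDPR 2019",
    "EG": "Egypt Personal Data Protection Law No. 151 of 2020",
}


def audit_flow(flow: dict) -> dict:
    """Map each stage of one AI data flow (collected/processed/stored)
    to its governing national law and flag cross-border transfers."""
    stages = ("collected", "processed", "stored")
    report = {stage: LAWS[flow[stage]] for stage in stages}
    report["cross_border"] = len({flow[stage] for stage in stages}) > 1
    return report
```

Run once per AI-processed data element; anything flagged `cross_border` goes to counsel before deployment, because two of the three stages now sit under different breach-notification regimes.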

→ Related Reading: Data Privacy and AI in African Healthcare — Your Complete Compliance Guide

3. Ransomware Targeting AI-Integrated Health Systems

Ransomware is no longer just an IT problem in African healthcare. It is a patient safety crisis.

When a ransomware attack encrypts a hospital’s AI-powered diagnostic system, clinicians lose access to the decision-support tools they now depend on — from radiology AI to sepsis prediction engines. The operational damage compounds: the data is locked, the AI system may need full retraining after the incident if attackers accessed model weights, and recovery timelines stretch into weeks.

The 2025 incidents make this concrete. A ransomware strike on South Africa’s National Health Laboratory Service disrupted blood test processing nationwide, delaying critical care for millions of patients. In Kenya, M-Tiba — a digital health platform backed by Safaricom and CarePay — suffered a significant data breach in late 2025. These are not isolated events. According to Intelligent CIO Africa, Ransomware-as-a-Service (RaaS) groups including LockBit and RansomHub have made African healthcare an increasingly active theatre of operations. Nigeria’s private healthcare sector is now described by security researchers as one of the most heavily targeted on the continent.

AI-driven phishing — which is now 4.5 times more effective than traditional phishing according to the Microsoft Digital Defense Report 2025 — is increasingly the entry vector. Once inside, attackers move laterally from unpatched legacy systems straight into AI-integrated clinical platforms.

What to do: Segment AI systems from legacy clinical infrastructure at the network layer. AI models, training pipelines, and inference endpoints must never share network segments with end-of-life operating systems. Test your backup and recovery procedures specifically for AI system components — not just your databases. Deploy phishing-resistant multi-factor authentication for every clinical staff account with AI system access.
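One simple pre-go-live sanity check for the segmentation control: verify on paper that no AI subnet overlaps a legacy clinical subnet. Python's standard `ipaddress` module can express this directly; the subnet values below are hypothetical:

```python
import ipaddress


def segments_isolated(ai_subnets, legacy_subnets) -> bool:
    """Return True only if no AI subnet overlaps a legacy clinical subnet.
    A paper-level sanity check, not a substitute for firewall policy review."""
    ai = [ipaddress.ip_network(s) for s in ai_subnets]
    legacy = [ipaddress.ip_network(s) for s in legacy_subnets]
    return not any(a.overlaps(l) for a in ai for l in legacy)
```

A check like this belongs in the deployment checklist so that an AI inference endpoint quietly provisioned inside a legacy VLAN gets caught before go-live rather than during incident response.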

→ Related Reading: AI Cybersecurity Fundamentals — A Practitioner’s Guide for African Organisations

4. Algorithmic Bias as a Security and Compliance Risk

Algorithmic bias isn’t just an ethics problem. In a regulated healthcare environment, it’s a compliance liability, a legal risk, and — depending on the patient outcome — potentially a harm event that triggers regulatory action.

African healthcare faces a specific, well-documented version of this problem. Virtually all commercially available AI diagnostic tools were developed using datasets that skew heavily toward Caucasian, Western patient populations. A 2025 analysis in Preprints.org cited dermatological AI tools that systematically underperform on darker-skinned patients because they were trained on lighter-skinned datasets. The same bias patterns appear in cardiac imaging AI, sepsis prediction models, and maternal health risk tools.

When you deploy a biased AI tool in an African healthcare setting — even with the best intentions — you’re not just delivering suboptimal care. You may be violating the African Union’s Continental Strategy for Artificial Intelligence (2024–2030), adopted in early 2025, which explicitly addresses fairness, accountability, and non-discrimination in AI deployment across member states.

(This is where most CISOs miss the point entirely. They treat bias as someone else’s problem — the vendor’s, the data scientist’s, the ethics committee’s. But when the regulatory audit arrives following a discriminatory AI outcome, it’s the CISO and compliance officer who face scrutiny.)

What to do: Before deploying any AI diagnostic or clinical decision tool, require the vendor to provide disaggregated performance metrics broken down by race, age, sex, and socioeconomic group — validated on African patient data. If they can’t provide this, that is your answer. Don’t deploy.
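Computing those disaggregated metrics yourself is straightforward once the vendor hands over labelled predictions. A minimal sketch — accuracy only; a real validation would also cover sensitivity, specificity, and calibration per group:

```python
from collections import defaultdict


def disaggregated_accuracy(records):
    """Accuracy broken down by demographic group.

    `records` is an iterable of (group, y_true, y_pred) tuples.
    A large gap between groups is a red flag for biased performance.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {group: hits[group] / totals[group] for group in totals}
```

If the vendor cannot supply the `records` needed to run even this much on African patient data, that absence is itself the assessment result.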

→ Related Reading: AI Risk Management in African Healthcare — Frameworks That Actually Work

5. Third-Party AI Vendor Risk and Supply Chain Attacks

Most AI tools deployed in African healthcare come from outside Africa. That’s the current market reality. It creates a supply chain risk profile that the vast majority of African health organisations have never formally mapped.

A typical AI vendor integration gives the vendor’s model access to your clinical system, your patient data, and potentially your network telemetry. How many African health organisations have audited what data those AI tools are sending back to external servers? How many have reviewed vendor security certifications? How many have contractual audit rights?

The scale of what’s possible when a single vendor is compromised is not hypothetical. The Change Healthcare ransomware attack in the United States in 2024 — a single vendor compromise — cascaded across hundreds of health systems and affected 190 million Americans. Africa’s healthcare supply chain carries structurally identical vulnerabilities without an equivalent regulatory safety net.

In 2025, Pharmacie.ma in Morocco became a visible example — an alleged customer database leak via a third-party integration exposed patient information without the organisation directly being “hacked” in the traditional sense. This is supply chain risk made real.

What to do: Implement a formal AI Vendor Risk Assessment process before any procurement approval. At minimum, assess: (1) data residency — where is patient data processed? (2) security certifications — ISO 27001 or SOC 2 Type II? (3) breach notification obligations in the contract; (4) your contractual right to audit; and (5) an independent penetration test of the API integration layer before go-live for any high-risk system.
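That checklist translates naturally into a procurement gate. A sketch with illustrative criterion names — the design choice worth copying is that any single unmet criterion blocks approval, rather than feeding a weighted score a vendor can argue around:

```python
# Illustrative criterion identifiers mirroring the five-point
# assessment above; adapt names to your procurement tooling.
REQUIRED_CRITERIA = (
    "data_residency_documented",
    "iso27001_or_soc2",
    "contractual_breach_notification",
    "contractual_audit_rights",
    "independent_pentest_of_api",
)


def vendor_approved(assessment: dict):
    """Return (approved, gaps). Every criterion must be affirmatively
    evidenced; any gap blocks procurement approval."""
    gaps = [c for c in REQUIRED_CRITERIA if not assessment.get(c)]
    return (len(gaps) == 0, gaps)
```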

→ Related Reading: Enterprise AI GRC — Vendor Risk Management for African Health Organisations

[Image: AI vendor risk assessment checklist for African healthcare organisations.]
Five non-negotiable questions every African healthcare CISO must answer before signing an AI vendor contract.

6. Infrastructure Vulnerabilities Amplified by AI Adoption

AI doesn’t just inherit your existing infrastructure vulnerabilities — it amplifies them. And Africa’s healthcare infrastructure baseline creates compounding exposure.

Less than 30% of health facilities across Africa have access to reliable electricity, according to research published by PMC/NIH. This means systems are frequently shut down and restarted, patches are delayed because downtime has clinical consequences, and power gap windows leave systems in unprotected states. AI systems that depend on consistent connectivity and power stability are particularly fragile in this environment.

When AI is layered on top of weak infrastructure, it creates new entry points that didn’t exist before: the AI API endpoint, the data pipeline feeding the model, the inference server returning results. Each is a new attack vector — and adversaries targeting African healthcare know this.

Mobile health platforms compound the risk further. With smartphone penetration accelerating across the continent, AI-powered health apps are reaching patients on personal devices — shared, unpatched, running on unsecured mobile networks. Sensitive health data flowing through consumer apps on shared devices is a security exposure that no African regulator has yet fully addressed.

What to do: Conduct an infrastructure security baseline before any AI deployment. Three controls must be in place before go-live: (1) network segmentation between AI systems and clinical/legacy systems, (2) encryption in transit and at rest for all patient data processed by AI, and (3) phishing-resistant multi-factor authentication for all AI system users. These aren’t advanced controls — they’re the minimum acceptable standard.

→ Related Reading: AI Security Operations — Building a Detection and Response Programme for African Healthcare

7. Regulatory Grey Zones and Accountability Gaps

Who is legally responsible when an AI system in an African hospital makes a wrong clinical recommendation that harms a patient?

Right now, in most African jurisdictions: nobody.

That’s not rhetoric. A PMC research review on AI governance in Africa concluded that “there are no enacted laws guiding who takes responsibility for adverse outcomes that might result from the usage of AI in healthcare.” The absence of accountability frameworks creates a market dynamic where vendors face no legal consequences for deploying insecure or biased tools — and health systems accept AI outputs without proper validation, because “the algorithm said so.”

The Lancet Laboratories case in South Africa illustrates what happens when governance frameworks do exist but aren’t followed. Lancet received a regulatory penalty under South Africa’s Protection of Personal Information Act (POPIA) in 2025 for failing to notify patients about a data breach — the first high-profile enforcement action of its kind in African healthcare. It won’t be the last.

The African Union’s Continental Health Data Governance Framework — announced jointly by Africa CDC and AUDA-NEPAD in mid-2025 and expected for AU endorsement in 2026 — will establish shared standards for health data privacy, consent, and cross-border data sharing across member states. That’s meaningful progress. But until those frameworks carry enforcement mechanisms, African healthcare organisations cannot rely on regulation to backstop their AI security posture. They must build it themselves.

What to do: Don’t wait for regulation. Establish an internal AI Governance Policy that covers: (1) AI system validation requirements before clinical deployment, (2) incident response procedures specific to AI system failures, and (3) clear accountability assignments for every AI-related clinical decision. Map your policy to both your national data protection law and the AU AI Strategy — you’ll be ahead of compliance requirements when enforcement arrives.

→ Related Reading: AI Regulatory Compliance in Africa — What Health Leaders Need to Know in 2025

A Practical AI Security Framework for African Healthcare

The seven risks above aren’t theoretical — they’re active. Drawing on my experience implementing security controls across four African health systems, here is the minimum viable AI security framework for a healthcare organisation operating on this continent.

[Image: Four-layer AI security framework for African healthcare organisations.]
Governance → Technical → Operational → Compliance: a practical framework for securing AI systems in African health settings.

The Four-Layer AI Security Framework

Layer 1 — Governance (Before Deployment)

  • Establish an AI Risk Committee with CISO, CMO, and Legal representation
  • Require a security assessment for every AI tool before procurement approval
  • Map all AI data flows against applicable national data protection laws
  • Assign formal accountability for each AI system in production

Layer 2 — Technical Controls (At Deployment)

  • Segment AI systems from clinical and legacy infrastructure at the network layer
  • Encrypt all patient data processed by AI systems — in transit and at rest
  • Implement API security controls for all AI vendor integrations
  • Deploy anomaly detection monitoring on AI inference endpoints

Layer 3 — Operational Security (Post-Deployment)

  • Monitor AI model performance for signs of drift or poisoning — sudden accuracy drops and demographic performance gaps are key indicators
  • Conduct quarterly security reviews for all third-party AI vendors
  • Test AI-specific incident response procedures independently from general IT disaster recovery
  • Train clinical staff on AI security risks, not just AI capabilities
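The drift-monitoring item in Layer 3 can be sketched as a comparison of current per-group accuracy against the validation-time baseline. The thresholds below are illustrative defaults, not clinical standards — tune them to each model's validated tolerances:

```python
def drift_alerts(baseline: dict, current: dict,
                 max_drop: float = 0.05, max_gap: float = 0.10) -> list:
    """Flag the two indicators named above: sudden per-group accuracy
    drops versus the validation baseline, and widening demographic
    performance gaps. Both can signal drift or poisoning."""
    alerts = []
    for group, base_acc in baseline.items():
        drop = base_acc - current.get(group, 0.0)
        if drop > max_drop:
            alerts.append(f"accuracy drop for {group}: {drop:.2f}")
    gap = max(current.values()) - min(current.values())
    if gap > max_gap:
        alerts.append(f"demographic performance gap: {gap:.2f}")
    return alerts
```

Wiring a check like this into routine reporting turns "monitor for poisoning" from a policy sentence into an operational control with a defined alert path.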

Layer 4 — Compliance Monitoring (Ongoing)

  • Track regulatory developments across every jurisdiction you operate in
  • Benchmark your AI governance policy against the AU AI Strategy (2024–2030) annually
  • Prepare for the Continental Health Data Governance Framework before enforcement begins
  • Document every AI procurement decision and security assessment for audit readiness

→ Build this framework with confidence: AI Security & Compliance Foundation Training Programme — designed for African health and security professionals.

What the Data Tells Us — And What It Doesn’t

One important caveat: Africa’s cybersecurity incident data is significantly underreported. Hospitals and healthcare facilities rarely disclose breaches publicly — a pattern confirmed by multiple security researchers and referenced directly in Intelligent CISO’s March 2026 analysis. The statistics available almost certainly understate the true frequency of AI-related security incidents across the continent.

This is itself a governance risk. When there’s no mandatory incident disclosure, there’s no shared learning. Health organisations across Africa are independently discovering the same vulnerabilities, paying the same remediation costs, and losing the same patient trust — when a coordinated continental incident-sharing framework could prevent much of it.

The World Economic Forum’s Global Cybersecurity Outlook 2025 found that 66% of organisations expect AI to significantly impact cybersecurity — but only 37% have formal processes to assess AI tool security before deployment. In African healthcare, that second number is almost certainly lower.

Key Takeaways for Executives

  • African healthcare faces 3,575 cyberattacks per week — a 38% year-on-year increase
  • Real incidents in 2025 hit Mediclinic (SA), National Health Laboratory Service (SA), M-Tiba (Kenya), Lancet Laboratories (SA), and Pharmacie.ma (Morocco)
  • AI model poisoning, ransomware, and algorithmic bias are the three most underestimated risk categories in African healthcare today
  • The AU Continental AI Strategy (2024–2030) and the forthcoming Health Data Governance Framework will create compliance obligations — organisations that build governance now will be ahead
  • Africa loses an estimated 10% of GDP to cyberattacks annually — healthcare data sits at the centre of that exposure
  • AI-driven phishing is 4.5x more effective than traditional phishing — staff awareness is now a frontline defence

Frequently Asked Questions

These are the questions healthcare leaders and security professionals across Africa are actively searching for answers to. Each answer is structured for both human readers and AI answer engines.

What are the biggest AI security risks in African healthcare?

The seven biggest AI security risks in African healthcare are: AI model poisoning and training data attacks; patient data privacy breaches in AI pipelines; ransomware targeting AI-integrated health systems; algorithmic bias creating both clinical harm and compliance liability; third-party AI vendor supply chain attacks; infrastructure vulnerabilities amplified by AI adoption; and regulatory grey zones where no clear accountability framework exists. Each risk is compounded by Africa’s fragmented governance landscape and the pace of AI adoption outstripping security investment.

How many cyberattacks does African healthcare face per week?

African healthcare organisations faced an average of 3,575 cyberattacks per week in 2025 — a 38% surge from the previous year. This figure is cited by Microsoft and reported across multiple cybersecurity publications tracking the African threat landscape. Ransomware dominates the threat profile, with RaaS groups including LockBit and RansomHub actively targeting the sector.

Which African countries have been hit by healthcare cyberattacks?

Confirmed healthcare cyberattack incidents in Africa in 2025 include: Mediclinic Southern Africa (May 2025 — cyber extortion, sensitive HR data compromised); South Africa’s National Health Laboratory Service (ransomware attack disrupting nationwide blood test processing); Lancet Laboratories, South Africa (regulatory penalty under POPIA for failure to notify patients about a data breach); M-Tiba, Kenya (late 2025 — cyberattack and data breach on the CarePay/Safaricom-backed digital health platform); and Pharmacie.ma, Morocco (alleged customer database leak involving unauthorised data export). These are only the reported incidents — the majority of breaches go undisclosed.

What is AI model poisoning and how does it affect healthcare?

AI model poisoning is a cyberattack in which a threat actor deliberately injects corrupted or manipulated data into a machine learning model’s training pipeline. In healthcare, the impact isn’t a downed server — it’s wrong diagnoses, incorrect drug dosages, and false patient risk scores. African healthcare systems are particularly exposed because most deployed AI tools are fine-tuned on small, poorly labelled local datasets that are easier for an attacker to corrupt without triggering detection alerts. Detection requires continuous monitoring of model performance metrics, particularly watching for sudden accuracy drops or demographic performance gaps that weren’t present during initial validation.

What data protection laws govern healthcare AI in Africa?

No single pan-African law specifically governs AI security in healthcare. Organisations must comply with their national data protection frameworks: Ghana’s Data Protection Act 2012; Kenya’s Data Protection Act 2019; Nigeria’s NDPR (2019); and Egypt’s Personal Data Protection Law No. 151 of 2020. At the continental level, the African Union’s Continental AI Strategy (2024–2030) — adopted in early 2025 — provides a governance framework covering responsible AI deployment. The forthcoming Continental Health Data Governance Framework, announced by Africa CDC and AUDA-NEPAD in mid-2025, is expected to establish binding health data standards pending AU endorsement in 2026.

How can African hospitals protect AI systems from ransomware?

The five most effective defences African hospitals can implement against ransomware targeting AI systems are: (1) Network segmentation — isolate AI systems from legacy clinical infrastructure; (2) Encryption — all patient data processed by AI must be encrypted at rest and in transit; (3) Phishing-resistant MFA — the Microsoft Digital Defense Report 2025 confirms that valid account compromise is the most common entry vector; (4) AI-specific backup and recovery — tested separately from general IT DR procedures; (5) Regular penetration testing of AI API endpoints and data pipelines before and after go-live.

What is algorithmic bias and why does it matter for African healthcare?

Algorithmic bias in healthcare occurs when an AI system produces systematically skewed outputs for certain patient groups because the model was trained on data that doesn’t represent those patients. In Africa, most commercial AI diagnostic tools were built on datasets from Western, Caucasian patient populations. When applied to African patients, these tools may produce incorrect clinical recommendations — particularly in dermatology, cardiac imaging, and maternal health risk assessment. Under the AU’s Continental AI Strategy (2024–2030), deploying biased AI tools may constitute a governance violation, making algorithmic bias both a clinical risk and a compliance liability for any African healthcare organisation.

How should African healthcare CISOs evaluate third-party AI vendors?

Before approving any third-party AI vendor, African healthcare CISOs should assess five areas: (1) Data residency — where is patient data processed and stored? (2) Security certifications — does the vendor hold ISO 27001 or SOC 2 Type II? (3) Breach notification obligations — is the vendor contractually required to notify you within the timeframe mandated by your jurisdiction’s data protection law? (4) Audit rights — do you have the contractual right to inspect their security controls? (5) Independent penetration testing — has the API integration layer been independently tested before go-live? If a vendor cannot satisfy all five criteria, that’s the answer.

Is there an African Union framework for AI security in healthcare?

Yes. The African Union adopted its Continental Strategy for Artificial Intelligence (2024–2030) in early 2025, providing a roadmap for responsible AI deployment across sectors including healthcare. In mid-2025, Africa CDC and AUDA-NEPAD jointly announced plans for a Continental Health Data Governance Framework to harmonise health data governance — covering privacy, consent, data ownership, and cross-border sharing — across AU member states. AU endorsement is expected in 2026. These frameworks currently operate as strategic guidance rather than enforceable law, but organisations that align their AI governance policies with them now will be ahead of compliance requirements when enforcement mechanisms follow.

What is Africa losing financially to cyberattacks?

Africa loses an estimated 10% of its GDP to cyberattacks annually, according to the UN Economic Commission for Africa. In healthcare specifically, the global average cost of a healthcare data breach is $7.42 million per incident — the most expensive of any industry for 14 consecutive years (HIPAA Journal). African healthcare organisations face compounded exposure because most operate without mandatory breach disclosure requirements, meaning the true financial and operational cost of cyberattacks in the sector is significantly underreported in available statistics.

The Bottom Line

AI is not optional for African healthcare. The workforce shortages, geographic access barriers, and disease burden across the continent make AI adoption not just attractive but necessary. But AI deployed without security is a liability, not an asset.

The AI security risks in African healthcare are real, specific, and growing. Model poisoning. Patient data breaches across fragmented regulatory jurisdictions. Ransomware disrupting AI-integrated clinical systems. Algorithmic bias creating both clinical harm and regulatory exposure. Vendor supply chains with no security accountability. Infrastructure gaps that amplify every vulnerability. And accountability frameworks that, right now, protect no one.

You don’t have to solve all of this at once. But you do have to start — because the incidents happening right now in South Africa, Kenya, and Morocco are not warnings. They’re previews.

Build your governance layer first. Know what AI systems you have, what patient data they touch, and who is accountable when something goes wrong. Then work outward. The organisations that build security into their AI strategy before a major incident forces the conversation will be the ones that earn and keep patient trust as Africa’s digital health ecosystem matures.

Ready to build that security posture systematically? The AI Security & Compliance Foundation Training Programme at AI Security Info gives you a structured path from AI security fundamentals to implementation-ready frameworks — built specifically for Africa’s regulatory and operational context.

→ Explore more Industry-Specific AI Security resources for African healthcare, finance, and public sector organisations.


About the Author

Patrick Dasoberi

CISA  ·  CDPSE  ·  AI/ML Security Engineer  ·  RAG Applications Specialist

Patrick Dasoberi is the founder of AI Security Info — Africa’s leading platform for AI security, governance, and compliance. As former CTO of CarePoint (African Health Holding), he led the security of 25M+ patient records across Ghana, Nigeria, Kenya, and Egypt. A specialist in AI/ML security engineering and RAG application architectures, he is a contributor to Ghana’s Ethical AI Framework and holds an MSc in Information Technology from the University of the West of England.