By Raymond Amumpaire
Artificial Intelligence (AI), the use of computers to classify, analyse, and draw predictions from data sets by applying sets of rules called algorithms, is fundamentally transforming human life. It drives innovation, efficiency, and productivity, while enhancing decision-making, reducing human error, cutting costs for providers, and improving health outcomes. In this regard, the World Health Organization (WHO) posits that AI presents great potential, especially for health services, medical care interventions, and research and development.
However, AI has also raised serious concerns, including privacy and security issues, bias, authoritative-sounding errors, misinformation, job market disruption, and intellectual property infringement. World-renowned scholars warn that AI could pose grave risks to society, and the WHO reaffirms the harm that AI could inflict on millions if human rights-oriented, ethical AI governance, particularly within the health sector, is absent.
On the African continent, our AI watershed moment has been characterised by a disruptive interplay in which sporadic regulatory frameworks struggle to keep pace with non-linear technology adoption, resulting in numerous enforcement challenges. This is compounded by the fragmentation of AI and related laws across the continent, particularly in the e-health sector.
This piece examines how AI policy developments are addressing emerging e-health rights concerns across the continent, from privacy to gender. It proposes some recommendations to shape Africa’s digital future towards the realisation of digital rights for all.
AI Governance
AI governance is an umbrella term for the processes, standards, and guardrails that help ensure AI systems and tools are safe and ethical to use. AI governance frameworks guide AI research, development, and application to ensure safety, fairness, and respect for human rights; they incorporate oversight mechanisms to address risks such as bias, privacy infringement, and misuse, while fostering innovation and building trust.
e-Health
The WHO defines e-Health as “the use of information and communications technology in support of health and health-related fields”. It has been shaped by several major areas of disruption: telemedicine, smartphone apps, wearable sensors, and the use of AI in healthcare. The latter encompasses the use of machine learning algorithms and cognitive technology within medical settings, representing the convergence of human and machine learning. Increasingly, AI-powered e-health services can help accelerate the automation of health research and data collection, especially in unreached and underrepresented areas where citizens are using AI and digital technologies to support their own health and healthcare decision-making. Such services also help tackle some social and cultural barriers to accessing health services.
Recent Trends
Recent developments in the data governance space include an increase in regulatory pushback from Data Protection Authorities (DPAs) across several jurisdictions, such as Nigeria, Uganda, and, most recently, Burkina Faso. Other developments are noted in Mozambique, which has recently broken ground on developing a National Data Governance Strategy and a National Data Policy, and in Sierra Leone, which has concluded the validation of its National Data Governance Strategy. This milestone aligns with the African Union Data Policy Framework, which aims to advance data sharing for robust economic and digital development across Africa.
In the AI space, the leading trends have been agentic AI and the development of national AI strategies; as of July 2025, only 16 African countries had developed such strategies. In e-health, there have been several notable developments, such as the One Patient, One Record system in Mauritius, an initiative that leverages state-of-the-art technology to create a more cost-efficient, patient-centric, and accountable system for managing patient records, details, and test results.
Most recently, there has been increased use of AI in healthcare in efforts to ‘democratise access to therapy’, despite concerns that large language models (LLMs) trained on a wide range of data can be unpredictable. Underscoring these concerns, there have been high-profile cases in which chatbots encouraged self-harm and suicide, suggested that people dealing with addiction use drugs again, and even discouraged the intake of table salt, alongside reports of “ChatGPT driving people mad”.
These models are often designed to be affirming and to keep users engaged, rather than to improve users’ physical or mental health. According to recent expert findings, these chatbots go to extreme lengths to tell users what they want to hear, despite developers’ warnings that LLMs should not be ‘used as a substitute for professional advice.’
Linking data streams into the electronic health record remains a challenging task. An even greater challenge is preventing these chatbots from sharing and re-sharing data with other unvetted applications that offer questionable privacy and security guarantees. In March 2023, Cerebral, a virtual therapy service, disclosed that it had shared the protected health information of more than 3 million clients with third parties such as Facebook, TikTok, Google, and other online platforms. This data included contact information, birth dates, social security numbers, and results from mental health assessments. Another company, BetterHelp, which offers online counselling through several websites, settled with the US Federal Trade Commission over a similar violation.
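One privacy-by-design practice that cases like these point to is stripping protected health information (PHI) from events before they reach any third-party analytics integration. The sketch below is purely illustrative: the field names, event structure, and `redact_phi` helper are invented for this example and are not drawn from any real service or SDK.

```python
# Hypothetical sketch: redacting protected health information (PHI)
# before an event is handed to a third-party analytics integration.
# Field names and values are illustrative only.

PHI_FIELDS = {"name", "birth_date", "ssn", "email", "assessment_result"}

def redact_phi(event: dict) -> dict:
    """Return a copy of the event with PHI fields removed."""
    return {k: v for k, v in event.items() if k not in PHI_FIELDS}

event = {
    "event_type": "session_completed",
    "app_version": "2.1.0",
    "ssn": "000-00-0000",
    "assessment_result": "moderate anxiety",
}

safe_event = redact_phi(event)
print(safe_event)  # only the non-PHI keys remain
```

A deny-list like this is the bare minimum; a stricter design would use an allow-list, forwarding only fields that have been explicitly reviewed and approved for sharing.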
Currently, the Flo app is battling a class action lawsuit, filed in a Northern California federal court, accusing the period-tracking app of improperly sharing user data with Meta for targeted advertising. The lawsuit alleges that despite promising women it would not share their private sexual and reproductive data with third parties, Flo allowed Meta to embed a software development kit in the app that let it harvest data about women’s and girls’ periods.
These cases are not isolated; collectively, they point to poor data governance and weak privacy protections, even as the WHO recognises data in e-health as critical to achieving universal health coverage.
The Impact of AI on e-Health
While good data governance can enhance the accessibility, quality, effectiveness, and efficiency of healthcare services and underpin secure, interoperable electronic health record systems, individuals dealing with health conditions that are stigmatised or very personal (e.g. HIV, family planning and abortion care, sexual minorities) fear that their confidential health information will be disclosed, or their identity traced, because of their participation in targeted communication programmes.
These fears are compounded by the enormity of the loss arising from a single breach of health data, which is highly attractive to rogue cyber actors and can expose health status details, demographic information, financial details, and other sensitive data such as email addresses, usernames, and passwords. Moreover, such a breach may facilitate human rights violations and abuses of the most vulnerable and marginalised communities by state and non-state actors, impacting their well-being.
Without data governance and e-rights as the backbone, we not only risk exacerbating exclusion and data misuse but also risk turning e-health into an instrument of surveillance in medical gloves. Despite its current strengths and the predicted rise of digital health services across the continent, e-health is not a panacea for the structural gaps and challenges in Africa’s health systems. First, there is a chronic problem of uneven development and implementation of e-health laws across the continent. There is also a lack, and uneven distribution, of health and digital infrastructure, digital connectivity, financial resources, data, and digital literacy among citizens and specialised healthcare providers, deepening the digital health divide.
Common Barriers to AI Application
The implementation of AI in sub-Saharan Africa is hindered by several obstacles, including limited access to data, the absence of regulatory frameworks, inadequate infrastructure and networking connectivity, as well as a scarcity of talent and expertise in advanced AI. There is also a critical issue of algorithmic flaws that reflect human failures, which in turn reveal the priorities, values, and limitations of those who hold the power to shape technology.
According to the World Economic Forum’s Digital Healthcare Transformation (DHT) Initiative, current health AI ecosystems are fragmented, and health leaders’ understanding of AI technologies in health is insufficient. AI systems are often criticised as ‘black boxes’: highly complex and difficult to explain. Notably, the degree of opacity acceptable to regulators validating performance may differ from the degree of explainability clinicians demand, though how much explainability clinicians actually demand remains an open empirical question.
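One common way to probe a "black box" model without opening it is permutation importance: scramble one input feature and measure how much predictive accuracy drops. The sketch below is purely illustrative; the toy model, its weights, and the synthetic patient data are invented for demonstration and do not come from any real clinical system.

```python
import random

# Illustrative sketch only: permutation importance as a simple
# model-agnostic explainability probe. Everything here is synthetic.

random.seed(0)

def model(age, bmi):
    # Toy "black box": flags high risk when a weighted score crosses a threshold.
    return 1 if 0.04 * age + 0.02 * bmi > 2.0 else 0

# Synthetic patients: (age, bmi). Labels are generated by the model itself,
# so baseline accuracy is 1.0 by construction.
data = [(random.randint(20, 80), random.uniform(18.0, 40.0)) for _ in range(200)]
labels = [model(a, b) for a, b in data]

def accuracy(rows):
    return sum(model(a, b) == y for (a, b), y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)

def permutation_importance(feature_idx):
    # Shuffle one feature column and measure the resulting accuracy drop.
    rows = [list(r) for r in data]
    column = [r[feature_idx] for r in rows]
    random.shuffle(column)
    for r, v in zip(rows, column):
        r[feature_idx] = v
    return baseline - accuracy(rows)

print("age importance:", permutation_importance(0))
print("bmi importance:", permutation_importance(1))
```

Because the toy model weights age more heavily than BMI, scrambling age degrades accuracy more, which is exactly the signal a clinician or regulator could use to ask which inputs a model actually relies on.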
Domestic Legal Interventions
From a national perspective, the constitutions of most African nations, such as Article 77 of the Angolan Constitution, Article 43(1)(a) of the Kenyan Constitution, and Article 21 of the Constitution of the Republic of Rwanda, provide for the right to health, reflecting Article 16 of the African Charter on Human and Peoples’ Rights. These are complemented by subsidiary legislation: Access to Information laws, Communications Commission laws, Identity Management laws, Cyber Crime laws, National Health laws, Consumer Protection regulations, as well as Data Protection and Privacy laws.
Read together, these present an enabling framework for the basis, roll-out, and implementation of digital health interventions, supplemented by the development of national AI strategies and digital health strategies. Cases in point include Kenya’s Digital Health Act (2023), which embeds data governance mechanisms that seek to improve client health, safeguard communities, and ensure security throughout the data lifecycle, and South Africa’s National Digital Health Strategy (2019-2024).
Regional Legal Interventions
Some of the laws from a regional perspective include the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention), which requires all member states to develop a comprehensive data protection framework.
As outlined in Article 8(1), it aims to strengthen fundamental rights and public freedoms, particularly the protection of personal data, and to punish any violation of privacy, without prejudice to the principle of the free flow of information. The African Union Data Policy Framework, which recognises data as the basis for a new social contract and stresses the need for data governance to create value and prevent harm, has been a pivotal instrument.
The AU Continental AI Strategy also provides guidance on expanding AI adoption in the African health sector, particularly on harnessing AI’s benefits for African people, institutions, the private sector, and countries in line with Agenda 2063: improving healthcare and mainstreaming AI in priority sectors such as health, while addressing risks such as bias and discrimination and ethical considerations such as privacy and data protection.
Other directive interventions include the African Commission on Human and Peoples’ Rights (ACHPR) Resolution 473 (2021), which emphasises privacy, fairness, and cultural alignment in AI deployment as well as urging a comprehensive legal and ethical governance framework for AI technologies, robotics and other new and emerging technologies to ensure compliance with the African Charter.
International Legal Interventions
From an international perspective, the WHO has provided policy guidance through the Global Strategy on Digital Health 2020-2025, which aims to promote healthy lives and well-being for everyone, everywhere, at all ages by delivering national and regional digital health initiatives that integrate financial, organisational, human, and technological resources. The WHO-ITU National eHealth Strategy Toolkit supplies an effective approach to national strategy development and implementation for member states of the WHO and the International Telecommunication Union (ITU), on the African continent and globally.
The advent of the COVID-19 pandemic has presented a window of opportunity to accelerate the adoption of the FAIR Guidelines for scientific data management and stewardship, which guide the process of ensuring data and metadata are Findable, Accessible, Interoperable, and Reusable (FAIR) for both humans and machines, now and in the future.
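In practice, the FAIR principles translate into concrete checks on a dataset's metadata. The sketch below is a minimal, hypothetical illustration: the mapping of each principle to a single metadata field (and the `fair_gaps` helper) is invented for this example, while real FAIR assessment frameworks evaluate many more criteria per principle.

```python
# Minimal sketch of a FAIR metadata gap check. Each principle is mapped
# to one illustrative metadata field; real assessments are far more thorough.

REQUIRED = {
    "Findable":      "identifier",  # e.g. a persistent identifier such as a DOI
    "Accessible":    "access_url",  # a resolvable retrieval location
    "Interoperable": "format",      # a standard, machine-readable format
    "Reusable":      "license",     # clear usage terms
}

def fair_gaps(metadata: dict) -> list:
    """Return the FAIR principles whose illustrative field is missing."""
    return [p for p, field in REQUIRED.items() if not metadata.get(field)]

record = {
    "identifier": "doi:10.0000/example",
    "format": "CSV",
    "license": "CC-BY-4.0",
}

print(fair_gaps(record))  # -> ['Accessible']
```

Even this toy check makes the point of the guidelines concrete: FAIR-ness is a property of the metadata accompanying a dataset, assessable by machines as well as humans.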
From an AI perspective, several instruments are relevant to the application of AI in healthcare. The UNESCO Recommendation on the Ethics of Artificial Intelligence establishes a global normative framework, adopted by all 194 UNESCO member states, for the ethical use of AI across sectors, including health. Of particular relevance are its human-centric approach, oversight and accountability, inclusive access, and transparency and safety provisions.
It also obliges states to ensure that AI systems related to health in general, and mental health specifically, are developed and deployed so that they are safe, effective, efficient, scientifically and medically proven, and enable evidence-based innovation and medical progress.
The Global Digital Compact, a landmark United Nations instrument, is a comprehensive global framework for digital cooperation and the governance of artificial intelligence. It calls for investment in, and the deployment of, resilient digital infrastructure to promote equitable access. Nigeria, Rwanda, and Kenya participated in the 2023 UK AI Safety Summit, which produced the Bletchley Declaration on designing, developing, deploying, and using AI in a manner that is safe, human-centric, trustworthy, and responsible. Nigeria and Kenya endorsed the Paris Charter on Artificial Intelligence in the Public Interest, which focuses on preventing and mitigating individual and collective harms, risks, threats, and violations caused by the use and abuse of AI.
The African Union Commission, South Africa, and Kenya are signatories to the Paris AI Action Summit Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which emphasises AI safety. The European Union’s Artificial Intelligence Act (EU AI Act), described as the first-ever comprehensive AI legislation, takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose. It is relevant not because it binds the African AI ecosystem directly, but because of the ‘Brussels Effect’: as a reference point, it can be expected with reasonable certainty to influence African regulation.
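The risk-based approach can be sketched as a simple tier lookup that maps a use case to its regulatory obligations. The tiers mirror the EU AI Act's general structure, but the example use cases and obligation summaries below are simplified for illustration and are not a restatement of the Act's actual annexes.

```python
# Hedged sketch of risk-tiered regulation in the spirit of the EU AI Act.
# Use cases and obligation texts are invented simplifications.

RISK_TIERS = {
    "unacceptable": {"social_scoring"},
    "high":         {"medical_diagnosis", "triage_prioritisation"},
    "limited":      {"health_chatbot"},
    "minimal":      {"appointment_reminder"},
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high":         "conformity assessment, human oversight, logging",
    "limited":      "transparency: disclose that the user is interacting with AI",
    "minimal":      "no specific obligations",
}

def obligations_for(use_case: str) -> str:
    """Look up the obligations attached to a use case's risk tier."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return OBLIGATIONS[tier]
    return "unclassified: assess risk before deployment"

print(obligations_for("health_chatbot"))
print(obligations_for("medical_diagnosis"))
```

The design point for African regulators is that obligations scale with risk, so e-health applications such as diagnosis and triage would sit in the most heavily regulated permitted tier.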
Recommendations
Given these major developments and challenges, rights-centred, ethical, inclusive, and tailored e-health services are necessary. Every player has a part to play in mitigating existing and potential harms by constructing sound definitions and expectations of accountability, together with a comprehensive framework to leverage AI safely and to effectively meet the health rights and needs of all Africans. Below are some of the key measures to be undertaken.