Built Without Us, Used Against Us: Africa’s Digital Public Infrastructure Problem

By Atuhairwe Benardine |

The Fingerprint that Changed Everything

A citizen stands at a counter, presses a finger against a scanner, and watches a system decide that they do not exist. The finger is scanned repeatedly, yet the machine looks back blankly, unconvinced.

The frustration is immediate. Time is lost, opportunities are missed. But consider a harder question: what if that counter had been a hospital reception desk? What if the service being sought was not administrative, but medical? What if the person standing there was elderly, unwell, or had walked for hours to get there? What if being turned away did not mean inconvenience, but harm?

This is not a hypothetical. It is the lived reality of millions of Africans navigating Digital Public Infrastructure systems that were not designed with them in mind. And it is precisely why DPI can no longer be treated as a purely technical conversation.

What Is DPI, and Why Should You Care?

Digital Public Infrastructure (DPI) refers to the foundational digital systems that enable governments, businesses, and citizens to interact at scale. The three most discussed layers are digital identity systems, digital payment platforms, and data exchange systems. Together, they determine who can access healthcare, social protection, education, financial services, and even the right to vote.

Across Africa, governments are investing heavily in DPI as the backbone of their digital transformation strategies. The promise is enormous: more efficient service delivery, greater financial inclusion, broader civic participation. But promise and reality are two very different things.

Bias Baked into the Blueprint: The Technical Reality

When a fingerprint scanner fails to recognise an individual after ten attempts, most people assume something is wrong with their finger. The deeper problem, however, lies in the system’s training data.

Fingerprint recognition systems are powered by machine learning models: algorithms trained on large datasets of fingerprint images. The critical question is: whose fingerprints were in that dataset? If the training data skewed towards certain populations, skin tones, or age groups, the model will perform accurately for those groups and poorly for everyone else. Factors like darker skin tones, the effects of manual labour on fingertips, age-related changes, and even environmental humidity significantly affect recognition accuracy. These are not random errors. They are predictable, systemic failures embedded into the system long before any citizen places a finger on the scanner.
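The mechanism can be made concrete with a toy simulation. The sketch below is purely illustrative: the score distributions, acceptance threshold, sample sizes, and group labels are all assumptions, not measurements from any deployed system. It shows how a single global match threshold produces very different false-rejection rates for a group the model was trained on versus one it barely saw.

```python
# Toy illustration (hypothetical numbers, not any real vendor's model):
# one global acceptance threshold, two groups whose genuine-match scores
# differ because one was under-represented in the training data.
import random

random.seed(42)
THRESHOLD = 0.60  # hypothetical score required to accept a fingerprint

def genuine_scores(mean, n=10_000):
    # Simulated genuine-match scores, clipped to [0, 1]; a lower mean
    # stands in for poorer model performance on under-represented users.
    return [min(1.0, max(0.0, random.gauss(mean, 0.12))) for _ in range(n)]

def false_rejection_rate(scores):
    # Share of genuine users the system wrongly turns away.
    return sum(s < THRESHOLD for s in scores) / len(scores)

well_covered = genuine_scores(0.85)   # assumed well represented in training
under_covered = genuine_scores(0.62)  # assumed poorly represented

print(f"FRR, well-covered group:  {false_rejection_rate(well_covered):.1%}")
print(f"FRR, under-covered group: {false_rejection_rate(under_covered):.1%}")
```

Under these assumed distributions, the well-covered group is rejected only a few percent of the time while the under-covered group is rejected far more often, even though both sets of users are genuine: the "ten failed attempts" experience is a property of the training data, not of anyone's finger.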

This is what makes DPI so consequential. Every design decision (what data to collect, which vendor to procure from, which dataset to train on, what error rate is acceptable) reflects a value judgment. And when those decisions are made without diverse, representative voices in the room, the systems that emerge will serve some citizens well and fail others consistently. The tragedy is that those failed most consistently are almost always those who were already most marginalised.

Rights Without Remedy: The Legal Reality

From a legal standpoint, what is described above is not merely a technical inconvenience. It is a potential violation of fundamental rights that African states are bound to protect.

Africa’s legal obligations are clear. The International Covenant on Civil and Political Rights (ICCPR), ratified by the majority of African states, alongside the African Charter on Human and Peoples’ Rights (specifically Articles 9 to 13), provides binding protections on privacy, freedom of expression, association, and political participation. When a digital identity system excludes citizens from healthcare or social protection, it does not operate in a legal vacuum. It operates in direct tension with these obligations.

Yet the enforcement architecture is deeply fragile. Of Africa’s 54 countries, 44 now have national data protection laws, a significant leap in recent years. However, only 15 have ratified the African Union’s Malabo Convention, the continental framework for data protection and cybersecurity. Data protection authorities, where they exist, are frequently underfunded, understaffed, and structurally dependent on the very governments they are meant to hold accountable. Independent oversight remains the exception, not the rule.

The legal architecture, then, exists. The implementation gap is where rights go to die.

A Pattern, Not an Accident: The Advocacy Reality

Kenya’s Huduma Number initiative attempted to consolidate all national identity systems into a single centralised biometric database. Civil society organisations challenged it in Nubian Rights Forum & 2 Others v Attorney General & 6 Others; Child Welfare Society & 9 Others (Interested Parties) [2020] eKLR, Consolidated Petitions No. 56, 58 & 59 of 2019 in the High Court of Kenya. The petitioners challenged the absence of a data protection framework, the collection of children’s data without parental consent, and the government’s explicit threat that citizens without the Huduma Number would lose access to public services. The case forced the government to fast-track a data protection law and operationalise a data protection authority before proceeding. This was a hard-won victory, but a reactive, expensive, and slow one.

In Uganda, older persons without national IDs were excluded from the Social Assistance Grants for Empowerment (SAGE) programme. Civil society organisations, including the Initiative for Social and Economic Rights, Health Equity and Policy Initiative, and Unwanted Witness, challenged this exclusion in ISER & 2 Others v Attorney General & NIRA (HCT-00-CV-MC-0066-2022), but the High Court of Uganda dismissed the case on 10 June 2025. Despite a compelling amicus curiae brief from CIPESA, Access Now, and ARTICLE 19, the court ruled that the exclusions were isolated rather than systemic, allowing the scheme to continue requiring national IDs for access to social protection. Millions of vulnerable Ugandans remain locked out.

These developments reveal a pattern: systems built without adequate consultation, deployed without adequate safeguards, and evaluated only after the damage is done.

Conclusion and Recommendations

DPI is not just technical infrastructure; it is constitutional infrastructure. It defines the digital contract between the state and its citizens. And like any contract worth the paper it is written on, it must be negotiated, not imposed.

The fingerprint scanner does not know it is excluding anyone. But the governments that procure these systems, the vendors that build them, and the policymakers that deploy them without asking whether they will work for everyone have a choice. Civil society’s role is to ensure that choice is made responsibly, transparently, and with the full weight of Africa’s human rights obligations in mind.

Civil society should shift from a reactive to a proactive posture and engage with DPI from the outset of its architecture.

Specific actions include, among others:

  • Demanding human rights impact assessments before systems go live.
  • Insisting on privacy-by-design principles in procurement contracts and vendor agreements.
  • Participating meaningfully in public consultations during the drafting of DPI legislation and policies.
  • Using regional mechanisms like the African Commission on Human and Peoples’ Rights to submit shadow reports and push for binding normative standards on DPI governance.

Journalists and activists in Beni, DR Congo, are turning to the Tails operating system to counter surveillance and censorship threats

Political and security tensions are escalating in the Democratic Republic of Congo. As journalists, activists, and human rights defenders document abuses on the front line, both their lives and those of victims are at risk.

In such high-risk environments, secure communication and digital safety reduce deadly mistakes. However, jumping between isolated tools (untrusted VPNs, common end-to-end encrypted messaging apps, email clients, online password managers, etc.) leaves fragmented metadata, timestamps, and app footprints. State surveillance can reconstruct this information to map activists’ networks.

Given these evolving threats and the limitations of isolated security tools, it is clear that front-line defenders and journalists in this post-Snowden era require ultimate security, especially when working on upcoming revelations, handling untrusted documents, or conducting confidential communications.

#Techshield: our approach for ultimate protection of frontline defenders

On April 15, 2026, Bingwa Civic Tech Lab hosted a digital security training for journalists, activists, and human rights defenders working in the Beni region, focusing on the portable Debian-based operating system Tails (The Amnesic Incognito Live System).

During the session, attendees learned the basics of digital security for high-risk environments. They also explored use cases for the applications bundled with Tails OS: Thunderbird for email, LibreOffice for documents, KeePassXC for password management, Metadata Cleaner for scrubbing file metadata, and Kleopatra for managing PGP keys.

Each participant received a copy of Tails OS on a USB drive. They learned how to boot different computers from the USB and configure persistent storage. This helps them get the most out of Tails OS as a “portable computer.”

Importantly, with Tails OS the digital security tools come integrated rather than isolated, the system leaves no trace on the computer it runs on, and all internet traffic is routed through the Tor network, which encrypts and anonymises the connection by passing it through three relays.
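The three-relay layering can be sketched conceptually. The toy below uses XOR against a hashed key purely as a stand-in for real cryptography (Tor actually negotiates per-hop session keys and uses authenticated ciphers, and relay names here are illustrative): the client wraps the message in one layer per relay, and each relay can peel only its own layer, so no single relay sees both the sender and the readable message.

```python
# Conceptual onion-routing sketch. XOR-with-a-hashed-key is a TOY stand-in
# for real encryption; Tor's actual protocol is far more involved.
import hashlib
import itertools

def toy_layer(data: bytes, key: str) -> bytes:
    # XOR against a keystream derived from the key; applying the same
    # layer twice removes it (XOR is its own inverse).
    stream = itertools.cycle(hashlib.sha256(key.encode()).digest())
    return bytes(b ^ k for b, k in zip(data, stream))

relays = ["guard", "middle", "exit"]   # Tor's three-hop circuit
message = b"confidential report"

# Client: wrap one layer per relay, innermost layer for the exit relay.
onion = message
for relay in reversed(relays):
    onion = toy_layer(onion, relay)
assert onion != message  # the wrapped form is unreadable in transit

# Network: each relay peels exactly its own layer, in path order.
for relay in relays:
    onion = toy_layer(onion, relay)
assert onion == message  # plaintext emerges only after all three layers
```

The design point the sketch captures is separation of knowledge: the guard relay knows who is connecting but not what is said or where it is going; the exit knows the destination but not the sender.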

Now more than ever, frontline defenders and journalists require the highest level of security when handling sensitive information. In DRC’s political turmoil, where human rights violations are rampant, this is crucial. At Bingwa Civic Tech Lab, we believe those advancing human rights need ultimate security, free of charge. That is why we rely on open-source technologies to protect frontline defenders and help safeguard human dignity.

For the past two years, Bingwa Civic Tech Lab has partnered with local Civil Society Organisations and HRDs to enhance their digital resilience, both individually and organisationally, through capacity building, technical support, and tailored digital security strategies.

To learn more about the activities we have conducted under the #TechShield for Human Rights Defenders program at Bingwa Civic Tech Lab, visit: http://bingwa-civic.tech/en/post-filter/TechShield

This article was first published on Bingwa Civic Tech Lab on 16 April, 2026

The Digital Rights Alliance Africa (DRAA) Launches its First Baseline Monitoring Report for the Year 2025

By Edrine Wanyama |

The Digital Rights Alliance Africa (DRAA) has launched its first baseline monitoring report. The report, which captures the state of digital rights in nine countries in 2025, is the result of the collective efforts of alliance members drawn from the study countries.

The countries covered are Cameroon, Democratic Republic of Congo (DRC), Ethiopia, Kenya, South Africa, Tanzania, Togo, Uganda, and Zambia. Across the study countries, the report underscores how internet shutdowns, connectivity and affordability policies, cybercrime laws, digital identification frameworks, and data protection laws and their enforcement adversely impact digital rights. The key online civil liberties scoped by the study include freedom of expression, access to information, privacy, peaceful assembly, and association.

The goal of the report was to assess the evolving legal and policy environment governing digital rights. The snapshot it provides will not only support evidence-based advocacy but also guide the efforts of DRAA members in the coming years, ensuring advocacy actions continue to address identified gaps, needs and challenges. It will serve as the basis for tracking current perceptions of the digital rights environment, how they change over time, and whether governments are making progress towards improving online civic space.

Rather than providing an in-depth examination of all digital rights topics, the report focuses on issues related to legal frameworks and policies that impact meaningful internet access and privacy rights. For holistic relevance, the report covers national, regional and international norms on digital rights, highlighting key standards, challenges, and the need for reform.

The report highlights several areas for focus and reflection on the continent’s online civic space. Firstly, despite constitutional protections, digital laws impermissibly restrict the rights to freedom of expression and privacy. Secondly, gaps in oversight and accountability by state actors and the private sector remain a key challenge. Thirdly, Digital Public Infrastructure and Artificial Intelligence increasingly shape the digital environment and inclusion in public decision-making, but with limited oversight and regulation. And fourthly, civil society organisations are critical players in digital civic space governance.

The report makes several recommendations for adoption by governments to ensure meaningful protection and promotion of online civil liberties, including:

  • Increasing investment of both financial and human resources in connectivity infrastructure, including nationwide fibre-optic coverage.
  • Creating a favourable environment for private sector investment in the tech space, including through tax incentives, deductions, and subsidies.
  • Desisting from issuing blanket orders to internet service providers to shut down or disrupt internet access.
  • Proactively raising public awareness of online rights and freedoms, and promoting digital literacy and digital inclusion.
  • Extending the internet to schools to facilitate free access and inclusion.

The full report can be found here

Africanising and Decolonising Content Moderation in the Digital Civic Space

By Mojirayo Ogunlana-Nkanga |

The digital civic space has become central to democratic participation, activism, and public debate across Africa. However, the moderation of online content is largely shaped by Global North priorities, legal frameworks, and technical standards. For many African users, this has created gaps in language coverage, procedural justice, and cultural sensitivity. Africanisation and decolonisation are essential in content moderation to realign digital governance with African contexts, values, and human rights standards.

Africanisation refers to adapting global practices to African realities, such as local languages, plural legal traditions, and community-based approaches to justice. Decolonisation goes deeper by challenging the systemic power imbalances that underpin content moderation systems, questioning who sets the rules, who does the work, and whose interests are served. In this article, the Digital Rights Alliance Africa (DRAA) explores key developments in Africanisation and decolonisation of content moderation practices, the central issues emerging from these debates, their significance for Africa as a whole, and recommendations to strengthen inclusive and rights-respecting digital governance.

Why Decolonising Content Moderation Matters for Africa

Decolonising content moderation is urgent in Africa for several reasons. First, the continent’s extraordinary linguistic diversity demands inclusive moderation systems that respect local languages and cultural expression. Second, given Africa’s growing digital population, ensuring fair and accountable governance of online spaces is central to democratic participation and human rights. Third, African courts and regulators now possess tools, through both regional standards and national laws, to demand greater accountability from platforms. Finally, Africa bears a disproportionate share of the hidden labor of moderation, making labor justice a key part of digital decolonisation.

On content moderation and platform accountability in Africa, the most relevant normative developments include the African Commission’s Declaration of Principles on Freedom of Expression and Access to Information in Africa, especially Principle 39 on internet intermediaries, as well as ACHPR Resolutions 630 and 631 of 2025 on information integrity, independent fact-checking, and public-interest content on digital platforms. At least 44 out of 55 countries on the continent have enacted data protection laws. These laws regulate the processing of personal data and place restrictions on decisions based solely on automated processing, including profiling, particularly where such decisions produce legal or similarly significant effects. As a result, they can regulate the design and deployment of platform moderation systems where those systems rely on personal data, profiling, or automated decision-making.

In this vein, and in furtherance of labour rights litigation, courts in Kenya have allowed cases brought by African content moderators against Meta and its subcontractors to proceed. These suits, which raise issues relating to working conditions, mental health harms, dismissals, and labour rights, expose the inequities of the global outsourcing of content moderation labour and mark an important step towards holding platforms accountable within African jurisdictions.

Civil society, meanwhile, continues to bridge these gaps. African digital rights organisations, researchers, and community-driven projects such as Mozilla’s Common Voice have exposed persistent deficits in language coverage and contextual understanding in automated moderation systems. Their work shows that low-resource African languages remain under-supported online and that moderation systems often lack the linguistic and contextual depth needed to distinguish harmful content from legitimate speech, resulting in inconsistent enforcement, including the wrongful removal of benign speech and the failure to detect harmful content.

Key emerging issues include language and model bias, as many content moderation systems remain disproportionately designed for English and a limited number of well-resourced global languages. African languages are often under-represented in the datasets and linguistic resources used to train and test these systems. This can result in failures to interpret satire, idiomatic expressions, and political discourse accurately, leading both to the wrongful restriction of legitimate civic expression and to limited controls and checks over harmful content. Although Meta states that its moderation systems use language and region-based routing and reviewers covering multiple regions and languages, significant gaps in contextual representation remain. This challenge is especially evident in linguistically diverse states such as Nigeria, which has about 520 living indigenous languages.

In terms of procedural fairness, users in Africa often lack clear explanations or effective appeals when content is removed. Moreover, appeals are rarely available in African languages. This alone undermines trust in platforms and contradicts the principles of necessity and proportionality established under African regional standards.

Furthermore, labour exploitation continues to haunt Africa: thousands of African content moderators work under precarious contracts, for low pay, and with high exposure to traumatic material. For instance, reports from Kenya and Morocco reveal the psychological toll of reviewing graphic content without adequate support, underscoring the structural inequities of moderation outsourcing.

Power asymmetries in rule-making are also a major challenge: most platform content policies are drafted outside Africa, with limited consultation of local stakeholders. This centralisation of authority risks importing external cultural assumptions that do not fit African contexts, particularly during elections and times of conflict.

As such, Africanisation and decolonisation of content moderation are not abstract ideals but practical necessities for a continent marked by linguistic plurality, rapid digital adoption, and democratic contestation. By embedding moderation practices in African values, laws, and lived realities, platforms can foster a healthier digital civic space. This requires meaningful co-governance, investment in language resources, labour justice, and greater transparency.

Ultimately, decolonised content moderation affirms Africa’s agency in shaping its own digital future while safeguarding the rights and dignity of its people. Inter alia, the following should be undertaken to ensure and strengthen the Africanisation and decolonisation of content moderation:

  1. Institutional co-governance: platforms should establish content policy councils with African regulators, academics, and civil society representatives to help ensure moderation frameworks reflect regional realities.
  2. Language inclusion: intermediaries should prioritise investment in African language datasets and models. Platforms should further publish error rates across languages and improve coverage for low-resource languages.
  3. Fair labour standards should be established to ensure that African moderators receive adequate and fair compensation, that psychological support is provided, and that collective bargaining rights are assured. Contractor compliance should be subject to independent audits by designated oversight bodies.
  4. Procedural justice based on existing redress mechanisms, such as Meta’s, should be assured and upheld. Users should be notified in their own languages about takedown decisions and should have access to transparent and fair appeals processes.
  5. Regulators and accredited researchers in Africa should have privacy-respecting access to moderation data to evaluate the accuracy, fairness, and bias of moderation systems.
  6. Regional courts, NHRIs, CSOs, and advocacy groups should continue to challenge unjustified state overreach, including internet shutdowns and arbitrary takedowns, as a strategy to reinforce African human rights norms.

The Right to Access Information is a Boost for a Healthy Environment in Africa – ACHPR

By Raymond Amumpaire |

The African Commission on Human and Peoples’ Rights (African Commission), at its 86th Ordinary Session, has pronounced itself on the importance of access to information in the realisation of the right to a healthy environment. In a recent resolution, ACHPR/Res. 657 (LXXXVI) 2026, the Commission recognised accurate and reliable information, transparency, participation, and accountability as necessary tools to address environmental harm, promote resilience, and facilitate climate adaptation, as envisaged in the State Reporting Guidelines and Principles on Articles 21 and 24.

The resolution underscores, among others, key provisions of the African Charter: Article 9 on access to information, Article 16 on the right to a healthy environment, Article 21 on the right to natural wealth and resources, and Article 24 on the right to a general satisfactory environment.

This resolution comes at a time when affected communities acutely feel the gaps in transparency in the extractives industry, the lack of information on risks and on the cumulative impacts of projects, and weak accountability.

Anchored on regional and international instruments, the resolution reaffirms the obligation of States to legally guarantee public access to information held by public bodies and relevant private bodies for rights protection. States have a duty to provide persons affected or likely to be affected with available, accessible and practical information on an equal and non-discriminatory basis.

This position aligns with, and reflects, a growing global consensus on the relationship between the right to access information and the right to a healthy environment. African human rights bodies have previously held, including in Social and Economic Rights Action Center (SERAC) and Center for Economic and Social Rights (CESR) v Nigeria [2001] ACHPR 35, Centre For Minority Rights Development and Another v Kenya (Communication 276 of 2003) [2009] ACHPR 102, and LIDHO and Others v Republic of Cote d’Ivoire (Application 041/2016) [2023] AfCHPR 21, that the right to access information is not just complementary but central to the right to a healthy environment.

The resolution is a key reminder to State parties to prevent, investigate, and remedy acts of harassment, intimidation, arbitrary arrest or detention, and other forms of reprisals against journalists, environmental and other rights defenders covering environmental and climate issues, as well as affected communities, including vulnerable groups such as women, children, and indigenous and minority peoples. It further condemns the weaponisation of due process and the courts by powerful individuals or organisations to silence, intimidate, and financially exhaust critics such as journalists, activists, and campaigners on matters of public interest, through strategic lawsuits against public participation (SLAPPs).

It is a boon for prior calls for respect of human rights in the conduct of business pursuant to Resolution ACHPR/Res.367(LX)2017 and the State Reporting Guidelines and Principles on Articles 21 and 24 of the African Charter. It responds to emerging and growing concerns around corporate responsibility and rights protection, including environmental protection and how these intersect with access to information. It could also boost intergenerational equity and sustainable development.

It is a call to business enterprises, particularly those operating in the extractive, energy, agribusiness and infrastructure sectors, to conduct human rights and environmental due diligence consistent with applicable regional and international standards, including on transparency and meaningful public participation. By centring the right to access information, it also tackles climate and environmental disinformation, which thrives in an information void.

The resolution in highlighting the implications on information integrity, participation and environmental justice further calls upon State parties to:

  1. Ensure timely, accurate, accessible, and proactive disclosure of climate and environmental information, and to guarantee meaningful public participation and access to justice in environmental decision-making
  2. Facilitate unimpeded access to information for journalists, human rights defenders, and communities affected by climate and environmental issues, including through proactive disclosure, the removal of undue administrative and practical barriers, and the promotion of linguistic diversity and accessible formats, particularly for local and indigenous communities
  3. Adopt, review, and effectively implement national access to information legislation in line with the Model Law on Access to Information and the Declaration of Principles on Freedom of Expression and Access to Information in Africa (the Declaration), including by ensuring that information relating to the environment and climate change is subject to proactive disclosure
  4. Take appropriate, legislative, administrative and judicial measures to deter and address vexatious, harassing or abusive resort to litigation that unduly restrict and chill public participation in environmental matters, including, inter alia providing targeted procedural protections such as expedited handling of cases, facilitating timely access to legal aid and other appropriate assistance to victims
  5. Ensure that business enterprises, particularly those operating in the extractive, energy, agribusiness and infrastructure sectors, conduct human rights and environmental due diligence consistent with applicable regional and international standards, including by ensuring transparency, meaningful public participation, and access to relevant environmental and climate-related information.

The Digital Rights Alliance joins its voice in solidarity with the African Commission in calling for efforts and actions that recognise the right of access to information as a cornerstone for the protection, promotion and realisation of a healthy environment in Africa.

Artificial Intelligence and Disruptive Technologies and What They Mean for a Sustainable Future in Africa

By Gelila T. Geletu and Raymond B. Amumpaire |

Artificial Intelligence (AI) is a buzzword connoting a machine-based system that infers, from the input it receives, how to generate outputs or decisions that can influence physical or virtual environments. AI has been credited with a disruptively massive contribution to society, bringing opportunities, for instance, in exploring and innovating around eco-friendly alternatives. Conversely, it has also brought disruptive potential for environmental sustainability. The Digital Rights Alliance Africa (DRAA) and its membership, as part of a wider agenda, are focussing on increasing the capacity of civil society organisations, human rights defenders, media, journalists and other civic actors to effectively understand, monitor and navigate the use of surveillance technologies, and to advocate for legal and policy frameworks that protect those targeted by surveillance. In this article, DRAA critically explores modern technology and its impact on a sustainable Africa ahead of Agenda 2030 and Agenda 2063.

What do guiding instruments say?

Article 4(1) of the United Nations Framework Convention on Climate Change (UNFCCC) calls on all state parties to promote and cooperate in scientific, technological and technical research, and in the development of data archives related to the climate system. The International Court of Justice (ICJ), in a recent advisory opinion, requires States to ensure that companies and individuals subject to their jurisdiction comply with due diligence duties before acting in ways that could cause environmental damage, mirroring the spirit of the UN Guiding Principles on Business and Human Rights.

Further, Agenda 2063, the AU Continental AI Strategy, the Digital Transformation Strategy for Africa (2020-2030) and the Africa Digital Compact (ADC) propose harnessing and accelerating the integration of AI to improve the livelihoods of Africans, including by tackling climate change through climate technology. According to the 2025 State of Internet Freedom in Africa Report, countries like Kenya, Egypt, Nigeria and Senegal have aligned technological progress with broader national development goals. These are complemented by the Windhoek Statement on Artificial Intelligence in Southern Africa, the Nairobi Statement on Artificial Intelligence and Emerging Technologies in Eastern Africa, and the Africa Declaration on Artificial Intelligence, which recommend leveraging AI for climate resilience and adaptation.

The Addis Ababa Action Agenda equally creates a coherent framework for financing sustainable development which, according to General Assembly resolution 70/1, para. 40, is critical for the realisation of the SDGs and Agenda 2030.

Although these strides are remarkable, critical gaps continue to plague current regional AI governance frameworks. These primarily include the absence of comprehensive, AI-specific legislation that clearly defines AI, mandates algorithmic impact assessments for high-risk AI systems, establishes clear accountability mechanisms, guarantees independent oversight, and provides specific redress for AI-driven harms.

Additionally, there is a lack of explicit guidelines for public-sector AI deployment, inadequate provisions for algorithmic transparency and explainability, measures to address algorithmic bias, weak data sovereignty mechanisms, and limited enforcement capacity of regulatory bodies.

AI and Environmental Sustainability

Defined as ‘meeting the needs of the present without compromising the ability of future generations to meet their own needs,’ the ethos of sustainability aims at the responsible, efficient, and equitable use of resources, including natural resources.

The construction of AI data centres has been criticised for ‘guzzling a lot of water.’ A 2025 study indicates that a medium-sized data centre can consume roughly 110 million gallons of water annually for cooling purposes (equivalent to the annual water usage of approximately 1,000 households), while larger data centres can each “drink” up to 1.8 billion gallons annually (equivalent to a town of 10,000 to 50,000 people).
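A quick back-of-the-envelope check puts these figures in perspective. This is only a sketch: the baseline of roughly 300 gallons of water per household per day is an assumption (a common US estimate), not a figure from the study cited above.

```python
# Back-of-the-envelope check of the water figures cited above.
# ASSUMPTION: a typical household uses ~300 gallons of water per day
# (a common US estimate); the data-centre totals come from the article.
GALLONS_PER_HOUSEHOLD_PER_YEAR = 300 * 365  # ~109,500 gallons

medium_dc_gallons = 110_000_000    # medium-sized data centre, per year
large_dc_gallons = 1_800_000_000   # large data centre, per year

medium_equiv_households = medium_dc_gallons / GALLONS_PER_HOUSEHOLD_PER_YEAR
large_equiv_households = large_dc_gallons / GALLONS_PER_HOUSEHOLD_PER_YEAR

print(f"medium data centre ≈ {medium_equiv_households:,.0f} households")
print(f"large data centre  ≈ {large_equiv_households:,.0f} households")
```

Under that assumption, the medium figure lands close to the article’s approximately 1,000 households, and the large figure, at roughly 16,000 households, is consistent with a town of 10,000 to 50,000 people.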

These data centres have also been faulted for serious carbon emissions, exacerbating the e-waste problem, driving the depletion of resources and inflating electricity costs. Indeed, by 2030, data centre power demands are expected to increase by 160%. This has made ecology-friendly digital policies and innovations a much-needed elixir.

AI systems also have the potential to exacerbate systemic biases and social inequality affecting the most vulnerable, especially women and marginalised communities living in climate-sensitive areas. Additionally, AI has been cited as posing a risk of spreading climate disinformation and perpetuating the repression of environmental defenders.

Within the African context, the lack of safely managed water and sanitation remains a critical equality issue, disproportionately affecting women and girls. They often bear the burden of water collection, face heightened risks to health and safety, and experience barriers to education and economic participation. The African Union (AU) has declared 2026 as the Year of “Assuring Sustainable Water Availability and Safe Sanitation Systems to Achieve the Goals of Agenda 2063.” The theme elevates water and sanitation to a continental political priority, recognizing them as catalysts for economic transformation, climate resilience, public health, food security, and regional stability.

Furthermore, Africa’s expansive resources and rising data needs make it well-suited for green-powered data hubs, enhancing digital inclusion and resilience. One example of such initiatives is the Africa Data Centre (ADAC) in Kenya, which signed a 20-year power purchase agreement (PPA) with DPA Southern Africa (DPA SA) and iColo and is poised to become one of the greenest global connectivity hubs; it may serve as a strategic pilot to be upscaled across the continent. Hence, there is an opportunity to align the operations and expansion of AI technologies with the AU agenda on sustainable water resource management. Recently, however, the AUC signed a memorandum of understanding with Google to advance Africa’s sovereign AI and digital capacity, despite Google facing increasing criticism over the adverse environmental impact of its data centres.

Nonetheless, the ‘corporate-everything’ model behind the AI ecosystem encourages companies to prioritise profit over environmental protection, externalising costs, influencing regulations and relegating environmental responsibility to the back seat.

The Way Forward

Notwithstanding this progress and the critical challenges above, there is hope for stakeholders to take actions that align AI technologies with environmental sustainability goals in Africa and beyond. The key actions include:

  1. Member states adhering to digitally and ecologically forward-looking laws through legal harmonisation, reform, and effective implementation, supported by tailored awareness-raising and capacity-building initiatives for legislative and judicial bodies, as well as public institutions;
  2. Big tech meaningfully adopting accessible, explainable, accountable, transparent, and competent AI governance mechanisms which embed the ethos of environmental sustainability and reflect the lived experiences of Africans, especially the most vulnerable;
  3. Human rights advocates, academia, researchers, think tanks, and networks spearheading needs-based research and human rights advocacy initiatives to ensure the effective and timely enforcement of laws, and as a means of building robust cross-sectoral partnerships in and around environmental due diligence and inclusive digital transformation; as well as
  4. Civil society, through training and capacity building, empowering grassroots initiatives which leverage AI technologies to protect and preserve indigenous ecological practices geared toward environmental sustainability and social transformation within their communities.

Reflections for Africa from the 2026 AI Impact Summit in India

By Raymond Amumpaire |

The AI Impact Summit 2026 (the Summit) (Feb 16–20, 2026) brought together participants from across the globe to build consensus on strengthening international cooperation and multistakeholder engagement on Artificial Intelligence (AI). The Summit was organised around seven Chakras (pillars): development of human capital; broadening access for social empowerment; trustworthiness of AI systems; energy efficiency of AI systems; use of AI in science; democratising AI resources; and use of AI for economic growth and social good. These were aimed at translating the Summit’s guiding principles of People, Planet, and Progress into concrete actions.

Lack of binding regulation 

Despite calls from international, regional and municipal players for increased regulation, there is a glaring lack of a binding instrument on global AI governance. In response to this challenge, the Summit produced the New Delhi Declaration on AI Impact, a voluntary and non-binding framework inspired by “सर्वजन हिताय, सर्वजन सुखाय” (Welfare for all, Happiness of all). It calls for international cooperation and multistakeholder engagement across countries along the seven Chakras (pillars) of the AI Impact Summit. It further seeks to foster shared understanding, while respecting national sovereignty, of how AI could be made to positively serve humanity, in complementarity with existing international and other initiatives. The Declaration was adopted and endorsed broadly by more than 89 countries, including the United States, despite Washington’s resistance to formal global AI governance efforts.

Unfortunately, the question of whether AI should be governed superseded the question of who governs AI and for whom. The Declaration has been criticised for reflecting a growing pattern across nearly all global AI conversations: ambition without clear convergence on enforcement. As with every global AI dialogue, the Summit, through the Declaration, made ambitious promises on regulation and the democratisation of access but lacked a binding mechanism. This flaw defeats a well-intentioned regulatory spirit while exacerbating the technological apartheid problem for the Global South.

Where were the voices of the affected communities?

The Summit, poised to be an opportunity for the Global South to have its voices and perspectives on the AI governance architecture heard, ended up largely affording participation opportunities to top AI company CEOs, UN officials and Ministers. This relegated views from the Global South, which bears the brunt of the AI ecosystem. Even the civil society actors that participated were confined to tokenistic consultative roles rather than active participation.

The Global South was present mainly as the subject rather than the architect of the discussion. Initiatives like the Multistakeholder Approaches to Participation in AI Governance (MAP-AI) model aim to fill this gap by fostering meaningful and effective multistakeholder engagement across a range of critical AI governance-focused convenings, processes, and initiatives, with a particular focus on elevating underrepresented voices and perspectives and highlighting leadership from the Global Majority. Reflections and recommendations from MAP-AI activities at the India AI Impact Summit, 2026 may be found here.

Human Capital and “AI” Workers’ Rights

Despite efforts to deliberate on responsible human capital at the Summit, the reality is that the entire AI ecosystem and supply chain, from generating, annotating, and verifying data for AI training to content moderation, largely runs on the labour of thousands of gig workers from Kenya to Madagascar, India, the Philippines, and Venezuela. This raises issues of digital colonisation and the human cost of AI, as well as business and human rights concerns. Researchers argue that leading AI companies from the Global North leverage weaknesses in the labour laws of Global South countries to power their machines while “sidestepping accountability” and obligations. Workers also have weak legal standing, ineffective grievance redressal mechanisms and negligible institutional protection. The Summit could have delivered more in terms of centering workers’ views and addressing these labour concerns.

Deepfakes continue to be an issue 

The impact of AI on information integrity remained a pressing concern for plenaries and discussions at the Summit. The frontier of AI-facilitated information manipulation, which until recently comprised non-consensual intimate imagery, political disinformation and financial fraud, has taken a new twist, with citizens and governments taking action against major AI companies that commercialise AI-powered applications capable of generating sexualised deepfakes of women and children. The lack of centrality, in design and governance, of affected persons’ views, coupled with broader gaps, makes the enforcement of existing laws on these issues and platform accountability difficult. The Summit discussions touched on these ‘open wounds’ but produced no concrete ways forward.

The other concern is the subliminal retirement of the word ‘safety’. The shift from the AI Safety Summit at Bletchley Park, Buckinghamshire (2023) to the AI Action Summit in Paris, France (2025) to the AI Impact Summit reflects a deliberate conditioning of what the policy priorities should be, and it is not ‘safety’.

Commitments

The Summit also delivered the New Delhi Frontier AI Commitments, a set of voluntary commitments from participating organisations and global frontier AI firms aimed at democratising AI access and innovation, reflecting a shared vision that the development and deployment of AI systems be aligned with equity, cultural diversity, and real-world needs, particularly across the Global South. In particular:

  1. The first commitment, “Advancing Understanding of Real-World AI Usage” focuses on real-world AI usage through anonymised and aggregated insights. Participating organisations will work to generate evidence that supports policymaking on the impact of AI on jobs, skills, productivity, and economic transformation. By enabling data-driven analysis of how AI is being deployed across sectors, the initiative aims to help governments and institutions craft informed strategies that maximise benefits while mitigating risks associated with technological change.
  2. The second commitment, “Strengthening Multilingual and Contextual Evaluations”, centers on efforts to ensure the effectiveness of AI systems across languages, cultures, and real-world use cases. Organisations will collaborate with governments and local ecosystems to develop datasets, benchmarks, and expertise that support evaluation in under-represented languages and cultural contexts. This effort will improve AI performance for diverse populations and help democratise access to high-quality AI experiences globally, while preserving flexibility in the choice of tools and evaluation methodologies.

Despite a number of risks, ranging from national security to loss of control, taking centre stage in the corridors, these concerns never made it into the official outcome. The time to act is now, and the window is fast closing for Africa.

Key takeaways for Africa:

  1. Governments and regulators should prioritise rights-respecting AI strategy development that mandates human rights impact assessments at all stages of procurement and deployment across all sectors of their economies.
  2. Governments should ensure effective implementation of AI governance efforts through AI-literacy and capacity-building initiatives targeting public officers and institutions.
  3. Big tech should entrench human rights in the design and roll-out of AI, and ensure benefit sharing from final products whenever resources such as data for training foundational models are sourced from low-income countries.
  4. Civil society organisations and networks such as DRAA, as well as academia, should lead on documentation and evidence-building of AI-related harms to facilitate platform accountability, strategic litigation and transparency.
  5. Civil society also has a key role in advocating for inclusive models (especially on women’s and children’s online safety) and raising community awareness of AI and emerging technologies.
  6. Regional institutions such as the African Union should prioritise setting strategic agendas, forums and entry points to deliberate with big tech companies and private, public, civic, and digital actors, to name the gap clearly and work structurally to close it.

Building Digital and Media Literacy Skills for Safer Online Spaces for Children in Southern Africa

By Phakamile Madonsela |

Children across Southern Africa are growing up in an increasingly connected world, yet most lack the digital literacy skills needed to navigate it safely. Rapid growth in mobile and internet access across South Africa, Lesotho, Botswana, Namibia, Zambia, and Zimbabwe has outpaced meaningful investment in digital education, leaving millions of young people exposed to cyberbullying, child sexual abuse material (CSAM), technology-facilitated gender-based violence (tfGBV), misinformation, and the risks of misused artificial intelligence. Marginalized children, particularly girls, rural learners, and children with disabilities, bear a disproportionate share of these risks.

The current report by the Digital Rights Alliance Africa (DRAA) analyses digital literacy education, government policies, and child online safety frameworks across six Southern African countries. Among other things, the report identifies the common digital risks faced by the African child and the gap between legislative ambition and on-the-ground reality, and proposes actionable recommendations for dealing with those risks and challenges.

Reiterating the guiding principles grounded in the African Union’s Digital Education Strategy for Member States to promote digital education implementation in the region, the report calls on national governments to:

  • Invest in teacher digital competence at all levels of education;
  • Prioritise infrastructure, devices, and connectivity for vulnerable learners, especially girls, rural children, and children with disabilities;
  • Foster multi-stakeholder collaboration across government, civil society, researchers, and the private sector; and
  • Enact policies that make digital literacy a core and compulsory component of children’s education across the region.

Read the Full Report here

Considerations for Responsible Development and Trustworthy Use of Artificial Intelligence in Africa

By Beatrice Kayaga |

According to the United Nations Educational, Scientific and Cultural Organisation (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2022), Artificial Intelligence (AI) is the ability of machines to perform tasks in a manner that mimics intelligent human behaviour, often involving elements such as reasoning, learning, perception, prediction, planning, or control. At its core, AI encompasses several key fields, such as Natural Language Processing, which enables machines to understand and generate human language, making applications like chatbots, translation tools, and virtual assistants possible. Machine Learning, on the other hand, is a component of Artificial Intelligence that enables systems to learn from data and improve over time without being explicitly programmed.

Deep Learning is a subset of Machine Learning which uses artificial neural networks to model complex patterns in data. Its applications include speech recognition, autonomous vehicles and image classification. Lastly, Large Language Models like ChatGPT, Llama and Claude, among others, are designed to process and generate human-like language. These broadly trained systems can be fine-tuned for specific tasks and user needs.

Developing these models follows a structured Collect, Train, Evaluate, and Tune pipeline. Initially, vast amounts of general data are collected to train a foundation model, often using significant computational resources. Once trained, the model’s performance is evaluated, and it is then fine-tuned with domain-specific data to enhance its accuracy and relevance in specific applications.
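As a loose illustration of that pipeline, the four stages can be sketched in miniature. This is not any production framework: the nearest-centroid “model” and all the data below are invented purely to show the Collect, Train, Evaluate, and Tune shape.

```python
# Toy sketch of the Collect -> Train -> Evaluate -> Tune pipeline
# described above, using an invented nearest-centroid classifier
# on synthetic 1-D data. Everything here is for illustration only.
import random

random.seed(0)

def make_data(spec, n):
    """Collect: draw n labelled samples per class from Gaussians."""
    return [([random.gauss(mean, sd)], label)
            for label, mean, sd in spec
            for _ in range(n)]

def train(dataset):
    """Train: 'fit' a model by computing one centroid per class."""
    by_label = {}
    for (x,), label in dataset:
        by_label.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda label: abs(model[label] - x))

def evaluate(model, dataset):
    """Evaluate: accuracy of the model on a dataset."""
    return sum(predict(model, x) == y for (x,), y in dataset) / len(dataset)

# 1-2. Collect broad general data and train a 'foundation' model.
general_data = make_data([(0, -2.0, 1.0), (1, 2.0, 1.0)], n=200)
foundation = train(general_data)

# 3-4. Evaluate on a specific domain, then tune on domain data.
domain_data = make_data([(0, 0.5, 0.5), (1, 3.0, 0.5)], n=50)
tuned = train(domain_data)  # naive 'fine-tuning': re-fit on domain data

print("foundation accuracy on domain:", evaluate(foundation, domain_data))
print("tuned accuracy on domain:     ", evaluate(tuned, domain_data))
```

The general-purpose model performs noticeably worse on the shifted domain than the tuned one, which is the core reason the fine-tuning stage exists in real pipelines.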

However, this process is not without challenges. Bias and transparency are major concerns, as AI systems can inadvertently perpetuate or amplify societal biases embedded in the data on which they are trained. For instance, Amazon’s job recruiting tool was found to favour men over women, mainly because it was trained on predominantly male-dominated resume data. Furthermore, the lack of transparency in so-called “black box” algorithms has sparked concerns around explainability and accountability, making it difficult for users and regulators to understand how decisions are made.
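One simple way such bias can be surfaced in practice is by comparing a model’s selection rates across demographic groups, a basic disparate-impact check. The sketch below uses entirely made-up numbers, and the 80% (“four-fifths”) threshold is a rule of thumb borrowed from US employment-selection guidance, not a universal legal standard.

```python
# Minimal disparate-impact check: compare selection rates across groups.
# The decision data below is entirely synthetic, for illustration only.
from collections import defaultdict

# (group, model_decision) pairs, e.g. output of a screening model:
# 1 means the candidate was selected, 0 means rejected.
decisions = (
    [("men", 1)] * 80 + [("men", 0)] * 20 +
    [("women", 1)] * 40 + [("women", 0)] * 60
)

selected = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    total[group] += 1
    selected[group] += decision

# Selection rate per group, then the ratio of lowest to highest rate.
rates = {g: selected[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

# "Four-fifths rule": flag if any group's selection rate falls below
# 80% of the most-selected group's rate.
print("selection rates:", rates)
print("disparate impact ratio:", ratio, "->",
      "flag" if ratio < 0.8 else "ok")
```

In this invented example the check flags the model, since women are selected at only half the rate of men. Audits of real systems use richer metrics, but the underlying comparison is the same.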

Privacy is another critical dimension of AI. Since AI systems often rely on massive datasets, frequently including personal data, the risk of privacy invasion looms large. For example, while facial recognition offers advantages such as improved security, social media photo tagging, contact tracing during public health crises like COVID-19, and personalised marketing, its deployment by law enforcement and surveillance agencies has sparked intense ethical and legal debates concerning data rights and risks such as consent, data retention, and potential misuse.

AI can also facilitate the spread of misinformation and disinformation by creating highly realistic yet false content, such as deepfakes that make people appear to say or do things they never did, and distributing it instantly through digital platforms. When such content spreads rapidly, it can fuel an explosion of fake news that potentially undermines democratic processes, manipulates public opinion and erodes trust in the media. AI can also violate fundamental rights and freedoms such as freedom of expression and information, because AI moderation systems designed to detect and remove harmful content sometimes censor legitimate speech due to algorithmic errors or biases. These practices tend to have a chilling effect on digital rights and at times force people into self-censorship for fear of repercussions.

As artificial intelligence becomes deeply embedded in society, the question of trustworthy, secure, privacy-preserving AI that is aligned with human rights has become critical. These concerns have prompted the development of legal and regulatory frameworks to balance the drive for AI innovation with societal safeguards. For example, in the European Union, the EU AI Act takes a risk-based approach to regulating AI, categorising AI systems by their potential harm. The United States, by contrast, uses export controls and sector-specific regulations to govern sensitive AI technologies as a way of protecting national security. In regions like Africa, where there are no dedicated AI frameworks, governments are adapting existing instruments such as data protection laws, AI strategies, procurement regulations and access to information statutes to regulate AI. However, the lack of specific AI laws leaves significant challenges related to algorithmic bias, privacy, accountability mechanisms, intellectual property rights protection and the widespread procurement and deployment of AI in largely unregulated spaces.

Trustworthy AI can also be achieved by integrating a human rights-based approach (HRBA) into AI systems. The HRBA places the protection of fundamental rights and human dignity at the centre of AI development, deployment, and governance. This approach ensures that AI systems are designed and operated in ways that respect fundamental rights and freedoms such as the right to privacy, freedom of expression, non-discrimination, and equality. By embedding human rights principles into AI design, developers can mitigate these risks and create systems that serve all individuals fairly and equitably.

Lastly, mandatory impact assessments should be conducted prior to the deployment of AI systems. The measures should evaluate potential social, economic, environmental, and ethical implications, and ensure that adequate safeguards are in place. Together, these measures will create a comprehensive governance structure that not only mitigates risks but also promotes trust, transparency, and accountability in the use of AI.

As AI systems continue to influence critical aspects of society, it is important to ensure that they are grounded in transparency, fairness, accountability, and ethics. This would in turn build public confidence and safeguard against unintended harms. Achieving trustworthy AI requires a multi-stakeholder approach, with collaboration among researchers, developers, policymakers, and civil society to align technological progress with the societal needs and rights that underpin just and equitable societies.

Exploring the Data Protection and Data Governance Challenges and Opportunities in Lesotho

By Letsatsi Lekhooa |

In today’s interconnected world, data has become an important asset. It is now a powerful driver of social and economic development, a tool for improving governance, and an enabler of innovation across sectors. From digital payments and e-health services to e-government platforms, data is at the centre of everyday life. Yet, with this transformation comes a pressing responsibility to protect rights, including citizens’ right to data protection. Lesotho was among the early African countries to recognise this responsibility. In 2011, the country enacted the Data Protection Act, a law that established privacy as a legal right and imposed obligations on organisations that collect and process personal data. The Act also mandated the creation of a Data Protection Commission (DPC) to oversee and enforce compliance. This was a bold and progressive move at the time, placing Lesotho ahead of many of its regional peers.

However, almost fifteen years later, the Commission has not been established. The absence of this critical institution means that the law is not being implemented in practice. Citizens have legal rights on paper, but no independent body exists to enforce them or to hold organisations accountable.

This gap has left individuals exposed to potential data misuse and has weakened public trust in digital services. At the same time, Lesotho is embarking on ambitious digital reforms. The government is investing in Digital Public Infrastructure (DPI), while civil society, academia, and development partners are working to strengthen national data governance capacity. These developments highlight both the challenges and the opportunities that lie ahead. The choices made today will determine whether Lesotho can build a digital future that is inclusive, trusted, and rights-based.

The Current Landscape of Data Protection

The Data Protection Act 2011 remains the cornerstone of Lesotho’s data protection framework. The Act defines the obligations of data controllers and processors, recognises individual rights such as access and correction of personal data, and envisions an independent Data Protection Commission with oversight powers. However, in the absence of the Commission, the framework remains incomplete.

For several years, ministries debated whether the Act should be amended before implementation, frequently citing financial constraints as a limiting factor. At the same time, responsibility for digital policy and data governance remains fragmented across multiple government institutions and the private sector, resulting in duplication of efforts and delays in advancing effective data protection implementation.

Meanwhile, institutions such as banks, telecom operators, and universities have developed their own internal policies to manage data. While these efforts are important, they are siloed and inconsistent. Without national authority to provide oversight, citizens have little assurance that their rights are being protected.

Civil society has been vocal about these shortcomings. The National University of Lesotho Legal Clinic, through Advocate Rasetla Mofoka, has emphasised that Basotho currently lack an independent body to which they can report breaches or violations of their privacy. Instead, citizens are forced to rely on the goodwill of institutions themselves. This situation underscores the urgent need for a regulator that can enforce compliance, investigate complaints, and ensure accountability.

While Lesotho delays the necessary actions on data protection, digital advancements that threaten data security continue to affect it. The personal information protection law of one of its peers, the Republic of South Africa, defines ‘personal information’ and ‘data subject’ to encompass legal persons and other identifiable social groups. The definition raises serious considerations amid the data protection risks posed by the introduction of Artificial Intelligence, which affects the personal data and privacy rights of legal persons and other identifiable social groups. The sincere pursuit of interoperability demands that Lesotho and its peers consider the position taken by the RSA and take a stance.

A Renewed Push for Data Governance

Although implementation has been slow, recent developments suggest a renewed focus on data governance at the highest levels of government. In late July 2025, the Ministry of Information, Communications, Science, Technology, and Innovation (MICSTI) convened the Lesotho Data Governance Capacity Building and Stakeholder Engagement Workshop. The event brought together policymakers, regulators, private sector players, academics, and civil society to explore the role of data as a driver of development. The capacity building was spearheaded and facilitated by CIPESA.

Closing the workshop, Permanent Secretary Kanono Leronti Ramashamole, speaking on behalf of the Minister Hon. Nthati Moorosi, captured the significance of the moment: “Data is no longer merely a byproduct of administration. It is a strategic national asset, a cornerstone of governance, digital service delivery, innovation, and trade. Responsible data governance is not just a technical necessity; it is a governance imperative”. This statement reflects a major shift: data is now being positioned not only as an ICT issue but as a national development priority.

One month later, in August 2025, MICSTI hosted another milestone event: a multi-stakeholder workshop on Digital Public Infrastructure (DPI), supported by UNDP Lesotho. The workshop aimed to validate Lesotho’s DPI Framework Concept Note, which sets out plans for:

  • Digital identity for all Basotho through biometric authentication.
  • Secure data exchange to protect citizens’ information and enable interoperability between systems.
  • Digital payments for government-to-person transfers, person-to-government transactions, and cross-border trade.
  • Cybersecurity and digital trust to safeguard online services and ensure resilience.

Stakeholders from across government, the financial sector, telecom operators, academia, and civil society contributed to the discussions. The framework will guide Lesotho’s digital transformation over the next three to five years and is closely linked to continental initiatives like the AU Digital Transformation Strategy and global priorities such as the UN Global Digital Compact.

These initiatives show that Lesotho is not standing still. The recognition of data as a strategic resource is growing, and the momentum for reform is being built. The challenge is to ensure that these efforts are matched by the institutional reforms needed to enforce rights and build trust.

Persistent Challenges

Despite this renewed momentum, the challenges facing Lesotho remain substantial. The Data Protection Act of 2011 cannot be effectively enforced in the absence of an operational Data Protection Commission, leaving citizens’ rights largely unprotected in practice. Institutional responsibilities for data protection and data governance are dispersed across multiple ministries and agencies, resulting in duplication, fragmented accountability, and slow progress.

While funding constraints are frequently cited as the primary cause of delays, many observers argue that the underlying issue is political prioritisation rather than the absolute availability of resources. Years of workshops, consultations, and strategy discussions have yielded limited tangible outcomes, contributing to growing frustration among civil society actors and development partners.

In the absence of a central oversight body, public and private organisations continue to rely on internal policies, leading to inconsistent standards and weak, uneven protection for citizens. In addition, there may be a need to revisit and clarify key definitions within the Data Protection Act, particularly the concepts of “personal information” and “data subject,” to ensure adequate protection for legal persons and identifiable social groups whose data may also be vulnerable to misuse.

Opportunities to Build On

Lesotho’s current challenges can be turned into opportunities if decisive action is taken.

Establishing the Data Protection Commission: The most urgent step is to operationalise the Commission mandated by the 2011 Act. This would establish an independent body to enforce compliance, investigate breaches, and provide citizens with a trusted avenue to report violations. Once in place and functional, the Data Protection Commission can identify shortcomings in the Act that may necessitate review, including any hindrances to interoperability.

Aligning with the African Union: Lesotho has signed but has yet to ratify the AU Malabo Convention on Cybersecurity and Personal Data Protection. Ratifying and implementing the Convention would align the country with continental standards, promote harmonisation of laws, and strengthen its role in cross-border digital trade under the African Continental Free Trade Area (AfCFTA).

Harnessing Digital Public Infrastructure: The DPI framework offers an unprecedented opportunity to modernise governance and service delivery. But without strong data protection, DPI systems risk eroding trust. Establishing the Data Protection Commission is critical to ensuring DPI is built on transparency, accountability, and citizen rights.

Empowering citizens and civil society: Civil society organisations (CSOs) can play a vital role by advocating for reforms, raising awareness, and educating citizens about their rights. Citizens themselves can begin exercising their rights under the 2011 Act by demanding transparency and accountability from institutions.

Expanding academic and professional expertise: Universities such as Botho University and the National University of Lesotho are producing graduates in data science and statistics. At the same time, lawyers are pursuing specialisations in digital law and cybersecurity. This growing pool of expertise can directly support the establishment of a functional regulatory authority and strengthen national capacity.

Steps Already Taken by the State

It is important to acknowledge progress while remaining clear about what remains to be done. The Data Protection Act of 2011 continues to provide a strong legal foundation for the protection of personal data. The Data Governance Workshop held from 28 to 31 July 2025 reaffirmed the strategic value of data for national planning, service delivery, and accountability. Similarly, the Data Protection Impact Assessment Workshop conducted in August 2025 produced a practical roadmap for building inclusive and rights-respecting digital systems. Together, these steps demonstrate clear political recognition of the importance of data governance and protection. What is now required is sustained implementation, stronger institutional coordination, and effective enforcement to ensure that these commitments translate into tangible and lasting outcomes.

Conclusion

Lesotho’s journey in data protection and governance is one of early ambition, stalled implementation, and renewed opportunity. The legal framework exists, but without a Data Protection Commission it remains incomplete. Citizens have rights in theory, but no mechanism to enforce them in practice. At the same time, the country is making bold moves through DPI and capacity-building initiatives. Universities, lawyers, and civil society are engaged. Development partners are supportive. The momentum is there, but action is urgently required. If Lesotho operationalises its Data Protection Commission, ratifies the AU Malabo Convention, and leverages the growing expertise within its institutions, it can build a trusted data governance ecosystem. This will not only protect citizens’ rights but also enable the country to participate fully in Africa’s digital economy.

Call to Action

The path forward is clear and requires coordinated action across all sectors. Government leaders must establish and adequately resource the Data Protection Commission without further delay to ensure that the existing legal framework is effectively implemented. Civil society organisations and NGOs should sustain advocacy efforts, empower citizens with knowledge, and hold public and private institutions accountable for their data protection obligations. Universities and legal professionals have a critical role in building national expertise through research, training, and professional practice that supports ongoing reforms. Development partners should complement these efforts by providing targeted technical and financial support for institutional strengthening and capacity development. At the same time, citizens must demand transparency, actively use their voices, and assert their digital rights in everyday interactions with digital systems.

As Lesotho continues to develop digital identity systems, payment platforms, and data exchange frameworks, the need for robust and enforceable data protection has never been greater. A functioning Data Protection Commission is not only a legal requirement but also the foundation for public trust, social inclusion, and sustainable innovation in the digital era.

Ensuring that no Mosotho is left behind, and no Mosotho’s rights are left unprotected, is not only a governance responsibility but a national imperative.
