By Raymond Amumpaire & Gelila Geletu
On 23-24 April 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a training workshop on Artificial Intelligence (AI) Regulation Trends in Africa. Hosted in Windhoek, Namibia, the event underscored CIPESA’s commitment to advancing discussions on emerging technologies in Africa and brought together lawyers, human rights advocates, media practitioners, legislators, academics, and researchers from over 15 African countries. Participants engaged in rich deliberations focusing on freedom of expression, data privacy, tech-facilitated gender-based violence, Africa’s AI readiness, and the responsible use of AI.
Image from the AI Regulation Trends in Africa Workshop | Credit: Asimwe John Ishabairu/CIPESA
A highlight of the meeting was the rigorous debate on the multifaceted implications of AI for human rights, society, the rule of law, and democracy in the African context. Discussions recognised that rights-based and rights-respecting legal regimes are vital to the realisation of robust and vibrant local human rights and democratic systems. The analysis of these implications was grounded in a core principle: AI-based technologies should be developed and designed with the interests and realities of African users at their core. Such an approach enhances transparency, accountability, and relevance to users through responsive and innovative products. Currently, tensions exist between countries owing to conflicting design approaches, as governments prioritise different issues while grappling to balance regulation and innovation.
Information integrity was another area of focus, given the rise of AI-driven content generation and disinformation and their broader impact on democratic processes and public trust. Use cases from Rwanda and South Africa presented by CIPESA’s Ashnah Kalemera highlighted trends in which AI distorts elections rather than enhancing them, with voter manipulation steering public narratives in negative directions.
However, as concerns about AI continue to grow, so do the efforts aimed at addressing them. Owilla Abiro Mercy exhibited Thraet’s user-friendly ‘Spot the Fakes’ online gaming tool as an example of a practical way to address gaps in information integrity. The tool uses thousands of AI-generated and real pictures to illustrate how thin the line between fake and real content has become in the current hyper-digitised information age.
Discussions extended to the impact of AI in newsrooms, given the media’s role in shaping public narratives. Participants noted that the media’s function in the public domain is essential for educating the public and highlighting social injustices, many of which now emerge from AI itself in areas such as labour, gender, and the environment.
According to Admire Mare, Associate Professor in the Department of Communication and Media at the University of Johannesburg, South Africa, the media sector has been weakened by the departure of established journalists. This has eroded quality reporting and the culture of investigative journalism, with dire consequences for the interrogation of AI’s impact.
On the subject of AI readiness, a panel discussion on selected African countries was moderated by Oarabile Mudongo, Ridwan Oloyede, and Nashilongo Gervasius. The session looked at Namibia’s and Nigeria’s AI readiness, highlighting the two countries’ proactive engagement in governance efforts, from developing national AI strategies and proposed legislation to establishing dedicated task forces. The discussion also explored the broader African perspective, addressed the question of whether technology itself should be the primary focus of regulation, and detailed ongoing initiatives to build robust AI policy frameworks across the continent.
The workshop also explored the intersection of AI and data governance, highlighting concerns that require urgent policy intervention. These include the fragmentation of data governance frameworks at the national level, the lack of harmonisation with regional instruments such as the African Union (AU) Continental AI Strategy, and data localisation gaps.
The discussion was complemented by a practical session on advocacy strategy and action development, in which participants drew on their own experiences to develop tactics for addressing complex hypothetical scenarios at the intersection of the AI ecosystem and the human experience in the African context.
The strategic insights from the workshop culminated in the following calls to action:
- Harmonising laws at the continental, national, and sub-national levels with international instruments such as the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR), as well as with mechanisms such as the Universal Periodic Review (UPR).
- Facilitating cross-sector collaboration for effective advocacy and governance (oversight and accountability) through initiatives that meaningfully engage local communities, media practitioners, human rights advocates, academia, and other key actors in the democracy and human rights ecosystem. These alliances can also foster resource mobilisation and knowledge sharing, helping to ease longstanding capacity bottlenecks.
- Empowering local ‘AI auditors’ to mitigate biases in local data and models, with the end goal of promoting Afrocentric leadership on the global AI stage.
- States allocating resources towards capacity-enhancing efforts such as research, training, and other knowledge-sharing initiatives for national institutions, geared towards improved implementation in line with the Principles Relating to the Status of National Human Rights Institutions (the Paris Principles).