Uganda’s Data Regulator Upholds Rights Against Google LLC

Uganda’s Personal Data Protection Office (PDPO) has delivered a decisive ruling in favour of four complainants against Google LLC, finding the tech giant in breach of Uganda’s Data Protection and Privacy Act, Cap. 97. The PDPO determined that Google LLC failed to register locally as a data collector and data controller, and unlawfully transferred Ugandan data abroad without demonstrating proper safeguards or accountability. The PDPO has ordered Google to register within 30 days and provide evidence of compliance for cross-border transfers, marking a significant victory for Ugandan digital rights. This ruling sends a loud message that global tech companies must adhere to local data protection laws and respect the privacy of Ugandan citizens.

Source:
Personal Data Protection Office (PDPO), Ssekamwa Frank & 3 Ors v. Google LLC Decision, July 18, 2025.

Uganda Regulator Finds Google in Breach of Data Protection Law

Uganda’s Personal Data Protection Office (PDPO) has ruled that Google LLC violated Uganda’s Data Protection and Privacy Act by collecting and processing Ugandan citizens’ data without registering locally. The PDPO ordered Google to register with Uganda’s regulator within 30 days and provide evidence on how it complies with local data transfer requirements. This decision expands accountability for global tech firms operating in Uganda, stressing that even companies without a physical presence must comply with Ugandan law when handling personal data of Ugandans. However, enforcement of such rulings remains limited, as Uganda’s PDPO does not have authority to issue fines or binding corrective orders to non-compliant entities, highlighting the need to strengthen the regulator’s powers for effective enforcement.

Source:
Ugandan Regulator Finds Google in Breach of Country’s Data Protection Law, Orders Local Registration – CIPESA, July 18, 2025.

HEAPI Requests Data Protection Assessment for Uganda’s National ID System

The Health Equity and Policy Initiative (HEAPI) has formally asked Uganda’s National Identification and Registration Authority (NIRA) for documents detailing the Data Protection Impact Assessment (DPIA) and Data Migration Policy for the National ID system. This step aims to ensure transparency and accountability in managing personal data during the mass National ID enrollment and renewal process. HEAPI emphasizes that Uganda’s Data Protection and Privacy Act (DPPA) and 2021 Regulations require proper risk assessment, safeguards, and notification regarding the handling of sensitive personal and biometric data. The request highlights the critical need for openness and the protection of data subject rights in national data initiatives.

Source:
Health Equity and Policy Initiative (HEAPI), Official Correspondence to NIRA, August 6, 2025.

Uganda Data Protection Office Decision: Data Controllers Held Liable

The Personal Data Protection Office (PDPO) of Uganda has issued a decision reaffirming that data controllers are liable for the proper handling of personal data. In a recent complaint (PDPO 061/2024) involving Chipper Technologies Uganda Limited, the PDPO assessed multiple issues including data retention, breach notification, subcontractor alerts, and access rights. While no infringement occurred in this case, the PDPO directed the respondent to update their Privacy Notice, clearly requiring explicit consent for data processing beyond regulatory compliance. The office highlighted that failure to follow these directions could result in legal penalties, fines, or imprisonment, ensuring strong accountability for data controllers under Uganda’s Data Protection and Privacy Act, Cap. 97.

Source:
Complaint PDPO 061/2024, Personal Data Protection Office Uganda, Decision dated March 12, 2025

Lesotho Government Warns Public on Deep Fakes Amid Lack of Cyber Law

The Government of Lesotho has issued a warning to the public about the circulation of AI-generated fake videos involving His Majesty King Letsie III and the Prime Minister, Mr Samuel Ntsokoane Matekane. Authorities caution that these videos are false, misleading, and potentially fraudulent, aiming to deceive and harm the reputation of their leaders. The public is urged to be vigilant, to rely on official government channels, and to report suspicious content. Importantly, Lesotho currently lacks a dedicated cyber law to address such digital crimes, which highlights the challenges in prosecuting and preventing digital disinformation and misinformation.

Source:
Lesotho Government Press Statement, Ministry of Information, Communications, Science, Technology, and Innovation, August 15, 2025.


UN General Assembly Adopts Landmark AI Resolution

The United Nations General Assembly has officially adopted resolution A/79/L.118, marking a major milestone in the regulation and governance of Artificial Intelligence worldwide. Passed on August 28, 2025, the resolution sets out global principles for the ethical use and development of AI, emphasizing human rights protections, transparency, and inclusive international cooperation. This move reflects growing global commitment to managing opportunities and risks posed by AI, ensuring that innovation serves humanity while safeguarding public interest.

Source:
UN General Assembly, Resolution A/79/L.118 on Artificial Intelligence, August 28, 2025.

Elevating Children’s Voices and Rights in AI Design and Online Spaces in Africa

By Patricia Ainembabazi

As Artificial Intelligence (AI) reshapes digital ecosystems across the globe, one group remains consistently overlooked in discussions around AI design and governance: children. This gap was keenly highlighted at the Internet Governance Forum (IGF) held in June 2025 in Oslo, Norway, where experts, policymakers, and child-focused organisations called for more inclusive AI systems that protect and empower young users.

Children today are not just passive users of digital technologies; they are among the most active and most vulnerable user groups. In Africa, internet use among youth aged 15 to 24 was partly fuelled by the Covid-19 pandemic, which deepened their reliance on digital platforms for learning, play, and social interaction. New research by the Digital Rights Alliance Africa (DRAA), a consortium hosted by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), shows that this rapid connectivity has amplified exposure to risks such as harmful content, data misuse, and algorithmic manipulation that are especially pronounced for children.

The research notes that AI systems have become deeply embedded in the platforms that children engage with daily, including educational software, entertainment platforms, health tools, and social media. Nonetheless, Africa’s emerging AI strategies remain overwhelmingly adult-centric, often ignoring the distinct risks these technologies pose to minors. At the 2025 IGF, the urgency of integrating children’s voices into AI policy frameworks was made clear through a session supported by the LEGO Group, the Walt Disney Company, the Alan Turing Institute, and the Family Online Safety Institute. Their message was simple but powerful: “If AI is to support children’s creativity, learning, and safety, then children must be included in the conversation from the very beginning.”

The forum drew insights from recent global engagements such as the Children’s AI Summit of February 2025 held in the UK and the Paris AI Action Summit 2025. These events demonstrated that while children are excited about AI’s potential to enhance learning and play, they are equally concerned about losing creative autonomy, being manipulated online, and having their privacy compromised. A key outcome of these discussions was the need to develop AI systems that children can trust; systems that are safe by design, transparent, and governed with accountability.

This global momentum offers important lessons for Africa as countries across the continent begin to draft national AI strategies. While many such strategies aim to spur innovation and digital transformation, they often lack specific protections for children. According to DRAA’s 2025 study on child privacy in online spaces, only a handful of African countries have enacted child-specific privacy laws in the digital realm. Although instruments like the African Charter on the Rights and Welfare of the Child recognise the right to privacy, regional frameworks such as the Malabo Convention, and even national data protection laws, rarely offer enforceable safeguards against AI systems that profile or influence children.

Failure to address these gaps will leave African children vulnerable to a host of AI-driven harms ranging from exploitative data collection and algorithmic profiling to exposure to biased or inappropriate content. These harms can deprive children of autonomy and increase their risk of online abuse, particularly when AI-powered systems are deployed in schools, healthcare, or entertainment without adequate oversight.

To counter these risks and ensure AI becomes a tool of empowerment rather than exploitation, African governments, policymakers, and developers must adopt child-centric approaches to AI governance. This could start with mainstreaming children’s rights, such as privacy, protection, education, and participation, into AI policies. International instruments like the UN Convention on the Rights of the Child and General Comment No. 25 provide a solid foundation upon which African governments can build desirable policies.

Furthermore, African countries should draw inspiration from emerging practices such as the “Age-Appropriate AI” frameworks discussed at IGF 2025. These practices propose clear standards for limiting AI profiling, nudging, and data collection among minors. Given that only 36 out of 55 African countries currently have data protection laws, and few of these contain child-specific provisions, policymakers must step up efforts to strengthen these frameworks. Such reforms should require AI tools targeting children to adhere to strict data minimisation, transparency, and parental consent requirements.

Importantly, digital literacy initiatives must evolve beyond basic internet safety to include AI awareness. Equipping children and caregivers with the knowledge to critically engage with AI systems will help them navigate and question the technology they encounter. At the same time, platforms similar to the Children’s AI Summit 2025 should be replicated at national and regional levels to ensure that African children’s lived experiences, hopes, and concerns shape the design and deployment of AI technologies.

Transparency and accountability must remain central to this vision. AI tools that affect children, whether through recommendation systems, automated decision-making, or learning algorithms, should be independently audited and publicly scrutinised. Upholding the values of openness, fairness, and inclusivity within AI systems is essential not only for protecting children’s rights but for cultivating a healthy, rights-respecting digital environment.

As the African continent’s digital infrastructure expands and AI becomes more pervasive, the choices made today will define the digital futures of generations to come. The IGF 2025 stressed that children must be central to these choices, not as an afterthought, but as active contributors to a safer and more equitable AI ecosystem. By elevating children’s voices in AI design and governance, African countries can lay the groundwork for an inclusive digital future that truly serves the best interests of all.

Download full report here.

Human Rights in the Age of AI: Insights from a Strategic Engagement on AI Trends in Africa

By Raymond Amumpaire & Gelila Geletu

On 23-24 April 2025, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) convened a training workshop on Artificial Intelligence (AI) Regulation Trends in Africa. Hosted in Windhoek, Namibia, the event underscored CIPESA’s commitment to advancing discussions on emerging technologies in Africa and brought together lawyers, human rights advocates, media personnel, legislators, academics, and researchers from over 15 African countries. The participants engaged in rich deliberations focusing on freedom of expression, data privacy, tech-facilitated gender-based violence, Africa’s AI readiness, and the responsible use of AI.

Image from the AI Regulation Trends in Africa Workshop | Credit: Asimwe John Ishabairu/CIPESA

A highlight of the meeting was a rigorous debate on the multifaceted implications of AI for human rights, society, the rule of law, and democracy in the African context. Discussions recognised that rights-based and rights-respecting legal regimes are vital to robust and vibrant local human rights and democratic systems, and the analysis of these implications was grounded in a core principle: AI-based technologies should be developed and designed with the interests and realities of African users at their core. Such an approach enhances transparency, accountability, and relevance to users through responsive and innovative products. At the same time, tensions exist between countries over design approaches, as governments prioritise different issues while grappling to balance regulation and innovation.

Information integrity was another area of focus, given rising AI-driven content generation and disinformation and their broader impact on democratic processes and public trust. Use cases from Rwanda and South Africa, presented by CIPESA’s Ashnah Kalemera, highlighted how AI-enabled voter manipulation and the negative shaping of public narratives can distort elections rather than enhance them.

However, as concerns about AI continue to grow, so do the efforts aimed at addressing them. Owilla Abiro Mercy demonstrated Thraet’s user-friendly ‘Spot the Fakes’ online gaming tool as an example of a practical way to address gaps in information integrity. The tool presents thousands of AI-generated and real pictures, capturing the thin line between fake and real content in the current hyper-digitised information age.

Discussions extended to the impact of AI in newsrooms, given the media’s role in shaping public narratives. Participants noted that the media serves the public domain by educating audiences and highlighting social injustices, and that many emerging injustices are products of AI in areas such as labour, gender, and the environment.

According to Admire Mare, Associate Professor in the Department of Communication and Media at the University of Johannesburg, South Africa, the media has been weakened by established journalists leaving the industry. This has eroded quality reporting and the culture of investigative journalism, with dire consequences for interrogating the impact of AI.

On the subject of AI readiness, a panel discussion on national-level readiness in selected African countries was moderated by Oarabile Mudongo, Ridwan Oloyede and Nashilongo Gervasius. The session examined Namibia’s and Nigeria’s AI readiness, highlighting the countries’ proactive engagement in governance efforts, ranging from developing national AI strategies and proposed legislation to establishing dedicated task forces. The discussion also explored the broader African perspective, addressed whether technology itself should be the primary focus of regulation, and detailed ongoing initiatives to build robust AI policy frameworks across the continent.

The workshop also explored the intersection of AI and data governance, highlighting concerns that require urgent policy intervention. These include the fragmentation of data governance frameworks at the national level, lack of harmonisation with regional instruments such as the AU Continental AI Strategy, and data localisation gaps.

The discussion was complemented with a practical session on advocacy strategy and action development. Through this activity, participants developed tactics unique to their experiences in a bid to forge solutions to some of the most complex hypothetical scenarios from the intersection of the AI ecosystem and the human experience within the African context.

The workshop concluded with the following calls to action:

  • The harmonisation of laws at the continental, national, and sub-national levels with international instruments such as the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the International Covenant on Economic, Social and Cultural Rights (ICESCR), as well as mechanisms such as the Universal Periodic Review (UPR).
  • Facilitating cross-sector collaboration for effective advocacy and governance (oversight and accountability) through initiatives that meaningfully engage local communities, media practitioners, human rights advocates, academia, and other key actors in the democracy and human rights ecosystem. These alliances can additionally foster resource mobilisation and knowledge sharing as a remedy for long-standing capacity bottlenecks.
  • Empowering local ‘AI auditors’ to mitigate biases existent in local data and models with an end goal of promoting an Afrocentric leadership on the global AI stage.

States should allocate resources towards capacity-enhancing efforts such as research, training, and other knowledge-sharing initiatives for national institutions, in line with the Principles Relating to the Status of National Human Rights Institutions (Paris Principles), to improve implementation.

Monitoring Digital Rights in Africa: A Critical Step Towards Safeguarding Online Freedoms

By David Iribagiza

Across Africa, digital technologies have rapidly transformed how societies function, revolutionising most sectors, including the economy, education, healthcare, and entertainment. However, these digital gains are increasingly under threat, from internet shutdowns to surveillance, censorship, and other forms of technology-facilitated violence, as the continent witnesses a troubling rise in digital rights violations.

On 22 April 2025, the International Center for Not-for-Profit Law (ICNL), in collaboration with the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), convened a training session for the Digital Rights Alliance Africa (DRAA) on Digital Rights Monitoring. The virtual training assembled 21 representatives from Uganda, Tanzania, DRC, Lesotho, Cameroon, Zimbabwe, Ethiopia, South Africa, Malawi, Zambia, Nigeria and Togo, including human rights defenders, activists, legal professionals and researchers, to deepen their understanding of the strategies and processes of monitoring human rights developments within the digital landscape.

The training aimed to advance DRAA’s mission of fostering a collaborative and inclusive digital environment across Africa by empowering civil society organisations (CSOs) to champion digital civic space and counter threats to digital rights on the continent. Members were equipped with the skills needed to promote digital rights and advocate for policies that uphold internet freedom, equity, and access for all, a proactive step toward building a coordinated monitoring initiative rooted in international standards and best practices in the region. Participants discussed the current landscape of digital rights violations, marked by network disruptions that are especially heightened during elections, as well as growing criminalisation of online dissent and censorship. Concerns were also raised about privacy violations and data breaches resulting from arbitrary surveillance, particularly how they enable the targeting of individuals and marginalised communities, and about inadequate data protection frameworks.

Shabnam Mojtahedi, a Legal Advisor on digital rights at ICNL, emphasised that digital rights monitoring processes should uphold principles of ethical data collection, verification, and victim protection. While presenting on data protection, she noted the need for clear data collection processes which allow data subjects to know who is collecting, accessing, using or analysing their data. In the absence of clear processes, data subjects remain exposed to unwarranted access and use of their data – often outside the prescribed data protection standards.

The training highlighted different monitoring approaches, including: (i) case monitoring, which focuses on individual or localised incidents; and (ii) situation monitoring, which examines broader patterns and trends.

In addition, event-based monitoring for moments like protests or campaigns, and legal monitoring, which typically involves tracking policy changes, enforcement practices, and compliance with national and international obligations, were highlighted as critical for ensuring timely responses to digital rights violations, informing collective advocacy strategies, and holding duty bearers accountable.

A significant portion of the training focused on building a robust monitoring methodology, which is essential for credibility and trust in digital rights monitoring findings. In an age of misinformation and digital manipulation, systematic verification helps prevent the spread of false narratives and strengthens the impact of advocacy by ensuring that findings are accurate, verifiable and reliable. Emphasis was placed on methods such as triangulating sources, cross-checking facts, and leveraging metadata to verify digital content. For example, weather data or reverse image searches can help confirm the authenticity of user-generated content.
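One supporting practice for the kind of systematic verification described above is to record a cryptographic fingerprint and basic metadata for each piece of collected digital evidence, so that later reviewers can confirm it has not been altered. The sketch below is a minimal, hypothetical illustration using only the Python standard library; it is not a tool or procedure from the training itself, and the file name and record fields are invented for the example.

```python
# Minimal sketch (illustrative only): building a provenance record for a
# collected file, combining a SHA-256 content fingerprint with basic
# collection metadata. Field names and the file path are hypothetical.
import hashlib
import json
import os
from datetime import datetime, timezone

def provenance_record(path: str) -> dict:
    """Return a simple provenance record for a collected evidence file."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    stat = os.stat(path)
    return {
        "file": os.path.basename(path),
        "sha256": sha256.hexdigest(),  # content fingerprint
        "size_bytes": stat.st_size,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: create a small placeholder file and print its record.
with open("evidence.jpg", "wb") as f:
    f.write(b"example image bytes")
print(json.dumps(provenance_record("evidence.jpg"), indent=2))
```

Recomputing the hash at any later point and comparing it against the stored record shows whether the file has changed since collection, which supports the accuracy and accountability goals the training emphasised.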

Participants were encouraged to clearly define the scope of their monitoring work by answering essential questions such as: What will be monitored? Who is affected? Where is the focus area geographically? And for how long will the monitoring be conducted? These questions serve as the foundation for credible and targeted monitoring efforts. 

In response to the wealth of knowledge and tools shared, participants voiced the significant challenges they face while monitoring digital rights. These include threats to their own safety, lack of resources, weak legal protections and Internet disruptions that impede research and other monitoring efforts across countries. 

As digital ecosystems evolve, so must our efforts to defend human rights online. Training sessions like these not only build knowledge but also foster solidarity among civil society actors committed to an open, inclusive, and rights-respecting digital realm, in line with the Digital Transformation Strategy for Africa 2020-2030.

Following the training, key recommendations emerged to facilitate meaningful and effective monitoring of civil liberties in the online space by DRAA members. These included:

  • Implement consistent digital security measures such as encrypting data, maintaining secure device practices, and regularly backing up information collected during monitoring processes.
  • Prioritize long-term resource planning and explore diversified funding strategies to ensure the sustainability of digital rights monitoring initiatives.
  • Establish a clear and standardized data management workflow that defines who collects, inputs, accesses, and analyzes data, along with consistent procedures to ensure accuracy and accountability in digital rights monitoring.
  • Establish regular and sustained coordination mechanisms among organizations engaged in digital rights monitoring to promote collaboration, share evidence, and avoid duplication.

Subscribe to Newsletter

Subscribe to our newsletter to get the latest news and updates.