Reflections for Africa from the 2026 AI Impact Summit in India

  • April 2, 2026
  • by DRAA

By Raymond Amumpaire

The AI Impact Summit 2026 (the Summit), held February 16–20, 2026, brought together participants from across the globe to build consensus on strengthening international cooperation and multistakeholder engagement on Artificial Intelligence (AI). The Summit was organised around seven Chakras (pillars): development of human capital; broadening access for social empowerment; trustworthiness of AI systems; energy efficiency of AI systems; use of AI in science; democratising AI resources; and use of AI for economic growth and social good. These were aimed at translating the Summit’s guiding principles of People, Planet, and Progress into concrete actions.

Lack of binding regulation 

Despite calls from international, regional and municipal players for increased regulation, there is a glaring lack of a binding instrument on global AI governance. In response to this challenge, the Summit resulted in the New Delhi Declaration on AI Impact, a voluntary and non-binding framework inspired by “सर्वजन हिताय, सर्वजन सुखाय” (Welfare for all, Happiness of all). It calls for international cooperation and multistakeholder engagement across countries along the seven Chakras (pillars) of the AI Impact Summit. It further seeks to foster shared understanding, while respecting national sovereignty, on how AI could be made to positively serve humanity. This could be achieved in complementarity with existing international and other initiatives. The Declaration was adopted and broadly endorsed by more than 89 countries, including the United States, despite Washington’s resistance to formal global AI governance efforts.

Unfortunately, the question of whether AI should be governed superseded the questions of who governs AI and for whom it is governed. The Declaration has been criticised for reflecting a pattern seen in nearly all global AI conversations: ambition without clear convergence on enforcement. As with every global AI dialogue, the Summit, through the Declaration, made ambitious promises on regulation and the democratisation of access but lacked a binding mechanism. This flaw defeats the well-intentioned regulatory spirit while exacerbating the technological apartheid problem for the Global South.

Where were the voices of the affected communities?

The Summit, poised to be an opportunity for the Global South to have its voices and perspectives on the AI governance architecture heard, ended up largely affording participation opportunities to top AI company CEOs, UN officials and ministers. This relegated views from the Global South, which bears the brunt of the AI ecosystem. Even the civil society actors that participated were confined to tokenistic consultative roles rather than active participation.

The Global South was present mainly as the subject of the discussion rather than as its architect. Initiatives like the Multistakeholder Approaches to Participation in AI Governance (MAP-AI) model aim to fill this gap by fostering meaningful and effective multistakeholder engagement across a range of critical AI governance-focused convenings, processes, and initiatives, with a particular focus on elevating underrepresented voices and perspectives and highlighting leadership from the Global Majority. Reflections and recommendations from MAP-AI activities at the India AI Impact Summit 2026 may be found here.

Human Capital and “AI” Workers’ Rights

Despite efforts to deliberate on responsible human capital at the Summit, the reality is that the entire AI ecosystem and supply chain, from generating, annotating, and verifying data for AI training to content moderation, largely runs on the labour of thousands of gig workers from Kenya to Madagascar, India, the Philippines, and Venezuela. This raises issues of digital colonisation and the human cost of AI, as well as business and human rights concerns. Researchers argue that leading AI companies from the Global North exploit weak labour laws in Global South countries to power their machines while “sidestepping accountability” and obligations. Workers also have weak legal standing, ineffective grievance redressal mechanisms, and negligible institutional protection. The Summit could have delivered more in centering these views and addressing these labour concerns.

Deepfakes continue to be an issue 

The impact of AI on information integrity remained a pressing concern for plenary sessions and discussions at the Summit. The frontier of AI-facilitated information manipulation, which until recently comprised non-consensual intimate imagery, political disinformation and financial fraud, has taken a new twist, with citizens and governments taking action against major AI companies that commercialise AI-powered applications capable of generating sexualised deepfakes of women and children. The lack of centrality, in design and governance, of affected persons’ views, coupled with broader gaps, makes the enforcement of existing laws on these issues and platform accountability difficult. The Summit discussions touched on these ‘open wounds’, but offered no concrete ways forward.

The other concern is the quiet retirement of the word ‘safety’. The shift from the AI Safety Summit at Bletchley Park, Buckinghamshire (2023) to the AI Action Summit in Paris, France (2025) to the AI Impact Summit reflects a deliberate conditioning of what the policy priorities should be, and it is not ‘safety’.

Commitments

The Summit also delivered the New Delhi Frontier AI Commitments, a set of voluntary commitments from participating organisations and global frontier AI firms that aim to democratise AI access and innovation. They reflect a shared vision of ensuring that the development and deployment of AI systems are aligned with equity, cultural diversity, and real-world needs, particularly across the Global South. In particular:

  1. The first commitment, “Advancing Understanding of Real-World AI Usage”, focuses on real-world AI usage through anonymised and aggregated insights. Participating organisations will work to generate evidence that supports policymaking on the impact of AI on jobs, skills, productivity, and economic transformation. By enabling data-driven analysis of how AI is being deployed across sectors, the initiative aims to help governments and institutions craft informed strategies that maximise benefits while mitigating risks associated with technological change.
  2. The second commitment, “Strengthening Multilingual and Contextual Evaluations”, centres on efforts to ensure the effectiveness of AI systems across languages, cultures, and real-world use cases. Organisations will collaborate with governments and local ecosystems to develop datasets, benchmarks, and expertise that support evaluation in under-represented languages and cultural contexts. This effort will improve AI performance for diverse populations and help democratise access to high-quality AI experiences globally, while preserving flexibility in the choice of tools and evaluation methodologies.

Despite a number of risks, ranging from national security to loss of control, taking centre stage in the corridors, these concerns never made it into the official outcome. The time to act is now, and the window is fast closing on Africa.

Key takeaways for Africa:

  1. Governments and regulators should prioritise rights-respecting AI strategy development that mandates human rights impact assessments at all stages of procurement and deployment across all sectors of their economies.
  2. Governments should ensure effective implementation of AI governance efforts through AI-literacy and capacity-building initiatives targeting public officers and institutions.
  3. Big tech should entrench human rights in the design and roll-out of AI as well as ensure benefit sharing from final products whenever resources such as data for training foundational models are sourced from low income countries.
  4. Civil society organisations and networks such as DRAA, as well as academia, should lead on documentation and evidence building of AI-related harms to facilitate platform accountability, strategic litigation and transparency. 
  5. Civil society also has a key role in advocacy for inclusive models (especially on women and children’s online safety), and raising community awareness on AI and emerging technologies.
  6. Regional institutions such as the African Union should prioritise setting strategic agendas, forums, and entry points for deliberation with big tech companies and private, public, civic, and digital actors, to name the gap clearly and work structurally to close it.
