Development and Cooperation

Technology

Why the Global South can’t afford tech pessimism

AI technologies offer huge opportunities for people in the Global South, yet their realities are vastly underrepresented in these tools. Our interview with Payal Arora, professor of “Inclusive AI Cultures” at Utrecht University, is an urgent call to use the power of AI for the good of people and the planet, not for Silicon Valley.

Over the past year, the darker sides of the internet and artificial intelligence have become ever more obvious. We’ve seen the concentration of power among a handful of tech oligarchs and the risks posed by unchecked AI systems. Your message has always been that AI brings hope and opportunity to people worldwide. Do you still stand by that, given everything we’ve witnessed in the past year?

Absolutely. These technologies are transforming the lives of many people, particularly in the Majority World, where people live in resource-constrained, dangerous and often deeply oppressive contexts. Despite all the harms and risks, these tools have become a fundamental part of our public life.

Can you give an example of the opportunities AI offers in oppressive contexts?

Since the return to power of the Taliban, women in Afghanistan have been pushed back to a situation we can’t even imagine. They can’t access education, healthcare, public spaces; they aren’t even allowed to stand by a window. They are deprived of the fundamentals of human existence: of connecting with one another. Suicide rates have surged dramatically. For these women, digital technology has become essential, particularly the empathetic nature of AI tools such as Claude or DeepSeek. They can have a dialogue and educate themselves; they can feel visible and heard.

Another example: More than 60 countries in the world criminalise homosexuality, in many cases even with the death penalty. Queer people living in these countries cannot even speak openly to their families, friends or neighbours. So, they turn to AI tools to ask whether what they feel is normal. And GenAI tools will tell them: Yes, it is normal, and it is perfectly healthy. That voice can save many people from depression and suicide, provided that essential guardrails for the safety and security of their data are in place.

In what sense does AI ease daily life in resource-constrained situations?

Women in India, for example, spend large amounts of time doing care work on top of their professional work. They use AI tools to answer questions such as: “I have fifteen minutes, these items in my fridge, elderly in-laws with dietary restrictions and children with different needs; what can I cook that will work for everyone?” These are mundane but essential ways of coping with a crippling load. Or think of children who are first-generation learners and whose parents cannot help them with homework – a group that actually makes up a significant share of pupils worldwide. If their parents cannot afford tutoring, AI can act as their tutor. Yes, AI tools hallucinate, but an imperfect tutor is better than no support at all.

Critics warn of the risks of unregulated AI use. Think, for example, of potentially harmful health advice. Aren’t strict regulations inevitable?

We keep having these conversations about whether to ban social media or restrict AI, without really paying attention to people’s experiences. Why have AI tools like ChatGPT and DeepSeek broken download records? Why do billions of people use them for health advice or education? It’s because most people, especially in the Majority World, simply do not have adequate access to quality healthcare or education. This is even true in the West: think of how long it takes to get access to psychological therapy through public health systems in European countries – and once you do, the therapist may not even understand your cultural context. AI is not replacing teachers or quality mental healthcare practitioners; it is stepping in where access or quality is lacking. It’s uncomfortable to realise, but in many cases, AI is providing better services than our institutions. Instead of discussing whether or not to ban technology, we should view the rise of AI as a call to reform our institutions.

In short, you say it’s all about making sure these tools serve the people better, right?

Exactly. We should put our energy into improving these tools, particularly as budget cuts to public services demand that we deploy resources smartly. However, we also need to make sure they are safe to use – that they cannot be predatory towards children and that deepfake abuse is addressed, for example. That’s why I’ve been championing a rational optimism concerning these technologies. Pessimism is a privilege for those who can afford to despair. The people who are most pessimistic are often those who are well-off – those who say they need to go offline because they have too many followers or detox because they have five devices. There is a whole ecosystem of academics, researchers and futurists who are making their living selling a binary narrative of doom. It generates clicks, it triggers fear, and it moves people in the entirely wrong direction.

The discussion around social media has been very similar: the platforms have transformed lives but also given rise to abuse. Is this the same with AI?

Yes. Social media enabled the MeToo and Black Lives Matter movements. The problem is not the technology – it is the hyper-concentration of power in very few hands. The companies behind these platforms are not driven by public interest, and there is no meaningful mechanism to hold them accountable.

Some countries and regions, such as India and the EU, are currently making efforts to become less dependent on US tech companies. Can you explain what the India Stack and the EuroStack are, for example?

Such efforts are urgent and overdue given the current geopolitical shift. The US is no longer a reliable ally, and even if the current administration changes, the lesson is clear: any entity with a high concentration of power will tend to corrupt and abuse that power. The India Stack is a government-run digital service infrastructure which includes identity verification and digital payment systems, data storage, health record sharing and other essential services. Europe is currently developing EuroStack, which is partly inspired by the India Stack. Both are driven by the shared goal of moving away from dependence on Silicon Valley. That said, both the European and Indian approaches have a significant weakness.

What do you mean?

A lot of energy goes into building the infrastructure, while little attention is paid to the users. But if people do not find these tools intuitive, they will revert to commercial generative AI tools. This would be a huge waste of public resources. User experience must be at the heart of the process. US companies are very good at this: they optimise user engagement to scale their products and services.

AI infrastructure requires significant investments, and US tech companies have vast resources. Can governments or smaller companies compete at all?

We will always have far fewer resources than Silicon Valley, and it’s not a role model either. The US tech giants run extraordinarily wasteful data centres that consume enormous amounts of resources. The goal should be targeted innovation: how can we consume less and yet build power with greater diversity? Initiatives like Sarvam AI and Lelapa AI are excellent examples.

Both companies develop AI tools for users in the Global South, especially by incorporating local languages and accents. Sarvam AI is being developed in India, with all data gathered and stored locally. Lelapa AI specialises in African languages and focuses on building resource-saving tools.

Sarvam AI is part of India’s broader data sovereignty initiative. This federal approach aims to secure citizen data, given that India has the largest young population in the world, and tech companies are very interested in accessing their data. Lelapa AI takes a more grassroots approach. It is driven by civic organisations and a broad coalition of partners. 

Why is inclusion important in AI?

The Majority World is vastly underrepresented in the data that powers AI systems. Ninety percent of young people worldwide live in the Global South, as does 85 percent of the global population. Yet they remain largely invisible in these datasets: their languages are not supported by the tools, their accents and ways of speaking are absent, and entire villages do not appear on Google Maps.

You hold the chair of “Inclusive AI Cultures” at Utrecht University, and one of your projects is the “Inclusive AI Lab”. Can you tell me more about it?

Our “Inclusive AI Lab” is a Global South and women-led AI initiative. It incubates leaders and helps develop AI tools, products and services that put the global majority at the centre. For example, we are collaborating with Google to build a Gender AI safety protocol that takes to heart the cross-cultural nature of deepfake abuse. We also work with creative tech companies like Adobe to train creative AI accounting for the way people from the Global South are visually represented, while also catering for creators from a variety of backgrounds. If you search for images of African children, for example, the results overwhelmingly depict poverty because, for decades, this content has been created by aid agencies that have perpetuated a singular narrative. Poverty certainly exists in Africa, but it is not the only story. Most African parents and communities would portray their children very differently. Beyond such projects, we work with governments, think tanks, civic organisations and scholars from the Global South to build fair futures through data sovereignty and agency.

Payal Arora is a digital anthropologist and professor of “Inclusive AI Cultures” at Utrecht University, as well as the founder of the “Inclusive AI Lab”. She is the author of the award-winning books “The Next Billion Users” (Harvard University Press) and “From Pessimism to Promise” (MIT Press).
linkedin.com/in/payalarora
