Social media

Stop online hate

Social-media companies are failing to moderate their platforms. As a result, online hate speech leads to real-world violence, as conflict zones like Ethiopia and Myanmar show.
A Rohingya man looking at Facebook in 2017. Social media played a significant role in spreading calls for violence against the Rohingya in Myanmar. (Photo: picture-alliance/NurPhoto/Ahmed Salahuddin)

Social networks have a proven capacity to fuel spirals of violence, especially in the context of crises and wars. This is one of the human-rights problems of the digital age, and the human-rights organisation Amnesty International has analysed it in Ethiopia and Myanmar. Internet use in both countries is dominated by Facebook. Indeed, for many people in the global south, the platform virtually is “the internet”.

During the armed conflict in northern Ethiopia from 2020 to 2022, horrific violence was directed against the civilian population in the Tigray Region. In the report “A death sentence for my father”, Amnesty International documents how Meta – the company behind Facebook, WhatsApp and Instagram – contributed to the barbarity. Among the cases presented is that of chemistry professor Meareg Amare, who was killed after being targeted in Facebook posts. He was one of many victims of violence “primed” on Facebook.

In Myanmar, Facebook played a significant role in the violent expulsion of the Rohingya, as documented in Amnesty International’s report “The social atrocity”. In the months prior to the expulsion in summer 2017, people linked to the Myanmar military and to ultra-nationalist Buddhist groups flooded the network with false information and content inciting violence against the Rohingya. The United Nations’ independent fact-finding mission on Myanmar concluded that the role of social media was “significant” in the atrocities that ensued.

Algorithms often favour emotive content

The spread of violence on social media is facilitated by two fundamental problems. The first is that polarising and emotive content grabs people’s attention, so it is often favoured by the algorithms that decide what users see. Those algorithms are designed to keep users on the platforms and interacting with them for as long as possible. In the process, users leave behind numerous data traces, which make up a digital footprint that permits targeted advertising. The longer users stay on a platform, the more adverts they can be shown.
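To make that incentive concrete, the toy sketch below ranks posts purely by predicted engagement. It is a minimal illustration under assumed field names and weights, not a description of Meta’s actual systems. The point is simply that an objective which maximises interaction contains no term that penalises harmful content: emotive, polarising posts provoke more reactions, so they come out on top.

```python
# A minimal, purely illustrative sketch of engagement-optimised feed
# ranking. All field names and weights are hypothetical assumptions;
# this does not represent any real platform's code or parameters.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float    # hypothetical model outputs per user
    predicted_shares: float
    predicted_comments: float


def engagement_score(post: Post) -> float:
    """Score a post by predicted engagement (weights are assumed).

    Nothing in this objective asks whether content is truthful or
    harmful. Because emotive, polarising posts tend to provoke more
    reactions, their engagement predictions -- and hence their rank --
    come out higher.
    """
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("Sober policy analysis", 0.10, 0.01, 0.02),
        Post("Inflammatory, emotive claim", 0.30, 0.20, 0.25),
    ])
    for post in feed:
        print(f"{engagement_score(post):.2f}  {post.text}")
```

Run as written, the inflammatory post (score 1.40) is ranked above the sober one (score 0.17). Any content-quality or harm signal would have to be added to the objective deliberately; it does not emerge from engagement maximisation by itself.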

The second problem is that Facebook – and all the other major social-media platforms – fail to provide adequate resources for moderating and deleting problematic content, and fail to do so consistently. This is particularly true in the global south. In Ethiopia, for example, more than 80 languages are spoken, but Facebook can moderate in only four of them. There is also insufficient awareness of local contexts. As the situations in Ethiopia and Myanmar escalated, Meta failed to respond appropriately to numerous warnings from civil-society organisations, human-rights experts and its own Oversight Board.

Responsibility under the UN Guiding Principles on Business and Human Rights

According to the UN Guiding Principles on Business and Human Rights, social-media companies have a responsibility to respect human rights. They must therefore urgently

  • implement human rights due diligence, analyse risks and take remedial action;
  • reduce the impacts of algorithmic amplification in all countries, for example by taking steps to limit resharing or group sizes;
  • implement special measures in risk contexts, such as disabling recommendation algorithms;
  • provide trained staff for all the languages used and context-sensitive guidelines for content moderation;
  • set up compensation funds for victims of online violence and violence incited online.

Governments worldwide need to oblige social-media companies to implement human rights due diligence and to adjust their business models. That includes banning targeted advertising based on invasive data-tracking practices. Last but not least, governments need to create and properly resource national regulatory authorities and ensure individual and collective access to legal remedies. If they fail to do so, the events that unfolded in Ethiopia and Myanmar will prove to have been just the beginning.

Links

Amnesty International, 2023: “A death sentence for my father”. 
https://www.amnesty.org/en/documents/afr25/7292/2023/en/

Amnesty International, 2022: “The social atrocity”. 
https://www.amnesty.org/en/documents/asa16/5933/2022/en/

Lena Rohrbach is Policy Advisor for Human Rights in the Digital Age and Arms Export Control at Amnesty International Germany.
presse@amnesty.de