Development and Cooperation

Disinformation

Taiwan is standing up to disinformation

Despite being the target of foreign disinformation campaigns, Taiwan has managed to halt social polarisation. One of the driving forces behind this process was Audrey Tang, Taiwan’s first minister of digital affairs and a former hacker.
Photo: Audrey Tang (Kaii Chiang/CC BY-NC-SA 4.0)

Audrey Tang interviewed by Eva-Maria Verfürth

Audrey Tang’s journey began in 2014 when protests erupted over a planned trade deal with China. At the time, she was a computer programmer and a member of the Sunflower Movement opposition group. The group occupied Taiwan’s parliament and succeeded in halting the trade agreement. The movement fostered civic participation in Taiwan and strengthened the Democratic Progressive Party (DPP), which opposes closer ties with China. Following the DPP’s election victory in 2016, Audrey Tang was invited to become a minister without portfolio, and from 2022 to 2024 she served as the country’s first minister of digital affairs. During her time in office, she made transparency and the open-source approach important principles of government, raising trust in democratic institutions and helping to combat disinformation.

In the lead-up to Taiwan’s 2024 presidential elections, the country faced some fairly sophisticated disinformation campaigns, with significant foreign interference observed, particularly from China. The impact was contained, however, preventing it from undermining democracy and public trust in the integrity of the elections. How was this possible?

Our information resilience ecosystem countered these attacks effectively with three pillars:

  • Fast: Civic and official channels published accurate clarifications within the “golden hour”, before rumours could become entrenched.
  • Fair: All major platforms adopted strong measures against counterfeit accounts, including mandatory “know your customer” measures on advertisements.
  • Fun: Humour-over-rumour memes outcompeted rage-baiting, and grassroots clarifications achieved higher public engagement than the falsehoods they answered.

Because citizens were able to see the process, trust has remained high: the Freedom House Index, which measures access to political rights and civil liberties, rates Taiwan at 94/100. And the Bertelsmann Transformation Index (BTI) 2024 report notes declining long-term polarisation on national identity issues, despite election spikes, with 91 % of respondents considering our democratic system at least “fairly good”.

Since you mentioned the “golden hour”: How did the authorities communicate during the election period?

There are channels in place for flagging information manipulation directly to the relevant ministry. The ministry is then required to draft a counter-narrative as soon as possible, ideally within 60 minutes. We have trained our civil servants and politicians to respond quickly, and the government has indeed become very responsive, especially during election times.

Why is a quick response important?

If you respond within an hour, your response will have a pre-bunking effect, reaching most people more quickly than the manipulated information. By contrast, a delayed response only has a debunking effect, which has little impact on public opinion. Pre-bunking is always more effective than debunking.

Debunking means correcting misinformation after it has been spread, while pre-bunking aims to reach people before they encounter disinformation. But can the pre-bunking effect be achieved even after the misinformation has been published?

Yes, provided you have an effective monitoring system and can respond within 60 minutes. Anticipating such attacks is even better. I deep-faked myself in 2022 to show what’s coming and the need to “always verify before forwarding.”

Another tool being used in Taiwan to combat disinformation is the Cofacts fact-checking platform – a chatbot where people can post information they have come across and ask the fact-checking community whether it is true or false. How exactly does this work?

It’s like a Wikipedia for fact-checking. Whenever someone would like information to be fact-checked, they can report it on LINE, which is an end-to-end encrypted messaging service similar to WhatsApp. This report is added to a transparent, collaboratively maintained database that everyone can access. It is not run by the government.

A 2023 study by Cornell University showed that Cofacts often responds more quickly to queries and that the answers are generally just as accurate as those from professional fact-checkers. Who are the people behind Cofacts?

Cofacts serves as a hub, based on crowdsourcing but backed by civil-society groups and fact-checking professionals such as the Taiwan FactCheck Center. Through crowdsourcing, it quickly becomes clear which topics are likely to go viral and are worth fact-checking. Professional fact-checking teams then focus on these. However, the platform is open to everyone, and volunteers play a significant role in the fact-checking process. The Cofacts team has also trained a language model that can step in even before the first professional team is deployed. The chatbot provides an instant, rapid response based on previous cases – for example, it can determine whether an online incident resembles a memetic virus that has occurred previously and is merely a mutation of it.
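Cofacts’ real system relies on a trained language model and a collaboratively maintained database, but the core idea of matching an incoming message against previously fact-checked rumours can be illustrated with a much simpler sketch. The rumour database and verdicts below are hypothetical, and plain string similarity stands in for the model:

```python
from difflib import SequenceMatcher

# Hypothetical mini-database of previously fact-checked rumours and verdicts.
CHECKED = {
    "Drinking hot water cures the virus": "False: no evidence hot water kills the virus.",
    "Masks harm your health": "False: masks are safe for everyday use.",
}

def closest_check(message: str, threshold: float = 0.6):
    """Return the stored verdict for the most similar known rumour,
    or None if nothing is close enough (i.e. a genuinely new claim)."""
    best_verdict, best_score = None, 0.0
    for rumour, verdict in CHECKED.items():
        score = SequenceMatcher(None, message.lower(), rumour.lower()).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    return best_verdict if best_score >= threshold else None

# A slight rewording ("mutation") of a known rumour still matches;
# an unrelated message falls below the threshold and gets no instant answer.
print(closest_check("masks harm health"))
print(closest_check("Taiwan has tall mountains"))
```

In this toy version, a paraphrased rumour is caught by the similarity threshold, while an unseen claim returns `None` and would be routed to human fact-checkers – mirroring the division of labour described above.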


How do you ensure people actually participate?

Contributing to Cofacts is part of the media-competency training provided in schools and in our lifelong learning programmes. For instance, students may be asked to fact-check live during a presidential debate. This experience is important because going through the process of assessing which sources are trustworthy can “inoculate” a mind – meaning it can make people more resilient to disinformation and conspiracy theories – whereas simply seeing the facts does not. Furthermore, once students realise that they can contribute to society, they are more likely to participate in collaborative fact-checking outside of school.

In 2019, Taiwan implemented a general curriculum reform. How did fact-checking make it onto the school curriculum?

The goal of media education shifted from media literacy to media competency. Literacy is about how you handle information as an individual, whereas competency means being able to contribute to the common understanding. Prior to 2019, the overarching educational objective was for pupils to comprehend and reproduce standard responses. Nowadays, standard answers are something that AI can handle better than students and teachers can. Instead, we want to encourage interaction and creativity. Students learn how to spark their own curiosity, how to collaborate with people from different backgrounds and how to view collaboration as a win-win situation rather than as a zero-sum game. These are values that only humans can provide, even when AI handles all the tasks involving standard answers. They are based on mutual understanding and care.

In the latest OECD PISA rankings, Taiwanese students were among the top five performers in all three categories: maths, reading and science. Would you say that the educational reforms have been successful?

The PISA rankings show that we have not sacrificed STEM performance (science, technology, engineering and mathematics) in favour of emphasising civic literacy. This is the best possible outcome. At the same time, we have made progress in other areas: Taiwanese students ranked first for civic knowledge in the 2022 International Civic and Citizenship Education Study (ICCS). They also scored highly in terms of civic engagement and their ability to contribute to environmental sustainability, social issues and human rights, as well as their level of trust in government agencies.

The 2014 uprising was a decisive moment for you personally. At that time, public trust in the president was very low, but it has increased significantly since then. What happened in the meantime?   

2014 was a watershed moment. Social-media use reached a record high and people could freely express their opinions online. However, “engagement through enragement” algorithms eventually caused social division. Our approach was to make the government’s decision-making process transparent, providing people with real-time open data. We also introduced online citizen assemblies where people could deliberate on important issues. In addition, the civic collaborative platform g0v has engaged in open consultation through online tools and in-person meetings since 2014. We recovered from the trust crisis not by asking people to trust the government, but by encouraging public services to trust the people.

Social-media platforms are huge hubs for disinformation. What measures could stop misinformation from spreading?

Firstly, ensure that your online ecosystem incorporates a fact-checking component. Secondly, don’t allow bots to run rampant on social media – freedom of expression does not entail the right to create an unlimited number of bots. Bots should not have freedom of speech. Therefore, ensure that platforms use secure “know your customer” (KYC) technology or digital signatures to verify users’ authenticity. Thirdly, social-media companies should be held liable for any damage caused if they do not comply with the rules. For example, if a social-media platform in Taiwan perpetuates scams and does not take them down even after they have been flagged, the parent company is liable for any damages suffered by users. This ensures that the company shares the burden of harm. Facebook, YouTube and TikTok have already implemented robust KYC procedures. We have also drafted a law requiring digital authentication for social-media users. Social media do not inevitably polarise people; polarisation is a consequence of platform design.

How did citizens react to the idea of a digital authentication law?

This rule was crowdsourced by an online citizen assembly held in March 2024. Citizens who were selected through stratified random sampling discussed the issue of information integrity. The outcome was that people were clearly against government-led content moderation, but they voted in favour of digital authentication.

What would you recommend that other countries do to improve their information ecosystem?

Firstly, they should regard broadband internet access as a human right. In Taiwan, the Universal Service Fund provides bidirectional broadband connections, even in rural areas and at the top of Yushan, which is almost 4,000 metres high. Otherwise, you exclude a large number of people from participating in civic life online. Starting in 2019, we adopted an education system that prioritises the contributions of pupils, teachers and other learners to the common good. It is important to foster a sense of agency and empower individuals to contribute to society. It doesn’t matter if they get things wrong sometimes. As long as enough people participate, the ecosystem of knowledge usually converges on common ground after a while.

And what about technologies?

The more open source technologies and software are, the more likely people are to adapt them to their actual needs. Governments can invest in public code, which is open-source technology coupled with a code of conduct and deployment. Another option is for governments to bring technology enthusiasts who are working on free and open-source content together with community organisers and educators. Both communities share the same ethos, and when they empower each other, something beautiful emerges – a civic technology community. The most successful open-source infrastructure relies on communities with a strong offline connection – literally people meeting every week. These face-to-face interactions build the civic muscles, the trust between people. My main suggestion is not to bypass your existing community-level face-to-face associations and civic groups, but rather to tap into them. This enables informal ties between civil-society groups to grow into stronger bonds over time.

Taiwan has gained recognition for the way it has handled disinformation about the coronavirus, with one strategy being to debunk health misinformation using memes. How can humour help to tackle disinformation?

Humour played a crucial role in our successful fight against Covid-19 by creating “uncommon ground” and fostering societal resilience rather than polarisation. During the early stages of the pandemic in 2020, we faced conflicting information about mask effectiveness. Some people claimed “only N95, the highest-grade mask, is useful because of our SARS experience,” while others spread the message that “wearing a mask actually harms you.” This created a polarising debate when the science was still uncertain. Our response was a public service announcement featuring a cute Shiba Inu dog putting her paw to her mouth with the message: “wear a mask to remind each other to keep your unwashed, dirty hand from your face.” This humour-based approach worked because it reframed the issue. Instead of engaging with the polarising mask debate, the meme redirected attention to hand hygiene, which is a very non-polarising topic that everyone could agree on.

Also, it created shared experience: People who laughed at the Shiba Inu message formed a common emotional connection that transcended their ideological differences about masks. This is not to trivialise the issue. Of course, a virus is very serious, but we want to make it serious in a way that enables it to elicit societal resilience.

The effectiveness was measurable. We monitored tap-water usage and saw that it increased after the campaign. The humour allowed people to respond constructively despite their initial positional or ideological differences, building unity rather than division during a crisis. This exemplifies Taiwan’s broader “humour over rumour” strategy, using wit and shared laughter to build bridges across divides while still addressing serious public health challenges.

Audrey Tang is Taiwan’s cyber ambassador and former minister of digital affairs.
audreyt@audreyt.org  
plurality.net
