Artificial intelligence

Why AI is likely to reinforce common narratives

Innovative AI systems are able to write astonishingly good manuscripts. Some chatbots are accessible online. This technology will have far-reaching impacts – and it will make formal education more, not less, important. Chatbots, moreover, will probably reinforce conventional narratives.
Inner-city traffic in Berlin: at the wheel, people process complex information. (Photo: picture-alliance/dpa/Lena Lachnit)

Artificial intelligence (AI) has been a hot topic for quite some time. So far, I have nonetheless found it unimpressive. AI programmes are generally based on huge datasets and use algorithms that detect how most human users respond to specific signals in specific situations. They mimic the human mind by replicating the average results humans deliver in those settings.

For many purposes, this is perfectly fine. We actually want a self-driving car to perform like the average driver because the average driver normally does not cause accidents. Drunk and sleepy drivers do, and reckless speeders who disregard traffic rules are dangerous too.

The big challenge of making self-driving cars is to collect and process all the information a human driver’s senses provide. Humans depend on eyesight, hearing and other senses (including smelling, to detect a gas leak, for example). They must monitor various instruments and survey a complex road environment. Installing enough cameras and sensors to collect all that information is difficult. On that basis, however, it makes sense to use average human behaviour as the guideline.

Chatbots

In the past four months, a new kind of AI programme has been making headlines around the world. Chatbots are AI systems that generate language. It is possible to correspond with them and ask them to perform various tasks. The most prominent one is probably ChatGPT. It was created by OpenAI, a not-for-profit company with a for-profit subsidiary based in the USA. The most prominent investor is Microsoft.

ChatGPT was launched late last year. Its user numbers soon exceeded 100 million and are still growing fast. Anyone who registers on the website can ask the chatbot questions and will get answers. You can tell the programme to write a letter, an essay or even a poem. The results are surprisingly good. People who want to register may have to wait before they get access because of huge and fast-growing demand.

However, ChatGPT is still stuck in the conventional AI mould. The programme does not do any kind of creative thinking. It basically strings one word after another according to a statistical evaluation of its huge database. If you don’t trust me, check out what the prominent linguist Noam Chomsky wrote with two co-authors in the New York Times. The headline was: “The false promise of ChatGPT”.
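
To illustrate the principle, here is a deliberately tiny sketch of statistical next-word prediction. It uses an invented toy corpus and a simple bigram count; real chatbots rely on vastly larger datasets and neural networks, but the underlying idea of choosing the statistically likely next word is the same.

```python
# A minimal sketch of frequency-based text generation, not of ChatGPT
# itself. The corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# Generate text by always choosing the most common successor word.
word, output = "the", ["the"]
for _ in range(5):
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"
```

Because it always picks the most frequent successor, such a system can only reproduce patterns that already dominate its data.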

At this stage, it is reasonable to say that chatbots write the kind of manuscript an informed average internet user would write when weighing in on a given subject. ChatGPT has passed some high-school exams in Germany, but the results were mediocre, not excellent. That should not come as a surprise. After all, the programme is designed to opt for the most typical solution human beings would come up with. It is designed to deliver neither the most relevant nor the most intelligent solution.

Searching for information

For most practical purposes, average results are probably good enough. Quite likely, many internet users will stop using search engines like Google, because it is more convenient to interact with a chatbot. You ask for information, and you get answers without having to choose from a long list of potentially interesting websites.

Microsoft has integrated a chatbot into Bing, its search engine. The combination results in an interesting tool, not least because the answers include the sources they were derived from. Moreover, you can ask Bing to stray beyond conventional thinking by requesting “unusual” or “non-conformist” results. Making sense of such results, however, is obviously more challenging than simply accepting average ones. We should thus expect chatbot searches to largely confirm mainstream thinking – most likely with a Californian bias.

On the other hand, fact-checking will become even more important. Chatbots have no understanding of truth or reality. They are driven by probabilities derived from existing databases. If there is a lot of false information and fake news out there (and there is), that will leave a mark on chatbot responses.
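
The problem can be stated in a few lines of code. The following hypothetical sketch shows a purely frequency-driven “answer” function: it returns whatever claim appears most often in its sources, with no notion of truth. The sources are invented for illustration.

```python
# A toy illustration: if false claims dominate the sources, a purely
# frequency-driven system will repeat them. All "sources" are invented.
from collections import Counter

sources = [
    "the earth is flat",   # false, but frequent in this toy corpus
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
    "the earth is round",
]

def most_probable_answer(docs):
    """Return the claim that appears most often – truth plays no role."""
    return Counter(docs).most_common(1)[0][0]

print(most_probable_answer(sources))  # prints "the earth is flat"
```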

The impacts will be huge. In secondary and tertiary education, students will use chatbots, and their supervisors will have to figure out whether the students fulfilled the task they were given or whether an AI programme did all the work. Moreover, sensible curricula will have to cover how to use AI.

In journalism, we will face challenges too. Some smart contributors will use AI to get their work done faster, letting the chatbot provide the first draft and then editing it. Less competent persons, however, will let a chatbot do the entire job, and it will be the editorial team’s job to detect what factual errors may have crept in.

Staying stuck in our silos

What worries me most, however, is that journalism generally reproduces conventional narratives. I suspect AI generated journalism will further compound this problem. After all, AI programmes are not designed to “think outside the box”. They are actually not designed to think at all. They merely process data.

In recent years, however, the demand for thinking outside the box has been increasing. The reason is that our species is facing huge and interrelated challenges, so we need innovative approaches. The sad truth, of course, is that human thinking tends to be stuck in silos no matter how often we say that we want to transcend them. We long for cross-cutting solutions, but tend to stick to the patterns of thought that we are used to. Common narratives reinforce those patterns.

They are often not helpful. Consider the standard explanation for why right-wing populism has been gaining strength around the world. It is that left-behind working-class people are rebelling against global elites. Why then is Fox News – based in New York and owned by a multinational corporation dominated by Rupert Murdoch, an Australian-born billionaire – the main proponent of right-wing populism in the USA? His News Corporation also runs British media outlets that supported Brexit. Moreover, other newspapers that supported Brexit similarly belong to plutocrats, some of whom, like Murdoch, do not live in Britain.

The common narratives regarding right-wing populism do not tell us why oligarchs with international lifestyles promote narrow-minded nationalism.

Super-rich persons do so because they see that global cooperation leads to international regulation that can put a check on their power. Examples include environmental protection and the enforcement of tax laws. Accordingly, they support nationalism and attack “globalists”. I think the term “plutocrat populism” fits their attitude, but it has not gained much traction.

Asking the right questions

The more AI programmes figure in public life, the more discerning we will have to become. Knowledge-related work will increasingly be about asking interesting questions. It takes a good education to do that. Specialist knowledge will often matter too.

AI will certainly have far-reaching impacts. Sooner or later, bureaucracies will use it to handle standard processes. Most of the time, the AI system will make the right decision, though there will be snags too. We know, for example, that face-recognition systems, whether produced in California or China, tend to do a poor job of telling black people from one another. The main reason is that there are too few black faces in the datasets the programmes are based on. That may change over time, but some mistakes are baked into the technology. Average decision-making is, by definition, mediocre – and sometimes wrong.
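
A small simulation can make the dataset problem tangible. The sketch below is not a real face-recognition system; it merely classifies synthetic data points from two overlapping groups after training on 1,000 examples of one group but only 20 of the other. Accuracy for the under-represented group typically turns out considerably lower.

```python
# A hypothetical simulation of dataset imbalance, not a real
# face-recognition system: two groups drawn from overlapping clusters,
# with far fewer training examples for group B.
import numpy as np

rng = np.random.default_rng(0)

def sample(mean, n):
    """Draw n eight-dimensional points scattered around the given mean."""
    return rng.normal(mean, 1.0, size=(n, 8))

# Imbalanced training data: 1,000 examples of group A, only 20 of group B.
train_x = np.vstack([sample(0.0, 1000), sample(1.0, 20)])
train_y = np.array([0] * 1000 + [1] * 20)

def predict(x):
    """Brute-force 1-nearest-neighbour classification."""
    dists = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(dists)]

for name, mean, label in [("group A", 0.0, 0), ("group B", 1.0, 1)]:
    test = sample(mean, 500)
    accuracy = np.mean([predict(x) == label for x in test])
    print(f"{name}: accuracy {accuracy:.2f}")  # typically much lower for B
```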

We can expect chatbots to improve fast as the databases they rely on keep growing. They will soon include information gathered in interactions with masses of internet users. To some extent, that will make chatbots look more like experts. After all, experts are the people who generate most content related to the areas they specialise in, so deriving answers from those entries should more or less result in average expert opinions, excluding lay persons’ views. That may explain why the latest ChatGPT version seems to pass exams with better results than previous ones.

At the same time, chatbots can make it easier to produce fake news, building on disinformation and propaganda. I fear that many people will not notice that they are being misled while others happily use AI to mislead them.

Only when used by well-informed people will AI deliver above-average results. In general, it will reinforce long-established patterns of thought and prejudices. To rise to the sustainability challenge, humankind must use the new technology wisely. Simply trusting it is the wrong approach.

Link
Chomsky, N., Roberts, I., and Watumull, J., 2023: The false promise of ChatGPT. New York Times, 8 March 2023.
https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Hans Dembowski is editor-in-chief of D+C/E+Z.
euz.editor@dandc.eu
