How AI fails us

According to a group of technology experts, the current paradigm among AI engineers misunderstands what human intelligence is really about. The implications are dangerous.

The authors argue that the currently dominant vision of AI is misdirecting resources towards unproductive and dangerous goals. Their paper “How AI fails us” was published in December 2021 by Harvard University’s Carr Center for Human Rights Policy. It spells out that AI development is currently based on the vision of “intelligence as a single, distinct, autonomous quality.” More simply put, engineers are trying to create a machine with cognitive capacities not only superior to human beings, but independent of human beings too. The authors call this approach “actually existing artificial intelligence” or AEAI for short.

The expert team states that real human intelligence differs greatly from what a tiny but well-endowed engineering community is striving for. They stress that human intelligence is interactive, collective and cooperative, so it is wrong to think that humankind’s problems could be easily resolved if only the top leaders were intelligent enough. In human societies, healthy solutions are brought about by meticulous debate, not central planning. The expert team therefore wants engineers to develop digital tools that enhance – rather than replace – human exchange and cooperative decision making. They call their alternative approach AEDP – actually existing digital plurality.

Only three companies matter

The expert team points out that the cutting-edge AI community is indeed very small. According to the paper, only three US companies really matter, and each of them is closely affiliated with one of the three multinational giants Microsoft, Google and Facebook. Other institutions simply do not command the required resources. Competition among the AI labs is fierce, and all three companies also fear that Chinese developers may get ahead of them. The implication is that they prioritise progress over diligent risk assessment and risk management. Speed, in other words, beats safety. The focus on achieving a singular, autonomous “general intelligence” involves a drive towards concentrating resources, data and investment in an ever-shrinking set of organisations and people.

The leading AI companies want to bring about a final invention to replace human intelligence with more powerful AI, which would be autonomous and centralised. Such machine intelligence, the authors warn, would necessarily:

  • disregard pluralism,
  • systematically opt for technocratic impositions and
  • centralise decision making.

History, however, is filled with failed, often disastrous examples of this extreme concentration of productive resources, the authors warn.

The experts teach at various US universities or work for Microsoft, as lead author Divya Siddarth does. The team rejects the idea that humans and machines must compete with each other – and that, where possible, machines should replace humans. Such thinking ultimately means that workers are displaced, human capacities are made redundant and social costs keep rising. Instead, the authors want engineers to grasp opportunities for improving human productivity. They point out that such productivity gains have been small in recent decades in spite of fast digital change.

Digital plurality

The message is that humanity should not provide vast resources for research and development to small, centralised groups in the pursuit of extremely narrow goals. The expert team demands digital plurality, which – in their eyes – is both the ethical and the effective approach to human progress.

The paper provides several examples of digital plurality in practice. They include citizen science initiatives like eBird, which allows bird watchers around the world to contribute their observations to the science of ornithology. Wikipedia and cryptocurrencies are considered good practice too.

Moreover, the paper praises Taiwan’s Digital Democracy project. It is designed to let citizens see how the state operates and to involve them in public affairs. The minister in charge is Audrey Tang, who became famous as a civil-society activist whose organisation used digital technology to demand that the government become more transparent and responsive.

Siddarth, D., et al., 2021: How AI fails us. Harvard University, Carr Center for Human Rights Policy.

Roli Mahajan is a freelance journalist based in Lucknow in North India.
