Stories about AI become representations of humanity


In the popular science narrative about AI, technology is becoming increasingly human-like, while humanness is primarily simplified to a cognitive capacity.

The construction of a machine that surpasses and outmanoeuvres humans has been repeatedly dramatized in film and literature, but it is a narrative that has gradually shifted from the science fiction shelves of the bookstore to the popular science section. The mystification of AI technology has created room for a lucrative genre where authorities in computer science, philosophy, and neurology, together with science journalists, translate technical achievements and look into the future with dramatizations of possible utopias and dystopias.

Will AI help us solve global challenges and give us more free time, or will today's computers develop a capacity and autonomy that eventually becomes an existential threat to humanity? Advanced technical systems and models are not easy to understand without training and experience in the field. The popular science genre thus serves an important function: it makes accessible what occurs in (often commercially driven) laboratories and explains the potential consequences for a general public that lacks deeper knowledge of programming, machine learning, or neural networks. This demand for knowledge is cross-fertilised with a drama that sells: where exactly are we heading?

In the popular science narrative about AI, technology is becoming increasingly human-like, while humanness is primarily simplified to a cognitive capacity. The metaphors used to describe the development of the technology are based on a neuropsychological conceptual framework. Computers learn, remember, and think. They are increasingly described as being able to contextualise, associate, and abstract, all in the pursuit of becoming more human than human. For computers to be programmed with the cognitive capacities of "understanding" and "sense-making", the observable must be defined and categorised, and everything must become things. A cat must be recognisable as a cat regardless of how and where it is depicted, pronounced, or heard, and all its possible connotations and cultural meanings need to be accessible to the increasingly hard-working processor.

The human is often portrayed as limited, a being at the edge of its capacity: we humans are slow, emotion-driven, whimsical, and unpredictable, attributes that are easy to program away in a computerised version of ourselves.

A watershed moment in popular science stories about the future of AI appears to revolve around the belief in computers' ability to develop what is called "general intelligence". Computer science has already succeeded in creating expert programs that can solve advanced mathematical problems and beat champions in games like chess, Go, and Jeopardy, while remaining hopelessly incompetent in other areas. If computers of the future are to become superintelligent, they must somehow be able to transfer their capabilities from one area of expertise to another, as easily as humans do.

The next essential step in development can be described as the ability to program computers with freedom of movement, self-reflection, awareness, and "common sense". However, the neuropsychological metaphors seemingly lead to a dead end, and the ambition discloses how little we know, understand, and can explain about the way humans exist in the world. The narrative reaches a point where the limitations of the machines become clear, while the human aspect becomes increasingly mysterious and abstract. We all have experience of how difficult it can be to understand ourselves. No wonder it becomes challenging to transfer such an ability into a machine.

Thomas Wahl