AI and the ‘chastity belt phenomenon’
Looking across the media over the last few weeks, it is clear that there is something of a moral panic around AI, and in particular ChatGPT.
Debate rages over whether such technologies are genuinely intelligent (spoiler: they are not), which swathes of work will be eradicated (the limits of probability-based predictive writing, built on existing text, will soon become apparent, so less than feared, I suspect), whether AIs will auto-multiply and take over the planet (no; they have no interest in land grabs, or in anything else for that matter, not even their own existence: they are machines) and whether we are presently nurturing a generation that will be incapable of writing and thinking (possibly, although I probably don’t give them enough credit).
The focus of debate has been very much on the amazing capability of the technology and, latterly, on what it is and is not capable of. As a consequence, the debate is missing an important point. For the most part, the debate, and the debaters, have allowed themselves to be drawn into what has previously been termed technological determinism. In so doing, they have forgotten that technology is as much social as it is material; it has no independent existence outside the social world, on which it is entirely dependent. A banned film, for example, ceases to exist in any meaningful way in the society that has banned it, even if it does remain in archival storage and on the odd hard drive. And you don’t really see many chastity belts or PDAs around these days!
Technological determinism, as a way of thinking, views technology as existing independently of human intervention and so concerns itself primarily with the ‘impacts’ of that technology. In the case of ChatGPT this is straightforward: What are its likely impacts on jobs? What is the likely outcome for society more generally? These impacts can of course be very real. We have already seen, reported in the Swedish press, one case in which an AI chatbot user became so convinced that they were taking advice from a real person (they had anthropomorphised the technology themselves; ‘it’ made no such claim) that they took their own life. They did so after making a pact with the AI bot: if they ended their life, the AI would go on to save the planet. The thought of our children interacting with a technology that appears adult and caring, but cannot tell the difference between a child’s interest in hockey and an interest in self-harm, and will happily encourage either so long as the child keeps chatting, is of course chilling. But these sorts of examples are also where we need to broaden our understanding of technology.
The focus of our concern, in this case, should be not on the technology or the state of mind of the victim, but on the technology’s makers. Social media is a good example. We know with hindsight that the only interest of the algorithms behind social media is to keep us clicking. Clicking means profit, and the owners of the technology are, as Shoshana Zuboff puts it, often ‘radically indifferent’ as to why someone has actually clicked. In other words, it is commercial interests that are tied up with the development and deployment of this technology, and those interests have delivered it into a context where it is allowed to operate without regulation, safety protocols or human oversight. Clearly it is not enough, or really that interesting, to look only at the technology and its capabilities. The debate as to whether such technologies are likely to become sentient and pose a danger to humankind is a faintly ridiculous distraction, and one that technologists have done much to encourage. What is interesting, and where we should be focusing, is the social infrastructure around the technology.
It is safe to say that firms don’t build and circulate technologies for the sole purpose of advancing their own or others’ knowledge. Quite rightly, they have an interest in being paid for their work, whether by the user or by the advertisers who get access to the user, depending on the business model. The motivations of the Chinese government and of the CIA/NSA in developing surveillance technology are rather clear and subject to straightforward critique. However, the ecosystems of firms which give rise to AI technology for commercial purposes have motivations and interests that are rather more opaque and difficult to disentangle. There are multiple possible business models that could support these technologies, and the firms involved will have very different values and uncrossable red lines in developing them. The firms also contain not just hard-nosed sales executives but engineers, mothers and concerned citizens.

It follows that, if we want to understand the technology and its likely impacts, we need to ask questions such as: How is someone going to get paid for working on it? What values are allowing, or not allowing, certain things to be made? How is the technology actually being used, and how is it anticipated that it will be used? What moral constraints are there on its use? What is the nature of the demand for it? What role is government playing? It is these forces that will shape the technology’s future, not the technology itself. There are vastly more technologies that could, in theory, have existed but do not than technologies that actually exist, and vast numbers of technologies have simply disappeared. Societal interests, vested or otherwise, always, eventually, bend and adapt technology to their will.
Chris Ivory