The public discourse on AI vs. the depiction of AI in industry
It is striking how the public discourse radically differs from the depiction of AI in industry: here the rise of AI is mainly represented as an opportunity for optimisation, where humans, if properly informed and educated, will act as beneficiaries, enablers, and arbiters of the technology.
About six months after the introduction of ChatGPT, we were alerted to the fact that powerful artificial intelligence is not merely a thing of the future but is unfolding here and now. Much public discussion and debate is tinged with technological determinism and centres on dystopian outcomes. In our private lives, according to this discourse, we stand defenceless against disturbing phenomena like deepfakes, i.e., digitally manipulated media, which potentially preclude any certainty we might previously have had about the goings-on in the world. We are similarly portrayed as helpless in the face of the possible advent of artificial general intelligence (AGI), i.e., technologies able to perform any intellectual task that humans can, destined eventually to surpass us, leaving us dominated by the very thing we brought into existence. In other words, AI is likely to develop in terrifying directions, and there is not much we can do if and when that happens. What is most striking about this public discourse is not the dystopian future it describes (although that is interesting too), but how radically it differs from the depiction of AI in industry: here the rise of AI is mainly represented as an opportunity for optimisation, where humans, if properly informed and educated, will act as beneficiaries, enablers, and arbiters of the technology.
We have explored this representation of AI by studying Human Resource Management (HRM) magazines, whose target audience is managers and knowledge workers within HR. HR is often cited as a function that can benefit greatly from the implementation of AI, not merely in the automation of administrative tasks but also in decision-making, as well as in streamlining and enhancing recruitment processes. HR managers and HR professionals are responsible for championing the use of new technologies, while at the same time advocating for employees and ensuring that the ethical dimensions of technological change are accounted for. They seem well aware that the incorporation of technologies such as AI is as much a social process as a technological one. Perhaps it is because HR recognises the importance of the social aspects of the adoption and use of technology that the agency of the HR professionals presented in HRM magazines is strangely over-emphasised. This can be compared with the public discourse on AI, where the agency of humans is equally strangely diminished. In HRM magazines, humans (individual HR professionals) are identified as key to ensuring the proper use of AI. The story told about AI in HR is thus mainly a utopian one, where the possible pitfalls of AI use – e.g., loss of control due to the opacity of AI tools, and unethical outcomes because of biased algorithms – can be reliably averted by always keeping a human “in the loop” in the organisation’s ecosystem of AI solutions, and by ensuring that the HR professional stays up to date, is continuously engaged in learning, and is watchful at all times. In brief, AI tools in HR will not turn into black boxes generating questionable outcomes if HR professionals are competent, aware, and unafraid. It is by being this ideal knowledge worker in relation to AI that the HR professional helps realise the promises of AI in HR without undesirable consequences.
HR professionals, likely in the company of other categories of knowledge workers, thus find themselves inhabiting two contradictory stories about their place and fate in relation to AI: one in public conversation and one in industry. Concurrently portrayed as largely unguarded victims of deception and replacement, and as potential experts and enablers of technological progress, they are expected to hold off any feelings of fear in their work practice. Only then can they productively implement, engage with, and control a set of technologies that, at the point where public and HR media discourses intersect, are in many respects far more capable than we are. In the ideal AI-optimised HR function, however, HR professionals leave whatever concerns they may have about this capability differential at home and view them solely as an organisational advantage.