
The myth of the objective algorithm

2018-10-04

The idea of digitalisation is everywhere: it appears regularly in the media and in everyday conversation, and it attracts a wide range of opinions. Defined as the integration of digital technologies into everyday life by digitising whatever can be digitised, it raises expectations of improved efficiency, precision, productivity and objectivity in all sorts of everyday decision processes.

However, behind the technologies that drive digitalisation invariably sit algorithms. They are designed to improve the speed and quality of decision-making by moving it away from the subjectivity of human beings towards theoretically more objective machines. This should mean decisions are entirely neutral, unaffected by circumstances that could disturb or distort the process. But is there really such a thing as an objective algorithm?

An article in The Wall Street Journal late in 2018 reported that most US Fortune 500 companies use algorithm-driven software in their recruitment processes. The technology scans applications for key words, identifying what it believes to be the best candidates for specific jobs. It does this by looking for words linked to desirable personality traits rather than simply reviewing competences and skills.
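To make the mechanism concrete, here is a minimal sketch, in Python, of how such keyword-based screening might work. The trait keywords, weights and threshold are invented for illustration; real products are far more elaborate, but the filtering principle is the same.

```python
# Minimal sketch of keyword-based application screening (illustrative only).
# The keywords, weights and threshold are invented, not taken from any
# real recruitment product.

TRAIT_KEYWORDS = {
    "led": 2.0,           # assumed proxy for leadership
    "delivered": 1.5,     # assumed proxy for results orientation
    "collaborated": 1.0,  # assumed proxy for teamwork
    "initiative": 1.5,
}
THRESHOLD = 3.0  # arbitrary cut-off for reaching the second stage


def score_application(text: str) -> float:
    """Sum the weights of the trait keywords found in the application."""
    words = text.lower().split()
    return sum(weight for kw, weight in TRAIT_KEYWORDS.items() if kw in words)


def shortlist(applications: dict[str, str]) -> list[str]:
    """Return the candidates whose score clears the threshold."""
    return [name for name, text in applications.items()
            if score_application(text) >= THRESHOLD]


if __name__ == "__main__":
    applications = {
        "A": "Led a team of five and delivered the project on time.",
        "B": "Built an award-winning compiler in my spare time.",
    }
    print(shortlist(applications))  # ['A']
```

Candidate B, however interesting, never reaches a human reader, simply for lack of the expected vocabulary.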

All of this suggests a fundamental shift in recruitment norms is underway. The initial filter, and the decision to proceed with a candidate, is now automated using the wisdom of the algorithm. Potentially interesting (and perhaps unconventional) candidates who might have caught the eye of a human being will now not get to the second stage unless they use a set of specific words that may have no obvious link to the role they are pursuing.

This technology may save money by reducing the human input required in shortlisting, but what if the decisions it makes, via algorithms, are wrong? A year ago, automated decision-making hit the headlines in Sweden when social workers from Kungsbacka resigned en masse in protest, arguing that the software used in their organisation led to the wrong decisions being made. They argued passionately that the task it was performing would be better undertaken by human beings. Some, of course, may consider this a Luddite point of view, but there is undoubtedly a backlash of opinion forming against such automation.

How is it possible, then, for algorithm-driven software to make the wrong decision? Joy Buolamwini, founder of the Algorithmic Justice League, discovered a crucial flaw which illustrates what can go wrong. While she was researching face recognition software, the technology under review did not recognise her face. After some time she realised why: she was a woman and she was black. The face recognition technology had been developed by white men, and it had only learned to recognise white, male faces – all other faces were, as far as it was concerned, ‘wrong’.
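A small sketch can show how this failure mode follows from skewed training data. The numbers below are invented, and the ‘model’ is a deliberately crude detector that accepts anything close to the mean of its training set, not any real face recognition system; the point is only that a model fitted to an unrepresentative sample fails on the people the sample leaves out.

```python
# Toy demonstration of how a skewed training set produces a biased model.
# All values are invented; group A and group B faces are simulated as
# one-dimensional 'features' drawn around different means.
import random
import statistics

random.seed(0)


def sample_faces(group_mean: float, n: int) -> list[float]:
    """Draw n simulated face features around a group-specific mean."""
    return [random.gauss(group_mean, 1.0) for _ in range(n)]


# Training set: 95 faces from group A, only 5 from group B.
train = sample_faces(0.0, 95) + sample_faces(5.0, 5)

# 'Model': accept a face if it lies close to the training data's mean.
centre = statistics.mean(train)
radius = 2 * statistics.stdev(train)


def is_recognised(face: float) -> bool:
    return abs(face - centre) <= radius


test_a = sample_faces(0.0, 100)
test_b = sample_faces(5.0, 100)
print("group A recognised:", sum(map(is_recognised, test_a)), "/ 100")
print("group B recognised:", sum(map(is_recognised, test_b)), "/ 100")
```

On a typical run the detector recognises nearly all of group A but only a handful of group B – the same pattern, in miniature, that Buolamwini encountered.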

It seems algorithms are not as objective as we would like to think they are. In his book “Our Robots, Ourselves: Robotics and the Myths of Autonomy” (2015), robotics researcher David Mindell argues that the idea that the algorithms that govern robot actions could or should be autonomous is unrealistic. Human intentions, ideas and assumptions are always built into machines, he argues. Their creators are effectively ‘technical ghosts in the machines’, with their ideas, prejudices and personal blind spots built into whatever process they are designing.

We know there are millions of decisions being made every day in the worlds of public administration, education and commerce. I wonder what rogue assumptions are buried in those algorithms, and what their consequences might be. Such assumptions, whether conscious or unconscious, are likely to be driving flawed, subjective decisions.

I’m firmly of the belief that technological development is both very positive and very exciting. We must, however, always bear in mind that the underlying algorithms are only as objective as the human beings who create them. They will always express, in some way or another, consciously or unconsciously, the assumptions, prejudices and ‘blind spots’ of the individuals who developed them. We must always test algorithms exhaustively – one simple example of such a test is sketched below – to ensure they are fit for purpose and truly objective.
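As an illustration of what such a test might look like, here is a sketch of one simple check: comparing a model’s acceptance rates across two groups using the ‘four-fifths rule’, a heuristic drawn from US employment guidelines. The screening outcomes below are hypothetical.

```python
# Sketch of one simple fairness check: compare acceptance rates across
# groups. The 80% ('four-fifths') rule is one common heuristic for
# flagging disparate impact; the outcomes below are hypothetical.

def acceptance_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)


def passes_four_fifths_rule(group_a: list[bool], group_b: list[bool]) -> bool:
    """Flag disparate impact if one group's acceptance rate falls below
    80% of the other's."""
    rate_a = acceptance_rate(group_a)
    rate_b = acceptance_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    return lower >= 0.8 * higher


# Hypothetical screening outcomes for two applicant groups.
group_a = [True] * 60 + [False] * 40   # 60% accepted
group_b = [True] * 30 + [False] * 70   # 30% accepted
print(passes_four_fifths_rule(group_a, group_b))  # False: investigate
```

A failed check does not prove that an algorithm discriminates, but it does tell its developers where to start looking.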


Annette Hallin