SCOPING ETHICAL AI

written by: Christoffer Larson

2025-05-25

Imagine walking into the office and finding an AI assistant that has already sifted through resumes, flagged performance trends, and proposed new development paths for you. Sounds promising. Who would not welcome tools that amplify productivity and uncover hidden insights? We humans tend to be good at pattern recognition, but AI can outpace us.

Yet, as individuals and organizations embrace this technology, questions about fairness, transparency, and employee wellbeing inevitably arise, particularly when AI assumes responsibility for sensitive processes such as hiring or performance reviews. In the Nordic countries, where strong unions, flat hierarchies, and a culture of trust define everyday work, these concerns are not academic but fundamental to our shared decision-making.

In our recent scoping review of 28 empirical studies, we explored how five different ethical frameworks are applied in research on AI in working life—and what those findings could mean for Nordic workplaces, where AI is being implemented at a relatively slower pace than in other Western countries. Below, we translate our key insights into a more conversational format, highlighting practical lessons for HR leaders, managers, union representatives, and anyone interested in building an AI-powered workplace in the Nordic context.

Why the Nordic Model Matters

The Nordic working life model rests on three pillars:

  1. High union density and collective bargaining. Between 46% (Norway) and 90% (Iceland) of private‑sector employees work under collective agreements—rates that rise even higher in the public sector.
  2. Flat hierarchies and co‑determination. Managers and employees share decision‑making power, fostering trust and open dialogue.
  3. Shared values of transparency, consensus, and employee wellbeing. Across Sweden, Denmark, Finland, Norway, and Iceland, democratic norms shape workplace culture.

Against this backdrop, any AI initiative meant for the workplace should pass not only a cost-benefit analysis but also the test of collective scrutiny. If a new hiring algorithm promises to fill open positions in half the time, unions and management alike will ask: “Yeah, but at what cost? Is it fair?” This collaborative approach slows down implementation compared to more hierarchical systems, but it also builds the social license necessary for technology to be embraced rather than resisted.

Five Ethical Frameworks

To make sense of the academic literature on AI ethics in working life, we used Schumann’s five-principle ethical framework for HRM and adapted its principles to the AI context:

  1. Utilitarianism: How do we maximize net benefits (efficiency, cost‑savings) while minimizing harms (bias, loss of trust)?
  2. Rights‑based ethics: Are individual autonomy, dignity, and procedural fairness upheld when AI makes decisions?
  3. Distributive justice: Does AI reinforce or mitigate inequalities in access to jobs, promotions, raises, and other workplace resources?
  4. Ethics of care: How does AI impact human relationships, emotional wellbeing, and the need for empathy and support?
  5. Virtue ethics: What character traits—like integrity, courage, and compassion—should leaders embody to govern AI responsibly?

By reviewing the selected 28 empirical studies from the field of working life through each of these five perspectives, we could identify not only the ethical issues that receive the greatest attention but also the areas where conflicts and blind spots occur.

From a utilitarian standpoint, AI is frequently commended for its capacity to handle repetitive tasks, minimize human error, and uncover patterns in large data sets. However, the same studies that point out these benefits also warn that increased effectiveness may come at a price: unspoken prejudices and a slow decline in trust. For instance, many job seekers appreciate the speed at which chatbots handle first contact with them, yet complain about the lack of a human presence that might offer empathy. In a Nordic context, teams might pilot new AI tools side by side with human-led checkpoints, gathering feedback through joint surveys or interviews with employees and union representatives to ensure that gains in speed never come at the expense of confidence in the process.

From a rights-based standpoint, every algorithm—no matter how advanced—should honour each individual’s freedom and guarantee fair treatment. When an AI system highlights performance concerns or recommends disciplinary measures, employees deserve a clear, compassionate explanation and the opportunity to respond. By clearly documenting and sharing the reasoning behind every recommendation—whether it affects a development plan or a salary increase—workplaces can mitigate these risks. Looking ahead, Nordic workplaces could build transparent notification and appeal protocols into their collective agreements, so that every automated suggestion arrives with documented reasoning and a shared review procedure.
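To make the idea of documented reasoning concrete, here is a minimal sketch of what a decision record accompanying an automated recommendation might contain. It is written in Python purely for illustration; the field names, the 14-day appeal window, and the overall structure are our own assumptions, not taken from the reviewed studies or from any specific collective agreement.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class DecisionRecord:
        """A plain-language record attached to every automated recommendation."""
        employee_id: str
        recommendation: str    # e.g. "salary adjustment: +2%"
        reasoning: str         # human-readable explanation of the suggestion
        model_version: str     # which system produced the recommendation
        issued: date = field(default_factory=date.today)
        appeal_days: int = 14  # assumed window for the employee to respond

        @property
        def appeal_deadline(self) -> date:
            return self.issued + timedelta(days=self.appeal_days)

    # Illustrative usage: the record travels with the recommendation itself.
    record = DecisionRecord(
        employee_id="emp-042",
        recommendation="salary adjustment: +2%",
        reasoning="Performance metrics exceeded the team median for four quarters.",
        model_version="hr-assist-1.3",
    )
    print(f"Appeal possible until {record.appeal_deadline}")

The specific fields matter less than the principle: the explanation and the appeal window are first-class parts of the output, not an afterthought.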

From the standpoint of distributive justice, the critical question is whether AI advances equal access to opportunities or inadvertently reinforces existing disparities. Some empirical studies have shown that algorithms trained on historical hiring data can perpetuate the very biases they were meant to remove, disadvantaging candidates from under-represented groups. To address this shortcoming, Nordic organizations could introduce checks and balances that regularly audit automated decisions, for example by comparing outcomes across demographic groups, as sketched below.
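One widely used screening check, drawn from US employment-selection guidance rather than from our review, is the “four-fifths rule”: if any group’s selection rate falls below 80% of the best-performing group’s rate, the process deserves closer scrutiny. The Python sketch below assumes hiring outcomes arrive as simple (group, selected) pairs; both the data and the 0.8 threshold are illustrative.

    from collections import defaultdict

    def selection_rates(records):
        """Compute the share of selected candidates per group."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_flags(rates, threshold=0.8):
        """Flag groups whose selection rate falls below `threshold` times
        the highest group's rate (the four-fifths rule)."""
        best = max(rates.values())
        return {g: rate / best < threshold for g, rate in rates.items()}

    # Illustrative data: (group, was_selected) pairs from a hiring pipeline.
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)
    print(adverse_impact_flags(rates))  # {'A': False, 'B': True}

Such a check is deliberately crude: it cannot prove fairness, but it gives unions and management a shared, inspectable number to discuss in a joint audit.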

The ethics of care principle highlights the ways in which AI impacts interpersonal relationships and mental health. When new AI systems were introduced without sufficient explanation or support, employees in one study reported feeling more anxious, a phenomenon known as “technostress”. In Nordic settings, this challenge could be met by embedding AI rollouts into existing collective forums: managers, HR professionals, and union members might co-host interactive workshops that explain not just how the technology works, but why it is being introduced and how it will affect daily routines, ensuring staff feel both informed and valued.

Finally, at its core, virtue ethics asks that everyone guiding AI systems lead with integrity, courage, and compassion. An algorithm can suggest a course of action, but it is a person’s conviction and moral strength that decide whether to follow it. In the years to come, Nordic organizations could encourage leaders and employee representatives to attend shared ethical-AI training sessions, fostering a culture where anyone feels empowered to question, and if necessary stop, automated decisions.

As hinted in the paragraphs above, the true strength of the Nordic model emerges from a long tradition of working side by side—unions and management at the same table—where open dialogue and mutual respect set the tone. By viewing AI through the lenses of overall benefit, individual rights, fair distribution, genuine care, and moral responsibility, Nordic organizations can find a balance that goes beyond mere speed or cost savings. The result is not a relentless pursuit of automation and efficiency but a thoughtful, human-focused path in which every automated recommendation is shaped by shared values, and every person has the opportunity to speak up about, or indeed against, AI-based decisions.

Ultimately, by harnessing AI through the prism of Nordic values—trust, collaboration, and respect—we can design workplaces that are not just more efficient, but also more equitable, humane, and resilient for everyone involved.


For any questions or comments, please email digma@mdu.se