Should AI and Humans Share Responsibility?

Your self-driving car crashes. An AI denies your loan. A trading algorithm loses millions. Who do we blame? New research from Mälardalen University challenges the idea that there's always "one person responsible" – and argues that AI systems themselves must share the burden of accountability.

As artificial intelligence (AI) becomes part of everything from self-driving cars to medical decision support and automated trading, one uncomfortable question refuses to go away:

When something goes wrong, who is responsible – the humans or the machines?

A new research article by Gordana Dodig-Crnkovic (Mälardalen University & Chalmers University of Technology), Gianfranco Basti (Pontifical Lateran University, Vatican City) and Tobias Holstein (Mälardalen University) argues that the answer can’t just be “the humans” or “the AI”. Responsibility has to be shared, and AI systems themselves must be designed to handle part of that responsibility in a structured, transparent way.

“We already live in a world where AI systems act faster and on a larger scale than any human can oversee,” says Gordana Dodig-Crnkovic. “If we continue ignoring the role of AI in socio-technical systems, we miss how much power they already hold.”

As AI systems become more autonomous and operate in complex environments, traditional responsibility models based on a single human decision-maker are no longer sufficient. Accidents, biased decisions, or security failures often emerge from interactions between many humans, organizations, and machines.

What is the research really about?

The paper looks at what happens when we delegate important tasks to intelligent autonomous systems – for example:

  • Software that decides whether a loan is granted
  • Systems that manage electricity distribution
  • Autonomous cars that must react instantly to danger
  • Trading bots operating in financial markets

These systems don’t just follow one fixed script. They learn, adapt, and make decisions in real time based on massive amounts of data. That makes them powerful, but also difficult to fully predict or control.

The authors argue that we need to stop thinking in terms of “a single guilty person” and instead see responsibility as distributed across a socio-technical ecosystem that includes:

  • Designers and developers
  • Companies and regulators
  • Users and operators
  • And the AI systems themselves as functional agents in the system

Here, “AI responsibility” means responsibility for performing assigned tasks within the system – such as monitoring, evaluating risks, or checking decisions – in a controlled and transparent manner. It does not imply moral or legal responsibility, but rather task-level responsibility that supports the overall responsible functioning of the system.
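To make this concrete, below is a minimal sketch of what task-level responsibility could look like in software. It is an illustrative assumption, not the authors’ implementation: every agent – human or AI – is assigned a named task, and every outcome is logged so that responsibility remains traceable across the system. All names (ResponsibilityLedger, TaskRecord, and so on) are hypothetical.

```python
# Minimal sketch of task-level responsibility (hypothetical, not the
# authors' implementation): each agent - human or AI - is assigned a
# named task, and every outcome is logged so responsibility stays
# traceable across the socio-technical system.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TaskRecord:
    """Transparent record of one executed task."""
    agent: str
    task: str
    outcome: str


@dataclass
class ResponsibilityLedger:
    """Shared log that keeps responsibility assignments visible."""
    records: List[TaskRecord] = field(default_factory=list)

    def assign(self, agent: str, task: str, perform: Callable[[], str]) -> str:
        outcome = perform()  # the agent carries out its assigned task
        self.records.append(TaskRecord(agent, task, outcome))  # log it transparently
        return outcome


ledger = ResponsibilityLedger()
ledger.assign("risk-monitor-AI", "evaluate loan risk", lambda: "risk=low")
ledger.assign("human-officer", "approve loan", lambda: "approved")
for r in ledger.records:
    print(f"{r.agent} was responsible for '{r.task}' -> {r.outcome}")
```

The point of the ledger is not blame but traceability: anyone inspecting the system can see which agent performed which task, which is what “controlled and transparent” means here.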

From “bad apples” to “responsible systems”

Traditionally, ethics has focused on individuals: one engineer, one doctor, one driver who can be blamed or praised. This paper instead promotes a functionalist view of responsibility. That means:

  • Responsibility is not just a personal character trait.
  • It is a role within a bigger socio-technical system.
  • The goal is not to find someone to blame, but to steer behaviour in a good direction.

The authors build on a functionalist framework and propose an ethical-by-design system architecture where responsibility is distributed across humans and AI components, coordinated through structured decision loops and transparency mechanisms.

What does “AI ethical by design” mean in practice?

“Ethical by design” means that ethics isn’t something added after an AI system is built – it is built in from the start and throughout the system’s life cycle. The article describes a two-step approach: first, the AI learns from humans how to behave well, much as a human collaborator would; then a second AI checks whether it is actually behaving well before it acts.

This architecture assigns operational responsibility for tasks to AI modules while ensuring that the system as a whole behaves responsibly and remains understandable, monitorable, and correctable.
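As a purely illustrative sketch – again an assumption, not the paper’s actual architecture – the two-step approach can be pictured as a loop in which one module proposes an action it has learned and a second, independent module checks it before execution, falling back to a safe default when the check fails. All function names and rules below are hypothetical.

```python
# Hypothetical sketch of the two-step "ethical by design" loop described
# above: step 1 proposes an action from learned behaviour, step 2 checks
# it before it is executed. Names and rules are illustrative only.
from typing import Callable


def propose_action(situation: str) -> str:
    """Stand-in for behaviour the AI has learned from human collaborators."""
    # A deliberately imperfect learned rule: it only notices obstacles
    # mentioned at the start of the description.
    return "brake" if situation.startswith("obstacle") else "continue"


def ethical_check(situation: str, action: str) -> bool:
    """Stand-in for a second AI that verifies behaviour before acting."""
    # Hypothetical norm: never 'continue' when an obstacle is present.
    return not ("obstacle" in situation and action == "continue")


def act(situation: str, execute: Callable[[str], None]) -> None:
    action = propose_action(situation)    # step 1: learned behaviour proposes
    if ethical_check(situation, action):  # step 2: independent check before acting
        execute(action)
    else:
        execute("safe-fallback")          # stay correctable: veto and fall back
        print(f"checker vetoed '{action}' in situation: {situation}")


act("obstacle ahead", lambda a: print(f"executing: {a}"))
act("hidden obstacle", lambda a: print(f"executing: {a}"))  # triggers the veto
act("clear road", lambda a: print(f"executing: {a}"))
```

The veto branch is what keeps the system monitorable and correctable: the checker’s decision is made visible and can be adjusted without retraining the proposing module.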

How does this relate to the UN Sustainable Development Goals?

The research directly supports several of the UN Sustainable Development Goals (SDGs) by promoting AI that is not only powerful, but responsible, fair, and aligned with human values.

SDG 9: Industry, Innovation and Infrastructure
By developing frameworks for ethical AI by design, the work helps build resilient, trustworthy digital infrastructures. Industries that rely on autonomous systems – from transport and energy to finance and manufacturing – need AI that can be trusted to act responsibly, even at high speed and scale.

SDG 10: Reduced Inequalities
AI systems can unintentionally amplify existing social inequalities, for example by discriminating against certain groups. By embedding ethics, transparency, and distributed responsibility into AI, this research contributes to more fair and inclusive systems that reduce, rather than reinforce, bias.

The authors argue that adopting distributed responsibility models is essential for future AI engineering, regulation, and policy. Building AI systems that can take responsibility for tasks within a socio-technical system is critical if AI is to be safe, trustworthy, and aligned with human goals.

Read more in the full publication in the Journal of Bioethical Inquiry (Springer).