Preventing Digital Audism: Designing AI for True Inclusion
Written by: Jessica Wesstrand
We often hear that technology is neutral and that automation and artificial intelligence (AI) can help us make decisions that are more objective, efficient, and fair. But what if the very assumptions underlying these systems reflect norms that not everyone can meet? What if equality, when based on sameness, becomes a new form of exclusion? Today, for the first time, we have a technological tool built squarely around communication and language: AI. The large language models that power chatbots, automated translation, and predictive analytics have been built on, and for, written and spoken language. But what happens when someone’s first language is neither written nor spoken? How should we think about communication at work once AI becomes a core part of our daily routines?
The rise of digital tools has had a positive impact on Deaf culture. When video calling first became widely available, Deaf people could suddenly communicate with each other in real time, no longer confined to text-only exchanges or in-person interpretation. Video calling also makes it possible to bring interpreters into conversations with hearing colleagues, rather than relying on an on-site presence. Even so, full inclusion still often depends on having an in-person interpreter, because the nuances of real-time sign language can get lost during video calls. All of this is wonderful! Technology can be a powerful enabler. But there is another side to the story: these same digital systems can perpetuate discrimination. Here I want to shed light on the term audism, coined by Tom Humphries in 1975 to describe discrimination against Deaf people based on their ability to hear. As Humphries explained, audism occurs whenever we continually judge a person’s intelligence, happiness, or success by their ability to speak and hear. In practice, audism often manifests as an unspoken assumption that hearing is superior to being Deaf.
Historically, this bias surfaced most starkly through oralism: the educational philosophy, entrenched in Sweden from the mid-1800s until the 1970s, that insisted Deaf children learn to speak and lip-read rather than use sign language. As a result, Deaf students in Sweden and elsewhere were punished for using sign language and forced to assimilate into an auditory world. Overlapping with this was the eugenics movement, an ideology that sought to “improve” humans by selecting for desirable traits. In Sweden, this meant, among many other things, the forced sterilization of Deaf individuals, a tragic example of how Deaf bodies were deemed inferior and unworthy. In modern times, strategies to prevent children from growing up Deaf have become more technologically sophisticated. Cochlear implants (CIs) are perhaps the most visible example: surgically implanted devices that allow Deaf children to perceive sound. Some CI manufacturers and advocacy groups have gone so far as to claim that sign language is unnecessary, arguing that these implants can help Deaf children live a fully “hearing” life. But as disability scholar Fiona Kumari Campbell argues in Contours of Ableism, such technologies do not merely assist; they encode the idea that bodies deviating from a supposed norm must be fixed. In effect, they reinforce ableist assumptions about which languages, bodies, and ways of existing deserve legitimacy.
We stand at a pivotal moment where the choices we make today about AI will shape the workplaces of tomorrow. Before we let algorithms decide who to hire, how to evaluate performance, or which voices to amplify, we have the responsibility—and the agency—to pause and ask: Who benefits from these systems, and who might be left behind? By critically examining our assumptions about communication, ability, and productivity at the design stage, we can prevent new forms of exclusion from taking root. In other words, we hold the power to ensure that AI tools serve everyone equitably—if only we choose to anticipate potential biases rather than react to them after deployment.
So, as AI is progressively woven into our daily workflows, from recruitment and onboarding bots to real-time performance dashboards, there is a risk of reproducing that same bias. Today’s AI systems largely assume a “typical” user who hears, speaks, and processes information in conventional ways. In an office where every workplace assessment, meeting summary, and feedback loop depends on spoken or written input, a Deaf employee can quickly find themselves excluded. We must recognize that technology is never neutral. It carries embedded assumptions about the body, language, productivity, and competence that reflect the values of those who design it. If these values go unchallenged, AI will not disrupt inequality; it will automate it. We also need to broaden participation in the development and implementation of these systems. Inclusion cannot be an afterthought; it must be foundational. That means involving Deaf professionals, people with various disabilities, and norm-critical researchers at every stage, from concept to coding. Designing for diversity is not a limitation; it is an ethical and creative imperative. If we want a truly inclusive future of work, we must design not just smarter systems, but fairer ones.
For any questions or comments, please email digma@mdu.se