When Deafness Is Medicalized: Inside the Culture Clash Over Cochlear Implants

Some fear that, by offering deaf people access to sound, so-called bionic ears could spell the end of the culture built around ASL.
An infant with a cochlear implant.

Years ago, the university where I teach cut the budget of the program that provides in-class American Sign Language interpretation. I recall the episode well, if only because the reaction among the university's deaf population was as swift and fierce as any campus protest I'd seen.

There was a reason for the unusual intensity: Cutting ASL interpretation was understood as a cultural insult. Advocates of deaf culture see themselves as a community not unlike an ethnic group. Endowed with a distinct language and set of traditions, they conceptualize deafness not as a physical condition but rather as a social distinction.

More than any other factor, ASL fluency offers the most direct conduit into deaf culture. For this reason, supporters of deaf culture have become particularly concerned about what many see as the most common medical intervention into their way of life: cochlear implants.


A cochlear implant (CI)—sometimes called a "bionic ear"—is a surgically implanted device that offers deaf people access to sound. In some cases, an implant can help a user make out spoken language. The Food and Drug Administration approved CIs for adults in 1985 and for children in 1990. As of 2016, around 96,000 people had received a cochlear implant—36,000 of them children, some as young as 12 months old.

In light of this technology, it's now routine to screen babies for hearing loss shortly after birth. If a child "fails" the test, a complete medical and technological apparatus—composed of surgeons, audiologists, social workers, and speech therapists—lurches into motion to provide an implant as soon as possible. More often than not it is hearing parents of deaf children—usually mothers—who are asked to negotiate this daunting apparatus. They do so with minimal, if any, knowledge of deaf culture.

In this respect, the story of CIs seems to be yet another instance of technological determinism. If you build it, they'll implant it—cultural concerns be damned. But what if we didn't automatically assume that CIs meant the death of ASL? In her new book, Made to Hear: Cochlear Implants and Raising Deaf Children, Laura Mauldin seeks common ground between the medical and deaf communities. Rather than castigate or glorify the technology, she explores how the process of incorporating CIs plays out in real life. Refreshingly free of jargon, Made to Hear demonstrates the emotionally complicated ways that laypeople interact with prevailing medical paradigms.

Made to Hear: Cochlear Implants and Raising Deaf Children.

Mauldin, a University of Connecticut sociologist who grew up around deaf people and whose work centers on the interaction between medicine and "disability," finds that the lived experiences of parents who choose CIs for their young children (some as early as a year old) do not necessarily follow the common technological mandate dictated by the medical profession. "Social factors," she writes, "play a greater role in outcomes than technological prowess."

This claim holds especially true when it comes to sign language acquisition. Despite clear evidence to the contrary, the medical community generally advises against teaching children ASL while they are in the process of getting a CI, insisting that doing so will diminish their ability to acquire spoken language. Accordingly, most mothers of CI children—middle-class rule followers who want to be considered "good parents" and are in no way inclined to question the medical definition of "disability"—do exactly as they are told.

But others, including some subjects in Mauldin's study, tack around the advice on their own terms and for their own sensible reasons. These parents are of particular interest to Mauldin. One mother of a CI child explained why she chose to challenge the medical advice about ignoring sign language:

Ok, yeah, you fixed her hearing. And? What about when she loses the equipment? How about when the batteries are dead, and it's gone for four hours? How are you talking to her? I am still very proud of myself for maintaining the sign language. We are still using it at home. There are times when I absolutely am not, but when I need to clarify something with her, I go back and, yes, I pair [speech and sign] ... I have to worry about safety. I still have to raise a child. I understand that she has to learn to speak. But I still need to be able to sign "stop."

In an email exchange with Mauldin, I asked her to elaborate on the hidden potential of this resistance-within-acceptance mode of negotiating the CI process. She wrote: "Having a CI does not mean you can't learn both spoken language and sign language. Bilingualism is not only possible, but the norm across the globe, except in the US."

Mauldin laments that this form of bilingualism is inhibited by "strict adherence to English-only protocols." These protocols, she regrets, remain the conventional wisdom. In a more hopeful vein, though, she notes that "ASL is rather popular at the college level because hearing people are opting to learn it as their foreign language."


The ethical implications of this debate should not be downplayed. Teresa Blankmeyer Burke, a deaf bioethicist and associate professor of philosophy at Gallaudet University, tells me that "the critical issue ... is that language deprivation is a serious moral harm." By language deprivation—in this case denial of ASL to CI members of the deaf community—she means not only "the inability to have any access to language—that is something I think we all would agree on is unethical." She also believes "that a life of partial access to language is harmful as well."

This point is easily overlooked until one considers in more specific terms what Burke is talking about. She elaborates:

Children (and adults) with cochlear implants can achieve very high functioning (not all of them), but they will still function in some situations as hard of hearing individuals. There will be some instances where their access to language is incomplete, and this will occur in certain kinds of settings—for example, in rapid-fire repartee, or the recounting of a joke, where the vocalization of the punchline includes a lowered volume as a technique for getting the attention of listeners. These kinds of experiences are very much the thing that bonds humans to one another.

No matter where one stands on this issue, Mauldin, who has worked with Burke, is wise to present CIs as neither miracle devices nor destroyers of culture. But when it comes time to level a measure of blame, she is not afraid to do so. She explained in our email exchange:

The number of pediatric CI recipients is only going to increase. And it's my impression that Deaf communities are coming to terms with the prevalence of CIs and the possibility of identifying both as culturally Deaf and a CI user. Unfortunately, I don't see reciprocity from the CI world; there is no equivalent infiltration of the Deaf community into powerful health-care institutions or State intervention services. So while the CI world is able to largely keep a firm boundary between themselves and Deaf communities, the opposite is not true.

Medical professionals already have a lot on their plate. But if Mauldin is right, then it's time for them to take a more sociological approach to medical solutions. As Burke reminds us, what's at stake for children who suffer language deprivation is nothing less than "the considerable harm of not having the deeply human experience of full and easy access to language." And that's something that touches "all aspects of well-being."