SOUND FROM SILENCE
This article tells the story of how research into the structure of the inner ear was combined with research in the telecommunications industry to produce a device which enables deaf people to hear.
The ancient Greeks had some understanding of sound transmission in the human ear. However, it was not until the sixteenth century that details began to be filled in, a process which is still ongoing (see Early Beginnings). As scientists began to research the structure of the ear, they also attempted to understand how sound is received and interpreted by the ear and the brain. They realized that the hair cells inside the cochlea in the ear played an important role in the whole process of hearing different tones (see How the Inner Ear Recognizes Sound). Meanwhile, physicians were studying deafness and characterizing the various types of hearing loss (see When Hearing is Lost). Beginning in the late 1950s, researchers began to wonder if they could replace the signals from the hair cells that were so important to hearing. Their efforts to create a cochlear implant met with substantial success. By the early 1970s, several groups were working on developing more sophisticated hearing devices. Physicians collaborated with engineers who were working on sending signals along telephone lines (see Cochlear Implant Technology Develops).
In the meantime, scientists were also studying how sound information is encoded in the auditory nerve and the brain. Their research on nerves and stimuli has not yet been fully incorporated into the development of modern hearing aids (see What Does the Cochlea Tell the Brain?). Currently, the function of hair cells can be replaced using cochlear implants, but researchers hope to be able to repair the hair cells someday. Researchers have made significant progress in understanding how these hair cells transmit signals to the brain (see How Hair Cells Work). One of the stranger discoveries made about hearing in recent years has been finding that the cochlea not only receives sounds, but can also produce sounds such as ringing in the ears. Researchers are currently working on determining what causes these “otoacoustic” emissions, as well as studying how hair cells are damaged and how they might be repaired (see The Inner Ear Produces Sound).
George Garcia was 49 when he woke up one morning to a world that had fallen silent. A former Navy air traffic controller, Garcia had gone deaf overnight, most likely as a result of an acute infection combined with years of enduring the sounds of thousands of screaming jet engines.
The doctors informed Garcia (not his real name) that his hearing loss was complete and permanent and that hearing aids would not help. He fell into a deep depression, and began drinking heavily; when the alcohol failed to dull his pain, on three occasions he contemplated suicide.
With the help of his stepson, a preacher, Garcia eventually emerged from his depression and resigned himself to life in the deaf world. He learned sign language and lipreading. He made friends in the deaf community. He stopped drinking and became active in his church. Then, six years after he went deaf, Garcia found new hope. He was offered the opportunity to be a research subject in a test of an early version of a cochlear implant.
Researchers at the Veterans' Administration informed Garcia that cochlear implants are far more sophisticated than ordinary hearing aids, which amplify sound and help people who have some residual hearing. A cochlear implant is essentially an artificial inner ear, intended to take over the job of the cochlea, the snail-shaped organ that translates sound energy into nerve impulses and sends those impulses to the brain for processing. The researchers' hope was that cochlear implants would return hearing to people like Garcia, who have suffered total hearing loss--some of whom were born with the disorder.
Garcia jumped at the chance. In December 1988 surgeons implanted a transmitter in the temporal bone behind his left ear and threaded an array of six electrodes through the spirals of his cochlea. With nervous anticipation Garcia waited a month to recover from the surgery. Only then would the doctors attach the critical external portions of the device: a microphone to receive sounds, a speech processor to convert them to signals recognizable to his auditory nerve, and a transmitter to send the signals to the implant.
Nervous anticipation yielded to elation when Garcia began to hear immediately, but the sound was very mechanical. With diligent practice, however, the sounds of speech began to seem more and more normal. Garcia wore the speech processor during his waking hours. He went from lipreading 45 percent of two-syllable words accurately to hearing 94 percent of two-syllable words. The cochlear implant allows him to have a normal life. He can sit with a group of people and join in the conversation rather than sit at the end of the table and be ignored. He can hear the cat meowing, the dog barking, and his granddaughter saying "Grandpa."
Today, 18,000 people around the world have cochlear implants, and the devices are no longer experimental. But cochlear implants did not spring from a brilliant inventor's mind in a single flash of insight. They were the result of many centuries of basic research by thousands of scientists in fields as disparate as physics, anatomy, neurophysiology, and information science, each of whom contributed pieces of information that led to a significant human benefit.
The story begins in ancient Greece, in the sixth century B.C., when Pythagoras, a philosopher and mathematician, reasoned that sound was a vibration in the air. His successors recognized that sound waves set the eardrum moving, transmitting the vibrations to the interior of the ear. But progress was slow: the world had to wait another seven centuries for the next major advance. In 175 A.D. a Greek physician, Galen, recognized that nerves transmitted the sensation of sound to the brain.
Eighteen hundred years ago scientists understood that sound entered the interior of the ear via the eardrum and exited on its journey to the brain via the auditory nerve. But it was not until 1543 that scientists began filling in the details of what happens in the middle ear and the inner ear. In that year Andreas Vesalius, a Belgian anatomist and physician, announced his discovery of the malleus and the incus (also called the hammer and anvil), two of the three tiny bones, or ossicles, that transmit sounds coming from the eardrum to the cochlea. The third ossicle, called the stapes or the stirrup, was discovered several years later, and the bony, snail-shaped cochlea was discovered by the Italian professor Gabriello Fallopio in 1561, although he mistakenly thought it was filled with air, not liquid, and that vibrations in this air stimulated the ends of the auditory nerve.
The last major piece of the anatomic puzzle was put into place after microscopic examination of the cochlea. This occurred in 1851, when Italian anatomist Alfonso Corti found a structure, since named the organ of Corti, which spirals along the cochlear duct. With his microscope, Corti also caught a glimpse of the thousands of hair cells that are now known to be the central elements in the hearing apparatus. One surface of the hair cells is covered with tiny extensions called stereocilia, which give the cells a fuzzy appearance. The organ of Corti does not merely report to the brain that a sound has occurred; it also reports on the frequency of the sound. How it does this remained a mystery until early in the twentieth century.
HOW THE INNER EAR RECOGNIZES SOUND
In the seventeenth century G.J. Duvérney, a French anatomist, proposed that the ear used a set of resonators. In the nineteenth century most scientists believed that some form of "resonance" was behind our ability to distinguish pitch. Resonance theory was most fully developed by the German scientist Hermann von Helmholtz. He believed that tuned fibers in the basilar membrane, on which the organ of Corti rests, vibrate in response to particular sound frequencies, just as a specific piano string will begin to vibrate in response to a sound at just the right frequency. He was correct that different frequencies are "heard" by different sections of the organ of Corti, with the parts nearest the ossicles sensitive to high tones and the parts farthest from the ossicles sensitive to low tones, but there were still many unanswered questions about how the cochlea functions. It took an elegant series of experiments by Hungarian physicist Georg von Békésy to shed light on what was going on in the cochlea.
Tiny, opaque, spiral-shaped, and embedded in the temporal bone--the hardest bone in the body--the cochlea is very difficult to study. Beginning his experiments in 1928, von Békésy built enlarged models of the cochlea. He used straight glass tubes--in effect unrolling the cochlea's spiral and making it transparent. Down the middle of a tube he attached a rubber membrane to simulate the basilar membrane, a flexible membrane that separates the cochlea into two segments. He filled the tube with water and introduced sound vibrations through one end, making the fluid within vibrate much as the ossicles of the middle ear make the fluid in the cochlea vibrate.
He noticed that the introduction of each sound sent a wave down the model's basilar membrane. He called this the "traveling wave." Although the traveling wave for any given tone deformed the entire simulated basilar membrane, von Békésy observed that the cochlea is arranged tonotopically--high tones produced the largest deformation at the near end, and low tones produced the largest deformation at the far end. Using techniques two decades ahead of his time, von Békésy confirmed his model by observing the same deformations in the basilar membranes of cochleas he dissected from cadavers. He observed that when the basilar membrane was deformed, the tiny stereocilia on top of the hair cells bent against another membrane called the tectorial membrane. The point at which the basilar membrane was deformed the most was the point at which the stereocilia were bent the most. This, he concluded, is how different tones are "heard" at different points along the organ of Corti. For his seminal work in the biophysics of hearing, von Békésy was awarded the Nobel Prize in Physiology or Medicine in 1961.
WHEN HEARING IS LOST
Meanwhile, generations of physicians had been studying deafness. It became clear that there are two predominant types of hearing impairments. The first, known as conductive hearing loss, is the result of damage to the apparatus that transmits sound energy to the cochlea. The eardrum can be harmed or the ossicles can become encrusted with bony tissue that impedes their movement. As a consequence, all sounds become fainter, as though someone has turned down the ear's volume control. This kind of hearing loss is typically treated with relatively simple hearing aids, which increase the volume. Surgery can be performed to clear the ossicles of obstructions.
The other type of hearing impairment, called sensorineural hearing loss, is most often caused by the destruction of hair cells within the organ of Corti (see sidebar on the causes of hearing loss). Less often it is the result of destruction of the auditory nerve--by a tumor, for example.
COCHLEAR IMPLANT TECHNOLOGY DEVELOPS
Beginning in the late 1950s, researchers began wondering whether it might be possible to replace the electric signals from the missing hair cells in people with sensorineural hearing loss, especially the majority of those people who had intact auditory nerves. The researchers' effort to create a cochlear implant faced a great deal of skepticism and daunting technical obstacles. But they were fortunate to be starting at a time when a good deal was known about the electric signals produced by the organ of Corti and sent down the auditory nerve. From the work of Eberhardt Zwicker, Stanley Smith Stevens, and Gordon Flottorp, which culminated in 1957 at Harvard University, researchers knew that the auditory system organizes sounds into 24 channels. From animal experiments by Hallowell Davis and Robert Galambos, also at Harvard, it was known that the organ of Corti and the auditory nerve were at the base of this organization and that fibers in one part of the nerve carry information about low tones, the fibers in the next part carry information about slightly higher tones, and so on, in a predictable fashion.
The early experimental implants, however, did not exploit the cochlea's tonotopic organization. Several different research groups started implanting single-channel electrodes into the cochleas of deaf volunteers. The researchers and the volunteers knew that these crude devices would not provide enough information to encode speech. They thought, based on work that had been done by Glenn Wever and C. W. Bray at Princeton University in the 1930s, that the timing of electrical discharges from the electrodes would allow the volunteers to determine the pitch of a sound. Indeed, the volunteers were able to extract a surprising amount of auditory information from the single channel. Although their speech perception was poor, they could tell, for example, whether a spoken word had one syllable or two, and they had some sensation of the pitch of a sound from the timing of neural spikes; this was enough to serve as a substantial aid to lipreading.
That surprising success emboldened researchers. By the early 1970s several groups were at work on more sophisticated devices with multiple electrodes. But how many electrodes would they need? The auditory nerve contains 30,000 fibers. Would the researchers have to provide 30,000 electrodes to stimulate all the nerve fibers individually in order to simulate intelligible speech sounds? If so, the project would clearly be impractical. But according to Zwicker and his co-workers, 24 channels were sufficient. In addition, Michael Merzenich of the University of California, San Francisco, simplified the system even further after he uncovered research results from an unexpected source.
Bell Laboratories, which was then the research arm of AT&T, was concerned with how much information needed to be sent over telephone lines to re-create intelligible speech sounds. Bell scientist James Flanagan, who is now at Rutgers University, determined that the frequencies of speech could be divided into as few as six or seven channels and still be understood. Michael Merzenich and others reasoned that, if only six or seven channels were needed to transmit speech over telephone lines, the same number of electrodes would likely suffice in a cochlear implant.
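The channel idea can be illustrated with a short sketch: split a sound's spectrum into a handful of frequency bands and ask which band carries the energy. This is a toy calculation, not any manufacturer's speech-processing algorithm; the sample rate, band count, and test tone below are all illustrative choices.

```python
import math

def band_energies(signal, sample_rate, n_bands, f_max):
    """Split a signal's spectrum into n_bands equal-width bands
    covering 0..f_max Hz and return the energy in each band,
    computed with a direct discrete Fourier transform."""
    n = len(signal)
    energies = [0.0] * n_bands
    band_width = f_max / n_bands
    # Only examine DFT bins whose frequency falls below f_max.
    max_bin = int(f_max * n / sample_rate)
    for k in range(1, max_bin):
        freq = k * sample_rate / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        band = min(int(freq / band_width), n_bands - 1)
        energies[band] += re * re + im * im
    return energies

# A 500 Hz tone sampled at 8 kHz should land in the second of
# seven bands spanning 0-3500 Hz (each band is 500 Hz wide).
rate = 8000
tone = [math.sin(2 * math.pi * 500 * t / rate) for t in range(512)]
e = band_energies(tone, rate, n_bands=7, f_max=3500)
print(e.index(max(e)))  # → 1 (the 500-1000 Hz band)
```

A speech processor goes on to track the energy envelope in each band over time and map it to a stimulation level on the corresponding electrode; the point here is only that a few coarse bands already say which part of the spectrum is active.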
Would such an implant be safe? Many physicians and researchers thought that a cochlear implant with multiple electrodes would be like putting a telephone pole into the chambers of the inner ear and that it would probably destroy the delicate ganglion cells that transfer signals from hair cells to the brain through the auditory nerve. In a series of animal experiments, however, Merzenich and his colleagues proved that the implant did not harm the ganglion cells. In fact, the cells were reinvigorated by the stimulation.
WHAT DOES THE COCHLEA TELL THE BRAIN?
Another important stream of basic research that has led to refinements in cochlear implants was initiated in 1965 by Nelson Kiang of Harvard University. Kiang examined the impulses traveling down the auditory nerve in response to sound and learned a great deal about how sound information is encoded in the nerve and in the brain. He discovered, for example, that a nerve fiber produces more impulses as the frequency of a sound increases, although the impulses occur in a random pattern. A single nerve fiber can produce impulses at most 200 to 300 times a second. Yet speech involves sounds at frequencies of up to 4,000 hertz (cycles per second), and humans can hear frequencies of up to 20,000 hertz. Taken together, the randomness of nerve impulses and their limited maximum rate mean that an entire population of nerve fibers, all responsive to sound in the same frequency range, is required to fully encode a single frequency of sound. Donald Eddington of the University of Utah (now at MIT) and Merzenich and his team have attempted to simulate these distributed response patterns directly in their cochlear implant designs.
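The arithmetic behind this population coding can be sketched in a few lines: a single fiber firing at most a few hundred times a second marks only a fraction of the cycles of a 4,000-hertz tone, but a population of fibers, each firing on a random subset of cycles, marks nearly all of them. The firing probability below is an illustrative assumption derived from the rates quoted above, not a measured value.

```python
import random

random.seed(0)

TONE_HZ = 4000        # cycles per second in the stimulus
MAX_FIBER_RATE = 300  # roughly the most impulses one fiber can produce per second
P_FIRE = MAX_FIBER_RATE / TONE_HZ  # chance a fiber fires on any given cycle

def cycles_covered(n_fibers, n_cycles=TONE_HZ):
    """Simulate one second of a 4 kHz tone: each fiber fires on a random
    subset of cycles (on average no faster than its maximum rate); count
    how many cycles are marked by at least one spike in the population."""
    covered = 0
    for cycle in range(n_cycles):
        if any(random.random() < P_FIRE for _ in range(n_fibers)):
            covered += 1
    return covered

for n in (1, 10, 100):
    print(n, "fibers cover", cycles_covered(n), "of 4000 cycles")
```

One fiber marks only a few hundred of the 4,000 cycles, while a hundred fibers together mark essentially all of them, which is why a whole population is needed to represent a single frequency.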
Beginning in the mid-1970s, Murray Sachs and Eric Young, of Johns Hopkins University, studied the responses of the auditory nerve to complex stimuli, such as speech. They determined that the brain is not merely analyzing the various frequencies but making sophisticated use of the temporal patterns of nerve impulses. This sophisticated processing probably underlies our ability to pick out a single conversation in a noisy room and to localize sounds in three dimensions.
These insights have yet to be incorporated into the design of cochlear implants, but a separate line of research has already improved them. Blake Wilson, of the Research Triangle Institute in North Carolina, noticed that, because the cochlea is filled with a conductive fluid, the stimulation at one electrode in a cochlear implant spreads to nerve fibers far from its intended target. This cross-talk, as it is called, tends to make a sound muddy and difficult to interpret. He reasoned that the problem might be reduced if the electrodes in a cochlear implant were stimulated sequentially instead of simultaneously. When this scheme, known as interleaving, was introduced into the external speech processors that are part of every cochlear implant, implant wearers reported greatly increased satisfaction with the devices. (Modern implants contain up to 22 electrodes; two of the 24 channels observed by Zwicker and colleagues in 1957 are considered unimportant in speech perception.)
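The interleaving idea itself is simple to express: in any time slot exactly one electrode delivers a pulse, so the current fields of neighboring electrodes never overlap in time. The sketch below shows such a round-robin schedule; the electrode and slot counts are illustrative, not those of any commercial device.

```python
def interleaved_schedule(n_electrodes, n_slots):
    """Round-robin pulse schedule: exactly one electrode is active in any
    time slot, so fields from different electrodes never sum in the fluid."""
    return [slot % n_electrodes for slot in range(n_slots)]

schedule = interleaved_schedule(n_electrodes=6, n_slots=12)
print(schedule)  # → [0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5]

# No two consecutive slots drive the same electrode, yet every
# electrode is still stimulated equally often.
assert all(schedule[i] != schedule[i + 1] for i in range(len(schedule) - 1))
assert all(schedule.count(e) == 2 for e in range(6))
```

A real processor would set each pulse's amplitude from the envelope of the corresponding frequency band, but the scheduling principle is just this: one electrode at a time, cycling fast enough that the ear perceives all channels as simultaneous.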
WHAT IF THE AUDITORY NERVE IS DESTROYED?
People whose auditory nerves have been destroyed, most often by tumors, cannot benefit from cochlear implants--they have no nerve fibers to stimulate. The only alternative is an implant that stimulates the brain directly. Researchers are exploring two alternatives: implanting electrodes into the cochlear nucleus, the part of the brainstem that normally receives input from the auditory nerve; and implanting electrodes into the auditory cortex, a much more complex area of the brain. Work on brainstem and cortical implants is in its infancy, but some researchers have reported encouraging results with volunteers, although the quality of the sound is inferior to that produced in people who are able to use cochlear implants.
HOW HAIR CELLS WORK
Currently, the function of hair cells can be replaced by using cochlear implants. However, research into the function of hair cells may someday allow them to be repaired. It has been suspected since 1851 that hair cells are responsible for translating sound into electric signals that nerves can convey to the brain. But only in the past 30 years have researchers determined how hair cells accomplish this remarkable feat. The latest and most far-reaching research on the physiology of hair cells was performed by A. James Hudspeth, now at Rockefeller University, and his colleagues, who initially studied the hearing system of frogs, which have hair cells very similar to those found in the mammalian cochlea. Beginning in 1977, in an exquisitely detailed series of experiments, Hudspeth was able to isolate individual hair cells and penetrate them with minuscule glass electrodes. Hudspeth and his colleagues used the electrodes to record the electric activity within the hair cells as they gently pushed against the stereocilia with a small, precisely controlled probe. They discovered that it does not take much of a push on the stereocilia to get a hair cell to respond. All it takes is a movement of just 100 picometers, 100 trillionths of a meter--a distance smaller than the diameter of some atoms.
Hair cells, like nerve cells and other excitable cells, are tiny batteries, with an excess of negatively charged ions inside and an excess of positively charged ions outside. Moving the stereocilia causes tiny pores on the stereocilia to open, allowing positive ions to rush into the cell, which causes "depolarization." Through a series of biochemical steps, this depolarization causes the hair cell to release neurotransmitter molecules--chemicals that transmit the electric signal from one nerve to another--that drift across a small space to receptors on nerve cells. Contact with the receptors depolarizes nerve fibers and starts an electric signal moving down the auditory nerve toward the brain.
THE INNER EAR PRODUCES SOUND
Experiments in Hudspeth's laboratory and elsewhere have revealed the workings of hair cells in exquisite detail, but the inner ear apparently still holds some surprises. One of the strangest discoveries made about hearing in recent years is the phenomenon of "otoacoustic emissions." In 1977 David Kemp, of the Institute of Otology and Laryngology in London, discovered that the cochlea not only receives sounds but actually produces them as well. Most of us are familiar with the phenomenon of tinnitus, ringing in the ears. It turns out that in many cases tinnitus is not entirely a subjective phenomenon; sometimes when your ears ring, they really ring! Sensitive microphones placed in the ears confirm that something within the cochlea is emitting sounds, acting like a tiny loudspeaker.
Otoacoustic emissions are not merely laboratory curiosities. They have proved important both in the clinic and in basic research. In the hearing clinic, audiologists are finding that devices for measuring otoacoustic emissions are valuable in hearing tests, especially for infants, young children, and other people who are nonverbal. Infants, for example, cannot cooperate in normal hearing tests because they cannot say whether they are hearing anything. All people with normal hearing produce stimulated otoacoustic emissions, so the presence of otoacoustic emissions usually means that an infant can hear, and their absence suggests a hearing disorder.
In the laboratory, scientists are trying to determine what causes otoacoustic emissions and whether they have a role in normal hearing. There are two types of hair cells: inner and outer. Current evidence points to vibrations of the outer hair cells as the source of the otoacoustic emissions. For many years the outer hair cells were a source of mystery. Although there are three times as many outer hair cells as inner hair cells, Heinrich Spoendlin observed in 1966 that over 90 percent of the auditory nerve fibers connect with the inner hair cells. In 1985 William E. Brownell, now at Baylor College of Medicine, discovered that outer hair cells vibrate when exposed to an alternating electric field. This observation, combined with a previous observation that outer hair cells produce an electric field when stimulated by sound, led scientists to realize that the outer hair cells both generate and are stimulated by their own electric field. This positive feedback system probably causes the otoacoustic emissions and may make the inner hair cells more sensitive and better able to detect fine differences in sound frequency. If you can tell the difference between a B and a B flat, you have otoacoustic emissions to thank.
Another promising avenue of research focuses on how hair cells are damaged by loud sounds, infections, and some drugs (see sidebar on the five main causes of hearing loss). The more that is known about what damages hair cells, the more can be done to preserve them.
There are even intriguing indications that scientists might learn how to regenerate hair cells in people who have lost them. In mammals and birds, hair cells normally are produced only during embryonic development; and once lost, they are never replaced. But fish and amphibians produce new hair cells throughout their lives; and in 1987 D. A. Cotanche, of the Medical University of South Carolina, and R. M. Cruz, of the University of Virginia School of Medicine, separately discovered that supporting cells in the chicken equivalent of the organ of Corti can replace damaged hair cells even in young chickens.
Scientists around the world are working to determine whether hair cells can be made to regenerate in mammals, especially humans, and, if so, the best way to do it. Although there are as yet no definitive answers, some researchers believe there is a rational basis for hope. If the hope is realized, hundreds of years of basic research on hearing will one day culminate in a true cure for hearing impairment.
SIDEBAR - COCHLEAR IMPLANTS AND DEAF CULTURE
When George Garcia received his cochlear implant (see main story), he was immediately ostracized by his deaf friends, who seemed to regard his decision to have the implant surgery as a kind of personal repudiation. Their reaction was not unique. There is strong opposition to cochlear implants in the deaf community, a fact that many people in the hearing world find surprising.
Many members of the deaf community are content with their unique culture and do not regard deafness as a disorder to be cured. Within the deaf community, particular scorn is reserved for the practice of placing cochlear implants in young children. The National Association of the Deaf, for example, maintains that there is no evidence that deaf children who receive implants early are better able to acquire English or have greater educational success than other deaf children.
But a consensus panel appointed by the National Institutes of Health reached somewhat different conclusions (JAMA 274:1955, 1995). The panel acknowledged that there is far more evidence of the value of cochlear implants in children or adults who were deafened after learning language than in those who were deafened before learning language. Nevertheless, the panel suggested that consideration be given to placing implants in children under age 2, since by that age children have already passed a critical period for auditory input in language acquisition.
The two perspectives may yet be reconciled. Some headway is being made by those, both hearing and deaf, who recognize the value of bilingualism. Deaf people--even those who have excellent results with implants--can continue to be fluent in sign language and remain part of the distinct and rich deaf culture, while at the same time participating more fully in the larger hearing culture.
SIDEBAR - THE FIVE MAIN CAUSES OF HEARING LOSS
1. Heredity. At least 100 hereditary syndromes can result in hearing loss.
2. Infections, such as bacterial meningitis and rubella (German measles).
3. Acoustic trauma produced by acute or chronic exposure to loud sounds.
4. Prescription drugs, such as streptomycin and tobramycin, and chemotherapeutic agents, such as cisplatin.
This article was written by science writer Robert Finn, with the assistance of Drs. A. James Hudspeth, Jozef Zwislocki, Eric Young, and Michael Merzenich, for Beyond Discovery™: The Path from Research to Human Benefit, a project of the National Academy of Sciences. The Academy, located in Washington, D.C., is a society of distinguished scholars engaged in scientific and engineering research, dedicated to the use of science and technology for the public welfare. For more than a century it has provided independent, objective scientific advice to the nation. The project's web site is accessible at www.beyonddiscovery.org, from which the full text of all articles in this series can be obtained.
Funding for this article was provided by the Markey Charitable Trust; Pfizer Foundation, Inc.; and the National Academy of Sciences.
Copyright 2009 by the National Academy of Sciences. All rights reserved.