
input (see Chapter 25). The “motor” for this active mechanism is associated with the outer hair cells, which vary in length according to the surrounding electrical potentials. The rapid oscillations in outer hair cell length “pump” energy back into the vibrating basilar membrane by increasing the amplitude of vibration in localized regions. An epiphenomenon of this active feedback mechanism is that the vibrations introduced into the basilar membrane motion by the outer hair cells can be transmitted back out through the middle ear system, in a reversal of the usual direction of sound flow. The tympanic membrane, which is usually thought of as analogous to the diaphragm of a microphone, now acts analogously to the cone of a loudspeaker. A sensitive microphone sealed into the external auditory meatus can detect the low-level sounds generated by the outer hair cells and transmitted through the resulting vibrations of the tympanic membrane. These low-level sounds are known as otoacoustic emissions (OAEs).
Two important points deserve note. First, the active mechanism of the inner ear is associated with normal cochlear function, and therefore any impairment of cochlear function usually results in a failure of the active mechanism. Thus OAEs are a signature of normal cochlear function and generally are not present in cochlear-impaired ears. Second, the detection of OAEs in the external auditory meatus requires not only that the inner ear is functioning normally but also that the transmission pathway back through the middle ear is functioning adequately. Thus middle ear dysfunction is likely to reduce or obliterate successful recording of OAEs. Taken together, these two points indicate that successful recording of OAEs generally points to both normal cochlear function and normal middle ear function, whereas absence of OAEs could be due either to cochlear impairment or to middle ear dysfunction. Additional differential testing is required in the latter case.
Types of OAEs
Although the mechanism of OAE generation is probably singular, different categorizations of OAEs have emerged based on the stimulus characteristics required to elicit them. The two most common OAE categories encountered in clinical practice are transient evoked otoacoustic emissions (TEOAEs) and distortion product otoacoustic emissions (DPOAEs).
As the name implies, TEOAEs are evoked with a transient acoustic stimulus, such as a click. An example of a normal TEOAE recording from a 1-day-old infant is shown in the upper half of Fig. 30-4. (For comparison,
a recording from a newborn with an absent TEOAE is shown in the lower half.) In each half of the figure, panel A shows the time waveform of the click in the ear canal, and panel B (top line) shows the energy distribution of the click in the frequency domain. Following the click presentation, the OAE emerges after a short delay. In this context, the TEOAEs are sometimes referred to as “echoes” (or “Kemp echoes,” after the physicist who discovered them). Panel C shows the time waveform of the TEOAE. The first few milliseconds of the trace are removed to avoid contamination by the stimulus artifact. Although not evident in the truncated response shown in Fig. 30-4, the higher frequencies of the emission emerge first (they have shorter latencies), and the lower frequencies emerge later. This reflects the travel time along the basilar membrane from the basal high-frequency regions to the apical low-frequency regions. Panel D shows the normalized spectrum of the response. Because of the transfer function of the middle ear, the response is usually dominated by the midfrequencies. The signal-to-noise ratio in different frequency bands is summarized in panel E, which allows decisions to be made concerning the presence or absence of a response.
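The band-by-band decision illustrated in panel E can be thought of as a simple signal-to-noise comparison. The short Python sketch below is only illustrative; the band labels, the example levels, and the 6 dB criterion are assumptions chosen for demonstration rather than values taken from the figure.

```python
# Hypothetical sketch of the SNR-based decision in panel E: compare TEOAE
# energy to the noise floor in a few frequency bands. Band labels and the
# 6 dB criterion are illustrative assumptions, not values from the text.
def band_snr_decision(teoae_db, noise_db, criterion_db=6.0):
    """Return a present/absent call for each frequency band.

    teoae_db and noise_db map band labels (e.g., '1-2 kHz') to levels in dB SPL.
    """
    results = {}
    for band, signal_level in teoae_db.items():
        snr = signal_level - noise_db[band]
        results[band] = {"snr_db": round(snr, 1), "response_present": snr >= criterion_db}
    return results

# Example: emission energy clears the noise floor in the lower bands only.
teoae = {"1-2 kHz": 8.0, "2-3 kHz": 10.5, "3-4 kHz": 4.0}
noise = {"1-2 kHz": 1.0, "2-3 kHz": 0.5, "3-4 kHz": 2.5}
print(band_snr_decision(teoae, noise))
```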
The other common OAE category encountered in clinical practice is DPOAEs. To evoke these, two pure tones are presented to the ear, with relative levels and a frequency relation chosen to generate an intermodulation distortion product. Because of the inherent nonlinearity of normal cochlear function, intermodulation between two primary tones (termed F1 and F2) can generate distortion products that are physically present within the cochlea. The predominant distortion product has the frequency 2F1-F2, and reversed transmission of this tonal distortion product renders it recordable in the ear canal as a DPOAE. Because of the fixed frequency relation between the 2F1-F2 distortion product and the eliciting pair of primary tones, the frequency of the DPOAE can be varied by shifting the frequencies of the primary tones. A systematic probing of different frequency regions using this technique allows for the generation of a distortion product-gram (DPgram), a plot of DPOAE level as a function of primary tone frequency. An example of a DPgram from an adult with high-frequency hearing loss is shown in Fig. 30-5. The connected open circles refer to the levels of the DPOAEs, and the lower hatched area denotes the noise floor levels. Most forms of the DPgram depict both the DPOAE levels and the noise floor because the decision as to whether a valid DPOAE is present includes a consideration of the relative signal-to-noise ratio.
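Because the text fixes only the 2F1-F2 relation, the following sketch shows how a DPgram sweep might be laid out in practice. The F2/F1 ratio of approximately 1.22 and the half-octave F2 spacing are common clinical conventions assumed here for illustration; they are not specified above.

```python
# Illustrative sketch of how a DPgram sweep is laid out. The F2/F1 ratio of
# about 1.22 is a common clinical default assumed here for illustration.
def dpgram_frequencies(f2_list_hz, ratio=1.22):
    """For each F2, return (F1, F2, 2*F1 - F2), the cubic distortion product."""
    points = []
    for f2 in f2_list_hz:
        f1 = f2 / ratio
        dp = 2 * f1 - f2          # frequency of the 2F1-F2 emission
        points.append((round(f1), f2, round(dp)))
    return points

# Probe half-octave F2 frequencies from 1 to 8 kHz.
f2_sweep = [1000, 1414, 2000, 2828, 4000, 5657, 8000]
for f1, f2, dp in dpgram_frequencies(f2_sweep):
    print(f"F1={f1} Hz, F2={f2} Hz -> 2F1-F2={dp} Hz")
```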

Figure 30-4 Sample recordings from two newborn infants, one with present transiently evoked otoacoustic emissions (TEOAEs; upper half of figure) and the other with absent TEOAEs (lower half of figure). (A) Time waveform of click stimulus. (B) Spectrum of click stimulus (upper solid line), TEOAE (lower solid line), and noise floor (shaded area). (C) Time waveform of TEOAE. (D) Normalized spectrum of TEOAE. (E) Signal-to-noise ratios and response reproducibility within defined frequency bands of the response.

Figure 30-5 Distortion product-gram from an ear with a high-frequency hearing loss (>1.5 kHz). The levels of the distortion product otoacoustic emissions (open circles) are plotted as a function of the frequency of the upper primary tone (F2). Lower hatched area indicates noise floor.
Clinical Applications of OAEs
The main clinical application of OAEs is as a screening tool for identifying sensory dysfunction. Because OAEs are associated with normal cochlear function and are present even at birth, ears of any age with cochlear hearing losses greater than 30 dB HL do not typically exhibit OAEs. Therefore, failure to elicit an OAE after ruling out middle ear dysfunction suggests the presence of a sensory loss. Using OAEs as a screening tool is popular, particularly for neonatal hearing screening programs, because the test is quick and noninvasive. Many clinics are moving toward routinely using OAE testing in protocols for monitoring cochlear health. Most agents and conditions that are detrimental to hearing (e.g., noise exposure, ototoxic medications, and aging) have a primary effect at the level of the cochlea, so it is appropriate to monitor for their effects using OAE testing. The second important function served by OAE testing is confirmation of audiometric configuration. Audiograms that show regions of both normal and impaired hearing (e.g., high-frequency hearing losses) should be mirrored by the DPgram or spectral content of the TEOAE (see Figs. 30-4 and 30-5). That is, OAE energy should be evident in regions where the audiogram indicates normal hearing and should be absent in regions where the audiogram shows hearing loss in excess of 30 dB HL. Thus OAE testing provides
useful confirmation of audiometric configuration in difficult-to-test patients, patients who cannot provide voluntary responses (e.g., very young or sedated patients), and patients whose voluntary responses are suspect due to compromised developmental function. It must be emphasized that once hearing losses are mild to moderate, OAE testing cannot provide an indication of severity of cochlear loss; a hearing loss could be moderate or profound, yet the OAE test will simply indicate an absence of response.
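The configuration cross-check described above can be expressed as a simple rule: OAE energy is expected wherever audiometric thresholds are no worse than roughly 30 dB HL (the figure used in the text) and absent elsewhere. The sketch below is a minimal illustration using invented data; the helper names are hypothetical.

```python
# Minimal sketch of the audiogram/OAE cross-check. The 30 dB HL cutoff comes
# from the text; the audiometric and OAE data here are invented.
def expected_oae_pattern(audiogram_db_hl, cutoff_db_hl=30):
    """Map each audiometric frequency to whether an OAE is expected."""
    return {f: thr <= cutoff_db_hl for f, thr in audiogram_db_hl.items()}

def configuration_agrees(audiogram_db_hl, oae_present):
    """True if measured OAE presence mirrors the audiometric configuration."""
    expected = expected_oae_pattern(audiogram_db_hl)
    return all(expected[f] == oae_present[f] for f in expected)

# High-frequency loss: OAEs present below 2 kHz, absent above, as expected.
audiogram = {500: 10, 1000: 15, 2000: 40, 4000: 55}
oaes = {500: True, 1000: True, 2000: False, 4000: False}
print(configuration_agrees(audiogram, oaes))   # True
```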
Advantages and Disadvantages of OAEs
In addition to being an objective measure of cochlear function, an advantage of OAEs is that their measurement is quick, noninvasive, and not generally subject to the state of the patient in terms of sleep, sedation, or alertness. They also provide a wide-band test of peripheral auditory function, unlike some evoked potential measures (see Evoked Potential Audiometry).
A main disadvantage of OAEs is that they do not provide a robust indication of degree of hearing loss. That is, once hearing losses exceed the mild range, OAEs cannot provide an indication of severity of cochlear loss. Because OAEs reflect outer hair cell function, it is possible to measure robust OAEs in the presence of a hearing loss that has a neural basis. This general condition is known as auditory neuropathy. Another disadvantage of OAEs is that their recording is dependent on middle ear status. Thus failure to record an OAE requires further differential testing, including an acoustic immittance battery, for interpretation. Finally, the measurement of OAEs is highly subject to ambient noise levels, generated by the patient (e.g., respiratory noise) or by external conditions (e.g., ventilators).
EVOKED POTENTIAL AUDIOMETRY
Basis of Evoked Potentials
The process of transducing the vibrations of sound into discrete neural impulses in the cochlea involves electrochemical mechanisms. Thereafter, the flow of auditory information toward the cortex consists of neural code, and this neural activity, like all neural activity, is essentially electrical in nature. The electrical potentials generated by cochlear and neural structures are volume-conducted to the surface of the skull/scalp and, under certain conditions, can be detected using contact electrodes. The electrodes themselves cannot distinguish between electrical activity associated specifically with audition and electrical activity associated with other neural or myogenic systems (although the electrodes can be positioned on the skull to optimize the recording of activity from generators more specific to the auditory pathway). However, the minute electrical activity associated specifically with audition can be extracted from the ubiquitous neural “noise” by a variety of signal processing techniques, including synchronizing an averaging process to the auditory stimulus. That is, if the surface-recorded (far field) electrical activity is sampled only at the precise moment that an auditory stimulus is delivered, and if this sampling is repeated many times and summed, then only the stimulus-related activity will add together, whereas the remaining activity (which is random with respect to the auditory stimulus) will tend to cancel out. Evoked potential audiometry is accomplished using a signal-averaging computer to extract sound-related sensory and neural activity from the milieu of ongoing neural and myogenic activity.
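The effect of stimulus-synchronized averaging can be demonstrated with a toy simulation. The sketch below, assuming NumPy is available, buries a small stimulus-locked waveform in much larger random noise and shows that the residual noise in the average falls roughly with the square root of the number of sweeps; all waveform parameters are arbitrary.

```python
# Toy demonstration of stimulus-synchronized averaging: a small "evoked
# response" buried in much larger random noise emerges after summing many
# stimulus-locked epochs, because the noise is random with respect to the
# stimulus and tends to cancel.
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_samples = 2000, 256
t = np.arange(n_samples)

evoked = 0.5 * np.exp(-(t - 60) ** 2 / 200.0)                        # tiny stimulus-locked wave
epochs = evoked + rng.normal(0.0, 5.0, size=(n_epochs, n_samples))   # noisy single sweeps

average = epochs.mean(axis=0)                                        # synchronized average

noise_before = (epochs[0] - evoked).std()
noise_after = (average - evoked).std()
print(f"noise std: single sweep {noise_before:.2f}, after averaging {noise_after:.2f}")
print(f"reduction factor ~ {noise_before / noise_after:.1f} (sqrt(2000) ~ 44.7)")
```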
Types of Evoked Potentials
The evoked potentials associated with audition can be subcategorized either on the basis of their temporal relation to the stimulus or on the basis of the nature of the stimulus that elicits them. The time of occurrence of an evoked potential relative to the onset of the stimulus is known as its latency.
The evoked potentials associated with cochlear activity are the cochlear microphonic (CM), the summating potential (SP), and the whole-nerve, or compound, action potential (WNAP/CAP). The CM is an alternating current (AC) potential that represents summed outer hair cell responses to sound; the SP is a direct current (DC) potential that arises because of the nonlinearity of cochlear function; and the WNAP represents the summed, synchronous “firing” of primary auditory neurons in response to the onset of stimulation. Potentials like the WNAP that depend on synchronous neural firings for their detection are best elicited with transient stimuli such as clicks and tone bursts. The clinical measurement of gross cochlear potentials is known as electrocochleography (ECochG). The recording requires an electrode in the vicinity of the cochlea, placed either extratympanically or transtympanically.
The synchronous firing of primary auditory neurons also can be recorded as an evoked potential using electrodes on the outer surface of the scalp. If the recording window is extended out to 10 to 20 msec poststimulus, then evoked potentials associated with other auditory nuclei in the brainstem can also be recorded. Because these short-latency evoked potentials reflect activity through the auditory periphery and the brainstem, they are known collectively as the auditory brainstem response (ABR). The ABR is a sequence of several vertex-positive potentials, or waves, of which the first five (labeled with roman numerals I–V) are the most commonly examined. Responses from a 5-year-old child with one normal ear and one nonfunctional ear are shown in Fig. 30-6.
If the poststimulus recording window is extended still further, evoked potentials associated with cortical activity can be recorded. These include the middle latency responses (MLRs), with latencies out to 80 msec, and the late potentials, with latencies out to almost half a second. A variety of specific responses falls under the latter category, such as the P300 and mismatch negativity (MMN) responses. Other types of evoked potentials are measured using stimulation at special rates (e.g., the 40 Hz response, which enhances evoked potential recording) or in response to continuous tones (the frequency following response). This discussion will confine itself to ECochG and the ABR.

Figure 30-6 Auditory brainstem response recordings from a 5-year-old child with one normal ear and one unresponsive ear.
Clinical Applications of Evoked Potentials
Some clinics make use of ECochG recordings to aid in the diagnosis of diseases associated with endolymphatic hydrops, such as Meniere’s disease. In these conditions, there is evidence that the ratio of the amplitude of the SP to the amplitude of the WNAP is enhanced. The procedure also may be used as a cross-check in the evaluation of infants and young children suspected of CN VIII dysfunction or auditory neuropathy.
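The SP/AP comparison mentioned above reduces to an amplitude ratio. In the sketch below, the 0.4 cutoff is an assumed, commonly cited clinical criterion used only for illustration; individual laboratories establish their own norms, and the amplitudes shown are invented.

```python
# Hedged sketch of the ECochG summating potential to action potential (SP/AP)
# amplitude ratio. The 0.4 cutoff is an assumed illustrative criterion.
def sp_ap_ratio(sp_amplitude_uv, ap_amplitude_uv):
    return sp_amplitude_uv / ap_amplitude_uv

def suggests_hydrops(sp_uv, ap_uv, cutoff=0.4):
    """True when the SP/AP ratio is enlarged beyond the assumed cutoff."""
    return sp_ap_ratio(sp_uv, ap_uv) > cutoff

print(suggests_hydrops(sp_uv=0.25, ap_uv=1.0))   # False: ratio 0.25
print(suggests_hydrops(sp_uv=0.55, ap_uv=1.0))   # True: ratio 0.55, enlarged
```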
The ABR has two major clinical applications: threshold estimation and differential diagnosis of retrocochlear dysfunction. For threshold estimation, the component of interest is wave V. As the intensity of the stimulus is lowered, the amplitude of this component is reduced, and its latency increases. However, it remains detectable down to stimulus levels that are within 10 to 15 dB of behavioral threshold, depending on the type of stimulus being used. The most common stimulus used in the ABR is a click. This elicits highly synchronous neural responses, but the response is dominated by the midfrequency region. Tone burst stimuli can provide more frequency-specific threshold information, but there is a trade-off between the frequency specificity of the stimulus and its ability to evoke synchronous neural firing (and therefore a measurable response). In particular, latency-intensity response functions to low-frequency tone bursts typically are less clearly defined. Stimuli also can be delivered through a bone conductor, and a comparison between the air-conducted response and the bone-conducted response can be helpful in determining the type of hearing loss. Threshold measures using the ABR allow an estimation of audiometric sensitivity to be formed that does not rely on the active cooperation of the patient.
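A minimal sketch of the threshold-estimation logic follows: the ABR threshold is taken as the lowest stimulus level at which wave V is still identified, and the 10 to 15 dB relation quoted above is applied as a correction toward behavioral threshold. The tracking data and function names are hypothetical.

```python
# Sketch of wave V threshold tracking. The 10-15 dB correction reflects the
# text's statement that wave V remains detectable to within 10-15 dB of
# behavioral threshold; the detection data here are invented.
def abr_threshold(wave_v_present_by_level):
    """Lowest stimulus level (dB nHL) at which wave V was still identified."""
    detected = [lvl for lvl, present in wave_v_present_by_level.items() if present]
    return min(detected) if detected else None

def estimated_behavioral_threshold(abr_thr_db, correction_db=10):
    """Apply an assumed correction to approximate behavioral threshold."""
    return None if abr_thr_db is None else abr_thr_db - correction_db

tracking = {80: True, 60: True, 40: True, 30: True, 20: False}
thr = abr_threshold(tracking)
print(thr, estimated_behavioral_threshold(thr))   # 30, ~20 dB
```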
The ABR traditionally has played a role in site of lesion testing. A retrocochlear lesion, such as an acoustic neuroma, can affect the morphology of the response waveform. Because the latencies of the ABR peaks for a given stimulus and intensity are highly repeatable, and similar across individuals within an age group, departure from normative values can contribute to a diagnosis of retrocochlear dysfunction. The increasingly widespread use of imaging techniques, such as magnetic resonance imaging (MRI), has impacted the role of the ABR in site of lesion testing. However, it is worthwhile remembering that, whereas MRI assesses structure, the ABR is inherently a test of function. Therefore, its routine use in the comprehensive workup beyond MRI is justified.
Advantages and Disadvantages of Evoked Potentials
As with OAEs, the ABR is not sensitive to the state of the patient in terms of sleep or sedation. Measurable ABRs can be obtained even from premature infants as young as 30 weeks’ gestation. Thus the ABR can be used to objectively assess auditory function in a wide variety of patient groups.
A disadvantage of the ABR is that it is inherently a test of synchronous neural response and therefore not an actual test of hearing. Its recording requires a quiet and still patient, and, for young children, this may require the use of conscious sedation. This often compounds the logistics and cost of administering the test because of patient-monitoring requirements. Whereas the clearest response is usually measured using the ubiquitous click stimulus, the response to this stimulus reflects only a portion of the cochlea. Finally, the ABR can be subject to electrical interference from other external sources.
SUMMARY
This chapter has provided an overview of the test battery commonly used to evaluate hearing sensitivity and auditory function. The information generated by these measurement tools is substantial, but like all assessment procedures, they must be applied appropriately and interpreted carefully. Moreover, obtaining these measurements is often only the first step. Deciding on an optimal course of action such as hearing aid selection or cochlear implantation and following through with appropriate rehabilitation strategies are all necessary components of the complete audiological practice. Optimal patient management requires mutual understanding and effective collaboration between the audiologist, the otolaryngologist, and related health care professionals.
SUGGESTED READINGS
Hall JW III. Handbook of Auditory Evoked Responses. Boston: Allyn & Bacon; 1992
Hall JW III. Handbook of Otoacoustic Emissions. San Diego: Singular Publishing Group; 2000
Hood L. Clinical Applications of the Auditory Brainstem Response. San Diego: Singular Publishing Group; 1998
Katz J, ed. Handbook of Clinical Audiology. Philadelphia: Lippincott Williams & Wilkins; 2002
Musiek FE, Rintelmann WF, eds. Contemporary Perspectives in Hearing Assessment. Boston: Allyn & Bacon; 1999
Robinette MS, Glattke TJ, eds. Otoacoustic Emissions: Clinical Applications. New York: Thieme Medical Publishers; 2002

SELF-TEST QUESTIONS
For each question select the correct answer from the lettered alternatives that follow. To check your answers, see Answers to Self-Tests on page 716.
1. Behavioral assessment of hearing in an infant should not be attempted until the child has reached a developmental age of approximately
A. 1 month
B. 6 months
C. 12 months
D. 2 years
2. The appropriate behavioral assessment procedure for a typically developing 12-month-old infant is
A. Visual reinforcement audiometry
B. Play audiometry
C. Immittance audiometry
D. Otoacoustic emissions
3. The appropriate behavioral assessment procedure for a typically developing 3-year-old child is
A. Visual reinforcement audiometry
B. Play audiometry
C. Immittance audiometry
D. Otoacoustic emissions
4. Middle ear disease, in addition to creating a conductive hearing loss, may also reveal elevated pure-tone bone conduction thresholds at or around ____ Hz, due to loss of the normal middle ear participation in the bone conduction response.
A. 250 Hz
B. 750 Hz
C. 2000 Hz
D. 6000 Hz
5. The bone conduction shift described in question 4 is referred to as
A. Mixed hearing loss
B. Interaural attenuation
C. Kemp’s notch
D. Carhart’s notch
6. A flat tympanogram would not be caused by
A. A patent tympanostomy tube
B. Middle ear effusion
C. Occlusion of the probe tip
D. Endolymphatic hydrops
7. Otoacoustic emissions have the advantage of
A. Providing an accurate indication of degree of hearing loss
B. Providing a wide-frequency band test of cochlear function
C. Not being affected by middle ear status
D. Being measurable in noisy environments
8. A disadvantage of auditory brainstem response (ABR) testing is
A. The response is obscured by sedation.
B. The response reflects only synchronous neural activity.
C. The ABR cannot be measured in infants until they are 6 months of age.
D. The test requires the use of transtympanic electrodes.

Chapter 31
HEARING AIDS, BONE-ANCHORED HEARING AIDS, AND COCHLEAR IMPLANTS
ADRIEN A. ESHRAGHI, SUSAN B. WALTZMAN, JOSEPH G. FEGHALI, THOMAS R. VAN DE WATER, AND NOEL L. COHEN
HEARING AIDS
  TECHNOLOGY
  MEDICAL EVALUATION
  CANDIDACY
BONE-ANCHORED HEARING AIDS
  CANDIDACY
  MEDICAL AND AUDIOLOGICAL EVALUATION
  SURGICAL PROCEDURE
  FOLLOW-UP
COCHLEAR IMPLANTS
  TECHNOLOGY
  STRUCTURE OF A COCHLEAR IMPLANT
  SPEECH PROCESSOR AND CODING STRATEGIES
  CANDIDACY
  SURGICAL PROCEDURE
  OUTCOME ASSESSMENT
SUMMARY
SUGGESTED READINGS
SELF-TEST QUESTIONS
HEARING AIDS
TECHNOLOGY
The basic role of a hearing aid is to amplify auditory stimuli. A transducer, the microphone, picks up the incoming auditory signal and converts it from mechanical to electrical energy. The electrical current is then amplified and transmitted to a receiver, which reconverts the electrical current into acoustic stimuli. The devices are battery-powered. There are many other elements to hearing aids that allow both the dispenser and the patient to adjust the device according to specific needs. Patient adjustment options include
volume controls, on/off switches, and telephone and noise suppression switches, to name a few. The dispenser can alter the frequency response characteristics and program other acoustic variables to customize the device to the specific auditory needs of the patient. Currently, the most commonly used hearing aids are either in the ear (ITE), in the canal (ITC), completely in the canal (CIC), or behind the ear (BTE). Although the BTE was for a long time the most widely prescribed model, the other smaller and more cosmetically appealing types are gaining increasing popularity as they become more sophisticated and can provide benefit to a wider population.

A primary determinant of hearing aid selection relates to the circuitry that would best serve the needs of the patient. The simplest circuit, a linear design, amplifies the incoming signal in a predetermined manner regardless of the input level. A simple compression circuit reduces the loudness of selected sounds above a certain predetermined level, which serves to minimize distortion. More developed compression circuits can more aggressively select the incoming signals to be modified. Although hearing aids have traditionally employed analog speech-processing circuits, more recent technology allows for analog processing with digital programming capability; that is, the adjustments are digitally driven, but the speech processing remains an analog function. Fully digital hearing aids that do employ digital signal processing, however, are becoming more accessible. The flexibility of the circuitry permits the device to store several different programs that can be used in different listening conditions. Recently developed multimicrophone, multimemory digital hearing aids deliver improved hearing in suboptimal listening situations.
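The difference between a linear circuit and a simple compression circuit can be illustrated with input-output calculations. In the sketch below, the gain, knee-point, and compression ratio are arbitrary example values, not parameters of any particular device.

```python
# Illustrative comparison of the circuit types described above: a linear
# circuit applies the same gain at every input level, while a simple
# compression circuit reduces gain above a knee-point to limit loud sounds.
def linear_output(input_db_spl, gain_db=30):
    return input_db_spl + gain_db

def compression_output(input_db_spl, gain_db=30, knee_db_spl=65, ratio=3.0):
    """Above the knee-point, output grows at 1/ratio dB per input dB."""
    if input_db_spl <= knee_db_spl:
        return input_db_spl + gain_db
    return knee_db_spl + gain_db + (input_db_spl - knee_db_spl) / ratio

for level in (50, 65, 80, 95):
    print(level, linear_output(level), round(compression_output(level), 1))
```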
MEDICAL EVALUATION
When possible, diseases of the external, middle, and inner ears, as well as diseases of the auditory nerve and central auditory pathways, should be treated appropriately before the patient receives medical clearance for the use of a hearing aid.
Several medical conditions require additional testing to rule out significant disease; for example, acoustic neuroma in patients with significantly asymmetric and unexplained sensorineural hearing loss, active autoimmune inner ear disorders, and Meniere’s disease. However, these conditions are not necessarily absolute contraindications to the use of hearing aids. For example, it is permissible for a patient with a known acoustic neuroma to use a hearing aid if there are no treatments planned for the foreseeable future.
Otorrhea and chronic suppurative otitis media can interfere with a patient’s ability to wear a hearing aid. Such conditions should be treated. Occasionally, these conditions persist, making it impossible for some patients to wear a hearing aid and requiring the use of alternative treatments (e.g., bone-anchored hearing aids or other amplification modalities).
Hearing aids typically are dispensed by otolaryngologists, audiologists, and other hearing aid specialists. Some clinical situations require the direct involvement of an otolaryngologist or an otologist in the fitting process. This direct physician involvement is of particular importance in patients with open mastoid cavities,
tympanic membrane perforations, and severe exostosis. In all these situations, ear mold material can be trapped in undesirable locations, necessitating surgical removal.
CANDIDACY
Who is a candidate for a hearing aid? The simple answer to this question is anybody with a confirmed hearing loss who is experiencing difficulty hearing. Unfortunately, the answer is not quite so straightforward. The best criteria for candidacy remain lifestyle, level of frustration, audiometric confirmation of a hearing loss, and a high level of motivation.
The next question relates to prescribing and fitting the most appropriate hearing aid. Because there are numerous models, a decision has to be made as to which type will provide the highest level of benefit to the patient. Although the major determinant should always be the extent and nature of the hearing loss, the cosmetic concerns of the individual also must be taken into account. Digitally programmable and fully digital hearing aids offer better opportunities to a greater segment of the population, but they are not invariably the only rational choice. An aid with more conventional circuitry may be better able to serve the auditory and financial needs of the patient, because digital technology can be up to 2.5 times more expensive than its analog counterpart.
A related issue is binaural fitting: in the majority of cases, two (bilateral) hearing aids provide better fidelity, speech understanding in noise, and localization ability than does one (unilateral) hearing aid. Exceptions to this rule are individuals with unilateral and/or markedly asymmetric hearing losses. The benefits of two hearing aids should be clearly and thoroughly explained to patients, although some will opt to obtain one aid at first and a second aid to follow. Because many insurance companies do not pay or reimburse for hearing aids, finances as well as psychological barriers may play a significant role in the decision-making process. The dispensing audiologist needs to be cognizant and respectful of the patient’s economic situation and do whatever is possible to alleviate the strain of the circumstance. In summary, in addition to the nature and extent of the hearing loss, social and economic factors constitute an equal part of the equation in the prescription of hearing aids.
As with any prosthetic device, patient and family expectations play a significant role in the ability to adapt to, and make maximum use of, a hearing aid. During the initial evaluation phase, the patient should be advised that sound via a hearing aid is not the same as normal hearing. Nor will the adjustment to the new sound necessarily be either rapid or easy. The sound quality does not mirror that of the normal auditory system, and when speech discrimination is poor, a hearing aid may not address the problem well enough to satisfy the needs of the individual. Furthermore, the lack of ability to understand speech in noisy situations is one of the most frequent complaints of the new hearing aid user. In fact, the ability to overcome background noise and deliver an intelligible speech signal remains one of the most challenging technological problems facing hearing aid researchers.
Hearing aid users often require additional assistance in certain listening situations, usually when background noise prevents recognition of the principal input stimulus. Additionally, there are those with mild hearing losses who function well under most listening conditions but experience difficulty in specific instances, including movies, lectures, and conferences. Assistive listening devices can facilitate listening under these difficult circumstances. These aids provide sound, television, and telecommunications enhancement and signal-alerting capabilities. Various types of systems exist: frequency modulation (FM), induction loop, infrared, and hardwired amplification devices. The decision as to the most appropriate type of device depends on the specific needs of the individual, geographical requirements, and environmental surroundings. The devices can be coupled to the hearing aid, or, for those who do not use an aid, they are stand-alone amplification systems, which provide benefit in a specific listening situation such as television viewing. In addition, visual enhancement and alerting systems including closed captioning and lights can be used as a substitute for alarms, doorbells, telephones, and so on. Informing and counseling the hearing-impaired individual on the types and benefits of these very useful devices can serve to improve both the quality and safety of daily living.
BONE-ANCHORED HEARING AIDS
The use of bone-anchored hearing aids (BAHAs) started 20 years ago in Sweden, and there are now close to 7500 patients worldwide who have been implanted and fitted successfully with this device.
The general idea of a conventional bone conduction hearing device is that the bone-conducted sound bypasses the impaired and/or diseased external or middle ears. With the BAHA, there is direct bone conduction, without skin and soft tissues being part of the vibration transmission path. The system is composed of the fixture and a bone-anchored abutment (Fig. 31-1A) that is placed during surgery, and the sound processor, which can be either an ear-level device (Fig. 31-1B) or a body-worn aid, fitted 3 to 4 months after surgery.
Figure 31-1 (A) Bone-anchored hearing aid (BAHA) abutment fixed in place and ready to be connected to a processor unit. (B) An ear-level BAHA processor unit attached to the abutment.
CANDIDACY
Overall, the cochlear hearing threshold (reserve) should be better than 45 dB for the BAHA ear-level device (Fig. 31-1B) and not worse than 65 dB for the body-worn aid. The size of the air–bone gap is of no significance because the BAHA bypasses the ossicular chain. BAHA candidates are patients who have a conductive or mixed hearing loss and who can still benefit from sound amplification.
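The audiometric guidance above can be summarized as a small decision rule. The sketch below encodes only the two figures quoted in the text (better than 45 dB for the ear-level device, not worse than 65 dB for the body-worn aid); the function name and the pure-tone-average input are illustrative assumptions.

```python
# Sketch of the quoted bone-conduction (cochlear reserve) guidance for BAHA
# processor selection. Names and thresholds usage are illustrative only.
def baha_device_options(bone_conduction_pta_db_hl):
    """Return which BAHA processor styles the cochlear reserve supports."""
    options = []
    if bone_conduction_pta_db_hl < 45:
        options.append("ear-level processor")
    if bone_conduction_pta_db_hl <= 65:
        options.append("body-worn processor")
    return options

print(baha_device_options(30))   # both styles
print(baha_device_options(55))   # body-worn only
print(baha_device_options(70))   # [] -> consider other options
```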
Some particular indications for BAHA candidacy are the following:
•Single-sided deafness, where BAHAs can offer improved speech recognition in noise and reduction of the head shadow effect
•A chronically draining ear, where the use of an air conduction hearing aid aggravates the infection, causes a feedback problem, or results in poor wearing comfort or poor sound quality (In fact, patients with recurrent otitis externa, draining otitis media, or a radical mastoidectomy cavity can all benefit from a BAHA.)
•Congenital ear canal malformations where the cochlea is intact and functional
There are a few contraindications for the use of a BAHA to keep in mind:
•Poor hygiene (in children, the responsibility falls to the parents)
•For patients in the United States, age of less than 5 years
•Insufficient bone volume or bone quality for the successful anchoring of the BAHA abutment within a patient’s skull
MEDICAL AND AUDIOLOGICAL EVALUATION
A medical evaluation to check the patient’s hygiene and to look for any disease affecting the skin of the scalp is required when assessing a patient for a BAHA device.
Audiological preoperative measurements are acquired by pure-tone audiometry and speech audiometry (a score of better than 60% on a phonetically balanced (PB) word list is recommended).
The test rod demonstration is performed with a plastic bar to which a bayonet or snap coupling is attached at one end. It is intended to assess the candidate preoperatively, to educate the patient, and to demonstrate the expected result to the prospective patient.
SURGICAL PROCEDURE
A skin incision is performed using the BAHA dermatome (or manually); a skin flap is made with the subcutaneous tissue cut down to the periosteum. The periosteum at the fixture site should be incised and removed from the bone, and the fixture site is prepared using the surgical guide for drilling. This drilling guide indicator is also used during the actual tapping and fixture insertion. A cover screw may be placed to protect the inner hole of the fixture temporarily if a two-stage procedure is planned. In this case, the fixture is left for a period of 3 to 6 months, during which time osseointegration takes place before the abutment can be fitted to the BAHA unit.
There are two prerequisites for establishing and maintaining a reaction-free skin penetration. The skin surrounding the fixture should be hairless to help keep the fixture site clean, and the skin flap must be very thin to avoid any movement of skin around the abutment.
After finalizing the soft tissue preparation, a hole is punched over the fixture site with a 4 mm biopsy punch. The abutment is then correctly fitted to the fixture. Finally, the healing cap is attached to the abutment to fix the dressing in place and prevent formation of a hematoma. The fixture has to osseointegrate with the bone for 3 months before fitting the BAHA sound processor. A standard mastoid dressing is left in place for 1 or 2 days. Seven days after the healing cap is removed, a new dressing is placed for 7 days. Patients with two-stage surgery can be fitted 1 month after the second stage. Patients with one-stage surgery are fitted after 3 months. Warning: Early loading may result in loss of the fixture.
FOLLOW-UP
A daily cleaning routine is very important to maintain the integrity of the site and to prevent a reaction with the skin. A follow-up program of twice-a-year inspection of the site is sufficient; the skin and the stability of the abutment can be checked at those times.
The fixture and the abutment can be left in place if the patient has to undergo a magnetic resonance imaging (MRI) scan. However, the sound processor unit should be removed prior to the MRI procedure.
COCHLEAR IMPLANTS
Despite the extent of the development of new hearing aids, a critical problem remains for those children and adults with severe to profound sensorineural hearing loss who receive little or no benefit from amplification by either conventional hearing aids or BAHAs. The purpose of the cochlear implant is to bypass the traditional form of sound transmission and transduction by using direct electrical stimulation of the auditory neurons and the auditory nerve. No matter how sophisticated a hearing aid, its basic mode of operation is still the amplification of incoming sound. In contrast, a cochlear implant attempts to replace the function of the auditory hair cells that has been lost in the damaged cochlea. In a normal-hearing ear, the hair cells within the cochlea act as a transducer of mechanical energy into electrical energy capable of stimulating a patterned discharge from the eighth cranial nerve (CN VIII). A decrease in the number of hair cells or in hair cell function, resulting in a significant hearing loss, causes the cochlea to lose its ability to execute the transduction function that results in CN VIII stimulation and consequently hearing (see Chapter 26). The implant replaces the function of the lost hair cells by converting