Journal: Hearing Research
Cochlear implant users show a profile of residual, yet poorly understood, musical abilities. An ability that has received little to no attention in this population is entrainment to a musical beat. We show for the first time that a heterogeneous group of cochlear implant users is able to find the beat and move their bodies in time to Latin Merengue music, especially when the music is presented in unpitched drum tones. These findings not only reveal a hidden capacity for feeling musical rhythm through the body in the deaf and hearing-impaired population, but also illuminate promising avenues for designing early childhood musical training that can engage implanted children in social musical activities, with benefits potentially extending to non-musical domains.
We investigated the contribution of the middle ear to the physiological response to bone conduction stimuli in chinchilla. We measured intracochlear sound pressure in response to air conduction (AC) and bone conduction (BC) stimuli before and after interruption of the ossicular chain at the incudo-stapedial joint. Interruption of the chain effectively decouples the external and middle ear from the inner ear and significantly reduces the contributions of the outer ear and middle ear to the bone conduction response. With AC stimulation, both the scala vestibuli (Psv) and scala tympani (Pst) sound pressures drop by 30 to 40 dB after the interruption. With BC stimulation, Psv decreases by about 10 to 20 dB after interruption, but Pst is little affected. This difference in the sensitivity of the BC-induced Psv and Pst to ossicular interruption is not consistent with a BC response to ossicular motion, but instead suggests a significant contribution of an inner-ear drive (e.g. cochlear fluid inertia or compressibility) to the BC response.
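The dB drops reported above can be translated into linear pressure ratios with the standard relation ratio = 10^(dB/20). A minimal sketch (the function name is illustrative, not from the study):

```python
def db_to_pressure_ratio(db: float) -> float:
    """Convert a level change in dB to the corresponding linear pressure ratio."""
    return 10 ** (db / 20)

# A 30-40 dB drop in Psv/Pst after ossicular interruption corresponds to the
# intracochlear sound pressure falling by a factor of roughly 32 to 100.
print(db_to_pressure_ratio(30))  # ≈ 31.6
print(db_to_pressure_ratio(40))  # 100.0
```

By the same relation, the 10-20 dB drop in Psv under BC stimulation corresponds to a pressure reduction by a factor of only about 3 to 10.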
Amazing progress has been made in providing useful hearing to hearing-impaired individuals using cochlear implants, but challenges remain. One such challenge is understanding the effects of partial degeneration of the auditory nerve, the target of cochlear implant stimulation. Here we review studies from our human and animal laboratories aimed at characterizing the health of the implanted cochlea and the auditory nerve. We use the data on cochlear and neural health to guide rehabilitation strategies. The data also motivate the development of tissue-engineering procedures to preserve or build a healthy cochlea, to improve the performance obtained by cochlear implant recipients, or eventually to replace the need for a cochlear implant altogether.
With increasing age, the risk of developing chronic health conditions also increases, and many older people suffer from multiple co-existing health conditions, i.e., multimorbidity. One common health condition at older age is hearing loss (HL). The current article reflects on the implications for audiological care when HL is one of several health conditions in a multimorbidity. An overview of health conditions that often co-exist with HL, so-called comorbidities, is provided, including indications of the strength of the associations. The overview is based on a literature study of cohort studies published in the years 2010-2018 that examined associations of hearing loss with other health conditions, namely visual impairment, mobility restrictions, cognitive impairment, psychosocial health problems, diabetes, cardiovascular diseases, stroke, arthritis, and cancer. This selection was based on previous publications on common chronic health conditions at older age and comorbidities of hearing loss. For all of these health conditions, prevalence was found to be higher in people with HL, and several longitudinal studies also found increased incidence rates in people with HL. The examined publications provide little information on how hearing loss should be managed in the clinical care of its comorbidities, and vice versa. The current article discusses several options for adapting current care. Nonetheless, solutions for an integrated audiological care model targeting HL in a multimorbidity are still lacking and should be the subject of future research.
This paper presents evidence for a strong connection between the development of speech and language skills and the musical activities of children and adolescents with hearing impairment and/or cochlear implants. This conclusion is partially based on findings for typically hearing children and adolescents, which show better speech and language skills in those with musical training and, importantly, increases in speech and language skills in those taking part in musical training. Further, studies of hearing-impaired children show connections between musical skills, involvement in musical hobbies, and speech and language skills. Even though the field still lacks large-scale randomised controlled trials on the effects of musical interventions on the speech and language skills of children and adolescents with hearing impairments and cochlear implants, the current evidence seems sufficient to urge speech therapists, music therapists, music teachers, parents, and children and adolescents with hearing impairments and/or cochlear implants to start using music to enhance speech and language skills. For this reason, we give our recommendations on how to use music for language-skill enhancement in this group.
Under certain conditions, sighted and blind humans can use echoes to discern characteristics of otherwise silent objects. Previous research concluded that robust horizontal-plane object localisation, without using head movement, depends on information above 2 kHz. While a strong interaural level difference (ILD) cue is available, it was not clear whether listeners were using that cue or the monaural level cue that necessarily accompanies ILD. In this experiment, 13 sighted and normal-hearing listeners were asked to identify the right-versus-left position of an object in virtual auditory space. Sounds were manipulated to remove binaural cues (binaural vs. diotic presentation) and to prevent the use of monaural level cues (using level roving). With both low- (<2 kHz) and high- (>2 kHz) frequency bands of noise, performance with binaural presentation and level roving exceeded both that expected from the use of monaural level cues and that with diotic presentation. It is argued that a high-frequency binaural cue (most likely ILD), and not a monaural level cue, is crucial for robust object localisation without head movement.
Musicians are at risk of hearing loss due to prolonged noise exposure, but they may also be at risk of early sub-clinical hearing damage, such as cochlear synaptopathy. In the current study, we investigated the effects of noise exposure on electrophysiological, behavioral and self-report correlates of hearing damage in young adult (age range 18-27 years) musicians and non-musicians with normal audiometric thresholds. Early-career musicians (n = 76) and non-musicians (n = 47) completed a test battery including the Noise Exposure Structured Interview, pure-tone audiometry (PTA; 0.25-8 kHz), extended high-frequency (EHF; 12 and 16 kHz) thresholds, otoacoustic emissions (OAEs), auditory brainstem responses (ABRs), speech perception in noise (SPiN), and self-reported tinnitus, hyperacusis and hearing-in-noise difficulties. Total lifetime noise exposure was similar between musicians and non-musicians, the majority of which could be accounted for by recreational activities. Musicians showed significantly greater ABR wave I/V ratios than non-musicians and were also more likely to report tinnitus, hyperacusis and hearing-in-noise difficulties, and/or to report them as more severe, irrespective of noise exposure. A secondary analysis revealed that individuals with the highest levels of noise exposure had reduced outer hair cell function, as measured by OAEs, compared to individuals with the lowest levels of noise exposure. OAE level was also related to PTA and EHF thresholds. High levels of noise exposure were also associated with a significant increase in ABR wave V latency, but only for males, and with a higher prevalence and severity of hyperacusis. These findings suggest that there may be sub-clinical effects of noise exposure on various hearing metrics even at a relatively young age, but they do not support a link between lifetime noise exposure and proxy measures of cochlear synaptopathy such as ABR wave amplitudes and SPiN.
Closely monitoring OAEs, PTA and EHF thresholds when conventional PTA is within the clinically ‘normal’ range could provide a useful early metric of noise-induced hearing damage. This may be particularly relevant to early-career musicians as they progress through a period of intensive musical training, and thus interventions to protect hearing longevity may be vital.
There is no standard diagnostic criterion for tinnitus, although some clinical assessment instruments do exist for identifying patient complaints. Within epidemiological studies, the presence of tinnitus is determined primarily by self-report, typically in response to a single question. Using these methods, prevalence figures vary widely. Given the variety of published estimates worldwide, we assessed and collated published prevalence estimates of tinnitus and tinnitus severity, creating a narrative synthesis of the data. The variability between prevalence estimates was investigated in order to identify barriers to data synthesis and reasons for heterogeneity.
We determined the absolute hearing sensitivity of the red fox (Vulpes vulpes) using an adapted standard psychoacoustic procedure. The animals were tested in a reward-based go/no-go procedure in a semi-anechoic chamber. At 60 dB sound pressure level (SPL) (re 20 μPa), red foxes perceive pure tones between 51 Hz and 48 kHz, spanning 9.84 octaves, with a single peak sensitivity of -15 dB at 4 kHz. The red foxes' high-frequency cutoff is comparable to that of the domestic dog, the low-frequency cutoff is comparable to that of the domestic cat, and the absolute sensitivity lies between those of the two species. The maximal absolute sensitivity of the red fox is among the best found to date in any mammal. The procedure used here allows for assessment of animal auditory thresholds using positive reinforcement outside the laboratory.
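The octave span of a hearing range follows from the standard relation octaves = log2(f_high / f_low), since each octave corresponds to a doubling of frequency. A minimal sketch (small differences from the published 9.84-octave figure presumably reflect rounding of the cutoff frequencies):

```python
import math

def octave_span(f_low_hz: float, f_high_hz: float) -> float:
    """Number of octaves between two frequencies (each octave doubles frequency)."""
    return math.log2(f_high_hz / f_low_hz)

# Red fox audible range at 60 dB SPL: 51 Hz to 48 kHz.
print(round(octave_span(51, 48_000), 2))  # ≈ 9.88, close to the reported 9.84
```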
Echolocation offers a promising approach to improving the quality of life of people with blindness, although little is known about the factors influencing object localisation using a ‘searching’ strategy. In this paper, we describe a series of experiments using sighted and blind human listeners and a ‘virtual auditory space’ technique to investigate the effects of the distance and orientation of a reflective object, and of stimulus bandwidth, on the ability to identify the right-versus-left position of the object, using bands of noise with durations from 10 to 400 ms. We found that performance declined with increasing object distance. The decline was more rapid for object orientations where mirror-like reflection paths do not exist to both ears (i.e. most possible orientations); performance with these orientations was indistinguishable from chance at 1.8 m, even for the listeners who performed best in other conditions. Above-chance performance extended to larger distances when the echo was artificially presented in isolation, as might be achieved in practice by an assistive device. We also found that performance was primarily based on information above 2 kHz. Further research should extend these investigations to other factors relevant to real-life echolocation.