Concept: Dichotic listening
Differences in cognitive control in children and adolescents with combined and inattentive subtypes of ADHD.
- Child Neuropsychology: A Journal on Normal and Abnormal Development in Childhood and Adolescence
The aim of the present study was to investigate the ability of children with attention deficit/hyperactivity disorder-combined subtype (ADHD-C) and predominantly inattentive subtype (ADHD-PI) to direct their attention and to exert cognitive control in a forced-attention dichotic listening (DL) task. Twenty-nine medication-naive participants with ADHD-C, 42 with ADHD-PI, and 40 matched healthy controls (HC), aged 9 to 16 years, were assessed. In the DL task, two different auditory stimuli (syllables) are presented simultaneously, one in each ear. The participants are asked either to report the syllable they hear on each trial with no instruction on where to focus attention, or to focus attention explicitly and report the right- or left-ear syllable. The DL procedure is presumed to reflect different cognitive processes: perception (nonforced condition/NF), attention (forced-right condition/FR), and cognitive control (forced-left condition/FL). As expected, all three groups showed normal perception and attention. The children and adolescents with ADHD-PI showed a significant right-ear advantage even in the FL condition, the ADHD-C group showed no ear advantage, and the HC group showed a significant left-ear advantage in the FL condition. This suggests that the ADHD subtypes differ in degree of cognitive control impairment. Our results may have implications for the further conceptualization, diagnostics, and treatment of ADHD subtypes.
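Ear-advantage results like these are typically quantified as a laterality index computed from the correct-report counts for the two ears. A minimal sketch of that standard computation (the function and variable names are illustrative, not taken from the study):

```python
def laterality_index(right_ear_correct: int, left_ear_correct: int) -> float:
    """Standard dichotic-listening laterality index in percent:
    positive = right-ear advantage, negative = left-ear advantage,
    range -100 to +100."""
    total = right_ear_correct + left_ear_correct
    if total == 0:
        raise ValueError("no correct reports to compute an index from")
    return 100.0 * (right_ear_correct - left_ear_correct) / total

# e.g., 20 right-ear vs. 10 left-ear correct reports:
# laterality_index(20, 10) -> 33.33... (a right-ear advantage)
```

A right-ear advantage in the forced-left condition, as reported for the ADHD-PI group, would show up here as a positive index where controls produce a negative one.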
It is well known that the planum temporale (PT) in the posterior temporal lobe carries out spectro-temporal analysis of auditory stimuli, which is crucial, for example, for speech perception. There are suggestions that the PT is also involved in auditory attention, specifically in the discrimination and selection of stimuli from the left and right ear. However, direct evidence has so far been missing. To examine the role of the PT in auditory attention, we asked fourteen participants to complete the Bergen Dichotic Listening Test. In this test, two different consonant-vowel syllables (e.g., “ba” and “da”) are presented simultaneously, one to each ear, and participants are asked to verbally report the syllable they heard best or most clearly; attentional selection of a syllable is thus stimulus-driven. Each participant completed the test three times: after their left and after their right PT (localized with anatomical brain scans) had been stimulated with repetitive transcranial magnetic stimulation (rTMS), which transiently interferes with normal functioning of the stimulated site, and after sham stimulation, in which participants were led to believe they had been stimulated but no rTMS was applied (control). After sham stimulation the typical right-ear advantage emerged: participants reported relatively more right- than left-ear syllables, reflecting left-hemispheric dominance for language. rTMS over the right, but not the left, PT significantly reduced the right-ear advantage. This resulted from participants reporting more left- and fewer right-ear syllables after right PT stimulation, suggesting a leftward shift in stimulus selection. Taken together, our findings point to a new function of the PT in addition to auditory perception: the right PT in particular is involved in stimulus selection and stimulus-driven auditory attention.
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood.
Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients.
If a representation of an auditory attention channel were present in the auditory cortices but not in subcortical structures, early event-related brain potentials (ERPs) would be predicted to dissociate from late ERPs in their selective attention effects. To examine this idea, the present study recorded the auditory brainstem response (ABR) as an early ERP, and the negative difference, processing negativity, and irrelevant positive difference waves as late ERPs, during dichotic listening. Each participant completed two dichotic conditions: (i) 500-Hz standard tones to the left ear and 1000-Hz tones to the right ear (L500/R1000), and (ii) 1000-Hz standard tones to the left ear and 500-Hz tones to the right ear (L1000/R500). In a control task, participants performed visual detection and ignored the auditory stimuli. Although the negative difference and processing negativity were identical between the two dichotic conditions, the ABR showed a significant difference between relevant and irrelevant tasks only in the L500/R1000 condition. A response preference for lower-frequency tones was found for behavioural measures and late ERPs but not for the ABR. These results suggest that attention channels are difficult to represent in the auditory brainstem. In addition, a weak effect of the dichotic sound combination on behaviour corresponded only with the earlier ERPs.
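Attention difference waves such as the negative difference (Nd) are obtained by subtracting the averaged ERP to stimuli when unattended from the averaged ERP to the same stimuli when attended. A minimal sketch with synthetic waveforms (all amplitude values are illustrative, not data from the study):

```python
# Synthetic averaged ERP amplitudes (microvolts) at successive time points.
attended   = [0.0, -1.2, -2.5, -1.0, 0.5]
unattended = [0.0, -0.5, -1.0, -0.4, 0.6]

# Nd difference wave: attended minus unattended; more negative values
# index stronger attention-related processing of the attended channel.
nd = [a - u for a, u in zip(attended, unattended)]
```

The same subtraction logic applies whether the waveforms are late cortical ERPs or, as tested here, the much earlier ABR components.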
Evaluation of dichotic listening to digits is a common component of test batteries for diagnosing and managing auditory processing disorders in children. Previous researchers have verified the test-retest relative reliability of dichotic digits results in normal children and adults. However, detecting intervention-related changes in ear scores after dichotic listening training requires information about the typical trial-to-trial variation of individual ear scores, which is estimated using indices of absolute reliability. Previous studies have not addressed the absolute reliability of dichotic listening results.
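Absolute reliability is commonly indexed by the standard error of measurement (SEM) and the minimal detectable change (MDC) derived from it. A hedged sketch of those standard formulas; the SD and ICC values below are placeholders, not data from the study:

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement from the between-subject SD
    and the test-retest intraclass correlation (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence across two
    test occasions: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Placeholder example: SD = 10 ear-score points, ICC = 0.84
s = sem(10.0, 0.84)   # -> 4.0
change = mdc95(s)     # -> ~11.1 points
```

Under these placeholder values, an ear-score change after training would need to exceed roughly 11 points before it could be distinguished from measurement noise, which is exactly the kind of benchmark relative-reliability coefficients alone cannot provide.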
Previous findings have suggested that auditory attention produces not only an enhancement of neural processing gain but also a sharpening of neural frequency tuning in human auditory cortex. The current study aimed to reexamine these findings and to investigate whether attentional gain enhancement and frequency sharpening emerge at the same or different processing levels, and whether they represent independent or cooperative effects. To that end, we examined the pattern of attentional modulation effects on early, sensory-driven cortical auditory-evoked potentials (CAEPs) occurring at different latencies. Attention was manipulated using a dichotic listening task and was thus not selectively directed to specific frequency values. Possible attention-related changes in frequency tuning selectivity were measured with an electroencephalography adaptation paradigm. Our results show marked disparities in attention effects between the earlier N1 CAEP deflection and the subsequent P2 deflection: the N1 showed a strong gain enhancement effect but no sharpening, whereas the P2 showed clear evidence of sharpening but no independent gain effect. These findings suggest that gain enhancement and frequency sharpening represent successive stages of a cooperative attentional modulation mechanism, which appears to increase the representational bandwidth of attended versus unattended sounds.
The spatial perceptual rightward bias originally described in dichotic listening studies appears to be a general phenomenon. This bias is age-dependent: it is evident in children, whose executive functions are still developing, and emerges again at older age as executive functions decline with aging. In the two studies presented here, we compared the performance of young and elderly adults in spatial divided attention tasks with auditory and visual stimuli, measuring stimulus detection either in separate sessions in a laboratory setting (Study I) or with the same types of stimuli mixed with a task in which the subject's primary objective was to drive a car in a virtual environment (virtual reality; Study II). The aim was to see whether the perceptual bias could be detected and how it would differ between these two situations. Ninety right-handed subjects (50 young and 40 elderly) participated in Study I, and 84 subjects (64 young and 20 elderly) participated in Study II. Study I showed the rightward bias to be more evident in the elderly subjects, in both modalities and in the more demanding tasks. Study II revealed that, in the triple task, the spatial perceptual bias was evident in both modalities for the elderly participants when conditions were more demanding. An interesting finding accompanying the right-side perceptual bias was the simultaneous occurrence of left-side driving errors, i.e., crossing the lane border to the left, especially among the elderly. Both of these biases may reflect asymmetries in the attention-related neuronal networks.
The dichotic-listening paradigm with verbal stimuli is a widely employed behavioral task for the assessment of hemispheric asymmetry for speech and language processing. Participants with assumed left-hemispheric dominance report the right-ear stimulus with higher probability than the left-ear stimulus. However, there is substantial between-subject and trial-to-trial variability observed in the paradigm, motivating scrutiny of the task set-up and theoretical models. Here, we give an in-depth discussion of specific features of stimulus material and experimental parameters, as well as the conditions of stimulus/response selection, which explain a significant proportion of intra- and inter-individual variability. Carefully considering these factors should be at the heart of any experimental planning when using the dichotic-listening paradigm to achieve an optimal testing situation for measuring laterality and avoid confounds in between-subject and between-group comparisons.
In recent years, hemispheric lateralization of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet how healthy aging, beginning in middle adulthood, impacts the modulation of lateralized alpha power supporting auditory attention remains poorly understood. In the current electroencephalography (EEG) study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging multi-talker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in inter-individual variability with age, we examined an extensive battery of behavioral, perceptual, and neural measures. As previously found in younger adults, auditory spatial attention in middle-aged and older listeners induced robust lateralization of alpha power, which synchronized with the speech rate. Notably, the observed relationship between this alpha lateralization and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioral variables that was independent of age. Our results suggest that in age-varying samples such as the present one, lateralization of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age may come with increased use of alternative, compensatory mechanisms. Our findings emphasize that explaining inter-individual variability will be key to understanding the role of alpha oscillations in auditory attention in the aging listener.
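Alpha lateralization of the kind examined here is commonly expressed as a normalized contrast of alpha-band power over the two hemispheres. A minimal sketch assuming precomputed 8-12 Hz band-power values (the function name and values are illustrative; sign conventions vary across studies):

```python
def alpha_lateralization_index(power_ipsi: float, power_contra: float) -> float:
    """Normalized hemispheric contrast of alpha-band power, defined
    relative to the attended side: (ipsi - contra) / (ipsi + contra).
    Alpha is typically suppressed over the hemisphere contralateral to
    the attended ear, so values > 0 indicate the expected lateralization."""
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)

# Placeholder band-power values (arbitrary units):
# alpha_lateralization_index(3.0, 1.0) -> 0.5  (expected lateralization)
# alpha_lateralization_index(2.0, 2.0) -> 0.0  (no lateralization)
```

A per-listener index like this is what would then be correlated with task performance and age, as described in the abstract.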