Transcranial focused ultrasound (FUS) can modulate the neural activity of specific brain regions, with a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with brain-to-computer interface (BCI) techniques that translate brain function into computer commands, we investigated the feasibility of using a FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e. human and Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation aimed to translate, non-invasively, a human volunteer’s intention into stimulation of the rat brain motor area responsible for tail movement. The volunteer initiated the intention by looking at a strobe light flickering on a computer display, and the degree of synchronization of the electroencephalographic steady-state visual evoked potentials (SSVEP) with the strobe frequency was analyzed by a computer. Increased SSVEP signal amplitude, indicating the volunteer’s intention, triggered the delivery of burst-mode FUS (350 kHz ultrasound frequency, 0.5 ms tone-burst duration, 1 kHz pulse repetition frequency, delivered for 300 ms) to excite the motor area of an anesthetized rat transcranially. Successful excitation subsequently elicited tail movement, which was detected by a motion sensor. The interface achieved 94.0±3.0% accuracy, with a time delay of 1.59±1.07 s from thought initiation to the creation of the tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may confer unexplored opportunities in the study of neuroscience, with potential implications for therapeutic applications.
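The SSVEP-triggering stage described above lends itself to a simple worked sketch. The abstract does not specify the detection algorithm, so the FFT-based amplitude check, the sampling rate, and the threshold below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, strobe_hz):
    """Spectral amplitude of an EEG segment at the strobe frequency."""
    windowed = eeg * np.hanning(len(eeg))
    spectrum = np.abs(np.fft.rfft(windowed)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - strobe_hz))]

def intention_detected(eeg, fs, strobe_hz, threshold):
    """True when the SSVEP amplitude exceeds a preset threshold."""
    return ssvep_amplitude(eeg, fs, strobe_hz) > threshold

# Synthetic check: a 10 Hz SSVEP riding on noise vs. noise alone
fs, strobe_hz = 256, 10.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
attending = 2.0 * np.sin(2 * np.pi * strobe_hz * t) + rng.normal(0, 1, t.size)
resting = rng.normal(0, 1, t.size)
```

In the actual system, exceeding the threshold would trigger the FUS burst; here the boolean merely stands in for that trigger.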
The articular release of the metacarpophalangeal joint produces a typical cracking sound, resulting in what is commonly referred to as the cracking of knuckles. Despite over sixty years of research, the source of the knuckle cracking sound continues to be debated due to inconclusive experimental evidence, a result of limitations in the temporal resolution of non-invasive physiological imaging techniques. To support the available experimental data and to shed light on the source of the cracking sound, we have developed a mathematical model of the events leading to the generation of the sound. The model resolves the dynamics of a collapsing cavitation bubble in the synovial fluid inside a metacarpophalangeal joint during an articular release. The acoustic signature from the resulting bubble dynamics is shown to be consistent in both magnitude and dominant frequency with experimental measurements in the literature and with our own experiments, thus lending support to cavitation bubble collapse as the source of the cracking sound. Finally, the model also shows that only a partial collapse of the bubble is needed to replicate the experimentally observed acoustic spectra, thus allowing for bubbles to persist following the generation of sound, as has been reported in recent experiments.
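The bubble-dynamics model cannot be reproduced from the abstract alone, but the standard starting point for a collapsing cavitation bubble is the Rayleigh–Plesset equation. The sketch below integrates it with generic, assumed parameters (not the paper's fitted values) to illustrate a partial collapse and rebound:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, generic parameters (SI units); not the values used in the paper
rho, mu, sigma = 1000.0, 0.01, 0.07   # density, viscosity, surface tension
p_inf, p0 = 101325.0, 5000.0          # far-field and initial gas pressure (Pa)
kappa = 1.4                           # polytropic exponent of the bubble gas
R0 = 1e-3                             # initial bubble radius (m)

def rayleigh_plesset(t, y):
    """Rayleigh-Plesset dynamics: y = [R, dR/dt]."""
    R, Rdot = y
    p_gas = p0 * (R0 / R) ** (3 * kappa)
    Rddot = (p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / (rho * R) \
            - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 1e-4), [R0, 0.0],
                max_step=1e-7, rtol=1e-8, atol=1e-10)
R_min = sol.y[0].min()   # the gas cushion halts the collapse at a finite radius
```

Because the gas pressure diverges as R shrinks, the collapse is arrested partway and the bubble rebounds; the sharp acceleration around the radius minimum is what radiates the acoustic pulse, consistent with the partial-collapse picture above.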
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
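Linear stimulus reconstruction of the kind described (predicting the auditory spectrogram from time-lagged population activity) is commonly fit by regularized regression. The following toy sketch, with synthetic data and an assumed lag range, illustrates the idea; it is not the authors' decoder:

```python
import numpy as np

def lagged(X, lags):
    """Stack time-shifted copies of neural responses X (T x N channels)."""
    cols = []
    for lag in lags:
        Xs = np.roll(X, lag, axis=0)
        if lag > 0:
            Xs[:lag] = 0.0   # zero out samples that wrapped around
        elif lag < 0:
            Xs[lag:] = 0.0
        cols.append(Xs)
    return np.hstack(cols)

def ridge_fit(Z, S, lam=1.0):
    """Ridge regression from lagged responses Z to spectrogram S (T x F)."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ S)

# Toy data: responses are noisy, 3-sample-delayed mixtures of a 2-band "spectrogram"
rng = np.random.default_rng(1)
T, F, N = 2000, 2, 8
S = rng.normal(size=(T, F))
W_true = rng.normal(size=(N, F))
X = np.roll(S @ W_true.T, 3, axis=0) + 0.1 * rng.normal(size=(T, N))

Z = lagged(X, lags=range(-5, 1))   # decode from responses up to 5 samples ahead
W = ridge_fit(Z, S)
S_hat = Z @ W
r = np.corrcoef(S_hat[:, 0], S[:, 0])[0, 1]
```

The nonlinear modulation-energy representation the study found necessary for fast fluctuations would replace the linear spectrogram target here, but the regression machinery is the same.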
Crocodilians are among the most vocal non-avian reptiles. Adults of both sexes produce loud vocalizations known as ‘bellows’ year round, with the highest rate during the mating season. Although the specific function of these vocalizations remains unclear, they may advertise the caller’s body size, because relative size differences strongly affect courtship and territorial behaviour in crocodilians. In mammals and birds, a common mechanism for producing honest acoustic signals of body size is via formant frequencies (vocal tract resonances). To our knowledge, formants have to date never been documented in any non-avian reptile, and formants do not seem to play a role in the vocalizations of anurans. We tested for formants in crocodilian vocalizations by using playbacks to induce a female Chinese alligator (Alligator sinensis) to bellow in an airtight chamber. During vocalizations, the animal inhaled either normal air or a helium/oxygen mixture (heliox) in which the velocity of sound is increased. Although heliox allows normal respiration, it alters the formant distribution of the sound spectrum. An acoustic analysis of the calls showed that the source signal components remained constant under both conditions, but an upward shift of high-energy frequency bands was observed in heliox. We conclude that these frequency bands represent formants. We suggest that crocodilian vocalizations could thus provide an acoustic indication of body size via formants. Because birds and crocodilians share a common ancestor with all dinosaurs, a better understanding of their vocal production systems may also provide insight into the communication of extinct Archosaurians.
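The heliox logic above can be checked with a back-of-envelope calculation: formant frequencies scale with the speed of sound in the vocal-tract gas, while the tissue-driven source components do not. Assuming an ideal gas and an 80/20 He/O2 mixture (the actual mixture proportions are not stated in the abstract):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def sound_speed(gamma, molar_mass, T=293.15):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / molar_mass)

# Mole-fraction-weighted heat capacities for an assumed 80/20 He/O2 mix
x_he, x_o2 = 0.8, 0.2
cp = x_he * 20.8 + x_o2 * 29.4          # J/(mol K)
cv = x_he * 12.5 + x_o2 * 21.1
gamma_mix = cp / cv
M_mix = x_he * 0.004 + x_o2 * 0.032     # kg/mol

c_air = sound_speed(1.4, 0.0290)
c_heliox = sound_speed(gamma_mix, M_mix)
shift = c_heliox / c_air   # formants should move up by roughly this factor
```

A shift factor near 1.8 means a formant at 400 Hz in air would appear near 730 Hz in heliox while the source components stay put, which is exactly the dissociation the heliox experiment exploits.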
Although brain imaging studies have demonstrated that listening to music alters human brain structure and function, the molecular mechanisms mediating those effects remain unknown. With the advent of genomics and bioinformatics approaches, these effects of music can now be studied in a more detailed fashion. To verify whether listening to classical music has any effect on the human transcriptome, we performed genome-wide transcriptional profiling from the peripheral blood of participants after listening to classical music (n = 48), and after a control study without music exposure (n = 15). As musical experience is known to influence the responses to music, we compared the transcriptional responses of musically experienced and inexperienced participants separately with those of the controls. Comparisons were made based on two subphenotypes of musical experience: musical aptitude and music education. In musically experienced participants, we observed the differential expression of 45 genes (27 up- and 18 down-regulated) and 97 genes (75 up- and 22 down-regulated) in the two subphenotype comparisons, respectively (rank product non-parametric statistics, pfp < 0.05, >1.2-fold change over time across conditions). Gene ontological overrepresentation analysis (hypergeometric test, FDR < 0.05) revealed that the up-regulated genes are primarily known to be involved in the secretion and transport of dopamine, neuron projection, protein sumoylation, long-term potentiation and dephosphorylation. Down-regulated genes are known to be involved in ATP synthase-coupled proton transport, cytolysis, and positive regulation of caspase, peptidase and endopeptidase activities. One of the most up-regulated genes, alpha-synuclein (SNCA), is located in the best linkage region of musical aptitude on chromosome 4q22.1 and is regulated by GATA2, which is known to be associated with musical aptitude.
Several genes reported to regulate song perception and production in songbirds displayed altered activities, suggesting a possible evolutionary conservation of sound perception between species. We observed no significant findings in musically inexperienced participants.
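The GO overrepresentation step mentioned above is a hypergeometric test. A minimal sketch with hypothetical gene counts (not the study's numbers):

```python
from scipy.stats import hypergeom

def go_enrichment_p(n_genome, n_term, n_hits, n_overlap):
    """P(>= n_overlap term-annotated genes among the hits by chance alone)."""
    return hypergeom.sf(n_overlap - 1, n_genome, n_term, n_hits)

# Hypothetical counts: 20000 genes total, 150 annotated to a GO term,
# 45 differentially expressed genes, 6 of which carry the term
p = go_enrichment_p(20000, 150, 45, 6)
```

In practice, one such p-value is computed per GO term and the set is then corrected for multiple testing (the study reports FDR < 0.05).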
Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high level environmental sounds.
Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of reception of the resultant sound. For the current report, we collected a large database of click emissions from three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first-ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at wider angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2–4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3–15 ms and peak frequencies of 2–8 kHz, which were based on less detailed measurements. Based on our measurements, we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation.
Relatedly, the data are a basis for developing synthetic models of human echolocation, which could be virtual (i.e. simulated) or real (i.e. loudspeaker and microphones), and which will help in understanding the link between physical principles and human behaviour.
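The proposed emission model (a sum of monotones with exponential decay, plus a modified-cardioid beam pattern) can be sketched directly. The component amplitudes, decay constants, and cardioid shape below are illustrative stand-ins, since the per-echolocator parameters are given in the paper itself:

```python
import numpy as np

fs = 96000  # sampling rate (Hz), an assumed value

def click(t, components):
    """Sum of monotones, each modulated by a decaying exponential.
    components: list of (amplitude, frequency_hz, decay_time_s) tuples."""
    s = np.zeros_like(t)
    for a, f, tau in components:
        s += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return s

def cardioid_gain(theta, k=0.5):
    """Modified cardioid: relative gain at angle theta (radians) off the mouth axis."""
    return (1 - k) + k * np.cos(theta)

# Illustrative components: a ~3 ms click with energy near 3 kHz and at 10 kHz
t = np.arange(0, 0.003, 1 / fs)
emission = click(t, [(1.0, 3000.0, 0.001), (0.4, 10000.0, 0.0005)])

freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(emission)))]
on_axis, off_60 = cardioid_gain(0.0), cardioid_gain(np.radians(60.0))
```

Convolving such a synthetic click with simulated room or object reflections, then attenuating by angle, is one route to the "synthetic echolocation" models the abstract proposes.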
- Proceedings of the National Academy of Sciences of the United States of America
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, for a standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
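The "missing fundamental" phenomenon underlying feature (i) can be demonstrated numerically: a complex of harmonics with no energy at the fundamental still yields a pitch at F0. A sketch using a simple autocorrelation pitch estimate at the study's 440 Hz F0 (the harmonic numbers and estimator are illustrative choices, not the study's stimuli):

```python
import numpy as np

fs = 48000
f0 = 440.0  # the standard musical tuning F0 used in the study
t = np.arange(0, 0.1, 1 / fs)

# Missing-fundamental complex: harmonics 4-8 only, no energy at 440 Hz itself
tone = sum(np.sin(2 * np.pi * h * f0 * t) for h in range(4, 9))

# Autocorrelation pitch estimate: the strongest repetition longer than 1 ms
ac = np.correlate(tone, tone, mode="full")[t.size - 1:]
lag_min = int(fs / 1000.0)
period = (np.argmax(ac[lag_min:]) + lag_min) / fs
pitch_hz = 1.0 / period
```

The estimated pitch lands at 440 Hz even though the spectrum is empty there, which is the kind of cue the marmoset experiments probe with resolved versus unresolved harmonics.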
In recent years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS ONE 4, e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into musical pitch according to the power law that both follow, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into musical intensity according to Fechner’s law. In this work, we propose using a simultaneously recorded fMRI signal to control the intensity of the EEG music, so that EEG-fMRI music is generated by combining two different, simultaneous brain signals. Most importantly, this approach also realizes the power law for musical intensity, since the fMRI signal follows it. The EEG-fMRI music thus takes a step forward in reflecting the physiological processes of the scale-free brain.
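The three translation rules can be sketched as below. The calibration ranges, the MIDI mapping, and the convention that larger EEG waves map to lower pitches are assumptions for illustration, not the authors' exact rules:

```python
import numpy as np

def period_to_duration(period_s):
    """The period of an EEG waveform maps directly to note duration."""
    return period_s

def amplitude_to_midi_pitch(amp, amp_min, amp_max, lo=36, hi=96):
    """Log-map EEG amplitude onto MIDI pitch (both follow power laws;
    larger waves are assigned lower pitches here, an assumed convention)."""
    x = np.log(amp / amp_min) / np.log(amp_max / amp_min)  # 0..1
    return int(round(hi - x * (hi - lo)))

def bold_to_velocity(bold, bold_min, bold_max, lo=20, hi=110):
    """Fechner's law: perceived intensity grows with the log of the signal,
    so MIDI velocity is a log function of the fMRI (BOLD) change."""
    x = np.log(bold / bold_min) / np.log(bold_max / bold_min)
    return int(round(lo + x * (hi - lo)))

# One note from one EEG wave (amplitude 30 uV, period 0.12 s) and a BOLD change
note = (amplitude_to_midi_pitch(30.0, 5.0, 100.0),
        period_to_duration(0.12),
        bold_to_velocity(1.5, 1.0, 3.0))
```

Running the rules over a sequence of detected EEG waveforms would yield a note stream whose intensity contour is driven by the fMRI signal, as the abstract describes.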
There is an increasing concern that anthropogenic noise could have a significant impact on the marine environment, but there is still insufficient data for most invertebrates. What do they perceive? We investigated this question in oysters Magallana gigas (Crassostrea gigas) using pure-tone exposures, an accelerometer fixed on the oyster shell, and a hydrophone in the water column. Groups of 16 oysters were exposed to quantifiable waterborne sinusoidal sounds in the range of 10 Hz to 20 kHz at various acoustic energies. The experiment was conducted in running seawater using an experimental flume equipped with suspended loudspeakers. The sensitivity of the oysters was measured by recording their valve movements by high-frequency noninvasive valvometry. Each test was a 3-min tone exposure including a 70-s fade-in period. Three endpoints were analysed: the ratio of responding individuals in the group, the resulting changes of valve opening amplitude, and the response latency. At high enough acoustic energy, oysters transiently closed their valves in response to frequencies in the range of 10 to <1000 Hz, with maximum sensitivity from 10 to 200 Hz. The minimum acoustic energy required to elicit a response was 0.02 m·s⁻² at 122 dB rms re 1 μPa for frequencies ranging from 10 to 80 Hz. As a partial valve closure cannot be differentiated from a nociceptive response, it is very likely that oysters detect sounds at lower acoustic energy. The mechanism involved in sound detection and the ecological consequences are discussed.
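The valvometry endpoints (response, latency, and change in opening amplitude) can be computed from an opening trace with a simple baseline comparison. The sampling rate, threshold, and synthetic trace below are assumptions, not the study's processing:

```python
import numpy as np

def valve_response(opening, fs, baseline_s=10.0, drop_frac=0.2):
    """Detect a transient valve closure: the first sample where opening
    amplitude falls more than drop_frac below the pre-exposure baseline.
    Returns (responded, latency_s, relative_drop)."""
    n0 = int(baseline_s * fs)
    baseline = opening[:n0].mean()
    below = opening[n0:] < (1.0 - drop_frac) * baseline
    if not below.any():
        return False, None, 0.0
    i = int(np.argmax(below))                       # first sub-threshold sample
    drop = 1.0 - opening[n0 + i:].min() / baseline  # deepest relative closure
    return True, i / fs, drop

# Synthetic trace: steady 80% opening, partial closure 15 s into the tone
fs = 10.0  # valvometry sampling rate (Hz), an assumed value
t = np.arange(0, 60.0, 1.0 / fs)
opening = np.full(t.size, 80.0)
opening[(t >= 25.0) & (t < 35.0)] = 40.0  # transient partial closure
responded, latency, drop = valve_response(opening, fs)
```

Aggregating the `responded` flags across the 16 oysters in a group gives the first endpoint (the ratio of responding individuals); `latency` and `drop` correspond to the other two.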