Infants are first able to discriminate speech sounds during the ________ stage.

Did you know that babies are able to recognize and process language-related sounds from a very early age? Infants seem to respond to sounds produced by their mother while still in the uterus. The auditory system is not fully developed at birth, but it is ready to function. Neonates can discriminate between different sound levels and durations, and between the phonemes, vowels and consonants alike, of all the languages they are exposed to. However, by the time they turn 12 months of age this broad ability disappears and they can only discriminate the phonemes of their native language. This phenomenon is illustrated by Jusczyk's Head Turn Experiment.

Jusczyk tested two groups of American babies, aged 6 months and 9 months. The experiment recorded how long each baby looked at the right or the left loudspeaker while listening to a word list in either English or Dutch. The results showed that the 9-month-olds preferred the English list, while the 6-month-olds had no preference. Next, the same word lists were passed through a low-pass filter so that only the low frequencies remained. With the filtered lists, neither group showed a preference. This suggests that 9-month-old babies are aware of the phonemes of their own language: they use both prosodic and phonotactic cues to discriminate the individual speech sounds of their language.
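To make the low-pass filtering step concrete, here is a minimal sketch, assuming NumPy and SciPy, of how a word-list recording could be filtered so that mostly prosodic information (pitch and rhythm) survives while the fine phonetic detail carried by higher frequencies is removed. The 400 Hz cutoff and the file names are illustrative assumptions, not the values used in Jusczyk's experiment.

```python
# Minimal sketch (not Jusczyk's actual procedure): low-pass filtering a
# speech recording so that mostly prosodic information (pitch and rhythm)
# survives, while the fine phonetic detail carried by higher frequencies
# is removed. The 400 Hz cutoff is an illustrative assumption.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def low_pass_speech(in_path: str, out_path: str, cutoff_hz: float = 400.0) -> None:
    rate, audio = wavfile.read(in_path)      # load the word-list recording
    audio = audio.astype(np.float64)
    if audio.ndim > 1:                       # mix down to mono if needed
        audio = audio.mean(axis=1)
    # 4th-order Butterworth low-pass filter, applied forward and backward
    # (zero phase) so timing cues are not shifted.
    sos = butter(4, cutoff_hz, btype="low", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, audio)
    filtered = np.clip(filtered, -32768, 32767)
    wavfile.write(out_path, rate, filtered.astype(np.int16))

# Hypothetical usage:
# low_pass_speech("english_list.wav", "english_list_lowpass.wav")
```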

Why did the 6-month-olds show no preference? Jusczyk changed the word lists to English and Norwegian, because Dutch and English share the same prosodic pattern whereas English and Norwegian do not. This time, with or without the low-pass filter, the 6-month-olds preferred the English list. The conclusion: 6-month-old babies are not yet aware of sound sequences, but they can discriminate different prosodic patterns and prefer the pattern of their own language. One possible reason is that newborns hear mainly low frequencies, because the middle ear is still filled with fluid, and so they cannot yet differentiate individual speech sounds.

There is a vast difference between the auditory abilities of a 6-month-old infant and a 12-month-old child. Children move from discriminating the prosodic patterns of different languages to differentiating individual phonemes, and by the time they reach one year of age they become better at discerning the phonemes of their native language and poorer at those of other languages. For example, Japanese-speaking adults cannot discriminate the English sounds 'la' and 'ra', whereas English-speaking adults have no problem; conversely, English-speaking adults cannot discriminate the Japanese sounds 'i' and 'ii' (a vowel-length contrast), whereas Japanese adults can. Yet these sounds, and contrasts from other languages such as the Hindi dental and retroflex 'da', can be discriminated by an infant in the first few months after birth. In the end, infants lose this broad ability as they develop language and speech, forming concepts and categorising the speech sounds of their native language.


People's ability to perceive speech sounds has been studied in depth, especially during the first year of life, but what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do neural encoding processes need time to mature?

Researchers from the Institute of Neurosciences of the University of Barcelona (UBNeuro) and the Sant Joan de Déu Research Institute (IRSJD) have created a new methodology to try to answer this basic question on human development.

The results, published in Nature's open-access journal Scientific Reports, confirm that newborns' neural encoding of voice pitch is comparable to adults' abilities after three years of exposure to language. However, there are differences regarding the perception of the spectral and temporal fine structure of sounds, that is, the ability to distinguish between vowel sounds such as /o/ and /a/. Therefore, according to the authors, the neural encoding of this aspect of sound, recorded for the first time in this study, is not yet mature at birth; it needs a certain amount of exposure to language, as well as stimulation and time, to develop.

According to the researchers, knowing the typical level of development of these neural encoding processes at birth will enable the "early detection of language impairments, which would provide an early intervention or stimulus to reduce future negative consequences."

The study is led by Carles Escera, professor of Cognitive Neuroscience at the Department of Clinical Psychology and Psychobiology of the UB, and was carried out at the IRSJD in collaboration with Maria Dolores Gómez Roig, head of the Department of Obstetrics and Gynecology of Hospital Sant Joan de Déu. The study is also co-authored by Sonia Arenillas Alcón, first author of the article, Jordi Costa Faidella and Teresa Ribas Prats, all members of the Cognitive Neuroscience Research Group (Brainlab) of the UB.

Decoding the spectral and temporal fine structure of sound

One of the main challenges in capturing the neural response to speech stimuli in newborns was to record, from the baby's electroencephalogram, a specific brain response: the frequency-following response (FFR). The FFR provides information on the neural encoding of two specific features of sound: the fundamental frequency, responsible for the perception of voice pitch (high or low), and the spectral and temporal fine structure. Precise encoding of both features is, according to the study, "fundamental for the proper perception of speech, a requirement in future language acquisition."
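As a rough illustration of what "encoding of both features" can mean in practice, here is a minimal sketch, assuming NumPy, of one common way FFR studies quantify encoding strength: measuring the spectral amplitude of the averaged response at the stimulus fundamental frequency (the pitch cue) and at its first harmonics (a proxy for the spectral fine structure). This is not the authors' analysis pipeline; the array names, bandwidth and harmonic range are assumptions.

```python
# Illustrative sketch (not the authors' pipeline): quantifying how strongly an
# averaged frequency-following response (FFR) encodes the stimulus fundamental
# frequency (F0, the voice-pitch cue) and its first harmonics, which carry the
# spectral fine structure. `ffr` is assumed to be a 1-D NumPy array holding the
# averaged EEG response and `fs` its sampling rate in Hz.
import numpy as np

def spectral_amplitude(ffr: np.ndarray, fs: float, freq: float, bw: float = 10.0) -> float:
    """Mean magnitude of the FFR spectrum in a narrow band around `freq`."""
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    band = (freqs >= freq - bw / 2) & (freqs <= freq + bw / 2)
    return float(spectrum[band].mean())

def ffr_encoding_summary(ffr: np.ndarray, fs: float, f0: float) -> dict:
    """Spectral amplitude at F0 (pitch encoding) and at harmonics 2-4
    (a rough proxy for encoding of the spectral fine structure)."""
    return {
        "f0_amplitude": spectral_amplitude(ffr, fs, f0),
        "harmonic_amplitudes": [spectral_amplitude(ffr, fs, k * f0) for k in (2, 3, 4)],
    }
```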

To date, the available tools for studying this neural encoding allowed researchers to determine whether a newborn baby was able to encode inflections in voice pitch, but not whether it encoded the spectral and temporal fine structure. "Inflections in the voice pitch contour are very important, especially in tonal languages like Mandarin, and for perceiving the prosody of speech, which transmits the emotional content of what is said. However, the spectral and temporal fine structure of sound is the most relevant aspect for language acquisition in non-tonal languages like ours, and the few existing studies on the issue do not tell us how precisely a newborn's brain encodes it," note the authors.

The main cause of this lack of studies is a technical limitation arising from the type of sounds used in these tests. The authors therefore developed a new stimulus (/oa/) whose internal structure (a rising voice pitch and two different vowels) allows them to evaluate the precision of the neural encoding of both features of the sound simultaneously through FFR analysis.
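To give a sense of how a single stimulus can probe both features at once, here is a toy sketch in Python (NumPy and SciPy) of an /oa/-style sound: a harmonic-rich source whose fundamental frequency rises over time (probing pitch encoding) while a crude "formant" shifts from an /o/-like to an /a/-like value (probing fine-structure encoding). All durations, frequencies and the synthesis method are illustrative assumptions and do not reproduce the published stimulus.

```python
# Toy illustration (not the published stimulus): an /oa/-style sound whose
# fundamental frequency rises over time while a crude "formant" moves from an
# /o/-like to an /a/-like value, so pitch and fine-structure encoding could in
# principle be probed with one stimulus. All parameter values are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                                   # sampling rate (Hz)
dur = 0.25                                   # total duration (s)
t = np.arange(int(fs * dur)) / fs

# Rising fundamental (100 -> 150 Hz) realised as a harmonic-rich pulse train,
# so both the pitch contour and the harmonics (fine structure) are present.
f0 = np.linspace(100.0, 150.0, t.size)
source = np.sign(np.sin(2 * np.pi * np.cumsum(f0) / fs))

def vowel_like(signal: np.ndarray, centre_hz: float) -> np.ndarray:
    """Crude single-'formant' shaping: band-pass the source around centre_hz."""
    sos = butter(2, [centre_hz - 100, centre_hz + 100], btype="band", fs=fs, output="sos")
    return sosfilt(sos, signal)

half = t.size // 2
stimulus = np.concatenate([
    vowel_like(source, 450.0)[:half],        # /o/-like first half (lower formant)
    vowel_like(source, 750.0)[half:],        # /a/-like second half (higher formant)
])
stimulus /= np.max(np.abs(stimulus))         # normalise amplitude
```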

A test adapted to the limitations of the hospital environment

One of the most notable aspects of the study is that the stimulus and the methodology are compatible with the typical limitations of the hospital environment in which the tests are carried out. "Time is essential in FFR research with newborns. On the one hand, because limits on recording time constrain the stimuli that can be presented. On the other hand, because of the actual conditions of newborns in hospitals, where frequent and continuous access to the baby and the mother is needed so that they receive the required care and undergo the evaluations and routine tests that rule out health problems," the authors add. Given these restrictions, the responses of the 34 newborns who took part in the study were recorded in sessions lasting between twenty and thirty minutes, almost half the time of a typical session in studies on speech sound discrimination.

A potential biomarker of learning problems

After this study, the researchers' objective is to characterize the development of neural encoding of the spectral and temporal fine structure of speech sounds over time. To do so, they are currently recording the frequency-following response in the babies who took part in the present study, who are now 21 months old. "Given that the first two years of life are a critical period of stimulation for language acquisition, this longitudinal evaluation of development will enable us to get a global view of how these encoding skills mature over the first months of life," note the researchers.

The aim is to confirm whether the alterations observed at birth in the neural encoding of sounds are followed by observable deficits in the infant's language development. If so, "that neural response could certainly be considered a useful biomarker for the early detection of future literacy difficulties, since alterations detected in newborns could predict the appearance of delays in language development. This is the objective of the ONA project, funded by the Spanish Ministry of Science and Innovation," they conclude.