You started learning to speak before you were born
Why do we speak? Why don't we just communicate with gestures, since sign language is just as rich and expressive a way to communicate as spoken language?
One major reason spoken language may have become dominant over gesture lies in how our senses develop before birth. Long before we utter our first word, our brains are tuning in to the sounds of language.
Hearing begins much earlier than seeing. By around the sixth month of pregnancy, the human fetus starts responding to sound.
The womb acts like a filter, muffling high frequencies while letting the lower frequencies of speech pass through. As a result, babies begin absorbing the rhythm and melody of speech while still in utero.
In contrast, vision remains weak even after birth. In the first six months of a baby’s life, vision is very blurred. It takes up to three years for infants to see clearly, while their hearing is already active and guiding early brain development.
Listening in the womb
Inside the womb, the fetus floats in fluid surrounded by the mother's body.
Sounds from the outside world, especially the mother’s voice, are muffled but still present. Even with some reduction in volume, the tones and rhythms of speech come through.
The mother's heartbeat and digestion form a background noise, but higher-frequency elements like those of language are still detectable.
Studies have shown that babies recognize their mother’s voice at birth. They can even distinguish between different languages, showing preferences for the language they heard before birth.
Infants respond to melodies and stories they were exposed to in utero, suggesting that their brains have already begun to learn about speech patterns and vocal structure before they are born.
Crying with an accent
Newborn cries can reflect the melody of the language they heard in the womb.
Babies born to French-speaking mothers, for example, cry with a rising tone, similar to the intonation of French. In contrast, German newborns cry with a falling pitch.
This pattern appears in other languages, too. Babies exposed to Mandarin Chinese, where tone shapes meaning, produce more melodically complex cries than those hearing non-tonal languages like German.
This suggests that the fetus not only hears but also begins to mimic aspects of the surrounding language, even before birth. These early sound patterns may help infants signal their needs more effectively to caregivers familiar with the local language rhythm.
That's so darn interesting - and may explain why it's so hard to learn a new language later in life.
The fetal brain and sound learning
Studies suggest that by the last weeks of pregnancy, the brain networks involved in hearing and even early language processing are already taking shape.
Modern imaging tools, like fMRI brain scans of unborn babies, have revealed just how responsive the prenatal brain is to sound.
In the third trimester of pregnancy, the auditory regions of the fetal brain show activity when stimulated by the mother’s voice. Studies have also shown that the fetus can detect changes in sound patterns, reacting differently to new or unexpected sounds.
This early ability to pick out regularities lays the groundwork for later language learning.
From frogs to humans
The idea that sound exposure before birth influences later behavior is not unique to humans.
In animals like insects, frogs, and birds, vibrations or calls can trigger early hatching or shape survival strategies.
Some bird species teach their chicks a unique "password" while still in the egg, helping parents identify their own young after hatching.
In humans, the parallels are clear. Preterm infants exposed to recordings of their mother’s voice and heartbeat show enhanced brain development compared to those hearing only hospital noise.
Mimicking speech before birth
In addition to hearing, unborn babies may begin practicing how to speak in the womb.
Ultrasound images have captured fetuses moving their lips in response to their mother’s speech, matching the rhythm and shape of syllables.
Though likely unconscious, these movements suggest that the fetus is already mapping sounds onto the motor patterns of the mouth. This process may help prepare the baby's vocal system to produce sounds from birth.
The ability to imitate sounds before birth points to a built-in brain system for matching hearing with speaking - a foundation for spoken language.
So, the sound-rich environment of the womb, combined with a brain ready to detect patterns and a body beginning to mirror speech movements, may naturally lead to a preference for vocal communication after birth.
About the paper that inspired this article:
First Author: Alexis Hervais-Adelman, Switzerland
Published: PLoS Biology, April 2025
Link to paper: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3003141