The first four years of human life are critical for language development, and yet the speed at which children learn to talk is highly variable.
Most kids say single words by 18 months and progress to phrases and sentences around age two or three, but some are chatting away far earlier.
For years now, psychologist Elika Bergelson from Harvard University has wondered what individual differences lead to this spectrum of abilities.
Her recent observational study on the subject has now turned up surprising results.
Enrolling 1,001 children under the age of four, she and an international team of psychologists found no link between how much a kid vocalized during the day and their gender, socioeconomic status, or level of exposure to multiple languages.
Instead, the chattiest children in the global study tended to be those who heard more adult talk, with age and clinical factors such as prematurity or dyslexia also shaping how much they spoke.
“Slower language development has often been attributed to parents from lower-socioeconomic backgrounds providing less input to their children (viewed from a middle-class Western-centric perspective), leading to calls for behavioral interventions aiming to increase it,” write Bergelson and colleagues.
“Proponents of such interventions might highlight our correlation between adult talk and child speech; critics might instead underscore our finding that socioeconomic status was not significant in our main analyses nor in every other reanalysis we attempted.”
More studies are needed to figure out which interpretation is correct.
The results of the study cannot reveal why child speech and adult talk are so intertwined, but there are a few possible explanations. Children who produce more speech may elicit more talk from adults, or it may be that more talkative environments promote more speech among children.
The latter scenario would suggest that getting adults to talk more around their children may prove beneficial.
Bergelson’s research spans 12 countries and 43 languages, and incorporates both regional and urban areas.
The audio for the study was collected via wearable recorders placed on children at various stages of development and ability, aged two months to four years.
Using machine learning to sift through more than 40,000 hours of recordings, Bergelson and her colleagues tested a variety of factors that might influence how often and how early a child babbles or says syllables, words, or sentences.
Ultimately, the team found no compelling association between how many vocalizations a child made during the day and their gender or socioeconomic status (as determined by their mother's education level).
Instead, the results reveal kids under four produce around 66 more vocalizations per hour with every year of development.
That’s to be expected, as age is closely tied to cognitive development, but another factor showed a surprisingly strong effect: kids who heard adults talking more tended to talk more themselves.
On average, for every 100 adult vocalizations heard by a child within an hour, that kid produced 27 more vocalizations.
This ‘adult talk’ effect grew by 16 vocalizations with every year of child development.
To put the significance of that effect in perspective, children in the study who showed non-normative language development, like dyslexia, produced 20 fewer vocalizations per hour compared to their peers. Every year, that gap widened by about 8 vocalizations an hour.
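Taken together, the per-hour figures quoted above can be read as a simple linear model. The sketch below is only an illustrative back-of-the-envelope calculation, not the study's actual statistical model: the `BASELINE` intercept is a made-up placeholder, and it assumes the growing adult-talk effect applies per 100 adult vocalizations heard.

```python
# Illustrative sketch of the reported effect sizes.
# NOT the study's actual model; coefficients are the per-hour figures
# quoted in the article, and BASELINE is a hypothetical placeholder.

BASELINE = 100  # hypothetical vocalizations/hour, assumed for illustration


def predicted_vocalizations(age_years, adult_vocs_per_hour, atypical=False):
    """Rough per-hour estimate of child vocalizations from the quoted effects."""
    total = BASELINE
    total += 66 * age_years  # ~66 more vocalizations/hour per year of development
    # ~27 more per 100 adult vocalizations heard, growing by ~16 per year of age
    total += (27 + 16 * age_years) * adult_vocs_per_hour / 100
    if atypical:  # non-normative language development
        total -= 20 + 8 * age_years  # ~20 fewer/hour, gap widening ~8 per year
    return total


# Example: a one-year-old hearing 100 adult vocalizations per hour
print(predicted_vocalizations(1, 100))  # 100 + 66 + 27 + 16 = 209.0
```

With those numbers, the adult-talk contribution for a one-year-old (43 vocalizations per hour at 100 adult vocalizations heard) is comparable in size to a full year of development, which is why the researchers single it out.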
The current study only counted the sounds children made during the day; it did not examine the sophistication of the language they used.
As a result, socioeconomic or gender factors may still impact some elements of language development that are overlooked in this study.
Kids who are raised in higher socioeconomic households, for instance, could have parents who read more to them, possibly boosting their vocabulary or grammar development.
Bergelson acknowledges that her recent study took a “coarse-grained” approach that may miss some finer details.
“It’s the algorithm’s estimate of how much speech the kid is hearing or producing,” she told Christy DeSmith of the Harvard Gazette.
“But I think it’s a complementary approach to what otherwise is very, very time-consuming and sample-limiting work.”
The study was published in PNAS.