When Gerald McRoberts launches into his best rendition of baby talk, it's clear he's had a lot of practice.
"Hi bay-bee. How're ya doin'? Such a good bay-bee," he says, drawing out the vowels, his voice becoming lilting as a melody and containing enough ups and downs in pitch to make an opera singer envious.
An assistant professor of psychology at Lehigh University in Bethlehem, McRoberts has only one child, but he has spent years listening to other parents talk to their babies in the name of research.
The question he's
trying to answer: How do babies, those wondrous little creatures of genes and love, go from "ga ga" to talking a blue - or pink - streak in little more than a year?
"We're trying to get a window on those earliest moments of language development," says the researcher, who, with his
student assistants, now has about a half-dozen studies under way or in the planning.McRoberts
, whose lab recruits Lehigh Valley babies from newspaper birth announcements, says his
field of interest is almost as tiny as the subjects of the studies - there are only a handful of major researchers now working in the United States or abroad.
But interest in their work is growing.
McRoberts' current studies are being funded by grants from the prestigious National Institutes of Health. And he's recently fielded calls from journalists from as far away as South Africa, after a study he co-authored before coming to Lehigh suggested that moms and dads talk to their babies differently.
Published in The New Scientist, the study detailed the development of a computer program that can identify baby talk by analyzing its sound patterns.
Called BabyEars, the program can successfully distinguish baby talk - what researchers called "infant-directed speech" - from other speech 80 percent of the time.
BabyEars also is remarkably adept at determining the emotional content of baby talk, being able to tell 65 to 75 percent of the time whether the speakers were showing approval, trying to get a baby's attention or trying to prohibit a baby from doing something.
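The New Scientist account describes BabyEars only in broad strokes, and the article doesn't say which sound features the program actually analyzed. Purely as an illustration of the general idea, classifying speech by summarizing its pitch contour, here is a minimal Python sketch; the feature choices, thresholds and function names are hypothetical, not taken from the study.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=75.0, fmax=600.0):
    # Rough per-frame pitch estimate via autocorrelation; a stand-in
    # for whatever pitch tracker the real BabyEars system used.
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    if hi >= len(corr):
        return 0.0
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag if corr[lag] > 0 else 0.0

def prosody_features(signal, sr, frame_ms=40):
    # Summarize the pitch contour: its average height and its range,
    # the exaggerated "ups and downs" that make baby talk sound melodic.
    n = int(sr * frame_ms / 1000)
    track = [estimate_pitch(signal[i:i + n], sr)
             for i in range(0, len(signal) - n, n)]
    voiced = np.array([p for p in track if p > 0])
    if voiced.size == 0:
        return 0.0, 0.0
    return float(voiced.mean()), float(voiced.max() - voiced.min())

def looks_like_baby_talk(signal, sr, mean_thresh=250.0, range_thresh=200.0):
    # Toy rule: a high average pitch plus a wide pitch range suggests
    # infant-directed speech. The thresholds are illustrative guesses,
    # not values reported for BabyEars.
    mean_f0, f0_range = prosody_features(signal, sr)
    return mean_f0 > mean_thresh and f0_range > range_thresh
```

The real system reportedly went further, sorting utterances into approval, attention-getting and prohibition; doing that would call for a trained classifier over such features rather than fixed thresholds.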
But because the program wasn't as good at detecting the emotional content of male utterances as female ones, McRoberts
and co-investigator Malcolm Slaney concluded that moms make clearer distinctions in the way they convey emotions in their baby talk than dads - at least in the artificial setting of the researchers' lab.
"For some reason, people glommed onto that finding," says McRoberts
, noting that news of the study was picked up by National Public Radio, Reuters news service and the BBC
quickly notes that the researchers didn't study whether babies noticed the male-female differences, and adds that the researchers weren't even studying the characteristics of baby talk when they did their work.
Instead, they were using baby talk to see if a computer could be taught to respond to people's emotions as well as their words. Because parents convey emotion in their baby talk without even thinking about it, the researchers thought its sound patterns would be a good model for other emotion-laden speech.
"The big picture was that our project was part of a bigger project to develop a robot that would respond to the tone of people's voices," explains McRoberts, who was a post-doctoral researcher at Stanford University when the study was done.
"It was envisioned as a kind of puppy dog that would respond not only to commands, such as 'Come' or 'Sit' or 'Stop' or 'Go,' but would also respond appropriately if you spoke in a pleasing way or if you spoke in a harsh or accusing way."Sponsored by Interval Research Corp. of Palo Alto, Calif., a company started by Microsoft co-founder Paul Allen to explore cutting-edge information technology, the robot project never came to fruition, and Interval no longer exists, McRoberts says.
But babies clearly love it when people speak baby talk to them, and McRoberts began to wonder why. He also began to wonder how and when babies go from paying attention to the emotional messages conveyed by the sound patterns of baby talk to paying attention to language's verbal content.
On those questions, research at Lehigh
is beginning to shed some light.
In one round of studies, Lehigh researchers are examining how babies react to repetition of words and phrases.
They're finding that before babies are 5 to 6 months old, they don't seem to notice repetition. But once they reach that age, they listen longer to repetitive baby talk, especially that with immediate, word-for-word repetition.
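The article doesn't describe how the Lehigh researchers code their recordings, but "immediate, word-for-word repetition" is easy to make concrete. Here is a minimal sketch, assuming the speech has already been transcribed into a list of utterances; the helper below is hypothetical, not the lab's actual software.

```python
def immediate_repetitions(utterances):
    # Count cases where an utterance exactly repeats, word for word,
    # the utterance immediately before it.
    def words(u):
        return u.lower().strip(" .,!?").split()
    return sum(1 for prev, cur in zip(utterances, utterances[1:])
               if words(prev) == words(cur) and words(cur))

# Two immediate repetitions in this four-utterance stretch.
speech = ["Hi bay-bee!", "Hi bay-bee!",
          "Such a good bay-bee.", "Such a good bay-bee."]
print(immediate_repetitions(speech))  # 2
```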
That's about the same time that babies start to string two or more syllables together, "a big milestone in language development," McRoberts says.
The finding suggests that development in the brain may play a role in the emergence of more sophisticated speech.
The researchers are now trying to find out whether the brain's left hemisphere, which handles language for most people, starts to take over even earlier.
Still another Lehigh study has examined whether babies prefer adult baby talk or baby talk from siblings. "Interestingly, babies with siblings pay attention longer to siblings. Babies without them don't seem to care," McRoberts says.
Recently awarded a three-year, $100,000 NIH grant for upcoming work, McRoberts
, 51, says the agency is interested in the studies because it has only recently been possible to study babies' development of language before they learn to talk.
A tiny video camera and a computer that allow researchers to monitor babies' reactions to sounds from another room are at the heart of his lab's ability to conduct such studies.
McRoberts says sometimes the hardest part of the research is coming up with enough babies at the right ages, inasmuch as parents receive only tokens, such as a book or toy, for participating. But many of those who agree to participate return time and again, and even enroll their second children.