As Tom Selleck’s character in Three Men and a Baby pointed out (albeit in regard to a different emotional state): it’s not what you say, it’s the tone of voice you use.
In his case, he was reading the sports news to an infant, keeping the child docile with subdued tones.
The fun part is reversing the process. How about The Owl and the Pussycat delivered in face-ripping death/crust metal growler tones, for a deep impact on childhood development?
Even if they don’t understand the words, infants react to the way their mother speaks and the emotions conveyed through her speech. Exactly what they react to, and how, has yet to be fully deciphered, but it could have a significant impact on a child’s development. Researchers in acoustics and psychology have teamed up to better define and study this impact.
Peter Moriarty, a graduate researcher at Pennsylvania State University, will present the results of these studies, conducted with Michelle Vigeant, professor of acoustics and architectural engineering, and Pamela Cole, professor of psychology, at the Acoustical Society of America and Acoustical Society of Japan joint meeting held Nov. 28-Dec. 2 in Honolulu, Hawaii.
The team used functional magnetic resonance imaging (fMRI) to capture real-time information about the brain activity of children while they listened to samples of their mothers’ voices with different affects — or non-verbal emotional cues. Acoustic analysis of the voice samples was performed in conjunction with the fMRI data to correlate brain activity with quantifiable acoustical characteristics.
“We’re using acoustic analysis and fMRI to look at the interaction and specifically how the child’s brain responds to specific acoustic cues in their mother’s speech,” Moriarty said. Children in the study heard 15-second voice samples of the same words or sentences, each conveying anger, happiness, or a neutral affect for control purposes. The emotional affects were defined and predicted quantitatively by a set of acoustic parameters.
“Most of these acoustic parameters are fairly well established,” Moriarty said. “We’re talking about things like the pitch of speech as a function of time … They have been used in hundreds of studies.” In a more general sense, they are looking at what’s called prosody, or the intonations of voice.
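The article mentions pitch as a function of time as one well-established acoustic parameter. As a rough illustration of what such a feature looks like computationally (not the researchers’ actual pipeline, whose details are not given), here is a minimal frame-by-frame pitch tracker using autocorrelation, a standard textbook method:

```python
import numpy as np

def estimate_pitch_track(signal, sr, frame_len=1024, hop=512,
                         fmin=75.0, fmax=400.0):
    """Estimate pitch (f0) per frame via autocorrelation.

    A simplified stand-in for the kind of pitch-over-time
    parameter described in the article; search is limited to
    a typical adult speech range (fmin..fmax Hz).
    """
    lag_min = int(sr / fmax)          # shortest lag to search
    lag_max = int(sr / fmin)          # longest lag to search
    pitches = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        # One-sided autocorrelation of the frame
        ac = np.correlate(frame, frame, mode='full')[frame_len - 1:]
        if ac[0] <= 0:
            pitches.append(0.0)       # silent frame, no pitch
            continue
        # The lag of the strongest peak corresponds to the period
        peak_lag = lag_min + np.argmax(ac[lag_min:lag_max])
        pitches.append(sr / peak_lag)
    return np.array(pitches)

# Synthetic 220 Hz tone as a stand-in for a voiced speech sample
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
track = estimate_pitch_track(tone, sr)
```

For real speech one would typically use a dedicated tool (e.g. Praat) rather than this sketch, but the output — a pitch contour over time — is the same kind of object the prosody analysis operates on.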
However, there are many acoustic parameters relevant to speech. Understanding patterns within various sets of these parameters, and how they relate to emotion and emotional processing, is far from straightforward.
“You can’t just talk to Siri [referring to Apple’s virtual assistant] and Siri knows that you’re angry or not. There’s a very complicated model that you have to produce in order to make these judgements,” Moriarty explained. “The problem is that there’s a very complicated interaction between these acoustic parameters and the type of emotion … and the negativity or positivity we’d associate with some of these emotions.”
This work is a pilot study done as an early stage of a larger project called The Processing of the Emotional Environment Project (PEEP). In this early stage, the team is looking for the best set of variables to predict these emotions, as well as the effects these emotions have on processes in the brain. “[We want] an acoustic number or numbers doing a good job at predicting that we’re saying, ‘yes, we can say quantitatively that this was angry or this was happy,'” Vigeant said.
In the work to be presented, the team has demonstrated the importance of looking at lower frequency characteristics in voice spectra; the patterns that appear over many seconds of speech or the voice sample as a whole. These patterns, they report, may play a significant role in understanding the resulting brain activity and differentiating the information relevant to emotional processing.
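One simple way to expose the kind of low-frequency, long-window spectral structure described above is a long-term average spectrum, which averages short-frame spectra over the whole utterance rather than inspecting a single instant. The team’s actual feature set is not detailed in the article; this is only an illustrative sketch:

```python
import numpy as np

def long_term_average_spectrum(signal, sr, frame_len=4096, hop=2048):
    """Average magnitude spectrum over a whole voice sample.

    Illustrates analyzing spectral structure across many seconds
    of speech at once, rather than frame-by-frame; not the
    researchers' actual method, which the article does not specify.
    """
    window = np.hanning(frame_len)
    spectra = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window
        spectra.append(np.abs(np.fft.rfft(frame)))
    ltas = np.mean(spectra, axis=0)                 # average over frames
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)    # bin frequencies in Hz
    return freqs, ltas

# Two seconds of a synthetic 150 Hz tone standing in for speech
sr = 16000
t = np.arange(2 * sr) / sr
sample = np.sin(2 * np.pi * 150.0 * t)
freqs, ltas = long_term_average_spectrum(sample, sr)
```

Averaging over the whole sample smooths out moment-to-moment variation, leaving the stable low-frequency characteristics that the team reports may matter for differentiating emotional content.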
With effective predictors and fMRI analysis of effects on the brain, the ultimate goal of PEEP is to learn how a toddler who has not yet developed language processes emotion through prosody, and how the environment affects their development. “A long term goal is really to understand prosodic processing, because that is what young children are responding to before they can actually process and integrate the verbal content,” Cole said.
Toddlers, however, are somewhat harder to image in an fMRI scanner, as it requires them to remain mostly motionless for long periods of time. So for now, the team is studying older children, aged 6 to 10, though wriggling remains a challenge.
“We’re essentially trying to validate this type of procedure and look at whether or not we’re able to get meaningful results out of studying children that are so young. This really hasn’t been done at this age group in the past and that’s largely due to the difficulty of having children remain somewhat immobile in the scanner.”
Source: Eurekalert/Acoustical Society of America
Image: Pixabay/Geralt