Musician Performs Emotional “Quarantine Concert” from Truck Bed

California singer-songwriter surprised neighbors, family with performance

By Jonny Lupsha, Wondrium Staff Writer

A California musician played a “drive-by show” for his quarantined neighbors, Good News Network reported. Singer-songwriter Tanner Howe played guitar and sang from the bed of his truck as onlookers enjoyed his performance from their homes and lawns. Why does music affect us emotionally?

Music is widely known for its universal appeal and for its power to change our mood or remind us of past emotions. Photo by Photosite / Shutterstock

Coronavirus lockdown measures haven’t defeated the spirit of music. Huntington Beach musician Tanner Howe surprised friends and family with a one-man concert performed from the bed of his truck earlier this month.

“Since Howe is a singer-songwriter from Huntington Beach, California, he and his family put together a list of songs, decorated the truck, and brought cameras to record the reactions,” the Good News Network article said. “Initially, they were only planning to visit their grandparents in Long Beach and three other family members and friends in the Orange County area—but then they decided to stop and play for their neighbors along the way, hoping that it would brighten their day in self-isolation. Needless to say, it surely did, and the results are incredibly heartwarming.”

The emotion expressed in music has a wide range of psychological effects on the brain, effects concrete enough to have prompted considerable scientific research on the subject.

Music and Language Processing

One of the most compelling theories about how the brain processes and interprets music holds that some of the brain mechanisms involved are also used to process everyday spoken language. Since both use our auditory receptors for communication, there must be some degree of overlap, but the question is how much.

“One view is that the sharing is minimal and just reflects the fact that they share a common subcortical pathway for basic sound analysis; that’s the set of auditory processing structures between the ear and the cerebral cortex,” said Dr. Aniruddh D. Patel, Professor of Psychology at Tufts University. “We know that the brain uses the same circuits in these brainstem and midbrain areas to analyze basic acoustic features of speech, music, and any other type of sound, such as environmental sounds. But it’s possible that in the cerebral cortex, where more complex cognitive processing takes place, there could be minimal overlap in the circuits that process music and language.”

This first view of music and language processing has intuitive appeal, since music and language are so different. Language conveys far more specific ideas with words than music does with notes, as anyone who’s ever read a recipe or driving directions knows. However, the other school of thought also has plenty of evidence to support it.

Emotion: The Case for Music and Language Overlap

Despite the differences between music and language, both share the very important trait of using auditory signals to express emotion.

“When we speak, we don’t just convey words and phrases; we convey attitudes and emotions by the way we say those words and phrases,” Dr. Patel said. “The pace and loudness of our voice, the way pitch moves up and down, the rhythm of our syllables, and the way we articulate our speech sounds all work together to express an emotional tone. These elements of language are called speech prosody.”

We can often tell how someone feels simply by listening to their speech prosody. Even when people try to hide their emotions, those emotions often shine through in their tone of voice and rate of speech. Dr. Patel said that speech prosody is believed to have its roots in ancient emotional circuitry. This is evidenced by the fact that we can often interpret emotions across foreign languages: despite the language barrier, we can tell when someone speaking another language sounds happy, angry, or annoyed, for example.

“Researchers have done detailed sound analysis of voices expressing different basic emotions, like happiness or sadness, and have found some consistent acoustic cues that distinguish different emotions,” Dr. Patel said. “Happy-sounding speech tends to be relatively fast, with medium-to-high loudness, has a high average pitch, and wide pitch range, and a brighter sound, and a crisp articulation, and emphasizes upward pitch movements. Sad-sounding speech is slower and quieter, lower in average pitch, with a narrow pitch range, and a darker sound quality, and duller articulation, and emphasizes downward pitch movements.”
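For readers who want to see these cues made concrete, here is a minimal Python sketch using the open-source librosa audio library. It estimates the pace, loudness, average pitch, and pitch range of a recording; the file name “speech_clip.wav” is a placeholder, and the measurements simply illustrate the cues Dr. Patel describes rather than constitute a validated emotion classifier.

import numpy as np
import librosa

# Load a recording of a voice (file name is a placeholder).
y, sr = librosa.load("speech_clip.wav")

# Pace: estimate an overall tempo from onset strength.
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
tempo = float(np.atleast_1d(tempo)[0])  # scalar across librosa versions

# Loudness: average root-mean-square energy over the clip.
loudness = float(np.mean(librosa.feature.rms(y=y)))

# Pitch: track the fundamental frequency with the pYIN algorithm.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0 = f0[voiced & ~np.isnan(f0)]   # keep only voiced, valid frames
mean_pitch = float(np.mean(f0))   # "high vs. low average pitch" cue
pitch_range = float(np.ptp(f0))   # "wide vs. narrow pitch range" cue

print(f"tempo ~{tempo:.0f} BPM, loudness {loudness:.3f} RMS, "
      f"mean pitch {mean_pitch:.0f} Hz, pitch range {pitch_range:.0f} Hz")

A faster tempo, greater loudness, higher mean pitch, and wider pitch range would point toward the “happy” acoustic profile described above; the reverse pattern points toward the “sad” one.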

These auditory qualities are used the same way in music to express the same emotions. Bright, fast, high-pitched tones tend to sound happier, whereas dark, slow, narrow-ranged tones exude melancholy. This emotional congruence between speech and music is key to understanding the theory that both are processed similarly in the brain.

Dr. Aniruddh D. Patel contributed to this article. Dr. Patel is a Professor of Psychology at Tufts University. He received his Ph.D. in Organismic and Evolutionary Biology from Harvard University, where he studied with Edward O. Wilson and Evan Balaban. His research focuses on the cognitive neuroscience of music.