Know Your Brain: Superior Temporal Sulcus — Channeling The “Social Brain”

We like to think our ideas are our own — we pride ourselves on the privacy of our thoughts and dreams, and perhaps a little on our ability to embellish them. We like to think of genius as something that arrives in sudden bursts of inspiration — magic that only we can tap for ourselves — and that all the great thinkers were loners who lived in their own heads. As usual, science has other ideas.

No one is an island — the reason you should never send to know for whom the bell tolls — and nothing reinforces the notion like modern neuroscience. We do not develop in isolation, nor can we thrive in it. The brain is a prime example. As we grow, the neurons throughout our brains forge some 100 trillion connections, or synapses, as we meet and interact with new people and acquire new skills. Your brain is the original social network, its functioning dependent on the community it grows up in. Long before you develop the ability to speak, your brain is working to imitate the sounds it absorbs — with the help of the superior temporal sulcus, situated within the brain’s temporal lobe, a region involved in processing sound, memory, and emotion. We are, by nature, collaborators.

The brain’s “social river,” the superior temporal sulcus (STS) is a long, narrow groove running from front to back along each temporal lobe, roughly above the ears. It collects information from neurons throughout the brain — think of these as small rivulets emptying into it — in a continuous flow of sounds and images. Studies using functional MRI reveal that activity in the STS peaks upon hearing human voices, which it picks out from background noise. It’s why you can tell apart the voices of friends and strangers on your phone or talking just outside your door. It also allows you to distinguish faces from inanimate objects and to piece together the words of a coherent sentence — why you can follow the plot of a novel on your Kindle and recognize spam email better than your inbox filter does. The STS is also used to detect social cues — to interpret and mirror the emotions of those around us — crucial functions that are shaped by our day-to-day interactions with other people.

How much influence do these external forces have in shaping the brain? A 2011 study published in Proceedings of the Royal Society B: Biological Sciences suggested that people with more Facebook friends had more gray matter in certain brain regions than their peers. The researchers could not determine whether sending out a flurry of friend requests bulked up those regions, or whether people with more gray matter to begin with simply tend to seek out more human interaction. A study published that year in Science followed 23 macaques, housed either alone, with a companion, or in groups of three to seven other primates. It found that monkeys living in the larger groups had higher volumes of gray matter in regions for processing social information.

“The superior temporal sulcus, or the amygdala, are implicated in humans and macaques, suggesting that the brain networks involved in processing social information in humans has evolved from a network that was already performing computations related to social cognition in rhesus macaques,” said Jerome Sallet, a University of Oxford researcher who conducted the study. Primates tended to thrive in settings where they came to interact with and rely on one another.

To help with processing faces, the STS relies on two distinct neural patterns that were uncovered by researchers at Ohio State University in 2016. One pattern picks up on details such as brow movement, and the other looks for movement of the lips. Using the functional MRI signals of their test subjects, the researchers were able to develop a machine-learning algorithm that determined what their subjects were looking at based entirely on key muscle movements in the faces shown.

“Humans use a very large number of facial expressions to convey emotion, other nonverbal communication signals and language,” says Aleix Martinez, a cognitive scientist at Ohio State who is also a professor of computer engineering. “Yet, when we see someone make a face, we recognize it instantly, seemingly without conscious awareness. In computational terms, a facial expression can encode information, and we’ve long wondered how the brain is able to decode this information so efficiently.”

Faces can be deceptively complex — we may not appreciate it at the time, but we’re being offered a great deal of information each time we look at one. Not only are they an indicator of that person’s mood, but also of what they are looking at. Other faces also clue us in on how to respond. If a person looks fearful, we may scan for signs of imminent danger and retreat. It’s also the reason we tend to laugh more (and louder) in group settings. We may even look to each other to determine whether laughter is acceptable at a given moment.

So far, Martinez and his team of researchers have been able to build a machine that can predict emotional human responses with an accuracy rate of 60 percent — possible because we all tend to use similar expressions for extreme feelings of joy or sadness. During the functional MRI sessions, 10 college students served as test subjects. They viewed a series of faces showing mixed emotions — a total of 1,000 separate images, labeled “happily surprised,” “angrily surprised,” and “fearfully surprised” — expressions that may share features such as raised eyebrows while differing in other muscles throughout the face. Regardless of the image shown, the subjects all showed increased brain activity in the STS during the exercise. An algorithm that effectively models human facial recognition of emotion may soon be on the way.
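For readers curious about what “predicting what a subject is looking at from brain signals” means computationally, here is a loose sketch — not the Ohio State team’s actual method, and using entirely made-up data rather than real fMRI measurements. It trains a simple nearest-centroid classifier: each expression category gets an average “response pattern,” and a new observation is assigned to whichever average it sits closest to.

```python
import random

random.seed(0)

# Hypothetical data: three expression categories, each with a characteristic
# pattern across 5 synthetic "muscle movement" features. These numbers are
# invented for illustration; real fMRI decoders work on thousands of voxels.
CENTERS = {
    "happily surprised":   [1.0, 0.2, 0.9, 0.1, 0.5],
    "angrily surprised":   [0.1, 1.0, 0.2, 0.8, 0.4],
    "fearfully surprised": [0.5, 0.4, 0.1, 0.2, 1.0],
}

def sample(label, noise=0.15):
    """One noisy observation of the pattern for a given expression."""
    return [v + random.gauss(0, noise) for v in CENTERS[label]]

def train(examples):
    """Nearest-centroid 'decoder': average the training examples per label."""
    sums, counts = {}, {}
    for label, x in examples:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, x):
    """Assign the label whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], x)))

train_set = [(lbl, sample(lbl)) for lbl in CENTERS for _ in range(40)]
test_set = [(lbl, sample(lbl)) for lbl in CENTERS for _ in range(20)]

centroids = train(train_set)
correct = sum(predict(centroids, x) == lbl for lbl, x in test_set)
accuracy = correct / len(test_set)
```

On this toy data the decoder scores nearly perfectly, because the invented patterns are well separated; real neural data is far noisier, which is one reason an accuracy of 60 percent on genuine fMRI signals is a notable result.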

The study consisted of participants with typical neural functioning, but the study’s co-author, Julie Golomb, director of the Vision and Cognitive Neuroscience Lab at Ohio State, believes that their data may be helpful when it comes to treating disorders, particularly autism spectrum disorders, in which patients have difficulty processing emotions. “This work could have a variety of applications, helping us not only understand how the brain processes facial expressions, but ultimately how this process may differ in people with autism, for example,” she said.

What remains to be seen is whether the STS is made up of a series of fine-grained patches that each carry out their own functions, or whether the length of this so-called social river links all of its operations in one continuous sequence. The upper bank of the STS is more sensitive to voices than to other sounds, though this sensitivity is often diminished in people on the autism spectrum. Specialized patches of cortex within the STS are what allow us to see where another person is looking, to share in their gaze. Other, less pronounced areas of the STS recognize physical gestures and overlap with areas that process words — perhaps one reason we can adapt quickly to sign language. The posterior STS, a cluster at the back of the groove, tends to be less developed in people on the autism spectrum.

“There are several socially relevant functions that we know are occurring there that appear to be different in autism,” said Kami Koldewyn, a lecturer in psychology at Bangor University in Gwynedd, Wales. As small as the STS may be, unstable connections here can disrupt connectivity in the rest of the brain. Over time, the paths of brain signals become like a tangle of stored-away Christmas lights, with signals crossing in the wrong places. Fortunately, the tangle is not impossible to undo, and neuroscience is already looking at new solutions.

The hormone oxytocin, released by the posterior pituitary, has shown some effectiveness in treating certain cases of autism. It’s the same chemical released into the bloodstream during social bonding, and it increases activity in the STS. Neurofeedback training — in which patients use video images to learn to control their own brain waves — may be another option, although it remains controversial. Now researchers are looking at targeting the STS with transcranial magnetic stimulation, in which magnetic coils placed near the head produce small electrical currents that stimulate the brain. The coils have already shown some success in helping people who report neuropathic pain from spinal injuries and multiple sclerosis, and may offer some promise for the future.
