Brain-Sight: Can Touch Allow Us to “See” Better Than Sight?

Which of the following procedures do you think would produce the most accurate representation of an object: tracing the object; looking at the object while drawing it; or, with your eyes closed, touching and feeling the object and then drawing it, without ever having seen it? Most educators and parents would insist that the quality of the three renditions would decline in the order listed.

When we listen to a song, we hear the melody, the beat, the lyrics, the instruments, and the voice that make the music. When we look at the squiggly lines on a piece of paper, letters form words, words stretch into sentences, and sentences build paragraphs. Once read, each contributes to meaning collectively, not separately.

While it is customary to assert that we see with our eyes, touch with our hands, and hear with our ears, we live in a simultaneous universe where sensory events and their constituent elements have a natural tendency to overlap.

Inside the brain, complex layers of interconnected sensory networks merge seamlessly to produce a single experience. Horizontal lines, vertical lines, color, shape, size, motion, and direction are fused together, leaving no perceptual holes in an experience. Just as more than 30 separate brain modules participate in constructing a single visual image, when one sensory system is activated, the other senses do not stand by as uninvolved spectators. Susan Kovalik and Karen Olsen have identified 19 human senses, which often combine to produce a single perception. It would have been significantly disadvantageous for our senses to evolve completely disconnected from one another, each standing in a queue to deliver disjointed information to the brain. Instead, multiple inputs from divergent brain circuits are processed into a single unified experience, and the elements that make up a perception frequently travel pathways located in multiple brain regions. Creating constructs in the mind's eye involves far more complex brain processing than mere vision; our perceptions are a collaboration of networks widely distributed throughout the brain.

As we navigate our way around planet Earth, some 18 square feet of skin envelops our bodies. Accounting for 12 to 15 percent of the weight of the average adult, it is the largest of all bodily organs. The skin is a tight-fitting elastic spacesuit, serving 24/7 not only as a reliable defensive barrier but also as a highly sensitive, information-gathering data recorder.

The protective functions of human skin are inarguable, but does our skin have a more subtle purpose in learning? Recent experiments have shown that touch is as important as vision to learning and the subsequent retention of information. Haptics, a relatively new field of study, is revealing how the sense of touch affects the way we interact with the world. It also suggests that if educators engaged more of the human senses in their classrooms, students might not only learn faster but also recall information more easily, by combining sensory modalities that are usually kept separate.

While we are accustomed to saying that we see with our eyes, we actually see with specialized cells in the occipital lobe, located in the posterior region of the brain. As we know, blind individuals can learn to read, walk, talk, and recognize objects and people without using the retinal-cortical pathways. Sighted individuals, likewise, can produce visual images in the brain through the sense of touch, using the mind rather than the eyes to visualize.

The lateral occipital cortex and the right lateral fusiform gyrus are known to be crucial in object recognition. However, input from more distant cortical areas, including the sensorimotor cortex and the association cortex, provides additional information for constructing visualizations. New research suggests that the areas of the cerebral cortex activated when we merely look at illustrations or pictures of specific objects are also activated when we touch those same objects. Dr. Harriett Allen at the University of Birmingham has demonstrated that some areas in the lateral occipital cortex formerly thought to process vision alone can be activated by touch alone. There now appear to be multiple brain areas underlying object recognition, so highly interconnected that damage inflicted on one area can impair the others' natural ability to recognize objects.
