Brain-Sight: Can touch allow us to “see” better than sight?

Which of the following procedures do you think would produce the most accurate representation of an object: tracing the object; looking at the object while drawing it; or, with your eyes closed, touching and feeling the object and then drawing it, without having ever seen it? Most educators and parents would insist that the quality of the three renditions would decline in the order in which they are listed.

When we listen to a song, we hear the melody, the beat, the lyrics, the instruments and the voice that make the music. When we look at the squiggly lines on a piece of paper, the letters form words, the words stretch into sentences, and the sentences make up paragraphs. Once read, each contributes to meaning collectively, not separately.

While it is customary to assert that we see with our eyes, touch with our hands, and hear with our ears, we live in a simultaneous universe where sensory events and their constituent elements have a natural tendency to overlap.

Inside the brain, complex layers of interconnected sensory networks merge seamlessly to produce a single experience. Horizontal lines, vertical lines, color, shape, size, motion, direction and more are fused together, leaving no perceptual holes in an experience. Just as more than 30 separate brain modules participate in constructing a single visual image, when one sensory system is activated, the other senses do not assume the role of uninvolved spectators. Susan Kovalik and Karen Olsen have identified 19 human senses, which often combine to produce a perception. It would have been significantly disadvantageous for our senses to evolve completely disconnected from one another, each standing in a queue to deliver disjointed information to the brain. Instead, multiple inputs from divergent brain circuits are processed to generate a single unified experience. The various elements that make up a perception frequently involve pathways located in multiple brain regions, and the ability to create constructs in the mind’s eye involves far more complex brain-processing than mere vision. Our perceptions are a collaboration of networks widely distributed throughout the brain.

As we navigate our way around planet Earth, roughly 18 square feet of human skin envelops our bodies. It accounts for 12 to 15 percent of the weight of the average adult human body, making it the largest of all bodily organs. The skin is a tight-fitting elastic spacesuit, not only serving 24/7 as a reliable defensive barrier but also doubling as a highly sensitive, information-gathering data recorder.

The protective functions of human skin are inarguable, but does our skin have a more subtle learning purpose? Recent experiments have shown that touch is as important as vision to learning and the subsequent retention of information. Haptics, a relatively new field of study, is revealing how the sense of touch affects the way we interact with the world. It also suggests that if educators engage more of the human senses in their classrooms, students might not only learn faster but also recall information more easily by commingling distinct sensory modalities.

While we are accustomed to saying that we see with our eyes, we actually see with specialized cells in the occipital lobe, located in the posterior region of the brain. As we know, blind individuals can learn to read, walk, talk and recognize objects and people without using the retinal-cortical pathways. Sighted individuals, too, can produce visual images in the brain through the sense of touch, using the mind, rather than the eyes, to visualize.

The lateral occipital cortex and the right lateral fusiform gyrus are known to be crucial in object recognition. However, input from more distant cortical areas, including the sensorimotor cortex and the association cortex, provides additional information for constructing visualizations. New research suggests that the areas of the cerebral cortex activated when we merely look at illustrations or pictures of specific objects are also activated when we touch the same objects. Dr. Harriett Allen at the University of Birmingham has demonstrated that some areas in the lateral occipital cortex formerly thought to process vision alone can be activated by touch alone. There now appear to be multiple brain areas underlying object recognition, so highly interconnected that damage inflicted on one area can compromise the natural ability of the others to recognize objects.

TOUCH
You are sound asleep. An insect whose weight is measured in milligrams traces a gentle but annoying path across your skin. Although precisely measuring the pressure exerted by those miniature footsteps is next to impossible, it is sufficient to trigger a sensory alarm, and you suddenly awaken to find yourself hunting for the potentially dangerous trespasser.

Two layers of unassuming skin, barely a few millimeters thick, rest deceptively above a complex network of sensory detectors whose dial is always set to High Alert. Our fear of disease-carrying creatures is a well-earned aversion sponsored by a lengthy and horrifying history with deadly insects. Could it be that our genetic database maintains memory records of the role played by the tiny flea during the bubonic plague, or of the more contemporary link between mosquitoes and malaria? The senses are on guard even as we slumber.

Exquisitely tailored to fit our body, the skin shrouds the muscles, body tissues, bodily fluids and internal organs, keeping the external world where it belongs—beyond our personal periphery. Our internal systems, muscles, soft tissues and blood are shielded safely from injury and the countless dangers posed by toxic microscopic invaders.

Although the skin has the miraculous power to mend itself, even a tiny intrusion into the two- to three-millimeter-thin covering from the proboscis of a mosquito is reported immediately. Details of incursions across the paper-thin border are instantly transmitted to the executive center in the brain, which coordinates a typically aggressive response to the security breach. Any intrusion warrants conscious attention or a timely defensive response, and sometimes the two occur in reverse order, with our reaction preceding our conscious awareness.

Foreign objects, toxins, air, fluids and living organisms can seldom penetrate the boundary of the human body’s watertight seal. Even when we are unconscious, our sensory systems are seldom completely offline; they remain poised to capture any vital changes around us, even those that seem minor. But what else can be learned by our highly sensitive skin receptors?

The sense of touch is a composite of three sensory qualities: temperature, pain and pressure, which can be experienced individually or in various combinations. Characteristically, the broad swaths of human skin are classified as either hairy or glabrous (hairless), categories best represented by the backs and palms of our hands, respectively. Together, they put us in instantaneous contact with the outer world.

When this skin is pressed, poked, vibrated or stroked, specialized corpuscles respond through the four stages of perception: detection, amplification, discrimination (among several stimuli) and adaptation (the reduction in response to a continuing stimulus; we are consciously aware of our clothes only during the moments we put them on). Over five million touch receptors for light or heavy pressure, warmth or coldness, pain and more cover the body, sending essential information to the brain via a massive sensory expressway. However, the distribution of receptor cells is undemocratically concentrated in the parts of the body most involved in direct tactile perception, which partially explains why hands-on learning is such an effective educational tool. Wherever the hands go, that is where the brain focuses its attention. For decades, these receptor fields were thought to be fixed and unchanging; we now know that cortical representations and sensory projections are rapidly reorganized following injury or surgical alteration to specific areas of the body.
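
A toy model can make the adaptation stage concrete. The sketch below is a minimal illustration of my own, with an arbitrary threshold and gain rather than values drawn from the research described here; it treats a rapidly adapting receptor as something that fires in proportion to changes in pressure and falls silent once the stimulus stays constant, which is why we notice our clothes only in the moments we put them on.

```python
# Toy model of a rapidly adapting touch receptor (illustrative only; the
# threshold and gain values are assumptions, not measurements).

def receptor_response(pressure_trace, threshold=0.1, gain=10.0):
    """Return a firing level for each time step of a pressure trace.

    The receptor fires in proportion to the *change* in pressure
    (detection and amplification) and goes quiet once the pressure
    stops changing (adaptation).
    """
    responses = []
    previous = 0.0
    for pressure in pressure_trace:
        change = abs(pressure - previous)
        # Fire only when the change clears the detection threshold;
        # a constant stimulus produces no new change, hence no firing.
        responses.append(gain * change if change > threshold else 0.0)
        previous = pressure
    return responses


if __name__ == "__main__":
    # A sleeve pressed onto the arm at the third time step and left in place:
    trace = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
    print(receptor_response(trace))  # a burst at the onset, then silence
```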

When it comes to sensory acuity, the hand is to the human sense of touch what the fovea is to our sense of vision. Respectively, both house exceptionally sensitive receptive fields that quickly send the brain a wealth of sensory information with optimum levels of detail and discrimination. The corresponding brain areas for touch and sight dedicate a substantial amount of cortical real estate to each of these senses.

As the hands and fingers move across an object, receptor cells respond to the infinitesimal indentations created on the surface of the skin, giving us priceless data disclosing the shape, texture, hardness and form of that particular object. Interestingly, reading braille does not require abnormally sensitive fingers. On an otherwise completely flat surface, the human fingertip can detect a raised dot 0.04 mm wide and measuring only 0.006 mm high. A typical braille dot is nearly 170 times that height, rendering it an easy read for our fingers.

There are two main layers of human skin, each performing distinctly different functions. The wafer-thin epidermis, 0.05 to 1.5 mm thick depending on its location on the body, is the outermost visible layer of our skin; its greatest thickness, 1.5 mm, is found on the soles of the feet and the palms of the hands. The 0.3 mm to 3.0 mm–thick dermis is its larger, inner-layered counterpart. Very little in the world of tactile perception transpires on the surface layer. Comparable to a well-disguised basement-level speakeasy of the Prohibition era, the lively second layer is the “happening place,” where nearly all of the sensory action occurs. Processing in the dermis is active, not passive.

The ability to interpret a sensation on our skin rests largely on the number of densely packed mechanoreceptors residing in a given area. Sensitivity to pressure varies considerably across the vast exterior of the body: highly sensitive regions correlate directly with a massive number of receptors compressed into a small geographical area. Over 100 mechanoreceptors per square centimeter are found in the face and fingertips; by contrast, only 10 to 15 detectors lie beneath the same measure of skin on the back, torso, thigh or calf. More importantly, these sensory disparities are reflected in the amount of cortical real estate taken up by the neurons representing each of these areas in the somatosensory cortex.
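
To see how such density differences could translate into unequal cortical representation, the sketch below allots hypothetical cortical territory in simple proportion to receptor count. It is only a back-of-the-envelope illustration: the densities echo the figures quoted above (with roughly a dozen receptors assumed for the less sensitive regions), and the proportional rule is a simplification of my own, not an anatomical model.

```python
# Back-of-the-envelope illustration: allotting hypothetical cortical territory
# in proportion to receptor density (a simplification, not an anatomical model).

RECEPTORS_PER_SQ_CM = {
    "fingertips": 100,  # figures quoted in the paragraph above
    "face": 100,
    "back": 12,         # roughly a dozen assumed for the less sensitive regions
    "torso": 12,
    "thigh": 12,
    "calf": 12,
}

def cortical_share(densities):
    """Return each region's hypothetical fraction of cortical territory."""
    total = sum(densities.values())
    return {region: count / total for region, count in densities.items()}

if __name__ == "__main__":
    for region, share in cortical_share(RECEPTORS_PER_SQ_CM).items():
        print(f"{region:>10}: {share:.0%}")
```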

The largest receptors are the onion-shaped Pacinian corpuscles, which encode vibration and changes in pressure indicated by skin indentations. The tiny, egg-shaped Meissner’s corpuscles, about one-tenth the size of their Pacinian cousins, are located in the dermis on the ridges of hairless skin—the soles of our feet and the raised portions of our fingertips. Over 9,000 receptors are densely packed into each square inch, where they encode the slightest stimulation and the smallest fluctuation to the skin. These two types of receptors respond instantly if activated, but adapt quickly to that initial change and cease to fire if the stimulus remains continuous.

Hair connects to touch receptors and plays a central role in information gathering. When hair in a thin-skinned area is slightly bent or pulled, sensory receptors lodged at the base of each individual hair alert us: an external object may be closing in on us, possibly in attack mode. The term hair-trigger response is not merely a metaphor; the response is a physiological alarm designed to ensure our safety and survival.

BRAIN-SIGHT
Learning is conventionally described as a sophisticated cognitive responsibility that involves the brain, not the skin. Though best known for its function as the body’s sentry, the skin also serves the process of learning. This multilayered, sensory-rich membrane evolved over the millennia not only to examine objects, maintain body temperature and capture valuable data about environmental dangers and opportunities, but also to help give meaning to experiences through neural assimilation.

Exteroception (perceiving the outside world) is achieved by interpreting incoming sensory information, including tactile sensations derived from features such as contour, size, pattern and texture, which give an object perceptual constancy. The object’s full identity is then extracted from memory. Through tactile input, we perceive the qualia (from the Latin for “qualities”) of an object, the qualitative or subjective features of objects, events and experiences that enrich our visualizations and allow us to “get the picture.” Combined with eyesight, touch informs us of the what and the where of objects within our sight and reach.

Under a research grant from the National Institutes of Health, neuroscientists Antonio and Hanna Damasio at the University of Southern California have identified an area of the brain that sponsors the “mind’s touch.” When shapes are meaningless, we form incomplete perceptions of objects. However, when we have the luxury of combining an object’s features through multiple modalities, we can identify it from a multitude of perspectives, frequently producing an unparalleled dimension of understanding. Suddenly a picture can emerge, composed by specialized but separate modules that generate a visual image through “brain-sight.”

In the preceding brain-sight activity, with your eyes closed, it was impossible for any visual information to be transmitted from the retina (at the back of your eyes) to the primary visual cortex at the back of your brain. Yet you could still “see” the object and form a mind’s-eye image through intentional visualization. These procedures demonstrate that “seeing” via the “mind’s touch” activates the same brain areas that would otherwise respond to normal observation. Consequently, a qualitatively better reproduction of the object was produced by brain-sight than by the “seeing and drawing” or “seeing and tracing” re-creations of precisely the same object.

Perhaps the most amazing aspect of this activity is that the first of the three drawings (the brain-sight or “sightless” version) will almost invariably be drawn completely to scale and in perfect proportion, which is why graph paper is recommended for the activity. This brain-sight experience demonstrates that our traditional view of the singularity of visual perception can no longer be supported.

The somatosensory cortex, where the sense of touch is processed, is directly connected to the lateral occipital cortex, a brain region central to visual object recognition. Tactile activations in the lateral occipital cortex turn out to be essential, rather than tangential, to visual recognition; the region can be triggered by touch alone. Multi-modal recognition by these brain regions is what makes brain-sight experiences successful.

Cats, nocturnal animals, and subterranean mammals (e.g., moles and gophers) rely heavily on the sense of touch when scampering about in the darkness. The keen sense of touch in humans allows us to recognize and identify objects that cannot be processed by the visual cortex when we are walking in near or complete darkness, such as in our own home with the lights out late at night. Damage to the posterior parietal areas of the brain can result in agnosia, the inability to recognize common objects by merely feeling them, although the individual may have neither memory loss nor trouble recognizing the same object by sight or by the sound it makes. Such sensory deficits are typically restricted to the contralateral side of the body relative to the damaged hemisphere.

For young children who are struggling with simple arithmetic, a similar strategy using a “brain-sight box” can produce remarkable learning advances. Many young learners find arithmetic difficult, not due to the mathematical complexity but because they have difficulty holding the concept of number in working memory. As a result, number sense is elusive to these young learners, since they cannot maintain visual images of the quantities in their mind’s eye. If children cannot see those precise quantities in their mind’s eye, they cannot manipulate them.

Working with math manipulatives can sometimes be helpful for such children. However, allowing a child to work with math manipulatives inside a brain-sight box can yield faster and longer-lasting gains in number sense. When children work with manipulatives on a desktop or tabletop, they often base their recall on the visual experience, and making the transition to pencil-and-paper recordings of their thinking can be a broad cognitive leap.

A parent or teacher can construct a simple brain-sight box for the home or classroom in five to 10 minutes (see illustration TK):

1. Use a cardboard box that is at least 18 x 12 x 6 inches.
2. On the bottom panel of the box (the face that will be toward the child once the box is stood upright), draw a horizontal line midway between its top and bottom edges. In the lower half, cut two holes 4 inches in diameter, large enough for a child to insert both hands.
3. Cut the top of the box off completely.
4. Stand the brain-sight box upright and place the math manipulatives on the inside of the box, where the child cannot see the manipulatives, but you can.
5. You may now pose simple arithmetical problems for the child to demonstrate and verbalize his or her understanding while manipulating the objects without seeing them.
6. Ask the child to show you five objects; then say, “Show me two less than five,” and “Show me three objects plus two objects.” (A short sketch after this list suggests how more prompts of this kind might be generated.)
7. When a child can perform simple arithmetical operations (addition and subtraction) inside the brain-sight box, pose the same problems for him or her on paper. The transition will come surprisingly easily.
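
For adults who want a steady supply of questions in the spirit of step 6, the sketch below is a hypothetical helper of my own devising, not part of the original activity; it simply prints show-me prompts involving quantities small enough to handle inside the box.

```python
import random

# Hypothetical prompt generator for brain-sight box sessions (my own
# illustration; the original activity only lists example prompts).

def brain_sight_prompts(max_quantity=5, rounds=3):
    """Yield simple show-me prompts, keeping every answer at one object or more."""
    for _ in range(rounds):
        a = random.randint(2, max_quantity)
        b = random.randint(1, a - 1)  # b < a, so "b less than a" is never zero
        yield f"Show me {a} objects."
        yield f"Show me {b} less than {a}."
        yield f"Show me {b} objects plus {a - b} objects."


if __name__ == "__main__":
    for prompt in brain_sight_prompts():
        print(prompt)
```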

ORIGINS OF TOUCH: DEVELOPMENTAL NEUROBIOLOGY
Human embryos begin life as a flat disk with three distinct layers of embryonic cells: endoderm, mesoderm and ectoderm. The endoderm, the innermost layer, ultimately produces the lining of the internal organs. The connective tissues, muscles and vascular system are derived from the mesoderm, the middle layer. The genesis of the nervous system (the brain and spinal cord) and of the skin is in the outermost layer, the ectoderm.

Before birth, an extensive web of touch receptors spreads throughout the developing fetal body. The sense of touch begins to emerge at around the eighth week in utero, when sensations to the fetal lips and nose start to register for the first time. The spinothalamic tract, which carries touch and pain signals, is among the sensory systems that develop at the earliest prenatal stages.

All children extract the greatest amount of environmental knowledge by means of firsthand sensory experiences. From the eighth to the 36th month of life, more connections form in the brain (synaptic proliferation) than at any other period in one’s lifetime. The young brain is primed to receive a flurry of input from the senses, which aids perception as the child adapts to the outside world. A caress, a kiss, an affectionate pat on the buttocks or a pinprick in the same region all instantaneously set in motion responses from specialized nerve receptors.

Once they begin to crawl, infants and toddlers embark on daily excursions into their new world, investigating anything they can see, touch, grasp, pick up and taste, usually in that precise order. Object manipulation helps young children make sense of the tangible world and lays the foundations required for success in abstract thinking later. These experiences are critical to concept formation in the mind’s eye—ideas that can be mentally manipulated in multiple contexts, both simple and complex.

Scores of tactile experiences activate the amygdala, which adds an emotional dimension to our tactile memories. Neuroscientists at the University of Southern California have discovered that when you look at an object, your brain not only processes what the object looks like but also remembers what the object feels like to the touch. Feelings such as warmth and safety, or repulsion and fear, become inextricably associated with the words cotton, blanket, thorny, etc., divulging precisely how we “feel” about encountering each of them. Eventually, the entire world becomes represented by our past sensory experiences. Human brains capture and store physical sensations and replay them when prompted by a corresponding visual image.

Our intimate familiarity with a loved one involves his or her appearance, smell, feel, voice and kiss, all of which collectively separate that individual from any other. All significant parties to the sensory experience must be accounted for. With ease we capture the color, shape, smell and texture of a lemon, but when interrupted by the strong scent of an onion we instantly abandon our preconception of the lemon.

Humans have evolved the ability to accommodate a symphony of relevant sensory inputs simultaneously. When we hear a loud noise behind us, we turn to see what caused it, and the visual, auditory and association cortices work together to make sense of the clamor. Vision becomes symbiotic with and additive to hearing, rather than separate from it, appending another dimension to the experience.

Researcher Dr. James Shymansky and his colleagues found that although only about 13 percent of all K-12 students are auditory learners, over 90 percent of American academic instruction is delivered through textbooks, reading materials and lectures, formats relied upon nearly 95 percent of the time. However, most early learning is self-initiated and comes by way of multimodal firsthand explorations, which are the keys to long-term cognitive development. Expanding the number of classroom opportunities where children can exploit the incredible power of their senses will generate deeper learning and higher levels of student achievement.
