There are two main layers of the human skin, each of which performs distinctly different functions. The wafer-thin 0.05 to 1.5 mm epidermis, which varies in thickness according to location on the body, is the outermost visible layer of our skin. It reaches its greatest thickness of 1.5 mm on the soles of the feet and the palms of the hands. The 0.3 mm to 3.0 mm–thick dermis is its larger, deeper counterpart. Like a well-disguised basement-level speakeasy of the Prohibition era, the skin keeps its action out of sight: very little in the world of tactile perception transpires on the surface layer. The “happening place” is the lively second layer, where nearly all of the sensory action occurs. Processing in the dermis is active, not passive.
The ability to interpret a sensation on our skin rests largely on the number of densely packed mechanoreceptors residing in a given area. Sensitivity to pressure varies considerably across the vast exterior of the body. Highly sensitive regions correlate directly with a massive number of receptors compressed into a small geographical area. Over 100 mechanoreceptors per square centimeter are found in the face and fingertips. By contrast, only 10 to 15 detectors lie beneath the same measure of skin on the back, torso, thigh, or calf. More importantly, these sensory disparities are mirrored in the amount of cortical real estate taken up by the neurons representing each of these areas in the somatosensory cortex.
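To make the disparity concrete, here is a minimal sketch in Python comparing how many receptors sit beneath equal patches of fingertip and back skin, using the densities cited above. The patch size and the midpoint value of 12 for the sparse regions are illustrative assumptions, not measured figures.

```python
# Compare mechanoreceptor counts under equal patches of skin, using the
# figures from the text: over 100 per square centimeter in the fingertips
# and face, roughly 10 to 15 in the back, torso, thigh, or calf.

def receptor_count(density_per_cm2: float, area_cm2: float) -> float:
    """Receptors expected under a skin patch of the given area."""
    return density_per_cm2 * area_cm2

# One square centimeter of each region (12 = assumed midpoint of 10-15):
fingertip = receptor_count(100, 1.0)
back = receptor_count(12, 1.0)

print(f"fingertip: {fingertip:.0f}, back: {back:.0f}, "
      f"ratio: {fingertip / back:.1f}x")
```

On these assumed numbers, a fingertip patch carries roughly eight times as many receptors as an equal patch of back skin, which tracks the cortical-representation disparity the paragraph describes.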
The largest receptors are the onion-shaped Pacinian corpuscles, which encode vibration and the changes in pressure signaled by skin indentations. The tiny, egg-shaped Meissner’s corpuscles, about one-tenth the size of their Pacinian cousins, are located in the dermis along the ridges of hairless skin — the soles of our feet and the raised portions of our fingertips. Over 9,000 of these receptors are densely packed into each square inch, where they encode the slightest stimulation and the smallest fluctuation in the skin. Both types of receptors respond instantly when activated, but they adapt quickly to that initial change and cease to fire if the stimulus remains continuous.
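The rapid-adaptation behavior described above (firing at the onset of a change, then falling silent while the stimulus holds steady) can be sketched as a toy model. The function name, threshold, and pressure trace below are illustrative assumptions, not a physiological simulation.

```python
# Toy model of a rapidly adapting mechanoreceptor: it "fires" only when
# the pressure on the skin changes, and goes silent while pressure is held.

def firing(pressure_trace, threshold=0.0):
    """Return True at each step where pressure changed since the last step."""
    fires = []
    prev = pressure_trace[0]
    for p in pressure_trace[1:]:
        fires.append(abs(p - prev) > threshold)
        prev = p
    return fires

# Press down (change), hold steady (no change), then release (change):
trace = [0, 1, 1, 1, 0]
print(firing(trace))  # [True, False, False, True]
```

The receptor responds at the press and at the release but stays quiet during the sustained hold, mirroring the adapt-and-cease behavior of the Pacinian and Meissner receptors.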
Hair connects to touch receptors and plays a central role in information gathering. When a hair in a thin-skinned area is slightly bent or pulled, sensory receptors lodged at the base of each individual hair alert us: an external object may be closing in on us, possibly in attack mode. The hair-trigger response is more than a metaphor; it is a physiological alarm designed to help assure our safety and survival.
Learning is conventionally described as a sophisticated cognitive responsibility that involves the brain, not the skin. Though most noted for its function as the body’s sentry, the skin also serves the process of learning. This multilayered, sensory-rich membrane evolved over the millennia not only to examine objects, maintain our body temperature, and capture valuable data about environmental dangers and opportunities, but also to assist in giving meaning to experiences through neural integration.
Exteroception (perceiving the outside world) is achieved by interpreting incoming sensory information, including tactile sensations derived from identifying features such as contour, size, pattern, and texture, which give an object perceptual constancy. An object’s full identity is then extracted from our memory. Through tactile sensory input, we can perceive the qualia (from the Latin for “qualities”) of an object. It is the qualia that we use to describe the qualitative or subjective features of objects, events, and experiences; they enrich our visualizations, allowing us to “get the picture.” Combined with eyesight, touch informs us of the what and the where of objects within our sight and reach.
Under a research grant from the National Institutes of Health, neuroscientists Antonio and Hanna Damasio at the University of Southern California have identified an area of the brain that sponsors the “mind’s touch.” When shapes are meaningless, we form incomplete perceptions of objects. However, when we have the luxury of combining those features through multiple modalities, we can identify them from a multitude of perspectives, frequently producing an unparalleled dimension of understanding. Suddenly, a picture can emerge, composed by specialized but separate modules that generate a visual image through “brain-sight.”
In the preceding brain-sight activity, no visual information could travel from the retina (at the back of your eyes) to the primary visual cortex (at the back of your brain) with your eyes closed. Yet you could still “see” the object and form a mind’s-eye image through intentional visualization. This demonstrates that “seeing” via the “mind’s touch” activates the same brain areas that would otherwise respond to normal observation. Notably, the brain-sight reproduction of the object was qualitatively better than the “seeing and drawing” or “seeing and tracing” re-creations of precisely the same object.
Perhaps the most amazing aspect of this activity is that the first of the three drawings (the brain-sight, or “sightless,” version) will almost invariably be drawn to scale and in perfect proportion; graph paper is recommended for the activity because its grid makes this easy to verify. This brain-sight experience demonstrates that the traditional view of visual perception as a self-contained sense can no longer be supported.
The somatosensory cortex, where the sense of touch is processed, turns out to be directly connected to the lateral occipital cortex, a brain region central to visual object recognition. Tactile activations in the lateral occipital cortex are essential, rather than tangential, to such recognition: the region can be triggered by touch alone. This multi-modal recognition is what makes brain-sight experiences successful.
Cats, other nocturnal animals, and subterranean mammals (e.g., moles and gophers) rely heavily on the sense of touch when scampering about in the darkness. The keen sense of touch in humans allows us to recognize and identify objects that the visual cortex cannot process when we are walking in near or complete darkness, such as through our own home with the lights out late at night. Damage to the posterior parietal areas of the brain can result in tactile agnosia, the inability to recognize common objects merely by feeling them, even though the individual may have neither memory loss nor trouble recognizing the same object by sight or by the sound it makes. Such sensory deficits are typically restricted to the side of the body contralateral to (opposite) the damaged hemisphere.
For young children who are struggling with simple arithmetic, a similar strategy using a “brain-sight box” can produce remarkable learning advances. Many young learners find arithmetic difficult not because of its mathematical complexity but because they have difficulty holding the concept of number in working memory. Number sense eludes these learners because they cannot maintain visual images of quantities in the mind’s eye; and if children cannot see those precise quantities, they cannot manipulate them.
Working with math manipulatives can sometimes help such children. However, allowing a child to work with manipulatives inside a brain-sight box yields faster and longer-lasting gains in the development of number sense. When children work with manipulatives on a desktop or tabletop, they often base their recall on the visual experience, and making the transition to pencil-and-paper recordings of their thinking can be a broad cognitive leap.