
Can I Touch Your Face?

“Stepbrothers” (click the image to play the video)

Imagine feeling around in your kitchen’s miscellaneous junk drawer in the dark—among rubber bands, lighters, pencil sharpeners, and notepads—for a ballpoint pen. Not a pencil, and certainly not a highlighter. But that specific shape of pen. You know what a pen feels like, having felt them and seen them many times before, so the dark gives you no issue and you pull out exactly what you are looking for.

Our bodies have many ways of interacting with our surroundings and the objects in them. All of the senses work together to interpret what an object is based on its size, weight, texture, color, even smell. Sometimes these senses are isolated, so we rely solely on sight or exclusively on touch, which seem like very different methods. Having great visual interpretation, like a keen eye for painting styles, does not necessarily seem to make you better at identifying a sculptor’s work by touching and feeling the art. (Make sure to wait until the docent has their back turned!) But much like training your body to run faster can help you swim better, training one sense could improve another.

Wallraven et al. studied shape identification based on both visual perception and haptic interaction, in order to better understand whether our brain separates these two kinds of shape identification or whether ability crosses over between sight and touch. Visual perception of shapes means seeing an object and determining what it is purely from your view of it. Haptic interaction uses touch to explore an object, like typing on different computer keyboards.

Subjects in the experiment were trained to recognize two different 3-D printed objects that were smoothly contoured but distinct from one another. Participants categorized objects as best they could based on shape: some were more similar to the first shape (Shape A), others to the second (Shape B). The objects they experienced varied; some were just like Shape A or B, while others shared features of both, lying between the two ends of a shape spectrum. As a benchmark, all participants were tested first. They had to categorize objects purely by sight, without being allowed to touch them, and then purely by feel, as if they were wearing a blindfold. Afterward, half the subjects were trained to be very good at visual categorization; they could consistently identify an object as more similar to A or B just by viewing it. The other subjects were similarly trained in the haptic, ‘feeling’ manner. All participants explored the objects from a series of angles, so they did not develop just a single ‘template’ of the object: a representation of only one view or angle.

When tested after training, there was a great deal of cross-over in the participants’ abilities! People who improved visually were also better with their hands, and vice versa.

So improving identification based on vision improves identification based on touch. I would imagine that if the average person wore a blindfold and was handed an apple in one hand and an orange in the other, (s)he could distinguish between the two. These are two fruits that most people living in a temperate climate know very well, but what about exotic fruits that have only been seen in pictures? The study by Wallraven et al. suggests that a person who has never been to the Asian tropics and eaten dragonfruit or lychee could, when blindfolded, figure out which one (s)he was holding, just based on visual experience with such exotic fruits.

Does this mean that we could all get to know our neighbors a little better by following suit of the blind neighbor in “Stepbrothers” and touching each other’s faces? (See video above.) Imagine meeting someone and asking to feel their face so you could recognize them in passing, or spot them in a Facebook picture. Not only would the social repercussions (and potential germ-related health problems) outweigh the benefit, but another study by Wallraven, on the connection between sight and touch of faces, suggests that we interpret faces differently when they are seen versus felt.

Humans have a special, very powerful way of recognizing faces. We process faces holistically, meaning we see them as a whole entity, not feature by feature. We recognize each other based on how a nose fits with a toothy grin outlined by smile lines. Holistic processing is reserved for highly practiced, very proficient tasks, and face recognition is one we perform constantly. The study shows that when we use a less practiced method of facial interaction, touch, we have a much harder time identifying differences unless we use a powerful feature, like a nose, as a distinguishing factor. So when we physically explore faces (how often do we really do that, though?) we comprehend them feature by feature: a nose here and eyebrows there.

Some of the other most telltale signs of who a person is are their hair, skin type, and skin tone. A serious haircut can completely change the way a person looks. In this study, factors like hair and skin color were omitted, since the faces were 3-D printed in plastic. Unlike the previous study, where objects were shown to participants from all different angles, only the front of each face was presented, not the entire head. This could also have contributed to weaker recognition of the faces, because a ‘template’ model of the original benchmark face was used. With a template, the mind simply tries to match features as closely as possible, so a truer, deeper interpretation is not performed.

Hair made a big difference for this guy

 

One factor that could be identified through touch was age. In Wallraven’s study on the processing of faces, age was consistently recognized with merely a touch, thanks to the increased texture of older faces. He writes, “Nevertheless, it seems as if in this case age (which was apparent mainly through wrinkles on the face) could reliably also be extracted” (Wallraven 2014). While subjects could easily tell whether the face they touched was older or younger, I am curious about discrimination between different faces within the same age range. A paper by Harrison and Hole (2009) describes humans’ learned bias toward recognizing and distinguishing the faces of others who are of similar age (partially due to constant contact). I wonder if, using touch, we could avoid the own-age bias and more easily tell people apart. Given the weaker nature of haptic identification, I imagine it would in fact be harder to tell one old face from another, using only what our fingertips can feel as a guide.

While I do not suggest feeling every friend’s face or touching every object in sight, there is merit to tactile exploration of new objects. Getting to know an object through many senses, especially sight and touch, can familiarize the object and aid future recognition and use. If you have new lab instruments or tools to use, looking at images before getting your hands on them could help identification and use proficiency. Make like a baby and pick up and explore your surroundings! Tactile exploration can help visual identification, and vice versa, but stick to unfamiliar objects, because it will not help as much with your friend’s face.

References:

Wallraven, Christian, Heinrich Bülthoff, Steffen Waterkamp, Loes Van Dam, and Nina Gaibert. “The Eyes Grasp, the Hands See: Metric Category Knowledge Transfers between Vision and Touch.” Springer Link (2013). Springer. Web. 23 Nov. 2014. <http://link.springer.com/article/10.3758/s13423-013-0563-4/fulltext.html>.

Wallraven, Christian. “Touching on Face Space: Comparing Visual and Haptic Processing of Face Shapes.” Springer Link (2014). Springer. Web. 23 Nov. 2014. <http://link.springer.com/article/10.3758/s13423-013-0577-y/fulltext.html>.

Images:

http://24.media.tumblr.com/eb843c799502fd0235accc3efe4f3bd2/tumblr_mh0aj3T6Yn1qg5xklo1_r1_1280.gif

http://slickmen.com/wp-content/uploads/2013/04/Health-Benefits-Of-Lychee-.jpeg

http://i.imgur.com/wQw1c3k.jpg

  1. November 28th, 2014 at 21:37 | #1

    We have much more experience using visual memory than haptic memory, don’t we? If nothing else, our visual memory is much more detailed and precise. The group that felt the objects while blindfolded would still be imagining the shape of the object visually, and those who had initially only seen the object would be imagining it when they felt it. It wouldn’t have to be super precise; only salient features would have to be noticed to make a distinction. Every participant was given a chance to both see and feel the object before testing, so any important haptic or visual features would be noted. There are plenty of reasons to expect the two to be linked and for one to improve the other. Feeling an object would add more detail to our visual mental image, and seeing an object would give us a more detailed visual mental image for when we feel the object. I’d love to see what would happen if the participants were tested without prior experience. How well would they do discriminating shapes A and B if they had never seen them, and how well would they do if they saw them for the first time after having only felt them? I’m not sure what the exotic fruits test was trying to prove, because there’s no way you could confuse dragonfruit and lychee if you had seen images; the leaf-things on the dragonfruit would immediately give it away, not to mention the size difference!

  2. December 3rd, 2014 at 23:48 | #2

    I found this post interesting because I have plenty of friends who like to touch my face just to annoy me for a moment. On the other hand, I am that friend who likes to sneak up behind someone to cover his or her eyes (if I can reach) and let them guess; they then touch my hands, hair, and face in order to make a good estimate. Thus, this article shines a new light on these annoying methods, and from experience I would agree that having both the visual and the touch sense of an object or person can increase someone’s capability to recognize it. Furthermore, I wonder how the findings about becoming familiar with an object through many senses (specifically sight and touch) relate to prototypes and other effects associated with analyzing a face holistically. For example, analyzing a face holistically can be related to biases (Harrison & Hole, 2009), in which contact with people of one’s own age or race increases recognition. That study could potentially further explain the findings mentioned here. Moreover, this bias of associating individuals with a category suggests prototypes: some underlying prototype with a general set of characteristics that fits people of the same race or age, or, in terms of this study, objects or fruits. Visual and touch senses add more items in memory associated with the object, again leading to more rapid recognition.

  3. October 19th, 2015 at 15:28 | #3

    I am curious about the method of the first study, where the participants had to categorize based on visual or haptic cues. You mention that the participants had to categorize “as if they were wearing a blindfold.” Well, without an actual blindfold, wouldn’t that be the same as the visual trials? In fact, wouldn’t having both visual and haptic cues enhance their ability to categorize more accurately?

    I feel like these papers are missing a key component of the senses. Based on neurological research on brain plasticity, all of the senses are technically connected. What I mean is that when one sense is “turned down,” as with the man in the video going blind, brain plasticity encourages the other senses, like touch, to become more sensitive. So it would make sense that the man would want to touch the woman’s face: if he previously had a mental representation created by his visual sense, he might well be able to map her face by touch onto that mental representation and “see” her. This also coincides with the way we pattern-recognize faces. Considering that senses can “replace” one another (used loosely), I wonder if the house-versus-face experiment, where participants were asked to identify houses and faces holistically or piece by piece, would carry over for blind people with their enhanced haptic abilities. Would a blind person be able to distinguish between individual features through the analytic system better than a person who wasn’t blind? Would their newly enhanced haptic ability work in the same way as a sighted person’s vision? I would also imagine that the inversion effect would be less dramatic for blind individuals as well!
