
The powerful pull of the face: how human faces capture and hold our attention

“It is not the strongest of the species that survives, nor the most intelligent… It is the one that is most adaptable to change. In the struggle for survival, the fittest win out at the expense of their rivals because they succeed in adapting themselves best to their environment.” Sound at all familiar to you? It might if you have ever heard of a man named Charles Darwin. (High-speed sixth-grade science class recap: Darwin was that brilliant, illustrious naturalist credited with the theory of “survival of the fittest”—the natural selection tagline, if you will.) In 21st-century vernacular, one might say that he was “kind of a big deal.”

[Image: Charles Darwin]

Thus we as a human species adapt over time, leaving behind the traits that aren’t so useful and taking with us into the biological future those that are most advantageous to our survival. So what are some of these mechanisms most crucial to our survival? And, moreover, how do they relate to the field of cognitive psychology?

One of the innumerable fascinating things about the brain is its ability to direct our attention to the things that matter most. Not only this, it does so without any conscious effort on our part! Thank goodness for both of these cognitive characteristics. Take a minute to think about all of the potentially distracting things, or stimuli, that are present in the environment around you at any given point in time. The world simply has too much stuff to pay attention to all at once! If there weren’t a cognitive system in place to allocate our attention in pertinent ways, we would be constantly distracted by things that didn’t necessarily matter—not to mention overwhelmed, frustrated, and consequently incapable of functioning sanely in everyday life. (So thank you, cognitive processes; I owe you one.) What’s more, attention, just like oil and coal, is a limited resource. As a result, it must be used not only effectively (i.e., in ways that best enhance our chances of survival) but also efficiently.

One of the key ways in which the brain directs our attention both effectively and efficiently is through its innate categorizing of stimuli into “crucial-to-attend-to” and “not-crucial-to-attend-to” groups. So, you may ask, what differentiates a “crucial” stimulus from a “not-crucial” one? Stimuli that make it into the “crucial” group are the things that have, evolutionarily speaking, posed potential threats to our survival. So, things like loud sounds, bright light, and people (whose faces we will discuss later) get grouped into this “crucial-to-attend-to” category. In the world of cognitive psychology, this kind of attentional directing is known as exogenous orienting. (The other category of attentional directing is endogenous orienting, but we will not discuss this type here.) “Exogenous” can be defined as “of, relating to, or developing from external factors.” This definition helps to clarify that the stimuli prompting exogenous orienting of attention are those that may signal some sort of survival threat. In other words, they are worth paying attention to—perhaps even vital.

So, we have established which stimuli are attended to with exogenous attentional orienting. But how are they attended to? That is, by what means are these stimuli treated with such cognitive care? Exogenous stimuli are attended to by means of attentional capture, wherein the stimulus “captures” your attention unconsciously (or automatically) and spontaneously redirects it to focus on this new, significant stimulus. In this case we do not have control over whether or not our attention is diverted; exogenous orienting is an automatic process, meaning that it occurs outside of our conscious control or awareness. Attentional capture is therefore an efficient process, for automatic processes require fewer cognitive resources than do conscious, controlled, and more effortful processes. So we can see that exogenous stimuli are prioritized by our attentional system: they will always be attended to whenever they appear. How nice it is to know that our cognitive systems are constantly on the lookout for us! Our own personal watchdogs, they have our backs, ready to bark at any potential threat in order to reorient our attention.

But what about after this barking stops? That is, what happens to our redirected attention once it has been captured initially? Does the exogenous stimulus lead to a holding of our attention (i.e., attentional distraction) or merely a capture? Earlier this year (in 2014), three researchers set out to discover whether or not there was a distinction between the initial capture or orienting of attention and the subsequent holding of attention. Put in psychology terms, they wanted to dissociate these steps in order to see if distraction was made up of two independent attentional mechanisms.

Awesome research idea! But with what exogenous stimulus to test such a question? Let’s think: what distracting stimulus in our world both captures our attention fairly consistently and is highly significant as a cue? I know that I am constantly distracted by people, particularly by their unique faces. This makes sense when we consider that faces are an incredibly important stimulus in our world: they serve as excellent social cues, capable of expressing a multitude of meanings through a multitude of expressions. We can all probably attest to the sometimes-misleading nature of facial expressions (I’m sure we all know people who have perfected their poker face!). Generally speaking, however, the face is (at least in our culture) a very reliable indicator of different social and biological circumstances, and as a result is highly relevant—highly salient—as a stimulus. In fact, faces are so important in our everyday lives that our brains supposedly have an entire area dedicated to facial recognition processes. (See Talia’s awesome post about this for more information.) You might also want to check out Gemma’s post about social status and facial processing to see just how integrated the face is with larger contexts of social behavior.

It makes perfect sense, then, that the researchers decided to use faces as the distractor item in their study, which consisted of two main experiments. The methods and procedures across the two experiments were very similar, with only minor (but crucial) variations. Participants in both experiments were students aged 17 to 23. They were presented with a series of gray frames; in the middle of each frame was a small fixation point on which participants were instructed to keep their eyes. The target item, the letter “T,” appeared in the upper right corner of the frame, randomly switching its orientation every second. Participants’ task was to categorize the letter’s orientation after each switch (e.g., “Press button 1 if the T was oriented in the vertical/horizontal direction and button 2 if it was oriented in the diagonal direction.”). This target was completely neutral with respect to the distractor set, which is important because overlap between distractor and target sets can affect the extent of attentional hold (Sy & Giesbrecht, 2011). Meanwhile, the distractor items appeared at the fixation point at 4-second intervals. (This substantial 4-second window ensured that each distractor was fully processed, so that any slowing on later targets could confidently be attributed to attentional holding rather than to a merely transient attentional capture.) Across both experiments, participants were told that the distractors were task irrelevant and should be ignored. The distractors were also not new to participants, which matters because a stimulus’s familiarity can affect attentional dwell time (Parks & Hopfinger, 2008).

[Figure 1 from Parks, Kim, & Hopfinger (2014): example trial sequence]
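
To make the trial structure concrete, here is a minimal sketch in Python of the timeline just described. It is only an illustration: the parameter names, the block length, and the way events are represented are my own assumptions, not the authors’ experiment code; the timings simply follow the description above.

```python
import random

# A minimal, illustrative timeline of one block of trials (assumed parameter
# names and block length; the timings follow the description in this post,
# not the authors' actual experiment code).
TARGET_SWITCH_S = 1.0        # the peripheral "T" changes orientation every second
DISTRACTOR_INTERVAL_S = 4.0  # a distractor image appears at fixation every 4 seconds
BLOCK_LENGTH_S = 20.0        # arbitrary block length, for illustration only

ORIENTATIONS = ["vertical/horizontal", "diagonal"]  # the two response categories

def build_timeline(distractor_types=("face", "place")):
    """Return a list of (time_s, event, detail) tuples for one block."""
    events = []
    t = 0.0
    while t < BLOCK_LENGTH_S:
        # Every 4 seconds a task-irrelevant distractor appears at fixation.
        if t % DISTRACTOR_INTERVAL_S == 0:
            events.append((t, "distractor_onset", random.choice(distractor_types)))
        # Every second the "T" in the corner switches orientation; the
        # participant categorizes the new orientation with a button press,
        # and the reaction time to that switch is what gets recorded.
        events.append((t, "target_switch", random.choice(ORIENTATIONS)))
        t += TARGET_SWITCH_S
    return events

if __name__ == "__main__":
    for time_s, event, detail in build_timeline():
        print(f"{time_s:4.1f}s  {event:<16} {detail}")
```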

It is in their distractor items that the experiments differed. Let’s start with Experiment 1, which had two phases. In Phase A, the distractors were grayscale images of either places or fearful faces; participants saw an equal number of each type. Reaction times to the “T” target were slowed at the initial onset of both the fearful face and the place distractors. In subsequent frames, however, reaction times remained slower after a fearful face had been shown than after a place had been shown. This showed that while both distractor types initially captured attention, only the fearful face distractors continued to hold attention and distract participants (as evidenced by the slowed reaction times on later targets).
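
The logic that separates capture from hold comes down to when the slowing occurs: a cost on the target switch that coincides with a distractor’s appearance reflects capture, while a cost that lingers on the following switches reflects hold. A toy calculation makes this concrete (the reaction times below are invented for illustration and are not the study’s data):

```python
from statistics import mean

# Entirely made-up reaction times (ms), used only to illustrate the logic;
# these are NOT the values reported in the study. "onset" RTs belong to the
# target switch that coincides with a distractor's appearance; "later" RTs
# belong to the following target switches while that distractor is on screen.
rts_ms = {
    "fearful_face": {"onset": [612, 598, 633], "later": [580, 575, 590]},
    "place":        {"onset": [608, 601, 625], "later": [541, 538, 545]},
}
BASELINE_MS = 540  # illustrative RT when no distractor is present

for distractor, windows in rts_ms.items():
    capture_cost = mean(windows["onset"]) - BASELINE_MS  # slowing at distractor onset
    hold_cost = mean(windows["later"]) - BASELINE_MS     # lingering slowing afterwards
    print(f"{distractor:>12}: capture ~{capture_cost:.0f} ms, hold ~{hold_cost:.0f} ms")

# The pattern reported for Phase A: both distractor types show a capture cost,
# but only the fearful faces show a sizeable hold cost on the later targets.
```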

In Phase B of Experiment 1 the researchers wanted to refine the results of Phase A. They wanted to make sure that the attentional holding observed was a result of the face and not the emotion itself (in this case, fear). So, in Phase B the distractor items were either places or neutral faces. If attentional hold was indeed dependent on the faces themselves (and not their emotional expression), then here we would expect to see the same pattern of attentional hold for the neutral faces. And was this the case? Yes! Participants were still distracted by the neutral face distractors in later frames (as evidenced by slower target response times), and not by the place distractors. These results provided evidence that task-irrelevant (i.e., distracting) faces, regardless of their emotional expression, hold attention past an initial attentional capture.

Lastly, Experiment 2 set out to refine the results from both phases of Experiment 1 by asking the following question: Might the face distractors produce attentional hold because of the distractor context in which they are presented? In other words, Experiment 2 tested whether the attentional holding by faces is automatic or dependent on context. In this experiment, the two types of distractor items were fearful faces and neutral faces. If the results of Experiment 1 were independent of the distractors’ context, then here too we should observe a holding of attention by both the fearful and neutral faces. But what did the researchers observe? They found that while both types of faces still produced an initial capture of attention (consistent with the earlier findings), neither the fearful nor the neutral faces produced the extended attentional hold seen in Experiment 1! These results can be explained in terms of context. In Experiment 1, participants could not anticipate the appearance of a face, and this lack of anticipation led to an extended holding of attention. In Experiment 2, by contrast, participants knew that a face was going to appear no matter what. This expectation prevented an extended hold by allowing them to disengage sufficiently from each distractor (and so avoid distraction on subsequent targets). Here we can see just how powerful a tool context can be: thanks to top-down cognitive processes (processes in which prior knowledge and expectations shape perception), we can exploit the familiar to escape distraction! (Easier said than done, of course: focus is quite another issue…)

We now know that task-irrelevant faces, regardless of their emotional expression, produce an initial attentional capture and a context-dependent attentional hold. Because the initial orienting of attention occurred in the absence of attentional holding in Experiment 2, the researchers were able to support their hypothesis that distraction is composed of two distinct mechanisms of attention (capturing and holding).

Thanks, Mr. Darwin, for setting these types of inquiries in motion.

And thanks, cognitive guard dogs, for your protective services.

 

To read the article for yourself, click here!

 

References

Parks, E. L., & Hopfinger, J. B. (2008). Hold it! Memory affects attentional dwell time. Psychonomic Bulletin & Review, 15(6), 1128–1134.

Parks, E. L., Kim, S.-Y., & Hopfinger, J. B. (2014). The persistence of distraction: A study of attentional biases by fear, faces, and context. Psychonomic Bulletin & Review, 21, 1501–1508.

Sy, J., & Giesbrecht, B. (2011). The influence of target-distractor similarity on perceptual distraction, 11, 247.

Images

Charles Darwin: http://www.macroevolution.net/charles-darwins-autobiography-7.html#.VHJic1appuY

Fearful Emoji: http://www.emojistickers.com/products/fearful-face

Neutral Emoji: http://www.emojistickers.com/products/neutral-face

  1. mekopp
    October 21st, 2015 at 23:49 | #1

    I really like the part where you compare our exogenous orienting with a watchdog. Thanks to our personal guard dog, we are indeed able to automatically redirect our attention (attention capture) to a salient and possibly threatening stimulus like a loud noise or a sudden movement.

    Past research showed that participants detected angry faces faster than neutral or happy faces (Eastwood, Smilek, & Merikle, 2001; Maratos, Mogg, & Bradley, 2008). I was therefore initially surprised by the results of the second experiment, because I hypothesised that the fearful faces would act as a more distracting stimulus (in comparison to the neutral faces) due to more resources being allocated to the possible threat they signal.

    Experiment 2 revealed, as Sarah mentioned, that the attentional hold by faces is not an automatic process but is strongly context dependent. Participants in Phases A and B of Experiment 1 might have been more distracted by the presence of faces because faces are socially more important to us than places. But when fearful and neutral faces were presented in Experiment 2 (a different context), participants knew that a face would appear no matter what, so a longer attentional hold was successfully prevented.

    Reading this blog post and the study itself made me think of our discussion in class about whether face recognition is special or not. This study suggests that face recognition might not be that special, because we can redirect our attention either more or less quickly depending on the context we are in (Experiment 1 vs. Experiment 2). What’s meaningful (and therefore worth being processed and recognized) to one individual might vary from one individual to another.

    It would be interesting to conduct a similar experiment with people diagnosed with prosopagnosia to see whether they would be more distracted by places, since places might present a more meaningful stimulus given these participants’ inability to recognize faces. Another interesting group of participants to look at would be bird or dog experts. How would their attentional hold differ between human faces and bird or dog faces? Would they dedicate the same or even more attentional hold to the animals in which they are experts?

    After all, face recognition might not be that special, but we get so good at it because it is an important social tool. We develop perceptual expertise in recognizing faces. Depending on which context individuals are in and depending on what is meaningful to them, face recognition may or may not promote higher-level processing.

    References:

    Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63(6), 1004–1013.

    Maratos, F. A., Mogg, K., & Bradley, B. P. (2008). Identification of angry faces in the attentional blink. Cognition & Emotion, 22(7), 1340–1352.

  2. October 22nd, 2015 at 09:21 | #2

    I found it compelling how context affects how long we hold on to a given stimulus after it captures our attention. It would be worthwhile for researchers to explore further contexts, because the researchers of the featured study only examined whether or not participants were expecting faces. I wonder whether neutral faces that resemble a participant’s own face, or familiar faces such as a celebrity’s, would yield longer holding times than those found in the study, because they would involve additional top-down processes (prior knowledge, context, etc.).
    Additionally, I found the blogger’s decision to bring in evolutionary components helpful for applying some of the study’s findings to a broader setting. I am currently enrolled in the social psychology seminar class, and we have discussed whether it is always appropriate to use evolution to explain why certain phenomena occur. I believe evolution can be a valid explanation for the capturing pattern of faces we see in this study, as directing our attention to faces is essential for both surviving and procreating. I think that making this connection helps provide a plausible explanation, but I’d be curious whether other explanations could contribute to why we so often automatically direct our attention to faces.
    This relates to the debate covered in PS232 about whether recognizing faces is special or not. Perhaps humans are attuned to direct their attention to faces because we are experts at doing it, but would this apply to people who are experts in planes or animals? This could yield another intriguing study, and I’d be curious how participants who are experts would direct, and hold, their attention if provided with both faces and an image of what they are experts in.

  3. October 22nd, 2015 at 13:53 | #3

    The proposed topic, considering evolutionary mechanisms of survival in regard to cognitive processes, touches on both acquired and innate characteristics of the human mind. As Raymond explained in his post, “How good are your survival instincts?”, even in our modern world our ability to attend to and absorb information is greatly shaped by our natural instinct to favor information relevant to our survival. Further, a phenomenon Raymond referred to as “survival processing” suggests that human minds are genetically predisposed to attend to external stimuli that are relevant to one’s survival. Could the results described in the current study, regarding attentional capture and context-dependent attentional holding, also be explained from a “survival of the fittest” perspective? In the experiment described in the present post, researchers found that individuals exhibited significantly greater attentional hold for images of fearful faces than for images of places. Perhaps our natural inclination to attend to and fixate on the fearful face is due to our innate concern for our own species’ survival. Our survival as a species depends on human interaction; human interaction relies heavily on our ability to read facial expressions—we have even developed specialized cognitive systems for perceiving faces in particular.
    The present research regarding attentional capture and context-dependent attentional holding supports our understanding of the cognitive mechanisms the brain uses for facial processing. In class we learned about facial recognition in regard to pattern recognition; cognitive researchers suggest that an interaction between top-down and bottom-up processes, which occur in parallel (simultaneously), enables a special ability to recognize faces in particular. In a study of configural processing conducted by Tanaka & Farah (1993), researchers examined individuals’ ability to identify individual parts compared to whole images of houses versus faces. Results showed that facial recognition involves not only processing of a face’s isolated features but also a special reliance on configuration: how those features go together is equally critical. This study, among others that suggest specialized cognitive processing for facial recognition, provides a possible explanation for the results of Experiments 1 and 2. In the described experiments, researchers found that images of faces had a greater attentional hold than images of places; a specialized mechanism for facial recognition might suggest that we humans have a greater interest in faces, which is why we have evolved such a unique processing system. Thus, it would make sense that we are more interested in faces, and more willing to expend additional cognitive attention on them, than on places or other objects. It is possible that we have developed this evolutionary mechanism to attend to faces because facial processing is crucial to our existence.

    References:
    Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46, 225–245.
