
5 Paying Attention

 

 

 

Selective Attention

 

William James (1842-1910) is one of the historical giants of the field of psychology, and he is often quoted in the modern literature. One of his most famous quotes provides a starting point for this chapter. Roughly 125 years ago, James wrote:

 

Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction…. (James, 1890, pp. 403-404)

 

In this quote, James is describing what modern psychologists call selective attention-that is, the skill through which a person focuses on one input or one task while ignoring other stimuli that are also on the scene. But what does this skill involve? What steps do you need to take in order to achieve the focus that James described, and why is it that the focus “implies withdrawal from some things in order to deal effectively with others”?

 

Dichotic Listening

 

Early studies of attention used a setup called dichotic listening: Participants wore headphones and heard one input in the left ear and a different input in the right ear. The participants were instructed to pay attention to one of these inputs-the attended channel-and to ignore the message in the other ear-the unattended channel.

To make sure participants were paying attention, investigators gave them a task called shadowing: Participants were required to repeat back what they were hearing, word for word, so that they were echoing the attended channel. Their shadowing performance was generally close to perfect, and they were able to echo almost 100% of what they heard. At the same time, they heard remarkably little from the unattended channel. If asked, after a minute or so of shadowing, to report what the unattended message was about, they had no idea (e.g., Cherry, 1953). They couldn’t even tell if the unattended channel contained a coherent message or random words. In fact, in one study, participants shadowed speech in the attended channel, while in the unattended channel they heard a text in Czech, read with English pronunciation. The individual sounds, therefore (the vowels, the consonants), resembled English, but the message itself was (for an English speaker) gibberish. After a minute of shadowing, only 4 of 30 participants detected the peculiar character of the unattended message (Treisman, 1964).

 

We can observe a similar pattern with visual inputs. Participants in one study viewed a video that has now gone viral on the Internet and is widely known as the “invisible gorilla” video. In this video, a team of players in white shirts is passing a basketball back and forth; people watching the video are urged to count how many times the ball is passed from one player to another. Interwoven with these players (and visible in the video) is another team, wearing black shirts, also passing a ball back and forth; viewers are instructed to ignore these players.

Viewers have no difficulty with this task, but, while doing it, they usually don’t see another event that appears on the screen right in front of their eyes. Specifically, they fail to notice when someone wearing a gorilla costume walks through the middle of the game, pausing briefly to thump his chest before exiting. (See Figure 5.1; Neisser & Becklen, 1975; Simons & Chabris, 1999; also see Jenkins, Lavie, & Driver, 2005.)

 

Even so, people are not altogether oblivious to the unattended channel. In selective listening experiments, research participants easily and accurately report whether the unattended channel contained human speech, musical instruments, or silence. If the unattended channel did contain speech, participants can report whether the speaker was male or female, or had a high or low voice, or was speaking loudly or softly. (For reviews of this early work, see Broadbent, 1958; Kahneman, 1973.) Apparently, then, physical attributes of the unattended channel are heard, even though participants are generally clueless about the unattended channel’s semantic content.

In one study, however, participants were asked to shadow one passage while ignoring a second passage. Embedded within the unattended channel was a series of names, and roughly one third of the participants did hear their own name when it was spoken-even though (just like in other studies) they heard almost nothing else from the unattended input (Moray, 1959).

 

And it’s not just names that can “catch” your attention. Mention of a recently seen movie, or of a favorite restaurant, will often be noticed in the unattended channel. More broadly, words with some personal importance are often noticed, even though the rest of the unattended channel is perceived only as an undifferentiated blur (Conway, Cowan, & Bunting, 2001; Wood & Cowan, 1995).

 

Inhibiting Distractors

 

How can we put all these research results together? How can we explain both the general insensitivity to the unattended channel and also the cases in which the unattended channel “leaks through”?

One option focuses on what you do with the unattended input. The proposal is that you somehow block processing of the inputs you’re not interested in, much as a sentry blocks the path of unwanted guests but stands back and does nothing when legitimate guests are in view, allowing them to pass through the gate unimpeded. This sort of proposal was central for early theories of attention, which suggested that people erect a filter that shields them from potential distractors. Desired information (the attended channel) is not filtered out and so goes on to receive further processing (Broadbent, 1958).

 

But what does it mean to “filter” something out? The key lies in the nervous system’s ability to inhibit certain responses, and evidence suggests that you do rely on this ability to avoid certain forms of distraction. This inhibition, however, is rather specific, operating on a distractor-by-distractor basis. In other words, you might have the ability to inhibit your response to this distractor and the same for that distractor, but these abilities are of little value if some new, unexpected distractor comes along. In that case, you need to develop a new skill aimed at blocking the new intruder. (See Cunningham & Egeth, 2016; Fenske, Raymond, Kessler, Westoby, & Tipper, 2005; Frings & Wühr, 2014; Jacoby, Lindsay, & Hessels, 2003; Tsushima, Sasaki, & Watanabe, 2006; Wyatt & Machado, 2013. For a glimpse of brain mechanisms that support this inhibition, see Payne & Sekuler, 2014.)

The ability to ignore certain distractors-to shut them out-therefore needs to be part of our theory. Other evidence, though, indicates that this isn’t the whole story. That’s because you not only inhibit the processing of distractors, you also promote the processing of desired stimuli.

 

Inattentional Blindness

 

We saw in Chapters 3 and 4 that perception involves a lot of activity, as you organize and interpret the incoming stimulus information. It seems plausible that this activity would require some initiative and some resources from you-and evidence suggests that it does.

In one experiment, participants were told that they would see large “+” shapes on a computer screen, presented for 200 ms (milliseconds), followed by a pattern mask. (The mask was just a meaningless jumble on the screen, designed to disrupt any further processing.) If the horizontal bar of the “+” was longer than the vertical, participants were supposed to press one button; if the vertical bar was longer, they had to press a different button. As a complication, participants weren’t allowed to look directly at the “+.” Instead, they fixated on (i.e., pointed their eyes at) a mark in the center of the computer screen-a fixation target-and the “+” shapes were shown just off to one side (see Figure 5.2).

 

For the first three trials of the procedure, events proceeded just as expected, and the task was relatively easy. On Trial 4, though, things were slightly different: While the target “+” was on the screen, the fixation target disappeared and was replaced by one of three shapes-a triangle, a rectangle, or a cross. Then, the entire configuration (the target and this new shape) was replaced by the mask.

Immediately after the trial, participants were asked: Was there anything different on this trial? Was anything present, or had anything changed, that wasn’t there on previous trials? Remarkably, 89% of the participants reported that there was no change; they had failed to see anything other than the (attended) “+”. To probe the participants further, the researchers told them (correctly) that during the previous trial the fixation target had momentarily disappeared and had been replaced by a shape. The participants were then asked what that shape had been, and were given the choices of a triangle, a rectangle, or a cross (one of which, of course, was the right answer). The responses to this question were essentially random. Even when probed in this way, participants seemed not to have seen the shape directly in front of their eyes (Mack & Rock, 1998; also see Mack, 2003).

This pattern has been named inattentional blindness (Mack & Rock, 1998; also Mack, 2003)-a pattern in which people fail to see a prominent stimulus, even though they’re staring straight at it. In a similar effect, called “inattentional deafness,” participants regularly fail to hear prominent stimuli if they aren’t expecting them (Dalton & Fraenkel, 2012). In other studies, participants fail to feel stimuli if the inputs are unexpected; this is “inattentional numbness” (Murphy & Dalton, 2016).

What’s going on here? Are participants truly blind (or deaf or numb) in response to these various inputs? As an alternative, some researchers propose that participants in these experiments did see (or hear or feel) the targets but, a moment later, couldn’t remember what they’d just experienced (e.g., Wolfe, 1999; also Schnuerch, Kreiz, Gibbons, & Memmert, 2016). For purposes of theory, this distinction is crucial, but for now let’s emphasize what the two proposals have in common: By either account, your normal ability to see what’s around you, and to make use of what you see, is dramatically diminished in the absence of attention.

 

Think about how these effects matter outside of the laboratory. Chabris and Simons (2010), for example, call attention to reports of traffic accidents in which a driver says, “I never saw the bicyclist! He came out of nowhere! But then-suddenly-there he was, right in front of me.” Drew, Võ, and Wolfe (2013) showed that experienced radiologists often miss obvious anomalies in a patient’s CT scan, even when looking right at the anomaly. (For similar concerns, related to inattentional blindness in eyewitnesses to crimes, see Jaeger, Levin, & Porter, 2017.) Or, as a more mundane example, you go to the refrigerator to find the mayonnaise (or the ketchup or the juice) and don’t see it, even though it’s right in front of you.

In these cases, we lament the neglectful driver and the careless radiologist, and your inability to find the mayo may cause you to worry that you’re losing your mind (as well as your condiments). The reality, though, is that these cases of failing-to-see are entirely normal. Perception requires more than “merely” having a stimulus in front of your eyes. Perception requires some work.

 

Change Blindness

 

The active nature of perception is also evident in studies of change blindness- observers’ inability to detect changes in scenes they’re looking directly at. In some experiments, participants are shown pairs of pictures separated by a brief blank interval (e.g., Rensink, O’Regan, & Clark, 1997). The pictures in each pair are identical except for one aspect-an “extra” engine shown on the airplane in one picture and not in the other; a man wearing a hat in one picture but not wearing one in the other; and so on (see Figure 5.3). Participants know that their task is to detect any changes in the pictures, but even so, the task is difficult. If the change involves something central to the scene, participants may need to look back and forth between the pictures as many as a dozen times before they detect the change. If the change involves some peripheral aspect of the scene, as many as 25 alternations may be required.
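The alternating-picture procedure used in these studies (often called the “flicker” paradigm) can be sketched as a simple display schedule. This is only an illustrative sketch: the function name is invented here, and the durations (240 ms pictures, 80 ms blanks) are typical values from this literature rather than details given in the text.

```python
# Illustrative sketch of a "flicker" change-detection trial: picture A and
# a slightly changed picture A' alternate, separated by brief blanks, until
# the viewer spots the change.
def flicker_schedule(alternations, picture_ms=240, blank_ms=80):
    """Return the display sequence as (frame, duration_ms) pairs."""
    seq = []
    for i in range(alternations):
        frame = "A" if i % 2 == 0 else "A'"  # original and changed pictures alternate
        seq.append((frame, picture_ms))
        seq.append(("blank", blank_ms))      # the blank masks the change transient
    return seq

# A change to a central object might be found within a dozen alternations;
# a peripheral change can require 25 or more.
schedule = flicker_schedule(4)
print(schedule)
```

The brief blank is the crucial design element: without it, the change itself produces a visible motion signal that draws attention, and detection is immediate.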

 

A related pattern can be documented when participants watch videos. In one study, observers watched a movie of two women having a conversation. The camera first focused on one woman, then the other, just as it would in an ordinary TV show or movie. The crucial element of this experiment, though, was that certain aspects of the scene changed every time the camera angle changed. For example, from one camera angle, participants could plainly see the red plates on the table between the women. When the camera shifted to a different position, though, the plates’ color had changed to white. In another shift, one of the women gained a prominent scarf that she didn’t have on a fraction of a second earlier (see Figure 5.4). Most observers, however, noticed none of these changes (Levin & Simons, 1997; Shore & Klein, 2000; Simons & Rensink, 2005).

 

Incredibly, the same pattern can be documented with live (i.e., not filmed) events. In a remarkable study, an investigator (let’s call him “Leon”) approached pedestrians on a college campus and asked for directions to a certain building. During the conversation, two men carrying a door approached and deliberately walked between Leon and the research participant. As a result, Leon was momentarily hidden (by the door) from the participant’s view, and in that moment Leon traded places with one of the men carrying the door. A second later, therefore, Leon was able to walk away, unseen, while the new fellow (who had been carrying the door) stayed behind and continued the conversation with the participant.

Roughly half of the participants failed to notice this switch. They continued the conversation as though nothing had happened-even though Leon and his replacement were wearing different clothes and had easily distinguishable voices. When asked whether anything odd had happened in this event, many participants commented only that it was rude that the guys carrying the door had walked right through their conversation. (See Simons & Ambinder, 2005; Chabris & Simons, 2010; also see Most et al., 2001; Rensink, 2002; Seegmiller, Watson, & Strayer, 2011. For similar effects with auditory stimuli, see Gregg & Samuel, 2008; Vitevitch, 2003.)

 

Early versus Late Selection

 

It’s clear, then, that people are often oblivious to stimuli directly in front of their eyes-whether the stimuli are simple displays on a computer screen, photographs, videos, or real-life events. (Similarly, people are sometimes oblivious to prominent sounds in the environment.) As we’ve said, though, there are two ways to think about these results. First, the studies may reveal genuine limits on perception, so that participants literally don’t see (or hear) these stimuli; or, second, the studies may reveal limits on memory, so that participants do see (or hear) the stimuli but immediately forget what they’ve just experienced.

Which proposal is correct? One approach to this question hinges on when the perceiver selects the desired input and (correspondingly) when the perceiver stops processing the unattended input. According to the early selection hypothesis, the attended input is privileged from the start, so that the unattended input receives little analysis and therefore is never perceived. According to the late selection hypothesis, all inputs receive relatively complete analysis, and selection occurs after the analysis is finished. Perhaps the selection occurs just before the stimuli reach consciousness, so that you become aware only of the attended input. Or perhaps the selection occurs later still-so that all inputs make it (briefly) into consciousness, but then the selection occurs so that only the attended input is remembered.

 

Each hypothesis captures part of the truth. On the one side, there are cases in which people seem unaware of distractors but are influenced by them anyway-so that the (apparently unnoticed) distractors guide the interpretation of the attended stimuli (e.g., Moore & Egeth, 1997; see Figure 5.5). This seems to be a case of late selection: The distractors are perceived (so that they do have an influence) but are selected out before they make it to consciousness. On the other side, though, we can also find evidence for early selection, with attended inputs being privileged from the start and distractor stimuli falling out of the stream of processing at a very early stage. Relevant evidence comes, for example, from studies that record the brain’s electrical activity in the milliseconds after a stimulus has arrived. These studies confirm that the brain activity for attended inputs is distinguishable from that for unattended inputs just 80 ms or so after the stimulus presentation-a time interval in which early sensory processing is still under way (Hillyard, Vogel, & Luck, 1998; see Figure 5.6).

 

Other evidence suggests that attention can influence activity levels in the lateral geniculate nucleus, or LGN (Kastner, Schneider, & Wunderlich, 2006; McAlonan, Cavanaugh, & Wurtz, 2008; Moore & Zirnsak, 2017; Vanduffel, Tootell, & Orban, 2000). In this case, attention is changing the flow of signals within the nervous system even before the signals reach the brain. (For more on how attention influences processing in the visual cortex, see Carrasco, Ling, & Read, 2004; Carrasco, Penpeci-Talgar, & Eckstein, 2000; McAdams & Reid, 2005; Reynolds, Pasternak, & Desimone, 2000; also see O’Connor, Fukui, Pinsk, & Kastner, 2002; Yantis, 2008.)

 

e. Demonstration 5.1: Shadowing

 

Many classic studies of attention involved a task called “shadowing.” The instructions for this task go like this:

 

You are about to hear a voice reading some English text. Your job is to repeat what you hear, word for word, as you hear it. In other words, you’ll turn yourself into an “echo box” following along with the message as it arrives and repeating it back, echoing as many of the words as you can.

 

As a first step, you should try this task. You can do this with a friend and have him or her read to you out of a book while you shadow what your friend is saying. If you don’t have a cooperative friend, try shadowing a voice on a news broadcast, a podcast, or any other recording of a voice speaking in English.

Most people find this task relatively easy, but they also figure out rather quickly that there are steps they can take to make the task even easier. One obvious adjustment is to shadow in a quiet voice, because otherwise your own shadowing will drown out the voice you’re trying to hear. Another adjustment is in the rhythm of the shadowing: People often settle into a pattern of listening to a phrase, rapidly spewing out that phrase, listening to the next phrase, rapidly spewing it out, and so on. This pattern of phrase-by-phrase shadowing has several advantages. Among them, your thinking about the input as a series of phrases (rather than as individual words) allows you to rely to some extent on inferences about what is being said-so that you can literally get away with listening less, and that makes the overall task of shadowing appreciably easier.

Of course, these inferences as well as the whole strategy of phrase-by-phrase shadowing depend on your being able to detect the structure within the incoming message-providing another example in which your perception doesn’t just “receive” the input; it also organizes the input. How much does this matter? If you have a cooperative friend, you can try this variation on shadowing: Have your friend read to you from a book, but ask your friend to read the material backward. (So, for the previous sentence, your friend would literally say, “backward material the read to friend your ask.. “). Your friend will probably need a bit of practice to do this peculiar task, but once he or she has mastered it, you can try shadowing this backward English.

With backward English, you’ll find it difficult (if not impossible) to keep track of the structure of the material-and therefore much harder to locate the boundaries between phrases, and much harder to make inferences about what you’re hearing. Is shadowing of this backward material harder than shadowing of “normal” material?

 

Now that you’re practiced at shadowing, you’re ready for the third step. Have one friend read normally (not backward!) to you, and have a second friend read something else to you-both at the same time. (Ask the second friend to choose the reading, so that you don’t know in advance what it is.) Again, if you don’t have cooperative friends nearby, you can do the same with any broadcast or recorded voices. (You can play one voice on your computer and another from a podcast. Or you can have your friend do one voice while the TV news provides another voice. Do whichever of these is most convenient for you.) Your job is to shadow the first friend for a minute or so. When you’re done, here are some questions:

 

· What was the second voice saying? Odds are good that you don’t know.

· Could you at least hear when the second voice started and stopped? If the second voice coughed, or giggled, or hesitated in the middle of reading, did you hear that? Odds are good that you did.

 

In general, people tend to be oblivious to the content of the unattended message, but they hear the physical attributes of this message perfectly well-and it’s that pattern of results that we need to explain. The textbook chapter explores what the explanation might be.

 

e. Demonstration 5.2: Color-Changing Card Trick

 

The phenomenon of “change blindness” is easily demonstrated in the laboratory, but it also has many parallels outside of the lab. For example, stage magicians often rely on (some version of) change blindness in their performances-with the audience being amazed by objects seeming to materialize or dematerialize, or with an assistant being mysteriously transformed into an entirely different person. In most of these cases, though, the “magic” involves little beyond the audience’s failing to notice straightforward swaps that had, in truth, taken place right before their eyes.

A similar effect is demonstrated in a popular video on YouTube. You can find the video by using your search engine to find “color changing card trick” or you can try this address: www.youtube.com/watch?v=voAntzB7EwE . This demonstration was created by a wonderfully playful British psychologist named Richard Wiseman. Several of Wiseman’s videos are available on YouTube, but you can find others on a website that Wiseman maintains:  www.quirkology.com .

Watch the video carefully. Were you fooled? Did you show the standard change-blindness pattern: failing to notice large-scale changes in the visual input?

 

Notice also that you-like the audience at a magic performance-knew in advance that someone was trying to fool you with a switch that you wouldn’t notice. You were, therefore, presumably on your guard-extra vigilant, trying not to be fooled. Even so, the odds are good that you were still fooled, and this is, in itself, an important fact. It tells us that being extra careful is no protection against change blindness, and neither is an effort toward being especially observant. These points are crucial for the stage magician, who is able to fool the people in the audience despite their best efforts toward detecting the tricks and penetrating the illusions.

The same points tell us something about the nature of attention: It’s not especially useful, it seems, just to “try hard to pay attention.” Likewise, instructions to “pay close attention” may, in many circumstances, have no effect at all. In order to promote attention, people usually need some information about what exactly they should pay attention to. Indeed, if someone told you, before the video, to pay attention to the key (changing) elements, do you think you’d be fooled?

(For more videos showing related effects, search the Internet using the key words “invisible gorilla” and follow the links to the various demonstrations.)

 

Selection via Priming

 

Whether selection is early or late, it’s clear that people often fail to see stimuli that are directly in front of them, in plain view. But what is the obstacle here? Why don’t people perceive these stimuli?

In Chapter 4, we proposed that recognition requires a network of detectors, and we argued that these detectors fire most readily if they’re suitably primed. In some cases, the priming is produced by your visual experience-specifically, whether each detector has been used recently or frequently in the past. But we suggested that priming can also come from another source: your expectations about what the stimulus will be.

The proposal, then, is that you can literally prepare yourself for perceiving by priming the relevant detectors. In other words, you somehow reach into the network and deliberately activate just those detectors that, you believe, will soon be needed. Then, once primed in this way, those detectors will be on “high alert” and ready to fire.

Let’s also suppose that this priming isn’t “free.” Instead, you need to spend some effort or allocate some resources in order to do the priming, and these resources are in limited supply. As a result, there’s a limit on just how much priming you can do.

 

We’ll need to flesh out this proposal in several ways, but even so, we can already use it to explain some of the findings we’ve already met. Why don’t participants notice the shapes in the inattentional blindness studies? The answer lies in the fact that they don’t expect any stimulus to appear, so they have no reason to prepare for any stimulus. As a result, when the stimulus is presented, it falls on unprepared (unprimed, unresponsive) detectors. The detectors therefore don’t respond to the stimulus, so the participants end up not perceiving it.

What about selective listening? In this case, you’ve been instructed to ignore the unattended input, so you have no reason to devote any resources to this input. Hence, the detectors needed for the distractor message are unprimed, and this makes it difficult to hear the distractor. But why does attention sometimes “leak” so that you do hear some aspects of the unattended input? Think about what will happen if your name is spoken on the unattended channel. The detectors for this stimulus are already primed, but this isn’t because at that moment you’re expecting to hear your name. Instead, the detectors for your name are primed simply because this is a stimulus you’ve often encountered in the past. Thanks to this prior exposure, the activation level of these detectors is already high; you don’t need to prime them further. So they will fire even if your attention is elsewhere.

 

Two Types of Priming

 

The idea before us, in short, has three elements. First, perception is vastly facilitated by the priming of relevant detectors. Second, the priming is sometimes stimulus-driven-that is, produced by the stimuli you’ve encountered (recently or frequently) in the past. This is repetition priming-priming produced by a prior encounter with the stimulus. This type of priming takes no effort on your part and requires no resources, and it’s this sort of priming that enables you to hear your name on the unattended channel. But third, a different sort of priming is also possible. This priming is expectation-driven and under your control. In this form of priming, you deliberately prime detectors for inputs you think are upcoming, so that you’re ready for those inputs when they arrive. You don’t do this priming for inputs you have no interest in, and you can’t do this priming for inputs you can’t anticipate.
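To make the distinction concrete, here is a toy sketch of the two types of priming in code. Everything in it (the threshold, the activation values, the resource amounts, and the stimulus names) is invented for illustration; this is a cartoon of the chapter’s proposal, not a model drawn from the literature.

```python
# Toy sketch: detectors fire when activation plus input crosses a threshold.
# Repetition priming is "free" (a residue of past encounters); expectation-
# based priming spends from a limited resource pool.

THRESHOLD = 1.0  # a detector fires when activation + input reaches this

class Detector:
    def __init__(self, name, resting=0.2):
        self.name = name
        self.activation = resting  # baseline activation level

    def fires(self, input_strength):
        return self.activation + input_strength >= THRESHOLD

class Perceiver:
    def __init__(self, detectors, resources=0.5):
        self.detectors = {d.name: d for d in detectors}
        self.resources = resources  # limited pool for expectation-based priming

    def repetition_prime(self, name, amount):
        # Stimulus-driven priming: costs nothing.
        self.detectors[name].activation += amount

    def expectation_prime(self, name, amount):
        # Expectation-driven priming: draws down the limited pool.
        spend = min(amount, self.resources)
        self.resources -= spend
        self.detectors[name].activation += spend

p = Perceiver([Detector("attended word"), Detector("own name"), Detector("random word")])
p.expectation_prime("attended word", 0.5)  # you prepare for what you anticipate
p.repetition_prime("own name", 0.6)        # encountered countless times before

weak = 0.3  # an unattended input receives only weak sensory support
print(p.detectors["own name"].fires(weak))     # True: your name leaks through
print(p.detectors["random word"].fires(weak))  # False: unprimed, unnoticed
```

The key design choice mirrors the text: `repetition_prime` costs nothing, while `expectation_prime` draws down a limited pool, which is why there is a firm limit on how much deliberate preparation you can do.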

Can we test these claims? In a classic series of studies, Posner and Snyder (1975) gave participants a straightforward task: A pair of letters was shown on a computer screen, and participants had to decide, as swiftly as they could, whether the letters were the same or different. So someone might see “AA” and answer “same” or might see “AB” and answer “different.”

Before each pair, participants saw a warning signal. In the neutral condition, the warning signal was a plus sign (“+”). This signal notified participants that the stimuli were about to arrive but provided no other information. In a different condition, the warning signal was a letter that actually matched the stimuli to come. So someone might see the warning signal “G” followed by the pair “GG.” In this case, the warning signal served to prime the participants for the stimuli. In a third condition, though, the warning signal was misleading. It was again a letter, but a different letter from the stimuli to come. Participants might see “H” followed by the pair “GG.” Let’s call these three conditions neutral, primed, and misled.

 

In this simple task, accuracy rates are very high, but Posner and Snyder also recorded how quickly people responded. By comparing these response times (RTs) in the primed and neutral conditions, we can ask what benefit there is from the prime. Likewise, by comparing RTs in the misled and neutral conditions, we can ask what cost there is, if any, from being misled.

 

Before we turn to the results, there’s a complication: Posner and Snyder ran this procedure in two different versions. In one version, the warning signal was an excellent predictor of the upcoming stimuli. For example, if the warning signal was an A, there was an 80% chance that the upcoming stimulus pair would contain A’s. In Posner and Snyder’s terms, the warning signal provided a “high validity” prime. In a different version of the procedure, the warning signal was a poor predictor of the upcoming stimuli. For example, if the warning signal was an A, there was only a 20% chance that the upcoming pair would contain A’s. This was the “low validity” condition (see Table 5.1).

 

Let’s consider the low-validity condition first, and let’s focus on those few occasions in which the prime did match the subsequent stimuli. That is, we’re focusing on 20% of the trials and ignoring the other 80% for the moment. In this condition, the participant can’t use the prime as a basis for predicting the stimuli because the prime is a poor indicator of things to come. Therefore, the prime should not lead to any specific expectations. Nonetheless, we do expect faster RTs in the primed condition than in the neutral condition. Why? Thanks to the prime, the relevant detectors have just fired, so the detectors should still be warmed up. When the target stimuli arrive, therefore, the detectors should fire more readily, allowing a faster response.

The results bear this out. RTs were reliably faster (by roughly 30 ms) in the primed condition than in the neutral condition (see Figure 5.7, left side; the figure shows the differences between conditions). Apparently, detectors can be primed by mere exposure to a stimulus, even in the absence of expectations, and so this priming is truly stimulus-based.

 

What about the misled condition? With a low-validity prime, misleading the participants had no effect: Performance in the misled condition was the same as performance in the neutral condition. Priming the “wrong” detector, it seems, takes nothing away from the other detectors-including the detectors actually needed for that trial. This fits with our discussion in Chapter 4: Each of the various detectors works independently of the others, and so priming one detector obviously influences the functioning of that specific detector but neither helps nor hinders the other detectors.

Let’s look next at the high-validity primes. In this condition, people might see, for example, a “J” as the warning signal and then the stimulus pair “JJ.” Presentation of the prime itself will fire the J-detectors, and this should, once again, “warm up” these detectors, just as the low-validity primes did. As a result, we expect a stimulus-driven benefit from the prime. However, the high-validity primes may also have another influence: High-validity primes are excellent predictors of the stimulus to come. Participants are told this at the outset, and they have lots of opportunity to see that it’s true. High-validity primes will therefore produce a warm-up effect and also an expectation effect, whereas low-validity primes produce only the warm-up. On this basis, we should expect the high-validity primes to help participants more than low-validity primes-and that’s exactly what the data show (Figure 5.7, right side). The combination of warm-up and expectations leads to faster responses than warm-up alone. From the participants’ point of view, it pays to know what the upcoming stimulus might be.

 

Explaining the Costs and Benefits

 

The data make it clear, then, that we need to distinguish two types of primes. One type is stimulus-based-produced merely by presentation of the priming stimulus, with no role for expectations. The other type is expectation-based and is created only when the participant believes the prime allows a prediction of what’s to come.

These types of primes can be distinguished in various ways, including the biological mechanisms that support them (see Figure 5.8; Corbetta & Shulman, 2002; Hahn, Ross, & Stein, 2006; but also Moore & Zirnsak, 2017) and also a difference in what they “cost.” Stimulus-based priming appears to be “free”-we can prime one detector without taking anything away from other detectors. (We saw this in the low-validity condition, in the fact that the misled trials led to responses just as fast as those in the neutral trials.) Expectation-based priming, in contrast, does have a cost, and we see this in an aspect of Figure 5.7 that we’ve not yet mentioned: With high-validity primes, responses in the misled condition were slower than responses in the neutral condition. That is, misleading the participants actually hurt their performance. As a concrete example, F-detection was slower if G was primed, compared to F-detection when the prime was simply the neutral warning signal (“+”). In broader terms, it seems that priming the “wrong” detector takes something away from the other detectors, and so participants are worse off when they’re misled than when they receive no prime at all.

 

What produces this cost? As an analogy, let’s say that you have just $50 to spend on groceries. You can spend more on ice cream if you wish, but if you do, you’ll have less to spend on other foods. Any increase in the ice cream allotment, in other words, must be covered by a decrease somewhere else. This trade-off arises, though, only because of the limited budget. If you had unlimited funds, you could spend more on ice cream and still have enough money for everything else.

Expectation-based priming shows the same pattern. If the Q-detector is primed, this takes something away from the other detectors. Getting prepared for one target seems to make people less prepared for other targets. But we just said that this sort of pattern implies a limited “budget.” If an unlimited supply of activation were available, you could prime the Q-detector and leave the other detectors just as they were. And that is the point: Expectation-based priming, by virtue of revealing costs when misled, reveals the presence of a limited-capacity system.
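The budget analogy can be expressed as a toy model (our own illustrative sketch, not a model from the literature; the detector names and all numbers are arbitrary): each detector draws readiness from a fixed pool of activation, so boosting one detector necessarily leaves less for the others, and responses are faster when the target’s detector is more activated.

```python
def readiness(detectors, total_activation=100.0, boost_for=None, boost=60.0):
    """Split a fixed activation budget across detectors.

    With no expectation, the budget is spread evenly. Expecting one
    letter shifts `boost` units to that detector, leaving less for
    the rest -- the limited-capacity trade-off.
    """
    n = len(detectors)
    if boost_for is None:
        return {d: total_activation / n for d in detectors}
    remainder = (total_activation - boost) / (n - 1)
    return {d: boost if d == boost_for else remainder for d in detectors}

def response_time(activation, base_rt=400.0, gain=2.0):
    """Toy linking rule: more activation on the target's detector -> faster RT."""
    return base_rt - gain * activation

detectors = ["F", "G", "H", "J", "K"]

neutral = readiness(detectors)                    # no expectation: even split
expect_g = readiness(detectors, boost_for="G")    # expectation-based priming of G

rt_neutral = response_time(neutral["F"])          # target F, neutral warning signal
rt_misled = response_time(expect_g["F"])          # target F, but G was primed

print(rt_misled > rt_neutral)  # True -> misled responses are slower: the cost
```

With a fixed budget, the misled response comes out slower than the neutral one. If the activation pool could grow without limit, priming one detector would take nothing from the others, and no such penalty would appear.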

We can now put the pieces together. Ultimately, we need to explain the facts of selective attention, including the fact that while listening to one message you hear little content from other messages. To explain this, we’ve proposed that perceiving involves some work, and this work requires some limited mental resources-some process or capacity needed for performance, but in limited supply. That’s why you can’t listen to two messages at the same time; doing so would require more resources than you have. And now, finally, we’re seeing evidence for those limited resources: The Posner and Snyder research (and many other results) reveals the workings of a limited-capacity system, just as our hypothesis demands.

 

Spatial Attention

 

The Posner and Snyder study shows that expectations about an upcoming stimulus can influence the processing of that stimulus. But what exactly is the nature of these expectations? How precise or vague are they?

As one way of framing this issue, imagine that participants in a study are told, “The next stimulus will be a T.” In this case, they know exactly what to get ready for. But now imagine that participants are told, “The next stimulus will be a letter” or “The next stimulus will be on the left side of the screen.” Will these cues allow participants to prepare themselves?

These issues have been examined in studies of spatial attention-that is, the mechanism through which someone focuses on a particular position in space. In one early study, Posner, Snyder, and Davidson (1980) required their participants simply to detect letter presentations; the task was just to press a button as soon as a letter appeared. Participants kept their eyes pointed at a central fixation mark, and letters could appear either to the left or to the right of this mark.

For some trials, a neutral warning signal was presented, so that participants knew a trial was about to start but had no information about stimulus location. For other trials, an arrow was used as the warning signal. Sometimes the arrow pointed left, sometimes right; and the arrow was generally an accurate predictor of the location of the stimulus-to-come. If the arrow pointed right, the stimulus would be on the right side of the computer screen. (In the terms we used earlier, this was a high-validity cue.) On 20% of the trials, however, the arrow misled participants about location.

The results show a familiar pattern (Posner et al., 1980). With high-validity priming, the data show a benefit from cues that correctly signal where the upcoming target will appear. The differences between conditions aren’t large, but keep the task in mind: All participants had to do was detect the input. Even with the simplest of tasks, it pays to be prepared (see Figure 5.9).

 

What about the trials in which participants were misled? RTs in this condition were about 12% slower than those in the neutral condition. Once again, therefore, we’re seeing evidence of a limited-capacity system. In order to devote more attention to (say) the left position, you have to devote less attention to the right. If the stimulus then shows up on the right, you’re less prepared for it-which is the cost of being misled.

 

Attention as a Spotlight

 

Studies of spatial attention suggest that visual attention can be compared to a spotlight beam that can “shine” anywhere in the visual field. The “beam” marks the region of space for which you are prepared, so inputs within the beam are processed more efficiently. The beam can be wide or narrowly focused (see Figure 5.10) and can be moved about at will as you explore (i.e., attend to) various aspects of the visual field.

 

Let’s emphasize, though, that the spotlight idea refers to movements of attention, not movements of the eyes. Of course, eye movements do play an important role in your selection of information from the world: If you want to learn more about something, you generally look at it. (For more on how you move your eyes to explore a scene, see Henderson, 2013; Moore & Zirnsak, 2017.) Even so, movements of the eyes can be separated from movements of attention, and it’s attention, not the eyes, that’s moving around in the Posner et al. (1980) study. We know this because of the timing of the effects. Eye movements are surprisingly slow, requiring 180 to 200 ms. But the benefits of primes can be detected within the first 150 ms after the priming stimulus is presented. Therefore, the benefits of attention occur prior to any eye movement, so they cannot be a consequence of eye movements.
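The timing argument here is simple arithmetic; a minimal sketch (using the latencies quoted above) makes the logic explicit:

```python
# Latencies from the text: saccades take roughly 180-200 ms to launch,
# while priming benefits appear within about 150 ms.
earliest_eye_movement_ms = 180
priming_benefit_ms = 150

# The attention benefit is visible before the eyes could possibly have
# moved, so it cannot be a by-product of an eye movement.
benefit_precedes_eye_movement = priming_benefit_ms < earliest_eye_movement_ms
print(benefit_precedes_eye_movement)  # True
```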

But what does it mean to “move attention”? The spotlight beam is just a metaphor, so we need to ask what’s really going on in the brain to produce these effects. The answer involves a network of sites in the frontal cortex and the parietal cortex. According to one proposal (Posner & Rothbart, 2007; see Figure 5.11), one cluster of sites (the orienting system) is needed to disengage attention from one target, shift attention to a new target, and then engage attention on the new target. A second set of sites (the alerting system) is responsible for maintaining an alert state in the brain. A third set of sites (the executive system) controls voluntary actions.

 

 

These points echo a theme we first met in Chapter 2. There, we argued that cognitive capacities depend on the coordinated activity of multiple brain regions, with each region providing a specialized process necessary for the overall achievement. As a result, a problem in any of these regions can disrupt the overall capacity, and if there are problems in several regions, the disruption can be substantial.

As an illustration of this interplay between brain sites and symptoms, consider a disorder we mentioned earlier-ADHD. (We’ll have more to say about ADHD later in the chapter.) Table 5.2 summarizes one proposal about this disorder. Symptoms of ADHD are listed in the left column; the right column identifies brain areas that may be the main source of each symptom. This proposal is not the only way to think about ADHD, but it illustrates the complex, many-part relationship between overall function (in this case, the ability to pay attention) and brain anatomy. (For more on ADHD, see Barkley, Murphy, & Fischer, 2008; Brown, 2005; Seli, Smallwood, Cheyne, & Smilek, 2015; Zillmer, Spiers, & Culbertson, 2008.)

In addition, the sites listed in Table 5.2 can be understood roughly as forming the “control system” for attention. Entirely different sites (including the visual areas in the occipital cortex) do the actual analysis of the incoming information (see Figure 5.12). In other words, neural connections from the areas listed in the table carry signals to the brain regions that do the work of analyzing the input. These control signals can amplify (or, in some cases, inhibit) the activity in these other areas and, in this way, they can promote the processing of inputs you’re interested in, and undermine the processing of distractors. (See Corbetta & Shulman, 2002; Hampshire, Duncan, & Owen, 2007; Hon, Epstein, Owen, & Duncan, 2006; Hung, Driver, & Walsh, 2005; Miller & Cohen, 2001.)

 

 

 

Thus, there is no spotlight beam. Instead, certain neural mechanisms enable you to adjust your sensitivity to certain inputs. This is, of course, entirely in line with the proposal we’re developing, namely, that a large part of “paying attention” involves priming. For stimuli you don’t care about, you don’t bother to prime yourself, and so those stimuli fall on unprepared (and unresponsive) detectors. For stimuli you do care about, you do your best to anticipate the input, and you use these anticipations to prime the relevant processing channel. This increases your sensitivity to the desired input, which is just what you want. (For further discussion of the “spotlight” idea, see Cave, 2013; Rensink, 2012; Wright & Ward, 2008; but also Awh & Pashler, 2000; Morawetz, Holz, Baudewig, Treue, & Dechent, 2007.)

 

Where Do We “Shine” the “Beam”?

 

So far, we’ve been discussing how people pay attention, but we can also ask what people pay attention to. Where do people “shine” the “spotlight beam”? The answer has several parts. As a start, you pay attention to elements of the input that are visually prominent (Parkhurst, Law, & Niebur, 2002) and also to elements that you think are interesting or important. Decisions about what’s important, though, depend on the context. For example, Figure 5.13 shows classic data recording a viewer’s eye movements while inspecting a picture (Yarbus, 1967). The target picture is shown in the top left. Each of the other panels shows a three-minute recording of the viewer’s eye movements; plainly, the pattern of movements depended on what the viewer was trying to learn about the picture.

 

In addition, your beliefs about the scene play an important role. You’re unlikely to focus, for example, on elements of a scene that are entirely predictable, because you’ll gain no information from inspecting things that are already obvious (Brewer & Treyens, 1981; Friedman, 1979; Võ & Henderson, 2009). But you’re also unlikely to focus on aspects of the scene that are totally unexpected. If, for example, you’re out walking, you don’t expect to see a stapler on the ground, and so you may fail to notice the stapler sitting directly in your path (unless it’s a bright color, or some such). This point provides part of the basis for inattentional blindness (pp. 155-156) and also leads to a pattern called the “ultra-rare item effect” (Mitroff & Biggs, 2014). The term refers to a troubling pattern in which rare items are often overlooked; as the authors of one paper put it, “If you don’t find it often, you often don’t find it” (Evans, Birdwell, & Wolfe, 2013; for a consequence of this pattern, see Figure 5.14).

 

As another complication, people differ in what they pay attention to (e.g., Castelhano & Henderson, 2008), although some differences aren’t surprising. For example, in looking at a scene, women are more likely than men to focus on how the people within the scene are dressed; men are more likely to focus on what the people look like (including their body shapes; Powers, Andriks, & Loftus, 1979).

 

Perhaps more surprising are differences from one culture to the next in how people pay attention. The underlying idea here is that people in the West (the United States, Canada, most of Europe) live in “individualistic” cultures that emphasize the achievements and qualities of the single person; therefore, in thinking about the world, Westerners are likely to focus on individual people, individual objects, and their attributes. In contrast, people in East Asia have traditionally lived in “collectivist” cultures that emphasize the ways in which all people are linked to, and shaped by, the people around them. East Asians are therefore encouraged to think more holistically, with a focus on the context and how people and objects are related to one another. (See Nisbett, 2003; Nisbett, Peng, Choi, & Norenzayan, 2001; also Tardif et al., 2017. For a broad review, see Heine, 2015.)

 

This linkage between culture and cognition isn’t rigid, and so people in any culture can stray from these patterns. Even so, researchers have documented many manifestations of these differences from one culture to the next-including differences in how people pay attention. In one study, researchers tracked participants’ eye movements while the participants were watching animated scenes on a computer screen (Masuda et al., 2008). In the initial second of viewing, there was little difference between the eye movements of American participants and Japanese participants: Both groups spent 90% of the time looking at the target person, located centrally in the display. But in the second and third seconds of viewing, the groups differed. The Americans continued to spend 90% of their time looking directly at the central figure (and so spent only 10% of their time looking at the faces of people visible in the scene’s background); Japanese participants, in contrast, spent between 20% and 30% of their time looking at the faces in the background. This difference in viewing time was reflected in the participants’ judgments about the scene. When asked to make a judgment about the target person’s emotional state, American participants weren’t influenced by the emotional expressions of the people standing in the background, but Japanese participants were.

 

In another study, American and East Asian students were tested in a change-blindness procedure. They viewed one picture, then a second, then the first again, and this alternation continued until the participants spotted the difference between the two pictures. For some pairs of pictures, the difference involved an attribute of a central object in the scene (e.g., the color of a truck changed from one picture to the next). For these pictures, Americans and East Asians performed equivalently, needing a bit more than 9 seconds to spot the change. For other pairs of pictures, the difference involved the context (e.g., a pattern of the clouds in the background of the scene). For these pictures, the East Asians were notably faster than the Americans in detecting the change. (See Masuda & Nisbett, 2006. For other data, exploring when Westerners have a performance advantage and when East Asians have the advantage, see Amer, Ngo, & Hasher, 2016; Boduroglu, Shah, & Nisbett, 2010.)

Finally, let’s acknowledge that sometimes you choose what to pay attention to-a pattern called endogenous control of attention. But sometimes an element of the scene “seizes” your attention whether you like it or not, and this pattern is called exogenous control of attention. Exogenous control is of intense interest to theorists, and it’s also important for pragmatic reasons. For example, people who design ambulance sirens or warning signals in an airplane cockpit want to make sure these stimuli cannot be ignored. In the same way, advertisers do all they can to ensure that their product name or logo will grab your attention even if you’re intensely focused on something else (e.g., the competitor’s product; also see Figure 5.15).

 

Attending to Objects or Attending to Positions

 

A related question is also concerned with the “target” of the attention “spotlight.” To understand the issue, think about how an actual spotlight works. If the spotlight shines on a donut, then part of the beam will fall on the donut’s hole and will illuminate part of the plate underneath the donut. Similarly, if the beam isn’t aimed quite accurately, it may also illuminate the plate just to the left of the donut. The region illuminated by the beam, in other words, is defined purely in spatial terms: a circle of light at a particular position. That circle may or may not line up with the boundaries of the object you’re shining the beam on.

Is this how attention works-so that you pay attention to whatever falls in a certain region of space? If this is the case, you might at times end up paying attention to part of this object, part of that. An alternative is that you pay attention to objects rather than to positions in space. To continue the example, the target of your attention might be the donut itself rather than its location. In that case, the plate just to the left and the bit of plate visible through the donut’s hole might be close to your focus, but they aren’t part of the attended object and so aren’t attended.

Which is the correct view of attention? Do you pay attention to regions in space, no matter what objects (or parts of objects) fall in that region? Or do you pay attention to objects? It turns out that each view captures part of the truth.

 

One line of evidence comes from the study of people we mentioned at the chapter’s start-people who suffer from unilateral neglect syndrome (see Figure 5.16). Taken at face value, the symptoms shown by these patients seem to support a space-based account of attention: The afflicted patient seems insensitive to all objects within a region that’s defined spatially-namely, everything to the left of his or her current focus. If an object falls half within the region and half outside of it, then the spatial boundary is what matters, not the object’s boundaries. This is clear, for example, in how these patients read words (likely to read “BOTHER” as “HER” or “CARROT” as “ROT”)-responding only to the word’s right half, apparently oblivious to the word’s overall boundaries.

 

Other evidence, however, demands further theory. In one study, patients with neglect syndrome had to respond to targets that appeared within a barbell-shaped frame (see Figure 5.17). Not surprisingly, they were much more sensitive to the targets appearing within the red circle (on the right) and missed many of the targets appearing in the blue circle (on the left); this result confirms the patients’ diagnosis. What’s crucial, though, is what happened next. While the patients watched, the barbell frame was slowly spun around, so that the red circle, previously on the right, was now on the left and the blue circle, previously on the left, was now on the right.

 

If the patients consistently neglect a region of space, they should now be more sensitive to the (right-side) blue circle. But here’s a different possibility: Perhaps these patients have a powerful bias to attend to the right side, and so initially they attend to the red circle. Once they have “locked in” to this circle, however, it’s the object, not the position in space, that defines their focus of attention. According to this view, if the barbell rotates, they will continue attending to the red circle (this is, after all, the focus of their attention), even though it now appears on their “neglected” side. This prediction turns out to be correct: When the barbell rotates, the patients’ focus of attention seems to rotate with it (Behrmann & Tipper, 1999).

To describe these patients, therefore, we need a two-part account. First, the symptoms of neglect syndrome plainly reveal a spatially defined bias: These patients neglect half of space. But, second, once attention is directed toward a target, it’s the target itself that defines the focus of attention; if the target moves, the focus moves with it. In this way, the focus of attention is object-based, not space-based. (For more on these issues, see Chen & Cave, 2006; Logie & Della Salla, 2005; Richard, Lee, & Vecera, 2008.)

And it’s not just people with brain damage who show this complex pattern. People with intact brains also show a mix of space-based and object-based attention. We’ve already seen evidence for the spatial base: The Posner et al. (1980) study and many results like it show that participants can focus on a particular region of space in preparation for a stimulus. In this situation, the stimulus has not yet appeared; there is no object to focus on. Therefore, the attention must be spatially defined.

 

In other cases, though, attention is heavily influenced by object boundaries. For example, in some studies, participants have been shown displays with visually superimposed stimuli (e.g., Becklen & Cervone, 1983; Neisser & Becklen, 1975). Participants can usually pay attention to one of these stimuli and ignore the other. This selection cannot be space-based (because both stimuli are in the same place) and so must be object-based.

This two-part account is also supported by neuroscience evidence. Various studies have examined the pattern of brain activation when participants are attending to a particular position in space, and the pattern of activation when participants are attending to a particular object. These data suggest that the tasks involve different brain circuits-with one set of circuits (the dorsal attention system), near the top of the head, being primarily concerned with spatial attention, and a different set of circuits (the ventral attention system) being crucial for nonspatial tasks (Cave, 2013; Cohen, 2012; Corbetta & Shulman, 2011). Once again, therefore, our description of attention needs to include a mix of object-based and space-based mechanisms.

 

Perceiving and the Limits on Cognitive Capacity: An Interim Summary

 

Let’s pause to review. In some circumstances, you seem to inhibit the processing of unwanted inputs. This inhibitory process is quite specific (the inhibition blocks well-defined input) and certainly benefits from practice.

More broadly, though, various mechanisms facilitate the processing of desired inputs. The key here is priming. You’re primed for some stimuli because you’ve encountered them often in the past, with the result that you’re more likely to process (and therefore more likely to notice) these stimuli if you encounter them again. In other cases, the priming depends on your ability to anticipate what the upcoming stimulus will be. If you can predict what the stimulus will likely be, you can prime the relevant processing pathway so that you’re ready for the stimulus when it arrives. The priming will make you more responsive to the anticipated input if it does arrive, and this gives the anticipated input an advantage relative to other inputs. That advantage is what you want-so that you end up perceiving the desired input (the one you’ve prepared yourself for) but don’t perceive the inputs you’re hoping to ignore (because they fall on unprimed detectors).

The argument, then, is that your ability to pay attention depends to a large extent on your ability to anticipate the upcoming stimulus. This anticipation, in turn, depends on many factors. You’ll have a much easier time anticipating (and so an easier time paying attention to) materials that you understand as opposed to materials that you don’t understand. Likewise, when a stimulus sequence is just beginning, you have little basis for anticipation, so your only option may be to focus on the position in space that holds the sequence. Once the sequence begins, though, you get a sense of how it’s progressing, and this lets you sharpen your anticipations (shifting to object-based attention, rather than space-based)-which, again, makes you more sensitive to the attended input and more resistant to distractors.

 

Putting all these points together, perhaps it’s best not to think of the term “attention” as referring to a particular process or a particular mechanism. Instead, it’s better to think of attention (as one research group put it) as an effect rather than as a cause (Krauzlis, Bollimunta, Arcizet, & Wang, 2014). In other words, the term “attention” doesn’t refer to some mechanism in the brain that produces a certain outcome. It’s better to think of attention as itself an outcome-a byproduct of many other mechanisms.

As a related perspective, it may be helpful to think of paying attention, not as a process or mechanism, but as an achievement-something that you’re able to do. Like many other achievements (e.g., doing well in school, staying healthy), paying attention involves many elements, and the exact set of elements needed will vary from one occasion to the next. In all cases, though, multiple steps are needed to ensure that you end up being aware of the stimuli you’re interested in, and not getting pulled off track by irrelevant inputs.

 

Demonstration 5.3: The Control of Eye Movements

 

The chapter discusses movements of spatial attention and emphasizes that these movements are separate from the overt movements of your eyes. The main evidence for this lies in timing: You can shift attention much more rapidly than you can move your eyes.

Nonetheless, movements of the eyes are crucial for your pickup of information from the world. There are several reasons for this, including the fact (discussed in Chapter 3) that you can only see detail in a small portion of the visual world-namely, the portion that is currently falling on your foveas. You therefore have to move your eyes around in order to bring different bits of the visual world onto the fovea. So, by looking first here and then there, you gradually pick up detail from many positions in the world before you.

But how do you decide how to move your eyes? What guides these movements? Part of the answer lies in the (undetailed) information you pick up from the edges of your vision; you use this information to decide where to look next.

To demonstrate this point, try reading the following three paragraphs. You can, if you wish, time yourself: How long do you need for the first paragraph? For the second? The third? Or, without timing, you can just get a broad sense of which paragraph is easier to read, and which is harder.

 

Paragraph 1

This is how print normally appears, with a mix of lower- and upper-case letters, and spaces between the letters. As your eye moves along these lines, you can see, at the edge of your vision, the shapes and lengths of the upcoming words. This is fairly crude information, and certainly doesn’t tell you what the upcoming words are. Nonetheless, this information is useful for you, because it helps you to plan your eye movements, as you decide where you’re going to point your eyes after your next saccade.

 

Paragraph 2

Here * we * have * removed * the * spaces * between * the * words. * We * haven’t * blurred * everything * together. * Instead, * we * have * inserted * a * character * in * between * the * words. * As * a * result, * it’s * now * difficult * for * you * to * see, * in * your * peripheral * vision, * where * the * boundaries * between * words * are * located, * and * this * makes * it * difficult * for * you * to * see, * in * peripheral * vision, * whether * the * upcoming * words * are * long * or * short. * As * a * result, * you * can’t * use * the * length * of * the * upcoming * words * as * a * guide * to * your * eye * movements. * Does * this * affect * your * reading?

 

Paragraph 3

NOW, WE’VE LEFT THE SPACES BETWEEN THE WORDS, BUT HAVE SHIFTED ALL THE TYPE TO CAPITAL LETTERS. WITH LOWER-CASE LETTERS, WORDS OFTEN HAVE A DISTINCTIVE SHAPE, THANKS TO THE FACT THAT SOME LETTERS DANGLE DOWN BELOW THE LINE OF PRINT, WHILE OTHERS STICK UP ABOVE THE LINE OF PRINT. ONCE THE WORDS ARE ALL CAPITALIZED, THESE DISTINCTIONS ARE ELIMINATED, AND ALL WORDS HAVE THE SAME RECTANGULAR SHAPE. HERE, THEREFORE, YOU CAN’T USE THE SHAPE OF THE UPCOMING WORDS AS A GUIDE TO YOUR EYE MOVEMENTS. DOES THIS MATTER?

 

Most people need more time to read Paragraphs 2 and 3, compared to Paragraph 1. There are actually several reasons for this, but part of the explanation lies in the fact that you use both the spaces between words and the “shape” of upcoming words to guide your eye movements. You might think about this the next time YOU CONSIDER HANDING IN A PAPER, OR USING HEADINGS IN A PAPER, ENTIRELY IN CAPS-YOU’RE ACTUALLY MAKING THOSE MATERIALS MORE DIFFICULT FOR YOUR READER!

 

Divided Attention

 

So far in this chapter, we’ve emphasized situations in which you’re trying to focus on a single input. If other tasks and other stimuli were on the scene, they were mere distractors. Sometimes, though, your goal is different: You want to “multitask”-that is, deal with multiple tasks, or multiple inputs, all at the same time. What can we say about this sort of situation-a situation involving divided attention?

Sometimes divided attention is easy. Almost anyone can walk and sing simultaneously; many people like to knit while they’re holding a conversation or listening to a lecture. It’s much harder, though, to do calculus homework while listening to a lecture; and trying to get your assigned reading done while watching TV is surely a bad bet. What lies behind this pattern? Why are some combinations difficult while others are easy?

 

Our first step toward answering these questions is already in view. We’ve proposed that perceiving requires resources that are in limited supply; the same is presumably true for other tasks-remembering, reasoning, problem solving. They, too, require resources, and without these resources the processes cannot go forward. What are the resources? The answer includes a mix of things: certain mechanisms that do specific jobs, certain types of memory that hold on to information while you’re working on it, energy supplies to keep the mental machinery going, and more. No matter what the resources are, though, a task will be possible only if you have the needed resources-just as a dressmaker can produce a dress only if he has the raw materials, the tools, the time needed, the energy to run the sewing machine, and so on.

All of this leads to a proposal: You can perform concurrent tasks only if you have the resources needed for both. If the two tasks, when combined, require more resources than you’ve got, then divided attention will fail.

 

The Specificity of Resources

 

Imagine that you’re hoping to read a novel while listening to an academic lecture. These tasks are different, but both involve the use of language, and so it seems likely that these tasks will have overlapping resource requirements. As a result, if you try to do the tasks at the same time, they’re likely to compete for resources-and therefore this sort of multitasking will be difficult.

Now, think about two very different tasks, such as knitting and listening to a lecture. These tasks are unlikely to interfere with each other. Even if all your language-related resources are in use for the lecture, this won’t matter for knitting, because it’s not a language-based task.

More broadly, the prediction here is that divided attention will be easier if the simultaneous tasks are very different from each other, because different tasks are likely to have distinct resource requirements. Resources consumed by Task 1 won’t be needed for Task 2, so it doesn’t matter for Task 2 that these resources are tied up in another endeavor.

 

Is this the pattern found in the research data? In an early study by Allport, Antonis, and Reynolds (1972), participants heard a list of words presented through headphones into one ear, and their task was to shadow (i.e., repeat back) these words. At the same time, they were also presented with a second list. No immediate response was required to the second list, but later on, memory was tested for these items. In one condition, the second list (the memory items) consisted of words presented into the other ear, so the participants were hearing (and shadowing) a list of words in one ear while simultaneously hearing the memory list in the other. In another condition, the memory items were presented visually. That is, while the participants were shadowing one list of words, they were also seeing a different list of words on a screen before them. Finally, in a third condition, the memory items consisted of pictures, also presented on a screen.

These three conditions had the same requirements-shadowing one list while memorizing another. But the first condition (hear words + hear words) involved very similar tasks; the second condition (hear words + see words) involved less similar tasks; the third condition (hear words + see pictures), even less similar tasks. On the logic we’ve discussed, we should expect the most interference in the first condition and the least interference in the third. And that is what the data showed (see Figure 5.18).

 

The Generality of Resources

 

Similarity among tasks, however, is not the whole story. If it were, then we’d observe less and less interference as we consider tasks further and further apart. Eventually, we’d find tasks so different from each other that we’d observe no interference at all between them. But that’s not the pattern of the evidence.

Consider the common practice of talking on a cell phone while driving. When you’re on the phone, the main stimulus information comes into your ear, and your primary response is by talking. In driving, the main stimulation comes into your eyes, and your primary response involves control of your hands on the steering wheel and your feet on the pedals. For the phone conversation, you’re relying on language skills. For driving, you need spatial skills. Overall, it looks like there’s little overlap in the specific demands of these two tasks, and so little chance that the tasks will compete for resources.

 

Data show, however, that driving and cell-phone use do interfere with each other. This is reflected, for example, in the fact that phone use has been implicated in many automobile accidents (Lamble, Kauranen, Laakso, & Summala, 1999). Even with a hands-free phone, drivers engaged in cell-phone conversations are more likely to be involved in accidents, more likely to overlook traffic signals, and slower to hit the brakes when they need to. (See Kunar, Carter, Cohen, & Horowitz, 2008; Levy & Pashler, 2008; Sanbonmatsu, Strayer, Biondi, Behrends, & Moore, 2016; Stothart, Mitchum, & Yehnert, 2015; Strayer & Drews, 2007; Strayer, Drews, & Johnston, 2003. For some encouraging data, though, on why phone-related accidents don’t occur even more often, see Garrison & Williams, 2013; Medeiros-Ward, Watson, & Strayer, 2015.)

 

As a practical matter, therefore, talking on the phone while driving is a bad idea. In fact, some people estimate that the danger caused by driving while on the phone is comparable to (and perhaps greater than) the risk of driving while drunk. But, on the theoretical side, notice that the interference observed between driving and talking is interference between two hugely different activities-a point that provides important information about the nature of the resource competition involved, and therefore the nature of mental resources.

Before moving on, we should mention that the data pattern is different if the driver is talking to a passenger in the car rather than using the phone. Conversations with passengers seem to cause little interference with driving (Drews, Pasupathi, & Strayer, 2008; see Figure 5.19), and the reason is simple. If the traffic becomes complicated or the driver has to perform some tricky maneuver, the passenger can see this-either by looking out of the car’s window or by noticing the driver’s tension and focus. In these cases, passengers helpfully slow down their side of the conversation, which takes the load off of the driver, enabling the driver to focus on the road (Gaspar et al., 2014; Hyman, Boss, Wise, McKenzie, & Caggiano, 2010; Nasar, Hecht, & Wener, 2008).

 

Executive Control

 

The evidence is clear, then, that tasks as different as driving and talking compete with each other for some mental resource. But what is this resource, which is apparently needed for both verbal tasks and spatial ones, tasks with visual inputs and tasks with auditory inputs?

Evidence suggests that multiple resources may be relevant. Some authors describe resources that serve (roughly) as an energy supply, drawn on by all tasks (Eysenck, 1982; Kahneman, 1973; Lavie, 2001, 2005; Lavie, Lin, Zokaei, & Thoma, 2009; MacDonald & Lavie, 2008; Murphy, Groeger, & Greene, 2016). According to this perspective, tasks vary in the “load” they put on you, and the greater the load, the greater the interference with other tasks. In one study, drivers were asked to estimate whether their vehicle would fit between two parked cars (Murphy & Greene, 2016). When the judgment was difficult, participants were less likely to notice an unexpected pedestrian at the side of the road. In other words, higher perceptual load (from the driving task) increased inattentional blindness. (Also see Murphy & Greene, 2017.)

 

Other authors describe mental resources as “mental tools” rather than as some sort of mental “energy supply.” (See Allport, 1989; Baddeley, 1986; Bourke & Duncan, 2005; Dehaene, Sergent, & Changeux, 2003; Johnson-Laird, 1988; Just, Carpenter, & Hemphill, 1996; Norman & Shallice, 1986; Ruthruff, Johnston, & Remington, 2009; Vergauwe, Barrouillet, & Camos, 2010. For an alternative perspective on these resources, see Franconeri, 2013; Franconeri, Alvarez, & Cavanagh, 2013.) One of these tools is especially important, and it involves the mind’s executive control. This term refers to the mechanisms that allow you to control your own thoughts, and these mechanisms have multiple functions. Executive control helps keep your current goals in mind, so that these goals (and not habit) will guide your actions. The executive also ensures that your mental steps are organized into the right sequence-one that will move you toward your goals. And if your current operations aren’t moving you toward your goal, executive control allows you to shift plans, or change strategy. (For discussion of how the executive operates and how the brain tissue enables executive function, see Brown, Reynolds, & Braver, 2007; Duncan et al., 2008; Gilbert & Shallice, 2002; Kane, Conway, Hambrick, & Engle, 2007; Miller & Cohen, 2001; Miyake & Friedman, 2012; Shipstead, Harrison, & Engle, 2015; Stuss & Alexander, 2007; Unsworth & Engle, 2007; Vandierendonck, Liefooghe, & Verbruggen, 2010.)

 

Executive control can only handle one task at a time, and this point obviously puts limits on your ability to multitask-that is, to divide your attention. But executive control is also important when you’re trying to do just a single task. Evidence comes from studies of people who have suffered damage to the prefrontal cortex (PFC), a brain area right behind the eyes that seems crucial for executive control. People with this damage (including Phineas Gage, whom we met in Chapter 2) can lead relatively normal lives, because in their day-to-day behavior they can often rely on habit or can simply respond to prominent cues in their environment. With appropriate tests, though, we can reveal the disruption that results from frontal lobe damage. In one commonly used task, patients with frontal lesions are asked to sort a deck of cards into two piles. At the start, the patients have to sort the cards according to color; later, they need to switch strategies and sort according to the shapes shown on the cards. The patients have enormous difficulty in making this shift and continue to sort by color, even though the experimenter tells them again and again that they’re placing the cards on the wrong piles (Goldman-Rakic, 1998). This is referred to as a perseveration error, a tendency to produce the same response over and over even when it’s plain that the task requires change in the response.

 

These patients also show a pattern of goal neglect-failing to organize their behavior in a way that moves them toward their goals. For example, when one patient was asked to copy Figure 5.20A, the patient produced the drawing shown in Figure 5.20B. The copy preserves features of the original, but close inspection reveals that the patient drew the copy with no particular plan in mind. The large rectangle that defines the shape was never drawn, and the diagonal lines that organize the figure were drawn in a piecemeal fashion. Many details are correctly reproduced but weren’t drawn in any sort of order; instead, these details were added whenever they happened to catch the patient’s attention (Kimberg, D’Esposito, & Farah, 1998). Another patient, asked to copy the same figure, produced the drawing shown in Figure 5.20C. This patient started to draw the figure in a normal way, but then she got swept up in her own artistic impulses, adding stars and a smiley face (Kimberg et al., 1998). (For more on executive control, see Aron, 2008; Courtney, Petit, Maisog, Ungerleider, & Haxby, 1998; Duncan et al., 2008; Gilbert & Shallice, 2002; Huey, Krueger, & Grafman, 2006; Kane & Engle, 2003; Kimberg et al., 1998; Logie & Della Salla, 2005; Ranganath & Blumenfeld, 2005; Stuss & Levine, 2002; also Chapter 13.)

 

Divided Attention: An Interim Summary

 

Our consideration of selective attention drove us toward a several-part account, with one mechanism serving to block out unwanted distractors and other mechanisms promoting the processing of interesting stimuli. Now, in our discussion of divided attention, we again need several elements in our theory. Interference between tasks is increased if the tasks are similar to each other, presumably because similar tasks overlap in their processing requirements and make competing demands on mental resources that are specialized for that sort of task.

But interference can also be observed with tasks that are entirely different from each other-such as driving and talking on a cell phone. Therefore, our account needs to include resources that are general enough in their use that they’re drawn on by almost any task. We’ve identified several of these general resources: an energy supply needed for mental tasks, executive control, and others as well. No matter what the resource, though, the key principle will be the same: Tasks will interfere with each other if their combined demand for a resource is greater than the amount available-that is, if the demand exceeds the supply.
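The demand-exceeds-supply principle just stated can be captured in a toy model. This is only an illustrative sketch, not anything measured in the studies discussed above: the task names and the numeric “demands” below are hypothetical, chosen simply to make the arithmetic concrete.

```python
# Toy illustration of the resource principle: tasks interfere when their
# combined demand for a resource exceeds the available supply.
# All task names and demand values are hypothetical.

CAPACITY = 10  # hypothetical units of a general resource (e.g., executive control)

def interferes(task_demands, capacity=CAPACITY):
    """Return True if the tasks' combined demand exceeds the supply."""
    return sum(task_demands.values()) > capacity

# A well-practiced, routine combination demands little...
easy_combo = {"walking": 2, "singing": 3}
# ...while two demanding tasks together can exceed the budget.
hard_combo = {"driving_in_traffic": 7, "deep_conversation": 6}

print(interferes(easy_combo))  # False: 5 <= 10, so no interference
print(interferes(hard_combo))  # True: 13 > 10, so interference
```

On this sketch, practice would correspond to lowering a task’s demand number, which is why the same combination can be impossible for a novice but easy for an expert.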

 

Practice

 

For a skilled driver, talking on a cell phone while driving is easy as long as the driving is straightforward and the conversation is simple. Things fall apart, though, the moment the conversation becomes complex or the driving becomes challenging. Engaged in deep conversation, the driver misses a turn; while maneuvering through an intersection, the driver suddenly stops talking.

The situation is different, though, for a novice driver. For someone who’s just learning to drive, driving is difficult all by itself, even on a straight road with no traffic. If we ask the novice to do anything else at the same time-whether it’s talking on a cell phone or even listening to the radio-we put the driver (and other cars) at substantial risk. Why is this? Why are things so different after practice?

 

Practice Diminishes Resource Demand

 

We’ve already said that mental tasks require resources, with the particular resources required being dependent on the nature of the task. Let’s now add another claim: As a task becomes more practiced, it requires fewer resources, or perhaps it requires less frequent use of these resources.

This decrease in a task’s resource demand may be inevitable, given the function of some resources. Consider executive control. We’ve mentioned that this control plays little role if you can rely on habit or routine in performing a task. (That’s why Phineas Gage was able to live a mostly normal life, despite his brain damage.) But early in practice, when a task is new, you haven’t formed any relevant habits yet, so you have no habits to fall back on. As a result, executive control is needed all the time. Once you’ve done the task over and over, though, you do acquire a repertoire of suitable habits, and so the demand for executive control decreases.

How will this matter? We’ve already said that tasks interfere with each other if their combined resource demand is greater than the amount of resources available. Interference is less likely, therefore, if the “cognitive cost” of a task is low. In that case, you’ll have an easier time accommodating the task within your “resource budget.” And we’ve now added the idea that the resource demand (the “cost”) will be lower after practice than before. Therefore, it’s no surprise that practice makes divided attention easier-enabling the skilled driver to continue chatting with her passenger as they cruise down the highway, even though this combination is hopelessly difficult for the novice driver.

 

Automaticity

 

With practice in a task, then, the need for executive control is diminished, and we’ve mentioned one benefit of this: Your control mechanisms are available for other chores, allowing you to divide your attention in ways that would have been impossible before practice. Let’s be clear, though, that this gain comes at a price. With sufficient practice, task performance can go forward with no executive control, and so the performance is essentially not controlled. This can create a setting in which the performance acts as a “mental reflex,” going forward, once triggered, whether you like it or not.

Psychologists use the term automaticity to describe tasks that are well practiced and involve little (or no) control. (For a classic statement, see Shiffrin & Schneider, 1977; also Moors, 2016; Moors & De Houwer, 2006.) The often-mentioned example is an effect known as Stroop interference. In the classic demonstration of this effect, study participants are shown a series of words and asked to name aloud the color of the ink used for each word. The trick, though, is that the words themselves are color names. So people might see the word “BLUE” printed in green ink and would have to say “green” out loud, and so on (see Figure 5.21; Stroop, 1935).

 

This task turns out to be extremely difficult. There’s a strong tendency to read the printed words themselves rather than to name the ink color, and people make many mistakes in this task. Presumably, this reflects the fact that word recognition, especially for college-age adults, is enormously well practiced and therefore can proceed automatically. (For more on these issues, including debate about what exactly automaticity involves, see Besner et al., 2016; Durgin, 2000; Engle & Kane, 2004; Jacoby et al., 2003; Kane & Engle, 2003; Labuschagne & Besner, 2015; Moors, 2016.)

 

Where Are the Limits?

 

As we near the end of our discussion of attention, it may be useful again to summarize where we are. Two simple ideas are key: First, tasks require resources, and second, you can’t “spend” more resources than you have. These claims are central for almost everything we’ve said about selective and divided attention.

As we’ve seen, though, there are different types of resources, and the exact resource demand of a task depends on several factors. The nature of the task matters, of course, so that the resources required by a verbal task (e.g., reading) are different from those required by a spatial task (e.g., remembering a shape). The novelty of the task and the amount of flexibility the task requires also matter. Connected to this, practice matters, with well-practiced tasks requiring fewer resources.

 

What, then, sets the limits on divided attention? When can you do two tasks at the same time, and when not? The answer varies, case by case. If two tasks make competing demands on task-specific resources, the result will be interference. If two tasks make competing demands on task-general resources (the energy supply or executive control), again the result will be interference. Also, it will be especially difficult to combine tasks that involve similar stimuli-tasks that both involve printed text, for example, or that both involve speech. The problem here is that these stimuli can “blur together,” with a danger that you’ll lose track of which elements belong in which input (“Was it the man who said ‘yes,’ or was it the woman?”; “Was the red dog in the top picture or the bottom one?”). This sort of “crosstalk” (leakage of bits of one input into the other input) can compromise your performance.

In short, it seems again like we need a multipart theory of attention, with performance being limited by different factors at different times. This perspective draws us back to a claim we’ve made several times in this chapter: Attention cannot be thought of as a single skill or mechanism. Instead, attention is an achievement-an achievement of performing multiple activities simultaneously or an achievement of successfully avoiding distraction when you want to focus on a single task. And this achievement rests on an intricate base, so that many elements contribute to your ability to attend.

 

Finally, we have discussed various limits on human performance-that is, limits on how much you can do at any one time. How rigid are these limits? We’ve discussed the improvements in divided attention that are made possible by practice, but are there boundaries on what practice can accomplish? Can you gain new mental resources or find new ways to accomplish a task in order to avoid the bottleneck created by some limited resource? Some evidence indicates that the answer may be yes; if so, many of the claims made in this chapter must be understood as claims about what is usual, not about what is possible. (See Hirst, Spelke, Reaves, Caharack, & Neisser, 1980; Spelke, Hirst, & Neisser, 1976. For a neuroscience perspective, see Just & Buchweitz, 2017.) With this, some traditions in the world-Buddhist meditation traditions, for example-claim it’s possible to train attention so that one has better control over one’s mental life. How do these claims fit into the framework we’ve developed in this chapter? These are issues in need of exploration, and, in truth, what’s at stake here is a question about the boundaries on human potential, making these issues of deep interest for future researchers to pursue.

 

e. Demonstration 5.4: Automaticity and the Stroop Effect

 

The chapter describes the classic version of the Stroop effect, in which you are looking at words but trying not to read them, and trying instead to name the color of ink in which the words are printed. However, there are many variations on this effect-cases in which your well-practiced (and therefore automatic) habits interfere with your intentions. For example, imagine a task in which you have to say out loud whether words are at the top or bottom of a computer screen, and you see the word “top” printed at the bottom of the screen. Or imagine a task in which you have to say whether a voice belongs to a male or a female, and you hear a woman’s voice pronouncing the word “male.” These setups are likely to cause interference, and your responses will be appreciably slower than they would have been if you were judging the position of some random letters on the computer screen, or if the woman’s voice were pronouncing some irrelevant word.

Here’s a different variation on the Stroop effect. If you like, you can collect some data with this demonstration by timing how long you need for the various conditions. Even without timing, though, you’ll easily detect which conditions are easier, and which are harder. Your task here is to count how many items there are in a row, and to say the count out loud. Therefore, if you see these two rows:

 

* * * *

$ $ $

 

you should say, out loud, “four, three” (because there are four asterisks, and then three dollar signs).

 

How about these rows?

# #

&

& &

/ / / /

#

? ?

/ / /

# # #

& & & &

?

/ /

 

 

How many items are in each row?

4 4

2 2 2 2

3

3 3

1 1 1 1

4

2 2

1 1 1

4 4 4

3 3 3 3

2

1 1

 

Try this task, first, with the left column. How easy was this task? Were you able to go quickly? Did you make errors? Then, try this with the right column. Again, how easy was it, and how quickly did you go? Did you make errors? You’ll probably find the second task (with the right-hand column) much more difficult, because you have an automatic habit of reading the numerals rather than counting them-one more demonstration of the family of effects first documented in 1935 by Stroop.
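The contrast between the two columns above can be sketched in a few lines of code. The sketch below (an illustration, not part of the original demonstration) builds “neutral” rows, where you count repeated symbols, and “incongruent” rows, where the repeated item is a numeral that conflicts with the correct count-so that the automatic habit of reading fights the intended task of counting.

```python
import random

# Construct counting-Stroop stimuli in the style of Demonstration 5.4.
# Neutral rows repeat a symbol; incongruent rows repeat a numeral that
# differs from the true count, so reading and counting give conflicting answers.

def neutral_row(count, symbol="*"):
    """A row of `count` identical symbols (no reading/counting conflict)."""
    return " ".join([symbol] * count)

def incongruent_row(count):
    """A row of `count` copies of a numeral that is NOT equal to `count`."""
    numeral = random.choice([n for n in "1234" if int(n) != count])
    return " ".join([numeral] * count)

random.seed(0)  # make the printed stimuli repeatable
for count in (4, 3, 2, 1):
    print(neutral_row(count), "|", incongruent_row(count))
```

For each printed pair, the correct spoken response is the same on both sides (the count), but the right-hand version is slower because the numerals trigger the well-practiced reading response.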

 

e. Demonstration 5.5: Meditation Exercise

 

There are many forms of meditation exercises. Some are designed to reduce stress; some aim to help overcome anxiety; some exercises are alleged to have spiritual benefits. In virtually all cases, though, the exercises involve practice in paying attention. In some of these exercises, you practice focusing on a sound (e.g., a simple chant). In other exercises, you practice focusing on a visual image. Still other exercises involve focusing on a rhythmic exercise (e.g., gently tapping on specific parts of your head).

Many people find these exercises to be helpful, so you might want to see for yourself whether you have a similar positive experience. Don’t expect an immediate benefit, though. You may need a couple of weeks of regular practice before you see results. But the cost of practice is low, and the eventual benefits may be large!

Find a comfortable place to sit in a relatively quiet space. Sit in a relaxed position so that you don’t have to shift your body every few minutes. Now, close your eyes and breathe slowly, deeply, and deliberately in and out. Focus on your breathing. If you find yourself thinking about something else, that’s okay; just go back to focusing on your breathing. Many people find that they want a more specific focus, so they put one hand on their chest and focus on the slow, rhythmic movements of their diaphragm. Other people focus on a spot far back on the roof of the mouth (probably focusing on a body part called the “uvula”) and do what they can to think about nothing other than the flow of air in across this spot, and then back out across this spot.

 

Try this for a minute or so. Then, tomorrow, try it again. The next day, try it for a slightly longer time. If you keep at it, you’ll likely observe several things. First, you’ll have an easier and easier time keeping your attention focused on the target you’ve chosen; fewer and fewer thoughts will intrude on your exercise. You’ll also be able to hold this focus for increasingly longer durations. You’ll probably also find that you can maintain this focus in a greater variety of physical settings-so that you no longer have to return to the same comfortable chair, and you’re less and less disrupted by sounds in the environment.

Try it. As we’ve already said, the cost is low-just a couple of minutes each day. But you’ll probably find that your skill in this exercise-and the ability to keep focused in this way-does improve. With this, you’ll likely find these brief sessions restful and a good way to step away from life’s strains, and they may leave you a more relaxed, happier person overall.

 

COGNITIVE PSYCHOLOGY AND EDUCATION

 

ADHD

 

When students learn about attention, they often have questions about failures of attention: “Why can’t I focus when I need to?”; “Why am I so distracted by my roommate moving around the room when I’m studying?”; “Why can some people listen to music while they’re reading, but I can’t?”

One question comes up more than any other: “I [or “my friend” or “my brother”] was diagnosed with ADHD. What’s that all about?” This question refers to a common diagnosis: attention-deficit/hyperactivity disorder. The disorder is characterized by a number of problems, including impulsivity, constant fidgeting, and difficulty in keeping attention focused on a task. People with ADHD hop from activity to activity and have trouble organizing or completing projects. They sometimes have trouble following a conversation and are easily distracted by an unimportant sight or sound. Even their own thoughts can distract them-and so they can be pulled off track by their own daydreams.

 

The causes of ADHD are still unclear. Contributing factors that have been mentioned include encephalitis, genetic influences, food allergies, high lead concentrations in the environment, and more. The uncertainty about this point comes from many sources, including some ambiguity in the diagnosis of ADHD. There’s no question that there’s a genuine disorder, but diagnosis is complicated by the fact that the disorder can vary widely in its severity. Some people have relatively mild symptoms; others are massively disrupted, and this variation can make diagnosis difficult.

In addition, some critics argue that in many cases the ADHD diagnosis is just a handy label for children who are particularly active or who don’t easily adjust to a school routine or a crowded classroom. Indeed, some critics suggest that ADHD is often just a convenient categorization for physicians or school counselors who don’t know how else to think about an especially energetic child.

In cases in which the diagnosis is warranted, though, what does it involve? As we describe in the chapter, there are many steps involved in “paying attention,” and some of those steps involve inhibition-so that we don’t follow every stray thought, or every cue in the environment, wherever it may lead. For most of us, this is no problem, and we easily inhibit our responses to most distractors. We’re thrown off track only by especially intrusive distractors-such as a loud noise or a stimulus that has special meaning for us.

Some researchers propose, though, that people with ADHD have less effective inhibitory circuits in their brains, making them more vulnerable to momentary impulses and chance distractions. This is what leads to their scattered thoughts, their difficulty in schoolwork, and so on. What can be done to help people with ADHD? One of the common treatments is Ritalin, a drug that is a powerful stimulant. It seems ironic that we’d give a stimulant to people who are already described as too active and too energetic, but the evidence suggests that Ritalin is effective in treating actual cases of ADHD-plausibly because the drug activates the inhibitory circuits within the brain, helping the person to guard against wayward impulses.

 

However, we probably shouldn’t rely on Ritalin as the sole treatment for ADHD. One reason is the risk of overdiagnosis-it’s worrisome that this drug may be routinely given to people, including young children, who don’t actually have ADHD. Also, there are concerns about the long-term effects and possible side effects of Ritalin, and this certainly motivates us to seek other forms of treatment. (Common side effects include weight loss, insomnia, anxiety, and slower growth during childhood.) Some of the promising alternatives involve restructuring of the environment. If people with ADHD are vulnerable to distraction, we can help them by the simple step of reducing the sources of distraction in their surroundings. Likewise, if people with ADHD are influenced by whatever cues they detect, we can surround them with helpful cues-reminders of what they’re supposed to be doing and the tasks they’re supposed to be working on. These simple interventions do seem to be helpful, especially with adults diagnosed with ADHD.

Overall, then, our description of ADHD requires multiple parts. Researchers have considered a diverse set of causes, and there may be a diverse set of psychological mechanisms involved in the disorder. (See p. 168.) The diagnosis probably is overused, but the diagnosis is surely genuine in many cases, and the problems involved in ADHD are real and serious. Medication can help, but we’ve noted the concern about side effects of the medication. Environmental interventions can also help and may, in fact, be the best bet for the long term, especially given the important fact that in most cases the symptoms of ADHD diminish as the years go by.

 

COGNITIVE PSYCHOLOGY AND THE LAW

 

WHAT DO WITNESSES PAY ATTENTION TO?

 

Throughout Chapter 5, we emphasized how little information people seem to gain about stimuli that are plainly visible (or plainly audible) if they aren’t paying attention to these stimuli. As a result, people fail to see a gorilla that’s directly in front of their eyes, a large-scale color change that’s plainly in view, and more.

Think about what this means for law enforcement. Imagine a customer in a bank during a robbery. Maybe the customer waited in line directly behind the robber. Will the customer remember what the robber looked like? Will the customer remember the robber’s clothing? There’s no way to tell unless we have some indication of what the customer was paying attention to-even though the robber’s appearance and clothing were prominent stimuli in the customer’s environment.

Of course, the factors governing a witness’s attention will vary from witness to witness and from crime to crime. One factor, though, is often important: the presence of a weapon during the crime. If a gun is in view, then of course witnesses will want to know whether it’s pointed at them and whether the criminal’s finger is on the trigger. After all, what else in the scene could be more important to the witnesses? But with this focus on the weapon, other elements in the scene will be unattended, so that the witnesses will fail to notice-and later on fail to remember-many bits of information that are crucial for law enforcement.

 

Consistent with these suggestions, witnesses to crimes involving weapons are often said to show a pattern called “weapon focus.” They’re able to report to the police many details about the weapon (its size, its color) and, often, details about the hand that was holding the weapon (e.g., whether the person was wearing any rings or a bracelet). However, because of this focus, the witnesses may have a relatively poor memory for other aspects of the scene-including the crucial information of what the perpetrator looked like. In fact, studies suggest that eyewitness identifications of the perpetrator may be systematically less accurate in crimes involving weapons-presumably because the witnesses’ attention was focused on the weapon, not on the perpetrator’s face.

The weapon-focus pattern has been demonstrated in many studies, and scientists have used a statistical procedure called “meta-analysis” to provide an overall summary of the data. Meta-analysis is a technique that mathematically combines the results from many different studies, and also compares the studies, allowing us to ask what factors lead to a stronger version of an effect and what factors lead to a weaker version. A meta-analysis of the weapon-focus studies tells us that this effect is stronger and more reliable in studies that are closer to actual forensic settings. It seems, then, that the weapon-focus effect is not a peculiar by-product of the artificial situations created in the lab; in fact, the effect may be underestimated in laboratory studies.
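To make the logic of meta-analysis concrete, here is a minimal sketch of the standard inverse-variance (fixed-effect) method for pooling results: each study’s effect size is weighted by the inverse of its variance, so larger, more precise studies count for more in the summary. The effect sizes below are invented for illustration; they are not the actual weapon-focus data.

```python
# Minimal sketch of a fixed-effect (inverse-variance) meta-analysis.
# The numbers are hypothetical, NOT real weapon-focus findings.

def fixed_effect_meta(effects, variances):
    """Combine per-study effect sizes into one pooled estimate.

    Each study is weighted by 1/variance, so more precise studies
    contribute more to the pooled effect.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies: each gives an effect size (e.g., Cohen's d)
# and the variance of that estimate.
effects = [0.30, 0.55, 0.20]
variances = [0.04, 0.09, 0.02]

pooled, var = fixed_effect_meta(effects, variances)
print(f"Pooled effect: {pooled:.3f} (variance {var:.4f})")
```

A fuller meta-analysis would also test whether the studies disagree more than chance allows (heterogeneity) and would compare subgroups of studies; that comparison is what lets researchers ask, as in the text, whether the effect is stronger in studies closer to actual forensic settings.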

 

Surprisingly, though, the weapon-focus pattern is less consistent in actual crimes. In some cases, witnesses tell us directly that they experienced weapon focus: “All I remember is that huge knife”; “I couldn’t take my eyes off of that gun.” But this pattern isn’t reliable, and several studies, examining the memories of real victims of real crimes, have found little indication of weapon focus. In these studies, the accuracy of witness identifications is the same whether or not the crime involved a weapon-with the implication that the weapon didn’t matter for the witness’s attention.

What’s going on here? The answer probably has several parts, including whether violence is overtly threatened during the crime (this will make weapon focus more likely) and whether the perpetrator happens to be distinctive in his or her appearance (this will make weapon focus less likely). Also, the duration of the crime is important. A witness may focus initially on the weapon (and so will show the pattern of weapon focus); but then, if the crime lasts long enough, the witness has time to look around and observe other aspects of the scene as well. This would decrease the overall impact of the weapon-focus effect.

Even with these complications, weapon focus is important for the justice system. Among other considerations, police officers and attorneys often ask whether a witness was “attentive” or not, and they use this point in deciding how much weight to give the witness’s testimony. Weapon focus reminds us, however, that it’s not enough to ask, in a general way, whether someone was attentive. Instead, we need to ask a more specific question: What, within the crime scene, was the witness paying attention to? Research is clear that attention will always be selective and that unattended aspects of an event won’t be perceived and won’t be remembered later. We should therefore welcome any clue-including the weapon-focus pattern-that helps us figure out what the witness paid attention to, as part of an overall effort toward deciding when we can rely on a witness’s report and when we cannot.

 

COGNITIVE PSYCHOLOGY AND THE LAW

 

Guiding the Formulation of New Laws

 

Many jurisdictions now have laws prohibiting cell-phone use while driving. In fact, in the author’s home state (Oregon), even touching a cell phone while driving can lead to a fine of more than $200.

These laws seem sensible in light of the evidence described in the chapter showing that cell-phone use does interfere with driving. Cell-phone users are more likely to overlook landmarks or even traffic signals, are slower to hit the brakes when needed, and more. However, many jurisdictions allow drivers to use hands-free cell phones (e.g., phones using a Bluetooth connection to an earpiece or to a speaker in the car). Research tells us, though, that this is a mistake: The distraction from cell-phone use is the same whether the driver is using a hand to hold the phone or not. This point is clear in a number of studies. In this regard, then, the relevant research has been ignored, with the result that the laws provide less protection than we might wish!

Other jurisdictions (and many drivers) resist any laws limiting cell-phone use and offer a commonsense rebuttal to the research: Many drivers have spent countless minutes on the phone without difficulty-without injuring themselves or others, without damaging their car or someone else’s. They claim, therefore, that the studies exaggerate the risk and that it’s perfectly possible for a good driver to talk on the phone while moving down the road.

 

What’s the response to this argument? Yes, a lot of driving is pretty much routine. You can adjust your steering to the road’s curves without thinking much about what you’re doing. You can rely on well-practiced habits in choosing a following distance for the car in front of you, or in navigating the often-used route to the mall. Even so, situations can come up when your set of driving habits isn’t enough: Perhaps you’re looking for a highway exit you’ve never used before, or the traffic requires a tricky lane switch. In such cases, you’ll need to choose the timing and sequence of your actions with care; you’ll need the resource the chapter calls “executive control.”

The same points can be made for conversations. Often, these can proceed without deep thought. You generally have an easy time formulating your ideas and choosing your words. Because you’ve had years of practice in “translating” your verbal intentions into muscle movements, you give little thought to the timing of your inhales and exhales, or to the positioning of your tongue to make sure the right sounds come out of your mouth. But what if the conversation becomes complex? What if you’re asked a hard question or want to choose your words with extra care? In these cases, you’ll want to rise above habit-searching through memory, perhaps, to find the right example, or weighing your options in selecting the right phrase. Once again, you’re stepping away from habit, and so you need executive control.

 

Now, we can put the pieces together. Ordinarily, driving while conversing is easy, because often neither driving nor the conversation requires much in the way of executive control. This is why, most of the time, you can use a cell phone while driving without any problems. But disaster can arise if the driving and the conversation both contain some unexpected element at roughly the same time. This is when you’ll need executive control for both tasks-but executive control can only handle one task at a time! And, of course, there’s no way to predict when this conflict will occur, so the only safe plan is to make sure it can’t occur. This is why jurisdictions are being entirely sensible when they forbid cell-phone use by anyone behind the wheel.

It’s clear, though, that debate over cell-phone use is certain to continue. In late 2011, the U.S. National Transportation Safety Board (NTSB) proposed a ban on all cell-phone use by drivers-whether they’re talking or texting, whether the device is hand-held or hands-free. The NTSB was reacting in part to a Missouri crash that killed two people and injured 38 more; the accident also involved two school buses. Despite this disaster and the research data supporting this recommendation, many jurisdictions have rejected the NTSB recommendation. The situation, it seems, is one in which legislatures (and citizens) are choosing to overrule the data-in a way that may continue to cost people their lives.
