
Similarity modulates the face-capturing effect in change detection


Visual Cognition

Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713683696

Similarity modulates the face-capturing effect in change detection

Cheng-Ta Yang, Chia-Hao Shih, Mindos Cheng, and Yei-Yu Yeh (Department of Psychology, National Taiwan University, Taipei, Taiwan). First published on: 01 January 2008

To cite this article: Yang, Cheng-Ta, Shih, Chia-Hao, Cheng, Mindos, & Yeh, Yei-Yu (2008). 'Similarity modulates the face-capturing effect in change detection', Visual Cognition, 17(4), 484–499.

To link to this Article: DOI: 10.1080/13506280701822991 URL: http://dx.doi.org/10.1080/13506280701822991



Similarity modulates the face-capturing effect in change detection

Cheng-Ta Yang, Chia-Hao Shih, Mindos Cheng, and Yei-Yu Yeh

Department of Psychology, National Taiwan University, Taipei, Taiwan

We investigated whether similarity among faces could modulate the face-capturing effect in change detection. In Experiment 1, a singleton search task was used to demonstrate that a face stimulus captures attention and that the odd-one-out hypothesis cannot account for the results. Searching for a face target was faster than searching for a nonface target whether distractor–distractor similarity was low or high. The fast search, however, did not lead to a face-detection advantage in Experiment 2 when the pre- and postchange faces were highly similar. When participants in Experiment 3 had to divide their attention between two faces in stimulus displays for change detection, detection performance was worse than performance in detecting nonface changes. The face-capturing effect alone is insufficient to produce the face-detection advantage. Face processing is efficient, but its effect on performance depends on the stimulus–task context.

Face perception and recognition are essential to daily social interaction. Faces provide social cues, including ethnic background, identity, gender, and mood, so that one can select the proper social behaviours for interaction or response. The importance of face perception is demonstrated by the finding that special neural mechanisms are selectively responsive to face stimuli (Farah, 1996; Grill-Spector, Knouf, & Kanwisher, 2004; Hakoda, 2003; Kanwisher, 2000; Kawabata, 2003; Yovel & Kanwisher, 2004; but see, for a different view, Diamond & Carey, 1986; Gauthier, Skudlarski, Gore, & Anderson, 2000; Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999). Behavioural evidence also supports the proposal that human faces are processed differently from nonface stimuli (Farah, 1995) and that human attention is biased towards faces (Hershler & Hochstein, 2005; Lavie, Ro, & Russell, 2003; Ro, Russell, & Lavie, 2001; Theeuwes & van der Stigchel, 2006).

Please address all correspondence to Yei-Yu Yeh, Department of Psychology, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd., Taipei, Taiwan 106. E-mail: yyy@ntu.edu.tw. This research was supported by a grant from the National Science Council to Y.-Y. Yeh (NSC 95-2413-H-002-003). We thank R. Palermo, Y.-M. Huang, H.-F. Chao, and Y.-C. Chiu for their valuable comments on an earlier version of the manuscript. We also thank S.-H. Lin for his assistance with stimulus generation. Parts of the results were presented at the 13th annual meeting of OPAM, Toronto, Canada, in 2005.

© 2008 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business http://www.psypress.com/viscog DOI: 10.1080/13506280701822991

Lavie et al. (2003) showed how a photograph of a face used as a distractor can capture attention. The participants’ task was to search for the name of a politician or pop star among a list of letter strings while ignoring a flanking face distractor. They manipulated the perceptual load of the search task by varying the number of strings in the display. They also manipulated distractor compatibility by presenting a face in the same (congruent) or different (incongruent) category in relation to the target. Their results showed that the compatibility of the face distractor influenced performance under a high load when the search task was demanding. In contrast, searching for the name of a fruit or an instrument was not influenced by the compatibility of a flanking nonface distractor under a high load. Despite the resource demand in target processing under a high load, a face distractor captured attention and affected search performance whereas a nonface distractor did not.

As a target in a visual display, a face stimulus can also capture attention among nonface distractors. In Hershler and Hochstein’s (2005) study, a face popped out from cars and houses so that the search slope was shallow, with search time remaining almost constant despite the increase in the number of distractors. In contrast, a car did not pop out from faces and houses. This capturing effect led to a face-detection advantage in change detection when the visual display contained multiple stimuli (Ro et al., 2001). Even though change detection between two faces was worse than change detection between two nonface objects in a single-stimulus display, a face captured attention in a display of multiple stimuli; detection performance in a multistimulus display was better when the change stimulus was a face than when the change stimulus was a nonface object (Ro et al., 2001).

Alternative views have been proposed. VanRullen (2005) argued that the pop-out effect Hershler and Hochstein (2005) observed is not unique to face stimuli, suggesting instead that low-level features of the target and distractors determine whether a stimulus can pop out; any target pops out easily when the distractors are visually homogeneous. In that study, a car popped out with a shallow search slope when the distractors were all faces. Palermo and Rhodes (2003) likewise argued that the face-detection advantage Ro et al. (2001) found may have resulted from an odd-one-out effect, as the face was a visually distinct stimulus among the other nonface objects. To test this hypothesis, Palermo and Rhodes used the same paradigm as Ro et al. had used. They presented an object among three face distractors or a face among three object distractors of different categories. When the changed target was the odd stimulus in the display, change detection in the former context was as efficient as in the latter context, supporting their hypothesis.

Although the processing of a nonface object among face distractors appears to be as efficient as the processing of a face among nonface distractors, different mechanisms may be at work in each case. In the former context, the face distractors are visually homogeneous. Because distractor–distractor similarity influences visual search (Duncan & Humphreys, 1989), the visual homogeneity of the face distractors leads to an efficient search for a nonface target. In contrast, nonface distractors are relatively heterogeneous in visual attributes. A face stimulus pops out primarily because it captures attention.

The purpose of this study is to demonstrate that although a face stimulus can pop out from nonface distractors, there is a limit to the face-detection advantage. In Experiment 1, a singleton search task was adopted to show the face-capturing effect while ruling out the odd-one-out account. In Experiments 2 and 3, we highlight the constraints of the face-detection advantage. We demonstrate that the face-capturing effect is not sufficient to produce a face-detection advantage: Change detection can be worse when attention must be divided between two faces than when attention is divided between two nonface stimuli.

EXPERIMENT 1

Whether face and nonface stimuli are processed in a different manner is controversial. Hershler and Hochstein (2005) found a pop-out effect for faces. VanRullen (2005) argued that this result was an artifact; instead, low-level features such as distractor–distractor similarity determine search performance. This account is in accord with the odd-one-out hypothesis (Palermo & Rhodes, 2003). When target–distractor similarity is low and distractor–distractor similarity is high, a target is easily detected in visual search (Duncan & Humphreys, 1989).

We examined whether the odd-one-out hypothesis could fully explain the face-capturing effect in visual search. Rather than asking participants to search for a prespecified target, we asked them to search for a unique target in a display. Without the top-down bias for a specific category, target processing relies on the bottom-up competition for attention against distractor processing. The stimulus set consisted of three categories: Faces, dogs, and vehicles. Stimuli in the faces and dogs categories were visually homogeneous, and stimuli in the vehicles category were visually heterogeneous. In the target-absent trials, all six stimuli belonged to the same category. In the target-present trials, a target stimulus was selected from one category while the other five stimuli were selected from another category.


Participants were instructed to search for the presence of an odd target that did not belong to the same category as the distractors.

By manipulating distractor type, we can compare search performance with homogeneous distractors to performance with heterogeneous distractors. The odd-one-out hypothesis is supported if an odd target from any category is searched for more efficiently when the distractors are homogeneous than when they are heterogeneous. In contrast, if the face-capturing effect is tenable, we expect distractor homogeneity to affect search performance only when the odd target is a nonface stimulus. When the odd target is a face, it should capture attention and pop out regardless of distractor homogeneity. Moreover, a face should be searched for more efficiently than a nonface target among both homogeneous and heterogeneous distractors.

Method

Participants. Twenty-two undergraduate students at National Taiwan University participated in the experiment to receive a bonus credit in an introductory psychology course. Their ages ranged from 19 to 22 years old. All participants had normal or corrected-to-normal vision.

Equipment. A PC with a 3.40 GHz Intel Pentium IV processor was used to run the experiment. The display monitor was a 17-inch colour monitor with a vertical refresh rate of 75 Hz. E-Prime (Schneider, Eschman, & Zuccolotto, 2002a, 2002b) was used to run the experiment.

Stimuli and design. Three categories of stimuli were used in this experiment: Faces, dogs, and vehicles. Each category contained 12 stimuli. Each image was digitized in 24-bit colour scale and sized to a maximum of 120 pixels on each dimension.

The images of vehicles (hot air balloon, van, train, airplane, boat, cruise ship, helicopter, bus, bicycle, sport utility vehicle, cable car, and camper) were selected from a CorelDraw ver. 5.0 art library (Coreldraw!, 1994). We chose photos of dogs with enlarged heads and shrunken bodies to highlight the faces and to make it difficult for participants to identify the specific breed under time pressure (see Figure 1 for examples). Colour images of students cropped to the head and shoulders were chosen from a yearbook from a junior high school. All were male and all wore the same school uniform. As there were more differences in global configuration and rotation among the dogs than among the faces, visual homogeneity was the highest in the faces category and the lowest in the vehicles category.

There were six stimuli in each display. Each stimulus subtended a visual angle of 5.24° (horizontal) × 4.29° (vertical) at a viewing distance of approximately 60 cm. The stimuli were placed around an imaginary circle with a diameter of 7.01°. Images were placed on a white background.

Two hundred and forty experimental trials were constructed. Half of the trials were the target-absent trials in which the six images belonged to the same category. The other half of the trials were the target-present trials in which a target stimulus was selected from one category while the other five stimuli were selected from another category. There were six types of target-present trials: One face among five dogs (face-dogs), one face among five vehicles (face-vehicles), one dog among five faces (dog-faces), one dog among five vehicles (dog-vehicles), one vehicle among five faces (vehicle-faces), and one vehicle among five dogs (vehicle-dogs). There were 20 observations for each condition.
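The trial-list logic just described (six target-present pairings × 20 observations each, plus 120 target-absent trials) can be sketched as below. The experiment itself was run in E-Prime, so the Python function and category names here are purely illustrative, not the authors' implementation.

```python
import random

# Illustrative sketch of the Experiment 1 trial list; names are hypothetical.
CATEGORIES = ["face", "dog", "vehicle"]

def build_trials(seed=None):
    # Target-present: 6 target/distractor pairings x 20 observations = 120 trials.
    pairs = [(t, d) for t in CATEGORIES for d in CATEGORIES if t != d]
    trials = [{"target": t, "distractors": d, "present": True}
              for (t, d) in pairs for _ in range(20)]
    # Target-absent: all six images from one category, 40 trials per category.
    trials += [{"target": None, "distractors": c, "present": False}
               for c in CATEGORIES for _ in range(40)]
    random.Random(seed).shuffle(trials)
    return trials
```

This yields 240 trials, half target-present, as in the design; the response mapping (left button for present, right for absent) and the 24 practice trials are omitted.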

Procedure. Participants previewed all stimuli before the experiment began. A trial started with a fixation cross at the centre of the screen for 1 s. A display with six stimuli was presented until a response was made. If all the stimuli were from the same category, participants pressed the right button of the mouse. When an odd target was present, participants pressed the left button of the mouse. There were 24 practice trials before the experimental trials.

Results and discussion

Proportion correct data and the median reaction time of correct responses were analysed separately. A one-way repeated measures analysis of variance (ANOVA) was conducted for the target-absent and target-present trials to verify that the main effect of display type was significant. Planned comparisons with Bonferroni adjustment of family-wise Type I error (.05) were conducted to contrast the target-present conditions of interest. Table 1 shows the mean performance data.
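The analysis pipeline above (a one-way repeated-measures ANOVA followed by Bonferroni-adjusted planned paired contrasts) follows standard computations, which can be sketched as below. The data array, helper names, and use of Python are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def rm_anova(X):
    """One-way repeated-measures ANOVA on an (n_subjects, k_conditions)
    array of, e.g., median RTs; returns (F, p)."""
    n, k = X.shape
    gm = X.mean()
    # Partition total variability into condition, subject, and error terms.
    ss_cond = n * ((X.mean(axis=0) - gm) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - gm) ** 2).sum()
    ss_err = ((X - gm) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, stats.f.sf(F, df_cond, df_err)

def planned_contrasts(X, pairs, alpha=0.05):
    """Paired-samples t-tests for the listed condition pairs, each judged
    against a Bonferroni-adjusted criterion of alpha / len(pairs)."""
    crit = alpha / len(pairs)
    out = {}
    for i, j in pairs:
        t, p = stats.ttest_rel(X[:, i], X[:, j])
        out[(i, j)] = (t, p, p < crit)
    return out
```

Here each row of `X` is one participant and each column one display type, so the error term is the subject-by-condition interaction, matching the repeated-measures design.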

Figure 1. A schematic example of the dogs with enlarged heads used in Experiment 1.


Accuracy. In the target-absent trials, the main effect of display type was significant, F(2, 42) = 4.64, MSE = 0.002, p < .05. As shown in Table 1, Tukey post hoc comparisons showed that accuracy was significantly higher when the stimuli were all faces (0.97) or dogs (0.98) than when the stimuli were all vehicles (0.94). Accuracy was higher when it was easy to confirm the absence of an odd target among relatively homogeneous distractors.

In the target-present trials, the main effect of display type was significant, F(5, 105) = 11.77, MSE = 0.004, p < .001. Two contrasts were relevant to the face-capturing effect: The comparison between the face-dogs and vehicle-dogs conditions, and the contrast between the face-vehicles and dog-vehicles conditions. Both contrasts were significant, t(21) = 3.49, p < .005, and t(21) = 4.28, p < .0005, respectively. Whether the distractors were homogeneous (dogs) or heterogeneous (vehicles), accuracy was higher when the odd target was a face than when the odd target was a vehicle or a dog.

To examine the effect of distractor–distractor similarity on the search for a singleton target, we conducted three contrasts. When the odd target was a vehicle, accuracy in the vehicle-faces condition was significantly higher than in the vehicle-dogs condition, t(21) = 4.71, p < .001. When the odd target was a dog, there was no difference in accuracy between the dog-faces condition and the dog-vehicles condition (p > .05). Also, accuracy in the face-dogs condition was not significantly different from that in the face-vehicles condition (p > .05). Distractor–distractor similarity did not affect search accuracy when the odd target was a dog or a face.

Reaction time (RT). In the target-absent trials, the main effect of display type was significant, F(2, 42) = 102.59, MSE = 240.692, p < .001.

TABLE 1
Mean performance and standard deviation in Experiment 1: Face targets were searched for more efficiently than nonface targets

Display type         Accuracy M (SD)    Reaction time M (SD), ms
Target present
  Dog-faces          0.92 (0.04)        603.38 (66.03)
  Dog-vehicles       0.89 (0.08)        737.47 (108.12)
  Face-dogs          0.92 (0.07)        647.57 (80.40)
  Face-vehicles      0.96 (0.04)        658.29 (92.33)
  Vehicle-dogs       0.83 (0.10)        707.67 (91.38)
  Vehicle-faces      0.93 (0.07)        639.99 (105.72)
Target absent
  Dogs               0.98 (0.03)        579.44 (74.81)
  Faces              0.97 (0.03)        537.88 (60.70)
  Vehicles           0.94 (0.07)        732.01 (105.18)


Tukey post hoc comparisons showed that RT increased from a display of faces (537.88 ms), to dogs (579.44 ms), and to vehicles (732.01 ms). RT in judging the absence of an odd target was faster among homogeneous distractors than among heterogeneous distractors.

In the target-present trials, the main effect of display type was significant, F(5, 105) = 23.21, MSE = 2244.775, p < .0001. Planned comparisons showed the face-capturing effect in contrasting the face-dogs condition to the vehicle-dogs condition, t(21) = 5.02, p < .001, and in comparing the face-vehicles condition to the dog-vehicles condition, t(21) = 5.89, p < .001. Among the same type of distractors, RT was significantly faster when the odd target was a face than when the odd target was not a face.

Distractor–distractor similarity also influenced search speed: RT was significantly faster in the dog-faces condition than in the dog-vehicles condition, t(21) = 9.29, p < .001, and RT in the vehicle-faces condition was significantly faster than in the vehicle-dogs condition, t(21) = 4.41, p < .001. Yet, RT in the face-dogs condition was not significantly different from RT in the face-vehicles condition (p > .05). Distractor–distractor similarity did not affect search speed when the odd target was a face. When the odd target was a nonface object, RT increased for a search among heterogeneous distractors.

The results in the target-absent condition validated the similarity manipulation, supporting the importance of distractor–distractor similarity in visual search (Duncan & Humphreys, 1989). Similarity was highest among the faces and lowest among the vehicles. It was easier to detect the absence of an odd target among the faces than among the dogs, which in turn was easier than among the vehicles. The vehicles category is at a superordinate level: Its stimuli are heterogeneous objects, each with a distinct object name such as car or boat. In contrast, the dogs and faces are categorized at a basic level with the same general label, such as male face or dog, unless a participant was familiar with a specific stimulus in the category.

Distractor–distractor similarity also influenced search performance for a nonface target in the target-present trials. When the odd target was a dog or a vehicle, performance was better when faces were the distractors than when nonface objects were the distractors. This advantage was observed in contrasting the dog-faces to the dog-vehicles conditions and also in comparing the vehicle-faces to the vehicle-dogs conditions. Distractor–distractor similarity, however, did not affect search performance when the odd target was a face. Searching for a face among heterogeneous distractors (vehicles) was as efficient as searching among homogeneous distractors (dogs).

The face-capturing effect was evident: Participants were faster when the odd target was a face than when the odd target was a nonface object. The face-capturing effect was observed both when the distractors were visually homogeneous and when they were heterogeneous. A face stimulus has an advantage beyond the odd-one-out effect that Palermo and Rhodes (2003) proposed. A face can attract attention. When a nonface target such as a dog or vehicle did not attract attention, distractor–distractor similarity influenced search performance. Low-level feature similarity indeed influences visual processing (Palermo & Rhodes, 2003; VanRullen, 2005), but the face-capturing effect can eliminate the influence of low-level feature similarity.

EXPERIMENT 2

The results of Experiment 1 demonstrated the face-capturing effect in a singleton search task regardless of the feature similarity of distractors. The objective of this experiment is to investigate whether such a capturing effect can override the effect of feature similarity on change detection. Previous studies of change detection have shown that change magnitude between the pre- and postchange objects can significantly influence performance (Mitroff, Simons, & Franconeri, 2002; Silverman & Mack, 2006; Smilek, Eastwood, & Merikle, 2000; Williams & Simons, 2000; Yeh & Yang, 2008; Zelinsky, 2003) and that a signal detection model can predict change-detection performance (Wilken & Ma, 2004). Change detection is poor when the pre- and postchange targets are highly similar: With a low signal-to-noise (S/N) ratio in detection, high similarity between the two targets impairs detection performance.

Although Ro et al. (2001) demonstrated a face-detection advantage despite the small change magnitude between two faces, examination of their face stimuli¹ reveals differences in global configuration such as hair style and head rotation. In addition, emotional expressions also differed among some faces. Ohman, Lundqvist, and Esteves (2001) showed that visual search is quite efficient, with a shallow search slope, when the target face contains emotional expression and the distractors are faces of neutral emotion. The face-detection advantage may therefore have arisen in their study both from the capturing effect and from the ease of detecting changes in global configuration and emotional expression.

We postulate that both the capturing effect and the similarity effect operate in change detection. The capturing effect itself can be insufficient to produce the face-detection advantage. When the pre- and postchange faces are highly similar, we expect that the face-detection advantage will not be observed. The male faces used in Experiment 1 were highly similar, with little emotional expression or head rotation. We expect that performance in detecting a face change with these stimuli should be equal to or even worse than performance in detecting an object change. Similarity should modulate the face-detection advantage.

¹ We thank Ro for providing us with the stimuli from his study. Only achromatic female faces were used in their experiments.

Method

Participants. Twelve undergraduate students from National Taiwan University volunteered to take part in this experiment for a bonus credit in an introductory psychology course. Their ages ranged from 19 to 22 years old. All participants had normal or corrected-to-normal vision.

Stimuli and design. The stimulus set was composed of 36 colour stimuli from six categories similar to the ones used in Ro et al.’s (2001) study: Male faces, appliances (e.g., a telephone), food (e.g., an apple), clothes (e.g., a coat), instruments (e.g., a guitar), and plants (e.g., a rose). There were six stimuli in each category. Six faces were chosen from the stimuli used in Experiment 1. The other 30 images were selected from the CorelDraw 5.0 art library (Coreldraw!, 1994). As similarity between the faces was very high, detecting a change in faces must rely on detailed analysis of facial features.

Two hundred and forty experimental trials were constructed. Half of the trials were change trials and the other half were no-change trials. The change trials were constructed for a within-subjects factorial design of 6 (type of change: Faces, appliances, food, clothes, instruments, plants) × 20 observations. On each trial, a display contained six images, one randomly selected from each of the six categories. When no change occurred, the pre- and postchange displays were identical. When a change occurred, a stimulus in the prechange display was replaced in the postchange display by another stimulus from the same category.

Procedure. Participants first practiced the task for 12 trials to ensure that they understood the instructions. They then performed the experimental trials with a brief rest after every 60 trials. A one-shot change-detection paradigm was used. Each trial (see Figure 2) began with a black fixation cross for 1000 ms. A prechange display was then presented for 2000 ms. Following a 350 ms blank interval, a postchange display was presented for 2000 ms. After another blank interval of 350 ms, participants judged whether the pre- and postchange displays were the same, pressing the left mouse button for a same response and the right mouse button for a different response. The intertrial interval was 1000 ms. Reaction time was not emphasized in this experiment.


Results and discussion

Proportion correct data were analysed with a one-way repeated measures ANOVA with type of change as the single factor. Table 2 shows the mean performance data.

The results indicated a significant main effect of change type, F(5, 55) = 8.18, MSE = 0.012, p < .01. Tukey post hoc comparisons showed that detecting changes in appliances was significantly worse than detecting changes in the other object categories. When one face replaced another, performance was significantly worse than when detecting a change between two instruments. Detection accuracy was not significantly different among clothes, food, instruments, and plants. The face-detection advantage was not observed: Detecting a face change was no better than detecting an object change, and was even worse than detecting a change between two instruments.

We have no clear explanation why detecting a change between appliances was worse than detecting a change in the other object categories. The added detection difficulty may have arisen from the fact that the stimuli in this category were not as colourful as those in the other object categories.

Although the null result of a face-detection advantage was consistent with our prediction, methodological differences exist between this experiment and the experiments conducted in Ro et al.'s (2001) study. We used a one-shot paradigm in which the pre- and postchange displays were presented once. In contrast, Ro et al. used a flicker paradigm in which the two displays alternated until participants detected a change. To rule out the possibility that the difference in methodology caused the null result, we conducted an additional experiment with 17 volunteers using a flicker paradigm. The pre- and postchange displays were cycled until participants responded; each cycle consisted of two stimulus displays, each shown for 533 ms and followed by a blank interval of 83 ms. The stimulus set was the same as that used in this experiment. The results did not show a face-detection advantage. It is thus unlikely that the difference in methodology caused the null results.

Figure 2. The trial procedure used in Experiment 2. Participants judged whether a change occurred between two displays. To view this figure in colour, please see the online issue of the Journal.

We postulate that the high similarity between faces reduces the S/N ratio, making change detection difficult. The lack of a face-detection advantage arises from two mechanisms: A benefit based on the capturing effect and a cost based on a low S/N ratio in detection. If the S/N ratio in detection is further reduced, detection in a face-change condition should be worse than detection in object-change conditions. We verified this possibility in Experiment 3.
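The signal-detection logic invoked here (a lower S/N ratio for highly similar pre-/postchange faces should show up as lower sensitivity) maps onto the standard d′ measure. The sketch below uses only the Python standard library; the trial counts are hypothetical illustrations, not the paper's data.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' from change-detection counts, with a log-linear
    correction (0.5 added to each cell) so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return Z(hit_rate) - Z(fa_rate)

# Hypothetical counts: with false alarms held constant, a lower hit rate
# for highly similar pre-/postchange faces yields a lower d'.
similar_faces = d_prime(80, 40, 13, 107)      # hit rate near .66
distinct_objects = d_prime(101, 19, 13, 107)  # hit rate near .84
```

With equal hit and false-alarm rates d′ is zero (chance performance); the correction simply keeps the probit transform finite when a cell count is zero.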

EXPERIMENT 3

To demonstrate that the high visual similarity between the pre- and postchange faces cancelled the capturing effect in Experiment 2, we could either reduce the visual similarity among the faces or further reduce the S/N ratio. Given that Ro et al. (2001) had already shown the face-detection advantage with female faces that differed in head rotation and emotional expression, we adopted the second approach in this experiment to demonstrate the cost of detecting a change between similar faces.

To further reduce the S/N ratio, we presented two stimuli from each category. With two faces in a display, both capture attention for further processing. The S/N ratio in detection is low when only one face changes between the pre- and postchange displays, as participants must compare four faces to make a decision. The similarity cost should dominate detection performance. Thus, we expect performance in the face-change condition to be worse than performance in the object-change conditions.

TABLE 2
Mean performance and standard deviation in Experiment 2: No face-detection advantage was observed

Type of change    Accuracy M (SD)
Faces             0.66 (0.20)
Appliances        0.57 (0.16)
Clothes           0.71 (0.11)
Food              0.75 (0.12)
Instruments       0.84 (0.09)
Plants            0.75 (0.14)
No change         0.89 (0.05)

Method

Participants. Twelve undergraduate students at National Taiwan University participated in this experiment to receive a bonus credit in an introductory psychology course. Their ages ranged from 19 to 22 years old. All participants had normal or corrected-to-normal vision.

Stimuli, design, and procedure. Three categories of stimuli were used in this experiment: Faces, vehicles, and appliances, with 12 stimuli selected for each category. One hundred and twenty experimental trials were used. Half of the trials were change trials and the other half were no-change trials. The change trials were created based on a within-subjects factorial design of 3 (type of change: Faces, appliances, vehicles) × 20 observations. Only one stimulus was replaced in the change trials. The procedure was the same as in Experiment 2. Reaction time was not emphasized in this experiment.

Results and discussion

Proportion correct data were analysed with a one-way (type of change) repeated measures ANOVA. Table 3 shows the mean performance data.

The results showed a significant main effect of change type, F(2, 22) = 8.45, MSE = 0.04, p < .001. Tukey post hoc comparisons showed that performance in detecting a face change was the worst, and that there was no significant difference between detecting a change in appliances and detecting a change in vehicles.

When a face was added to the stimulus display, the visual similarity effect dominated detection performance. The results of Experiment 2 showed equivalent detection performance for the appliances and faces categories. Yet, detecting an appliance change was significantly better than detecting a face change in this experiment. Cross-experiment comparisons showed that performance in detecting a change in appliances was not affected by adding a stimulus from the same category, F(1, 44) = 0.37, MSE = 0.034, p > .1. In contrast, detection performance was impaired for faces when participants had to divide attention between two faces in a stimulus display, F(1, 44) = 12.14, MSE = 0.034, p < .01. The results highlight that a singleton face may be critical for observing the face-detection advantage.


GENERAL DISCUSSION

The results of Experiment 1 support the face-capturing effect in a singleton search task. The odd-one-out hypothesis cannot fully explain the results as a search for an odd face was more efficient than a search for an odd nonface object. The results from Experiments 2 and 3 highlighted the constraints of the face-capturing effect in change detection. High visual similarity between the pre- and postchange targets counteracted the face-capturing effect in Experiment 2 and degraded performance when participants in Experiment 3 had to divide attention between two faces in a stimulus display.

A face stimulus can capture attention. When the odd target was a face in the singleton search task of Experiment 1, search performance was not affected by distractor-distractor similarity. Performance was statistically equivalent between the face-dogs and face-vehicles conditions. In contrast, distractor-distractor similarity affected search performance when the odd target was a dog, with better performance under high distractor-distractor similarity (dog-faces) than under low distractor-distractor similarity (dog-vehicles). The same pattern of results was observed when the odd target was a vehicle. Given the same types of distractors, search performance was more efficient when the odd target was a face, as shown in the contrasts between face-dogs and vehicle-dogs, and between face-vehicles and dog-vehicles.

Whether the face-capturing effect arose from holistic processing of face stimuli remains to be explored in future research. Recognition of inverted faces is worse than recognition of upright faces (Diamond & Carey, 1986; Farah, Tanaka, & Drain, 1995; Tanaka & Farah, 1991). The neurons sensitive to faces show less activation to inverted faces than to upright faces (Yovel & Kanwisher, 2005). Searching for an upright face among inverted faces is more efficient than searching for an inverted face among upright face distractors (Tomonaga, 2007). When each image is cut into various segments and randomly reassembled into a scrambled stimulus, searching for a scrambled face among scrambled objects is not efficient (Hershler & Hochstein, 2005). While upright, inverted, and scrambled faces contain low-level features, only upright faces preserve configural information for holistic processing. The better performance with upright faces than with inverted or scrambled faces suggests that faces are processed holistically. Yet, VanRullen (2005) showed that a search for an inverted face among inverted objects is also efficient with a shallow slope. It is unclear what has caused the inconsistent findings.

TABLE 3
Mean performance and standard deviation in Experiment 3: Face similarity led to performance cost

                       Type of change
             Faces   Appliances   Vehicles   No change
Accuracy
  M           0.40      0.61        0.73       0.80
  SD          0.20      0.18        0.22       0.24

Although a capturing effect was observed in Experiment 1, it was insufficient to produce a face-detection advantage in Experiment 2. Change magnitude influenced detection performance as demonstrated in previous studies (Mitroff et al., 2002; Silverman & Mack, 2006; Smilek et al., 2000; Wilken & Ma, 2004; Williams & Simons, 2000; Yeh & Yang, 2008; Zelinsky, 2003). With highly similar pre- and postchange faces, the face-detection advantage was not observed. Detection performance deteriorated further in Experiment 3 when two faces were present in each visual display.

The high similarity among faces was detrimental because change detection is based on the ratio of mismatch (change) to match (no-change) signals, and the high similarity increased the match signals. When participants in Experiment 3 had to divide attention between two faces in each stimulus display, the S/N ratio computed over the four faces in the pre- and postchange displays was low. Alternatively, it is plausible that the presence of two faces reduced the capturing effect because no more than one face can be processed at a time (Bindemann, Burton, & Jenkins, 2005). As a result, detection of a face change was worse than detection of an object change.
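The signal-ratio account above can be made concrete with a toy computation. The similarity values and the linear decision rule below are illustrative assumptions, not the authors' model: each item's pre/post similarity feeds the match signal, and the changed item contributes its dissimilarity as the mismatch signal.

```python
def signal_ratio(changed_sim, unchanged_sims):
    """Toy mismatch-to-match ratio for change detection.
    changed_sim: pre/post similarity (0-1) of the changed item;
    unchanged_sims: pre/post similarities of the other items.
    Higher output means a stronger relative change signal."""
    mismatch = 1.0 - changed_sim
    match = changed_sim + sum(unchanged_sims)
    return mismatch / match

# A face changing into a highly similar face yields a weaker relative
# change signal than an object changing into a dissimilar object.
face_change = signal_ratio(0.9, [1.0, 1.0, 1.0])    # ~0.026
object_change = signal_ratio(0.5, [1.0, 1.0, 1.0])  # ~0.143
assert face_change < object_change
```

Under this sketch, raising the pre/post similarity of the changed item, or adding further high-similarity items to the display, both push the ratio down, consistent with the impaired face-change detection reported above.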

CONCLUSION

A human face appears to have an advantage beyond the odd-one-out effect in a visual search, supporting the face-capturing effect. This capturing effect, however, cannot override the feature similarity effect on change detection. When the pre- and postchange faces were highly similar, no detection advantage was observed. When each visual display contained two faces, detection in the face-change condition was worse than in the object-change condition. Visual similarity can modulate the face-detection advantage. Face processing is efficient, but its impact on performance depends on the stimulus-task context.

REFERENCES

Bindemann, M., Burton, A. M., & Jenkins, R. (2005). Capacity limits for face processing. Cognition, 98, 177–197.

CorelDRAW! [Computer software]. (1994). Ottawa, ON, Canada: Corel, Inc.

Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115, 107–117.

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96(3), 433–458.

Farah, M. J. (1995). Dissociable systems for visual recognition: A cognitive neuropsychology approach. In S. M. Kosslyn & D. N. Osherson (Eds.), Visual cognition: An invitation to cognitive science (2nd ed., Vol. 2, pp. 101–119). Cambridge, MA: MIT Press.

Farah, M. J. (1996). Is face recognition "special"? Evidence from neuropsychology. Behavioural Brain Research, 76, 181–189.

Farah, M. J., Tanaka, J. W., & Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21, 628–634.

Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197.

Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform "face area" increases with expertise in recognizing novel objects. Nature Neuroscience, 2, 568–573.

Grill-Spector, K., Knouf, N., & Kanwisher, N. (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7, 555–562.

Hakoda, Y. (2003). Domain-specificity versus domain-generality in facial expressions and recognition. Japanese Journal of Psychonomic Science, 22, 121–124.

Hershler, O., & Hochstein, S. (2005). At first sight: A high-level pop out effect for faces. Vision Research, 45, 1707–1724.

Kanwisher, N. (2000). Domain specificity in face perception. Nature Neuroscience, 3, 759–763.

Kawabata, H. (2003). Domain-specificity and generality in the brain. Japanese Journal of Psychonomic Science, 22, 132–136.

Lavie, N., Ro, T., & Russell, C. (2003). The role of perceptual load in processing distractor faces. Psychological Science, 14, 510–515.

Mitroff, S. R., Simons, D. J., & Franconeri, S. L. (2002). The siren song of implicit change detection. Journal of Experimental Psychology: Human Perception and Performance, 28, 798–815.

Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.

Palermo, R., & Rhodes, G. (2003). Change detection in the flicker paradigm: Do faces have an advantage? Visual Cognition, 10, 683–713.

Ro, T., Russell, C., & Lavie, N. (2001). Changing faces: A detection advantage in the flicker paradigm. Psychological Science, 12, 94–99.

Schneider, W., Eschman, A., & Zuccolotto, A. (2002a). E-Prime user's guide. Pittsburgh, PA: Psychology Software Tools, Inc.

Schneider, W., Eschman, A., & Zuccolotto, A. (2002b). E-Prime reference guide. Pittsburgh, PA: Psychology Software Tools, Inc.

Silverman, M. E., & Mack, A. (2006). Change blindness and priming: When it does and does not occur. Consciousness and Cognition: An International Journal, 15, 409–422.

Smilek, D., Eastwood, J. D., & Merikle, P. M. (2000). Does unattended information facilitate change detection? Journal of Experimental Psychology: Human Perception and Performance, 26, 480–487.

Tanaka, J. W., & Farah, M. J. (1991). Second-order relational properties and the inversion effect: Testing a theory of face perception. Perception and Psychophysics, 50, 367–372.

Theeuwes, J., & van der Stigchel, S. (2006). Faces capture attention: Evidence from inhibition of return. Visual Cognition, 13, 657–665.

Tomonaga, M. (2007). Visual search for orientation of faces by a chimpanzee (Pan troglodytes): Face-specific upright superiority and the role of facial configural properties. Primates, 48, 1–12.

VanRullen, R. (2005). On second glance: Still no high-level pop-out effect for faces. Vision Research, 46, 3017–3027.

Wilken, P., & Ma, W. J. (2004). A detection theory account of change detection. Journal of Vision, 4, 1120–1135.

Williams, P., & Simons, D. J. (2000). Detecting changes in novel, complex three-dimensional objects. Visual Cognition, 7, 297–322.

Yeh, Y.-Y., & Yang, C.-T. (2008). Object memory and change detection: Dissociation as a function of visual and conceptual similarity. Acta Psychologica, 127, 114–128.

Yovel, G., & Kanwisher, N. (2004). Face perception: Domain specific, not process specific. Neuron, 44, 889–898.

Yovel, G., & Kanwisher, N. (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15, 2256–2262.

Zelinsky, G. J. (2003). Detecting changes between real-world objects using spatiochromatic filters. Psychonomic Bulletin and Review, 10, 533–555.

Manuscript received March 2007
Manuscript accepted November 2007
First published online January 2008

Figure 1. A schematic example of the dogs with enlarged heads used in Experiment 1.
Figure 2. The trial procedure used in Experiment 2. Participants judged whether a change occurred between two displays
