
The Effect of Trait Anxiety on Search Performance in an Emotional vs. Non-emotional Search Task

The purpose of the study was to investigate whether emotional faces can capture attention even when they are entirely irrelevant to the task, and whether disengagement from emotional stimuli differs between individuals with high and low anxiety. Participants were 42 undergraduate students (40 female, 2 male; Mage = 20.40, SD = 3.46) at the University of Reading. There were two visual search tasks: an emotionally relevant task (emotion task), in which participants searched for an angry or a happy face, and an emotionally irrelevant task (age task), in which participants indicated whether the discrepant face in a crowd belonged to an old or a young individual. In addition, a widely used measure of trait and state anxiety, the State-Trait Anxiety Inventory (STAI-Trait & State; Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983), was administered, and participants were allocated to two groups (low and high anxious). The results revealed a general happy advantage in both tasks, suggesting that emotional expressions (especially positive ones) can capture attention even when they are entirely irrelevant to the task. This overall positivity bias (happy advantage) is consistent with findings using real photographic stimuli and heterogeneous crowds (e.g., Öhman, Juth, & Lundqvist, 2010). Moreover, the happy advantage over angry targets was not present in individuals with high anxiety in the age task, confirming the hypothesis that high and low anxious groups would differ when the task was emotionally irrelevant.

Keywords: Facial emotion; Attention; Visual search; Task-irrelevant





Scanning facial expressions to determine potential threat is considered a prominent survival mechanism (e.g., Baron & Byrne, 1991). The ability to identify emotional information from another's facial expression is well developed in humans (Ekman & Friesen, 1975). Moreover, Zajonc (1980) suggested that the automatic perception of emotional expressions evolved to enhance social communication. Furthermore, the face carries important information about many social and biological features, such as gender, age, identity, and emotional state. In this study, I examined whether faces displaying emotional expressions (i.e., angry and/or happy) have an advantage over neutral faces in an emotional (i.e., searching for emotional expressions) versus non-emotional search task (i.e., searching for young or old individuals' faces). More importantly, the aim was to investigate the role of anxiety level in performance on these two visual search tasks (emotional vs. non-emotional). In the introduction, I first review some of the methodological issues in the visual search paradigm in the general population. This is followed by a summary of the theoretical models and empirical evidence on attentional bias in anxious populations.


General Issues in the Visual Search Paradigm


Literature on attentional bias to emotion in the general population comes in large part from the visual search paradigm (for a review, see Yiend, 2010). In a typical visual search experiment, an array of stimuli (i.e., faces) is presented with a unique face posing a particular expression among other faces with different expressions, and participants are required to respond to the discrepant one as quickly as possible. Response times (RTs) and accuracy rates (or error rates) are measured as indices of the efficiency of detecting the target. The idea is that if an emotional facial expression (e.g., an angry face) pops out, then RTs and error rates should be lower than for the distracters (e.g., neutral faces). Typically, RTs are plotted against increasing array size (number of distracters), and the "search slope" of the resulting graph is characteristically flat for targets that "pop out" and steeper for targets that need attentional selection (Treisman & Gelade, 1980). These slopes represent fast, parallel and slow, serial search processes, and are used to quantify the extent of parallel processing (Yiend, 2010). Therefore, the visual search task is a useful tool for investigating attentional bias to different combinations of target and distracter expressions.
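The search-slope logic can be written as a simple linear relation between RT and array size; this is the standard textbook formulation, not an equation taken from the studies cited here:

```latex
\mathrm{RT}(n) = a + b \cdot n
```

Here n is the number of items in the array, a is a baseline response time, and b is the search slope in ms/item: a slope near zero indicates parallel, "pop-out" search, whereas a slope well above zero indicates serial, attention-demanding search.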


Many combinations of target and distracter expressions can be found in studies. Yiend (2010) claims that valenced targets within neutral-distracter arrays allow clean conclusions about the speed of detection of the target and the underlying mechanism of attentional capture. For instance, if happy faces in a neutral crowd are detected faster than angry faces in a neutral crowd, then we can conclude that happy faces grab individuals' attention more quickly (perhaps automatically) than angry ones. Furthermore, neutral targets in a valenced crowd allow us to consider distracter effects alone (Yiend, 2010). Another common design in visual search experiments is to present a valenced target among valenced distracters (so-called search-asymmetry designs; see Horstmann, 2009). For instance, search performance for an angry face among happy faces may be compared with search performance for a happy face among angry faces. It is noted that an angry face clearly shows an indication of threat among happy faces (Horstmann & Bauland, 2006), while neutral faces are often perceived as mildly hostile (Öhman, Flykt, & Esteves, 2001). On the other hand, Frischen and colleagues argue that it is not clear whether differing performance in the two conditions should be attributed to the attentional guidance of the target or to the distraction of the crowd (dwell time), and thus the data from such designs are harder to interpret (Frischen, Eastwood, & Smilek, 2008). For instance, some researchers found that targets embedded among positive distracters were detected better than targets in a negative crowd (Hahn, Carlson, Singer, & Gronlund, 2006; Horstmann, Scharlau, & Ansorge, 2006). This suggests that emotional distracters influence search performance by capturing attention and delaying its disengagement (Frischen et al., 2008).


The other issue in the visual search paradigm is whether to use photographic images of real faces or schematic faces, which are simple line drawings consisting of a circle, mouth, eyes, and eyebrows. Studies with schematic faces have generally found a sad superiority effect (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000) and an angry superiority effect among happy and/or neutral schematic faces (Fox et al., 2000; Öhman et al., 2001; Hahn et al., 2006; Hahn & Gronlund, 2007; Horstmann, 2007). Others did not find a significant difference between positive and negative emotional expressions (Horstmann, 2009; White, 1995). Horstmann and Bauland (2006) highlight that, in response to the confound problem in one of the picture-based studies (Hansen & Hansen, 1988), researchers had mostly been motivated to use schematic faces. On the contrary, they suggest that with available computer applications it is possible to generate new stimuli that differ only in their facial expressions, without introducing confounds. One way would be to use faces that are entirely digitally constructed. Becker and colleagues generated computer-based facial expressions (angry, happy, neutral) with different/heterogeneous identities, and found a happy advantage over angry and neutral targets, meaning that participants were quicker when the target was happy (Becker, Anderson, Mortensen, Neufeld, & Neel, 2011). They conclude that searching for facial expressions requires a serial search process and that previous evidence of pop-out for angry expressions (i.e., parallel searches) might be "spurious". However, the ecological validity of the results obtained with schematic faces (e.g., Calvo & Nummenmaa, 2008; Horstmann & Bauland, 2006), as well as with computer-generated faces, is still a matter of question.

 

On the other hand, the findings from studies using real photographic faces are more inconsistent than those using schematic ones. Some showed an angry superiority effect (Fox & Damjanovic, 2006; Gilboa-Schechtman, Foa, & Amir, 1999; Horstmann & Bauland, 2006), whereas others suggest faster detection of happy targets (Calvo & Nummenmaa, 2008; Juth, Lundqvist, Karlsson, & Öhman, 2005). Others found no difference between happy and angry targets (Purcell, Stewart, & Skov, 1996; Williams, Moss, Bradshaw, & Mattingly, 2005). In relation to these conflicting findings, researchers have highlighted the critical importance of set sizes and the identities of photographic stimuli (Öhman, Juth, & Lundqvist, 2010). They hypothesized that the distracter stimuli might have a strong effect on search performance. Thus, they expected better performance when all stimuli were homogeneous (i.e., the identities in the array were the same in a given trial) and when the set size was small. As expected, they found that female happy target faces were always detected more quickly than female angry target faces, especially with heterogeneous distracters and larger set sizes.


Furthermore, it is noted that schematic faces lack the variations in identity and gender that are conspicuous features of real faces (Öhman et al., 2010). Also, studies with real faces reporting anger superiority have used homogeneous displays in which both the target and distracter faces were posed by the same individual (Fox & Damjanovic, 2006; Gilboa-Schechtman et al., 1999; Horstmann & Bauland, 2006), whereas studies reporting a happy face advantage (Byrne & Eysenck, 1995; Juth et al., 2005; Öhman et al., 2010) used sets of heterogeneous distracters composed of different male and female individuals. Becker and colleagues conducted a series of experiments using different stimulus types (i.e., schematic, real, and computer-generated faces) and concluded that realistic expressive faces are not pre-attentively detected and that the search asymmetry favours happy rather than angry expressions, persisting even when low-level visual confounds have been eliminated (Becker et al., 2011). Thus, these findings and evaluations clearly show that the homogeneity of the array, the gender of the stimuli, and the array size are important, determining factors in the visual search paradigm.


The last and, for the current study, most essential issue is top-down guidance in visual search for facial expressions. The attentional bias to emotional facial expressions is mostly attributed to bottom-up guidance of attention, such as negativity (Pratto & John, 1991), threat-related features of angry or fearful faces (e.g., Öhman et al., 2001), and high arousal level (e.g., Bradley, Codispoti, Cuthbert, & Lang, 2001; Schimmack, 2005; Vogt, De Houwer, Koster, Van Damme, & Crombez, 2008). However, it is well known that visual selective attention can be allocated to objects in the visual field in either a goal-directed (top-down control) or a stimulus-driven manner (bottom-up process). Most current models of attention assume that selection is the result of the joint influence of these two factors (e.g., Treisman & Sato, 1990). Furthermore, human action is organized by a variety of different goals and tasks, which may change over time and with circumstantial contexts (e.g., Rothermund, Voss, & Wentura, 2008). Consistently, researchers claim that the information processing system has the capacity for flexibility, shaping attention according to the demands of the current goal or task (e.g., Goschke, 2000). Nevertheless, the idea that attending to a facial expression may depend on the goal at hand has mostly been overlooked (Hahn & Gronlund, 2007). This notion is also related to empirical evidence revealing that anxious individuals are more biased to threatening faces than less-anxious individuals (Fox, Russo, Bowles, & Dutton, 2001); that older adults are better at inhibiting angry faces than younger adults (Hahn et al., 2006); and that task instruction modifies the visual search advantage for threatening and nonthreatening facial expressions (Williams et al., 2005). Yet influences from task instructions as goal-directed information were mostly ignored. For instance, in all previous studies suggesting that emotional expression may grab attention in visual search, emotion was always related to the task: either emotional faces were presented in the same location as the task stimuli, or participants were instructed to find an emotional face or an "odd one out" that was defined as such by its emotional content (see Hodsoll, Viding, & Lavie, 2011 for an exception). Therefore, although there is no doubt that emotional information (e.g., a negative or positive emotional expression) captures more attention than neutral information, it is not clear whether emotion is capable of capturing attention when it is irrelevant to the task. Thus, the important question is whether emotional faces can capture attention even when they are entirely irrelevant to the task. In the present study, I aimed to address this question.


Selective Attention to Threat in Anxiety


There is a body of research demonstrating attentional biases toward threat at high levels of anxiety and in anxiety disorders (e.g., Beck & Clark, 1997). Several theoretical models have been developed to explain these biases in anxious populations; I review those relevant to the current study in the following section.

 

Theoretical models of attentional biases towards threat in anxiety. Theoretical frameworks of anxiety propose that attentional biases to threat-related stimuli might cause or maintain anxiety (e.g., Beck & Clark, 1997; Williams, Watts, MacLeod, & Mathews, 1997). Individuals with high (vs. low) levels of anxiety selectively narrow attention onto threat stimuli in preference to neutral stimuli (for a recent review, see Richards, Benson, Donnelly, & Hadwin, 2014). For instance, Williams and colleagues (Williams et al., 1997) proposed two cognitive mechanisms responsible for the threat-related bias in anxiety: an affective decision mechanism (ADM), which evaluates the threat value, and a resource allocation mechanism (RAM, or task demand), which receives input from the ADM and determines resource allocation. According to this model, differences in the RAM lead to individual differences in trait anxiety; that is, while individuals with high levels of anxiety direct attention towards threat, low anxious individuals direct their attention away from threat. In contrast, Mogg and Bradley (1998) argue that this could be true for minor threat, but that both low and high anxious participants might be influenced by severe threat. They proposed two systems involved in anxiety-related stimulus processing: the valence evaluation system (VES) and the goal engagement system (GES). First, the VES, which is similar to the ADM in Williams and colleagues' model (Williams et al., 1997), assesses the affective valence of a stimulus. Then, the GES automatically allocates resources to the stimulus according to whether the output of this valence evaluation process indicates that threat is present. The GES will continue to direct resources to current goals if the threat has a low value. Mogg and Bradley (1998) suggested that stimuli that non-anxious individuals tag as nonthreatening are tagged as threatening by anxious people, leading to a bias in the stimulus evaluation process. As a result, the GES allocates attentional resources towards threat stimuli more frequently in high anxious individuals. When the threat is relatively mild or ambiguous, the VES in individuals with high trait anxiety will evaluate it as more threatening, whereas if the threat has a high value, all individuals, anxious or not, will evaluate it as threat. More recently, Eysenck and colleagues proposed that anxiety impairs efficient functioning of the goal-directed attentional system and enhances stimulus-driven processing. In addition, they suggest difficulties disengaging attention from threat and difficulties inhibiting the processing of threatening distracters (Attentional Control Theory; Eysenck, Derakshan, Santos, & Calvo, 2007).

Empirical evidence for selective attention to threat in anxiety. Much work has been done on anxiety-related bias using paradigms other than the visual search task, such as the spatial cueing task, the emotional Stroop task, and the dot probe task (for more details on these paradigms, see Cisler & Koster, 2010; Richards et al., 2014; Yiend, 2010). Some researchers (e.g., Fox et al., 2001; Fox, Russo, & Dutton, 2002; Yiend & Mathews, 2001) suggest that anxiety has little impact on the initial detection of threat; rather, it has a stronger effect in modulating the maintenance of attention on the source of threat. That is, they propose that a delay in disengaging from threat stimuli might be the primary attentional difference between anxious and non-anxious individuals. Evidence on threat-related bias in anxiety is summarized below according to two concepts relevant to the current study: disengagement from threat and facilitated attention to threat.


Research with the spatial cueing paradigm has consistently evidenced difficulty in disengagement among anxious individuals (e.g., Amir, Elias, Klumpp, & Przeworski, 2003; Fox et al., 2001; 2002; Koster, Crombez, Verschuere, Van Damme, & Wiersema, 2006; Koster, Verschuere, Crombez, & Van Damme, 2005; Yiend & Mathews, 2001). For example, Fox and colleagues found that the presence of a threatening cue (words or faces) had a strong effect on the disengagement of attention, while the threatening information had no advantage in attracting attention to its own location (Fox et al., 2001). Research using the visual search task has mostly confirmed problems disengaging from threat in anxious individuals (Byrne & Eysenck, 1995; Gilboa-Schechtman et al., 1999; Juth et al., 2005; Rinck, Becker, Kellermann, & Roth, 2003; Rinck, Reinecke, Ellwart, Heuer, & Becker, 2005). Also, research with the dot probe task has consistently shown difficulty in disengagement (Salemink, van den Hout, & Kindt, 2007). Therefore, there is a substantial amount of evidence that attentional biases towards threat are strongly related to a difficulty in disengaging attention from threat stimuli.


Research has found facilitated attention to threat among high anxious individuals using the visual search task (Byrne & Eysenck, 1995; Gilboa-Schechtman et al., 1999; Juth et al., 2005) and the spatial cueing paradigm (Koster et al., 2006), while others found no evidence that attention is facilitated for threat among individuals with high anxiety using the visual search task (Rinck et al., 2003, experiment 1; Rinck et al., 2005, experiment 1) or the spatial cueing task (Amir et al., 2003; Fox et al., 2001; 2002; Yiend & Mathews, 2001). Cisler and Koster (2010) conclude that facilitated attention towards threat might be moderated by threat intensity (i.e., mild/high threat) and stimulus duration. For instance, Koster and colleagues (2006) found that at 100 ms stimulus durations, facilitated attention was found towards highly, but not mildly, threatening pictures among high trait anxious individuals. However, at longer presentation times there was again no evidence of facilitated attention. Consequently, the conflicting findings might indicate that the phenomenon of attentional bias in anxiety is less systematic than previously considered and is sensitive to different contexts and procedures (Derakshan & Koster, 2010).


The aim of this study was to explore how threat-related bias works in an emotionally relevant and an emotionally irrelevant situation, and how these two situations interact with trait anxiety. When participants' goal was emotionally relevant (i.e., finding an angry face in a neutral crowd), low and high anxious groups were not expected to differ in search efficiency. Evidence, especially from studies using real photographs and heterogeneous, neutral crowds, showed a happy advantage over angry faces (e.g., Öhman et al., 2010). Thus, happy faces might even show a superiority effect in both high and low anxious individuals (i.e., no facilitated attention was expected). However, high anxious participants were expected to be distracted by emotional stimuli (i.e., angry faces) more than low anxious people when their goal was not related to finding emotional facial expressions (i.e., when finding an old or a young face). That is, the detection of the emotionally irrelevant target would be inhibited by an emotional distracter, and thus RTs would be delayed, suggesting difficulties in disengaging attention from the threatening face in order to find the target elsewhere. To investigate how a top-down goal (emotionally relevant vs. emotionally irrelevant) would influence attentional guidance for facial expressions, two visual search tasks (the emotion task and the age task) were designed. In the emotion task, the goal was to search for an angry or a happy target (of young, middle, or old age as the task-irrelevant dimension) in a neutral, middle-aged crowd. In the age task, participants searched for an old or young target (with an angry, happy, or neutral expression as the task-irrelevant dimension) in a neutral, middle-aged crowd. Search efficiency in these two tasks was then compared according to their relevance to emotion.


Method


Participants


Participants were 42 undergraduate students (40 female, 2 male; Mage = 20.40, SD = 3.46) recruited via the SONA system (School of Psychology Research Panel) and word of mouth at the University of Reading. They received 1.5 course credits. Two participants were left-handed; all the others were right-handed. Most of the participants were British (N = 32); the others were Turkish (N = 3), Bulgarian (N = 2), French (N = 1), Greek (N = 1), Iranian (N = 1), Spanish (N = 1), and Welsh (N = 1).


Stimuli


All pictures were selected from a standard set of photographs, the FACES database (Ebner, Riediger, & Lindenberger, 2010). First, old, young, and middle-aged people were selected (16 persons per set, 48 different persons in total) to create a consistent picture set for the purpose of this experiment. For every age group, equal numbers of females and males were selected (8 each). For each person I chose three pictures, with an angry, a happy, and a neutral expression (48 x 3 pictures in total). All stimuli were converted to gray-scale images and subsequently modified using Adobe Photoshop CS2. In each photograph, the hair and the ears were cropped using circular templates to avoid confounds.
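The cropping and gray-scale conversion described above were done in Photoshop; purely as an illustration, the same preprocessing could be scripted as in the following sketch, where the file paths and the exact mask geometry are hypothetical:

```python
# Illustrative sketch of the preprocessing described above (the original
# was done manually in Adobe Photoshop CS2). File paths and the exact
# mask geometry are hypothetical.
from PIL import Image, ImageDraw, ImageOps

def crop_face(in_path: str, out_path: str, size=(140, 196)) -> None:
    img = ImageOps.grayscale(Image.open(in_path))    # gray-scale conversion
    img = img.resize(size)
    # Elliptical template: keep the inner face region, blank out the
    # hair and ears near the borders of the photograph.
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).ellipse((0, 0, size[0] - 1, size[1] - 1), fill=255)
    out = Image.new("L", size, 255)                  # white background
    out.paste(img, (0, 0), mask)
    out.save(out_path)

crop_face("faces/raw/person01_angry.jpg", "faces/cropped/person01_angry.png")
```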


Questionnaire


Participants completed a widely used measure of trait and state anxiety, the State-Trait Anxiety Inventory (STAI-Trait & State; Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983), on the computer screen before the visual search tasks. The STAI includes separate self-report scales for measuring state and trait anxiety. The S-Anxiety scale (STAI Form Y-1) consists of 20 statements that assess how respondents feel at this moment, whereas the T-Anxiety scale (STAI Form Y-2) consists of 20 statements that evaluate how people generally feel. Scores on each scale range from 20 to 80, with higher scores reflecting higher anxiety levels. Participants first completed the Y-1 form and then the Y-2 form.
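As a minimal sketch of how a scale score of this kind is computed, assuming responses coded 1-4 and leaving the reverse-keyed items (a property of the published scale not listed here) as an input:

```python
# Minimal STAI scale-scoring sketch: 20 items per scale, responses coded
# 1-4, totals ranging from 20 to 80, higher = more anxious. Which items
# are reverse-keyed belongs to the published scale and is passed in as an
# assumption rather than hard-coded.
def score_stai_scale(responses: list[int], reverse_keyed: set[int]) -> int:
    assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
    total = 0
    for item, r in enumerate(responses, start=1):
        # Reverse-keyed (calm/relaxed-worded) items are flipped so that
        # higher always means more anxiety.
        total += (5 - r) if item in reverse_keyed else r
    return total  # 20 (low anxiety) .. 80 (high anxiety)
```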


Procedure and ethics


Participants were informed about the content of the experiment by written instructions and completed a consent form. The study was approved by the University of Reading Ethics and Research Committee (REC). All forms and questionnaires can be found in Appendices B, C, D, and E. Only mild deception was involved, as participants were not told that the actual concern was to examine whether angry faces distracted their attention in the age task. Obviously, if participants had been informed that there were also emotional faces in the age task, they might have looked out for these emotional expressions, biasing the results. However, following the task, they were able to ask any questions or report any concerns.


Participants sat in front of a computer screen in an experiment room in the School of Psychology building. They first completed a demographic form and the STAI scales on the computer. Then, they performed the age task and the emotion task, respectively. The tasks were not counterbalanced, since the actual purpose was to examine whether attention was distracted by emotional facial expressions when the goal was emotionally irrelevant: had the emotion task come first, search performance in the emotion task (i.e., searching for an angry/happy face) might have influenced participants' goal in the age task. The experiment was programmed and data were collected with E-Prime (version 2.0.10.182 Professional), and data were transferred into and analysed with SPSS (IBM SPSS Statistics 21).


All instructions were presented on the screen; however, participants were encouraged to ask any questions before the actual task began. Participants completed a practice task (16 trials) before each main task. A single trial began with the presentation of a black fixation cross in the middle of a white screen for 1000 ms. The fixation cross was then replaced by a circular arrangement of 8 pictures, displayed for 6000 ms or until the participant made a response. When an error was made, the message "Incorrect" was presented for 500 ms. Trials were separated by an intertrial interval of 1000 ms, during which the screen was black. Participants were able to take a break between the two tasks if they needed.
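The experiment itself was programmed in E-Prime (see above); the following PsychoPy sketch is only an illustration of the trial timing just described (1000 ms fixation, array for 6000 ms or until response, 500 ms error feedback, 1000 ms intertrial interval), with stimulus layout details left out:

```python
# Illustrative PsychoPy re-implementation of the trial timing described
# above; the actual experiment ran in E-Prime 2.0. The age-task response
# keys ("o"/"y") are taken from the text; the layout is simplified.
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="white", units="pix")
fixation = visual.TextStim(win, text="+", color="black", height=30)
feedback = visual.TextStim(win, text="Incorrect", color="black", height=30)
clock = core.Clock()

def run_trial(face_stims, correct_key):
    fixation.draw(); win.flip(); core.wait(1.0)       # fixation, 1000 ms
    for stim in face_stims:                           # circle of 8 faces
        stim.draw()
    win.flip(); clock.reset()
    keys = event.waitKeys(maxWait=6.0,                # 6000 ms or response
                          keyList=["o", "y"], timeStamped=clock)
    win.flip()                                        # clear the array
    response, rt = keys[0] if keys else (None, None)
    if response != correct_key:
        feedback.draw(); win.flip(); core.wait(0.5)   # "Incorrect", 500 ms
        win.flip()
    core.wait(1.0)                                    # intertrial interval
    return response, rt
```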


In the first task, participants were informed that seven of the faces on the screen were of the same age, but one of them was of a different age. They indicated whether the discrepant face was an old or a young individual's by pressing "o" for old or "y" for young on a QWERTY keyboard. They were free to use the keyboard with their preferred hand or with both hands. In each trial, an array of eight faces (140 x 196 pixels each) was shown on the computer screen. The crowd always posed a neutral facial expression, whereas the discrepant face (i.e., the target) could have a happy (32 trials), angry (32 trials), or neutral facial expression (128 trials). In half of the trials the targets were old faces, and in the other half young faces. The faces were also arranged according to gender: if the target was female, the crowd had 3 female and 4 male faces. In addition, the presentation of targets was pseudo-randomized: if an individual was presented as the target (e.g., an old individual posing an angry expression), s/he did not appear in the crowd (e.g., the same old individual posing a neutral expression). The position of the target was randomized. Overall, there were 12 different conditions: young-female-happy, young-male-happy, young-female-angry, young-male-angry, young-female-neutral, young-male-neutral, old-female-happy, old-male-happy, old-female-angry, old-male-angry, old-female-neutral, and old-male-neutral.
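For concreteness, the 12 conditions and the trial counts above imply the following trial-list construction (a sketch; the per-cell counts are inferred from the stated totals of 32 happy, 32 angry, and 128 neutral trials):

```python
# Sketch of the age-task trial list: age (old/young) x gender
# (female/male) x target expression (happy/angry/neutral) = 12 cells.
# Per-cell counts are inferred: 32 happy trials / 4 cells = 8, etc.
import itertools, random

cells = itertools.product(["old", "young"], ["female", "male"],
                          ["happy", "angry", "neutral"])
per_cell = {"happy": 8, "angry": 8, "neutral": 32}

trials = [{"age": a, "gender": g, "emotion": e}
          for a, g, e in cells
          for _ in range(per_cell[e])]

random.shuffle(trials)       # pseudo-randomized presentation order
assert len(trials) == 192    # 32 happy + 32 angry + 128 neutral
```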


After a short break, participants proceeded to the emotion task. In this task, participants were informed that seven of the faces were posing the same facial expression, but one of them would be different, and their task was to indicate whether the discrepant face showed an angry or a happy expression by pressing "a" for angry or "h" for happy on the keyboard. To make performance in the two tasks comparable, the tasks were made as similar as possible. Therefore, the crowd was always middle-aged, whereas the target could be an old (32 trials), young (32 trials), or middle-aged individual (128 trials). The same gender balance and randomization as in the first task were applied. Finally, participants were debriefed following the experiment, and any further information was provided if needed.




Results


Scoring, response definition and statistical analysis


Two participants were excluded because of a programme failure. Errors were excluded from the RT analysis. In addition, responses that were two standard deviations (SD) faster or slower than a participant's average response time were not incorporated into the analyses. Less than 25% of the data were excluded based on these criteria. Analyses were based on mean response times averaged for each task. In addition, accuracy analyses were conducted separately with the mean accuracy scores for each participant. All descriptive statistics are provided in Appendix A.
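A sketch of the exclusion rule just described, assuming a long-format data frame with hypothetical column names "participant", "correct" (0/1), and "rt" (ms):

```python
# Per-participant RT trimming as described above: drop error trials,
# then drop responses more than 2 SD from that participant's mean RT.
import pandas as pd

def trim_rts(df: pd.DataFrame) -> pd.DataFrame:
    correct = df[df["correct"] == 1]              # errors excluded from RTs
    grp = correct.groupby("participant")["rt"]
    mean, sd = grp.transform("mean"), grp.transform("std")
    return correct[(correct["rt"] - mean).abs() <= 2 * sd]
```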


Only trait anxiety scores were incorporated into the analyses. Trait anxiety is considered a personality disposition resulting in a higher frequency of episodes of increased situational or state anxiety (Eysenck, 1992). There is some debate about which type of anxiety drives attentional biases; however, most researchers suggest that the interaction between trait and state anxiety is probably important (see MacLeod & Mathews, 1998; Mogg, Bradley, & Broadbent, 1994). Although the two types of anxiety are highly correlated, state anxiety changes with temporary goals, whereas trait anxiety might be more robust in terms of attentional biases.


Response Times


Main analyses. First, RTs from the two tasks were analyzed for all participants regardless of their anxiety scores. See Tables 1 and 2 for the descriptive statistics.


Age task. Response times from the age task were subjected to a 2 x 2 x 3 (age [old, young] x gender [female, male] x emotion [angry, happy, neutral]) repeated measures ANOVA. There was a main effect of Age, F(1,39) = 7.14, p = .01, ηp² = .16. Old faces (M = 1694.6, SD = 67.08) were generally detected faster than young faces (M = 1856.2, SD = 59.14).

Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 6.77, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .86). There was a significant main effect of Emotion, F(1.72, 67.06) = 20.96, p < .001, ηp² = .35. Paired samples t-tests revealed that all three emotional expressions differed from each other in their impact on attention: happy faces were detected significantly faster than angry faces and neutral faces, t(39) = 2.48, p = .008, and t(39) = -4.46, p < .001, respectively; angry faces were also detected faster than neutral faces, t(39) = -5.7, p < .001. However, the main effect of Gender was not significant (F(1,39) = .63, p = .43).
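For reference, the effect size reported throughout is partial eta squared, and the Greenhouse-Geisser correction simply rescales the ANOVA degrees of freedom by the sphericity estimate ε; in standard notation (not specific to this study):

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}},
\qquad
df_1 = \hat{\varepsilon}\,(k-1), \quad df_2 = \hat{\varepsilon}\,(k-1)(n-1)
```

With k = 3 emotion levels, n = 40 participants, and ε = .86, this gives df1 = .86 x 2 = 1.72 and df2 = .86 x 2 x 39 ≈ 67.1, matching the F(1.72, 67.06) reported above.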

Mauchly's test indicated that the assumption of sphericity had been violated for the interaction effect of Age and Emotion, χ²(2) = 12.21, p < .005; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .78). There was a significant interaction effect of Age and Emotion, F(1.57, 61.18) = 18.81, p < .001, ηp² = .32 (see Figure 1). Paired samples t-tests revealed that old-angry and old-happy faces were both detected faster than old-neutral faces, t(39) = 6.36, p < .001, and t(39) = 3.66, p = .001, respectively. However, there was no significant difference between detecting old-happy and old-angry targets (t(39) = -.95, p = .35). In addition, young-happy faces were detected faster than young-angry and young-neutral faces, t(39) = 5.01, p < .001, and t(39) = 5.8, p < .001, respectively. There was no significant difference between young-angry and young-neutral faces (t(39) = -.87, p = .39). Comparing the two age groups within each emotion, young-angry faces were detected slower than old-angry targets, t(39) = 4.64, p < .001. However, we did not find significant differences between old-happy and young-happy targets or between old-neutral and young-neutral targets (t(39) = -.23, p = .82, and t(39) = -1.74, p = .09, respectively).




Figure 1. Reaction-time (RT) data for neutral, angry, and happy faces (distracters) on old and young age targets. A single RT (e.g., old-angry) was obtained by summing the RTs from the relevant conditions (e.g., the sum of old-female-angry and old-male-angry). This applies to the following figures as well, unless stated otherwise.

In addition, the interaction between Age and Gender was approaching significance, F(1,39) = 3.72, p = .06, ηp² = .09 (see Figure 2). We did not find a significant interaction between Gender and Emotion (F(2,78) = .44, p = .40) or an interaction between Age, Gender, and Emotion (F(2,78) = .35, p = .29).




Figure 2. Reaction-time (RT) data for young and old targets on male and female pictures.

 

Emotion task. Response times from the emotion task were subjected to a 2 x 2 x 3 (emotion [happy, angry] x gender [female, male] x age [young, middle, old]) repeated measures ANOVA.


Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Age, χ²(2) = 11.44, p < .005; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .79). There was a significant main effect of Age, F(1.59, 61.90) = 4.28, p < .05, ηp² = .10. According to paired samples t-tests, young faces were detected significantly slower than middle-aged faces and old faces, t(39) = 2.89, p = .006, and t(39) = 2.19, p < .05, respectively. There was no mean difference between detecting old and middle-aged targets (t(39) = -.36, p = .72).


 

There was a main effect of Emotion, F(1,39) = 18.03, p < .001, ηp² = .82. Happy faces (M = 1047.0, SD = 46.00) were detected faster than angry faces (M = 1277.4, SD = 55.09). Additionally, the main effect of Gender was not significant (F(1,39) = .69, p = .41).

Mauchly's test indicated that the assumption of sphericity had been violated for the interaction effect of Age and Emotion, χ²(2) = 10.41, p = .005; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .81). There was a significant interaction effect between Age and Emotion, F(1.61, 62.92) = 10.90, p < .001, ηp² = .22. Paired samples t-tests revealed that happy faces were detected faster than angry faces for old, young, and middle-aged targets, t(39) = 6.01, p < .001, t(39) = 11.33, p < .001, and t(39) = 13.57, p < .001, respectively. When comparing angry faces by age, young-angry faces were detected slower than both old- and middle-aged-angry faces, t(39) = -3.30, p = .002, and t(39) = -4.18, p < .001, respectively. However, there was no significant difference between middle-aged and old faces for angry expressions (t(39) = 1.19, p = .24). There was also no difference among the three target ages for happy expressions (t(39) = -1.13, p = .27 for middle-aged- and old-happy targets; t(39) = -.54, p = .60 for middle-aged- and young-happy targets; t(39) = .40, p = .69 for old- and young-happy targets) (see also Figure 3).

 


 



Figure 3. Reaction-time (RT) data for angry and happy targets on middle-aged, young, and old faces.


The main results revealed another two-way interaction, between Gender and Emotion, F(1,39) = 7.19, p = .01, ηp² = .16. According to paired samples t-tests, happy faces were detected significantly faster than angry faces for both female and male target faces, t(39) = -10.40, p < .001, and t(39) = 10.17, p < .001, respectively. Moreover, happy female targets were detected significantly faster than happy male ones, t(39) = -3.66, p = .001. In contrast, there was no gender difference for angry faces (t(39) = 1.1, p = .28) (see Figure 4).




Figure 4. Reaction-time (RT) data for female and male targets on angry and happy faces.


Finally, the Age x Gender x Emotion interaction was significant, F(2,78) = 3.61, p < .05, ηp² = .16, indicating that the interaction between Age and Emotion that has been previously described was different for male and female targets.

 

To explore this interaction, RTs from the Emotion Task were subjected to a 2 x 3 (gender [female, male] x age [young, middle, old]) repeated measures ANOVA for angry faces and happy faces separately.

Angry targets. Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Age, χ²(2) = 18.14, p < .001; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .72). There was a significant main effect of Age, F(1.45, 56.54) = 9.09, p < .001, ηp² = .19. Paired samples t-tests (see also above) showed that young faces were detected significantly slower than middle-aged and old targets, t(39) = 2.89, p = .006, and t(39) = 2.19, p < .05, respectively. Neither the main effect of Gender nor the interaction between Gender and Age was significant (F(1,39) = 1.21, p = .28, and F(2,78) = .92, p = .40, respectively) (see Figure 5).




Figure 5. Reaction-time (RT) data for female- and male-angry targets on middle-aged, old, and young faces. A single RT (e.g., old-female-angry) reflects participants' actual mean RT for these trials.




Happy targets. There was a significant main effect of Gender, F(1,39) = 13.42, p = .001, ηp² = .26. Participants responded faster when the target was a female happy face (M = 3065.11, SD = 867.91) compared to a male happy face (M = 3216.89, SD = 902.86). Neither the main effect of Age nor the interaction between Age and Gender was significant (F(2,78) = .48, p = .62, and F(2,78) = 1.66, p = .20, respectively) (see Figure 6).

 


 



Figure 6. Reaction-time (RT) data for female- and male-happy targets on middle-aged, old, and young faces. A single RT (e.g., old-female-happy) reflects participants' actual mean RT for these trials.




To better understand the same three-way interaction among Age, Gender, and Emotion, RTs from the emotion task were further subjected to a 2 x 3 (emotion [happy, angry] x age [young, middle, old]) repeated measures ANOVA for male faces and female faces separately.

Male targets. There was a significant main effect of Emotion, F(1,39) = 103.43, p < .001, ηp² = .73. Participants responded faster when the target was a male happy face (M = 3216.89, SD = 902.86) compared to a male angry face (M = 3791.12, SD = 1039.15). However, the main effect of Age was only approaching significance (F(2,78) = 2.89, p = .06).


There was also a significant interaction effect between Age and Emotion, F(2,78) = 8.53, p < .001, ηp² = .18. Paired samples t-tests revealed that angry targets were detected significantly slower than happy targets for middle-aged, old, and young ages, t(39) = 7.31, p < .001, t(39) = 3.49, p = .001, and t(39) = 9.97, p < .001, respectively. Young-male-angry targets were detected faster than both middle-aged- and old-male-angry targets, t(39) = -3.98, p < .001, and t(39) = -3.08, p < .005, respectively. In contrast, there was no significant difference between detecting middle-aged-angry and old-angry male targets (t(39) = .25, p = .80). None of the happy-male targets differed from each other in terms of age (t(39) = -.19, p = .85 for middle-aged and old targets; t(39) = .87, p = .39 for middle-aged and young targets; and t(39) = .94, p = .35 for old and young targets) (see Figure 7).

 


 



Figure 7. Reaction-time (RT) data for angry- and happy-male targets on middle-aged, old, and young faces. A single RT (e.g., old-male-happy) reflects participants' actual mean RT for these trials.




Female targets. Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Age, χ²(2) = 20.83, p < .001; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .70). Thus, the main effect of Age (F(2,78) = 3.05, p = .05 with sphericity assumed) was only approaching significance (F(1.41, 54.85) = 3.05, p = .07 with Greenhouse-Geisser correction). However, there was a significant main effect of Emotion, F(1,39) = 108.10, p < .001, ηp² = .74. Participants responded faster when the target was a female happy face (M = 3065.11, SD = 861.91) compared to a female angry face (M = 3873.27, SD = 1103.17).

There was also a significant interaction effect between Age and Emotion, F(2,78) = 4.99, p < .01, ηp² = .11. Paired samples t-tests revealed that angry-female targets were detected significantly slower than happy-female targets at all three ages (middle-aged, old, and young), t(39) = 13.46, p < .001, t(39) = 5.55, p < .001, and t(39) = 7.43, p < .001, respectively. Young-female-angry targets were detected faster than both middle-aged- and old-female-angry targets, t(39) = -2.07, p < .05, and t(39) = -2.48, p < .05, respectively. Also, old-female-angry targets were detected slower than middle-aged ones, t(39) = 2.18, p < .05. None of the happy female targets differed from each other depending on age (t(39) = -1.20, p = .24 for middle-aged and old faces; t(39) = -1.59, p = .12 for middle-aged and young faces; and t(39) = -.17, p = .87 for old and young targets) (see Figure 8).

 


 



Figure 8. Reaction-time (RT) data for female-angry and female-happy targets on middle-aged, old, and young faces. A single RT (e.g., old-female-happy) reflects participants' actual mean RT for these trials.




Anxiety Analysis. The mean STAI trait score for the sample (M = 43.4, SD = 10.4) was consistent with the norms for female college students (M = 40.40, SD = 10.15; Spielberger et al., 1983). Participants were allocated to a high anxiety group if they scored more than half an SD above the sample mean (N = 14) and to a low anxiety group if they scored more than half an SD below the sample mean (N = 16). Ten participants did not meet the criteria for inclusion in either group and were therefore not included in further analyses. All analyses were conducted separately for the age and emotion tasks.
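A sketch of this allocation rule, assuming a pandas Series of STAI trait scores (the variable names are hypothetical):

```python
# Anxiety-group allocation as described above: > mean + 0.5 SD -> "high",
# < mean - 0.5 SD -> "low", middle band excluded from anxiety analyses.
import pandas as pd

def allocate_groups(stai_t: pd.Series) -> pd.Series:
    m, sd = stai_t.mean(), stai_t.std()

    def label(score):
        if score > m + 0.5 * sd:
            return "high"
        if score < m - 0.5 * sd:
            return "low"
        return None  # excluded

    return stai_t.map(label)

# With the sample values reported above (M = 43.4, SD = 10.4), the cutoffs
# are roughly 48.6 (high group) and 38.2 (low group).
```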


Age task. A 2 x 2 x 2 x 3 (anxiety group [high, low] x age [old, young] x gender [female, male] x emotion [angry, happy, neutral]) mixed ANOVA was conducted to compare the response times of the two anxiety groups. The between-subjects effect was not significant (F(1,28) = 1.29, p = .27).


The within-subjects effects showed a significant main effect of Emotion on RTs, F(2,56) = 16.50, p < .001, ηp² = .37. According to paired samples t-tests, all three emotions differed from each other: neutral targets were detected slower than both angry and happy faces, t(29) = -3.70, p = .001, and t(29) = -4.74, p < .001, and angry targets were detected slower than happy targets, t(29) = 2.60, p < .05. However, the main effect of Age was only approaching significance (F(1,28) = 3.53, p = .07), and the effect of Gender was not significant (F(1,28) = .23, p = .64).

 

There was a significant interaction effect between Emotion and Anxiety level, F(2,56) = 5.42, p < .01, ηp² = .16, indicating that the speed of responses to the different emotional expressions differed between the low and high anxiety groups. To break down this interaction, contrasts compared each emotional facial expression across high and low anxious participants. For the low anxiety group, the detection of all three emotional expressions differed from each other: happy targets were detected significantly faster than both neutral and angry targets, t(15) = -5.94, p < .001, and t(15) = 3.32, p = .005, respectively, and angry faces were detected faster than neutral ones, t(15) = -2.80, p < .05. In contrast, for high anxious participants, only the difference between angry and neutral targets was significant, showing that angry faces were detected significantly faster than neutral faces, t(13) = -2.55, p < .05. There was no significant mean difference between RTs on angry and happy targets (t(13) = .20, p = .84) and no difference between happy and neutral faces in the high anxiety group (t(13) = -1.51, p = .16).




Figure 9. Reaction-time (RT) data for low and high anxious groups on angry, happy, and neutral expressions.




Furthermore, according to independent samples t-tests, there were no significant differences between the high and low anxious groups in detecting angry, happy, or neutral targets (t(29) = 1.40, p = .17, t(29) = -.09, p = .93, and t(29) = 1.77, p = .08, respectively). However, as can be seen in Figure 9, the high anxious group appeared faster than the low anxious group in detecting both angry and neutral targets, while there seemed to be no difference between the two groups for happy targets.

 

Mauchly's test indicated that the assumption of sphericity had been violated for the interaction effect between Age and Emotion, χ²(2) = 6.51, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .82). There was a significant interaction effect between Age and Emotion, F(1.65, 46.12) = 10.81, p < .001, ηp² = .28 (see Figure 10). Paired samples t-tests revealed that happy faces were detected faster than angry faces for young targets, t(29) = 4.63, p < .001. In contrast, there was no difference between old-angry and old-happy faces (t(29) = -.35, p = .73). Neutral faces were detected slower than happy faces for both old and young targets, t(29) = 3.33, p < .005, and t(29) = 4.63, p < .001, respectively. While neutral faces were detected slower than angry faces for old targets, t(29) = 4.95, p < .001, no difference was found between neutral and angry faces for young targets (t(29) = .75, p = .46).




Figure 10. Reaction-time (RT) data for old and young targets on angry, happy, and neutral expressions.




Finally, there was an interaction effect between Age and Gender, F(1,28) = 3.94, p = .05, ηp² = .12 (see Figure 11). Paired samples t-tests revealed that young-male targets were detected slower than old-male targets, t(29) = -2.39, p = .02. In contrast, there was no difference between young and old female targets (t(29) = -1.25, p = .22). The gender difference for old targets was approaching significance (t(29) = -1.73, p = .09), and there was no gender difference for young targets (t(29) = 1.02, p = .32).

 


 



Figure 11. Reaction-time (RT) data for male and female targets on old and young faces.





The interactions between Age and Anxiety (F(1,28) = .06, p = .82), Gender and Anxiety (F(1,28) = .10, p = .76), and Gender and Emotion (F(2,56) = .39, p = .68) were not significant. Also, the three-way interactions among Age, Gender, and Anxiety (F(1,28) = 1.77, p = .20); Age, Emotion, and Anxiety (F(2,56) = 1.69, p = .20); Gender, Emotion, and Anxiety (F(2,56) = .34, p = .72); and Age, Gender, and Emotion (F(2,56) = .66, p = .40) were not significant. Finally, the four-way interaction among Age, Gender, Emotion, and Anxiety (F(2,56) = .94, p = .40) also did not reach significance.


Emotion task. A 2 x 2 x 2 x 3 (anxiety [low, high] x emotion [angry, happy] x gender [female, male] x age [middle, old, young]) mixed ANOVA was conducted to compare the response times of the two anxiety groups. The between-subjects effect was not significant (F(1,28) = .19, p = .67).


The within-subjects effects showed a significant main effect of Emotional Expression on speed, F(1,28) = 200.86, p < .001, ηp² = .88. Contrasts revealed that response times to angry faces (M = 1284.5, SE = 67.06) were significantly slower than those to happy faces (M = 1060.8, SE = 58.28).


Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Age, χ²(2) = 9.58, p < .01; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .77). The main effect of Age (p = .05 with sphericity assumed) was only approaching significance (p = .06 with Greenhouse-Geisser correction). The main effect of Gender was not significant (p = .47).


There was a significant interaction effect between Gender and Anxiety level, F(1,28) = 4.39, p < .05, ηp² = .14, indicating that the low and high anxious groups differed in their response times to male and female targets. To break down this interaction, the data were split into the two groups (low and high anxiety) and paired samples t-tests were conducted, but these failed to find any significant results (t(15) = 1.13, p = .28 for the low and t(13) = -1.75, p = .10 for the high anxiety group). In addition, to better understand the effect of anxiety across gender, independent samples t-tests were conducted, but these again failed to reveal any significant results (t(28) = -.15, p = .88 for female and t(28) = -.71, p = .49 for male targets). Figure 12 shows that participants in the low anxiety group tended to respond faster to male targets than to female targets, while high anxious participants tended to be faster when detecting female targets than male targets.




Figure 12. Reaction-time (RT) data for low and high anxious groups on female and male targets.





There was a significant interaction effect between Gender and Emotion, F(1,28) = 4.53, p < .05, ηp² = .14, indicating that the speed of detecting the two emotional expressions differed for male and female targets (see Figure 13). Paired samples t-tests revealed that female-happy targets were detected significantly faster than male-happy targets, t(28) = -3.58, p = .001. In contrast, there was no significant difference when detecting female- and male-angry targets (t(29) = .88, p = .38). Also, happy targets were detected faster than angry targets for both genders: female, t(28) = 9.43, p < .001, and male, t(28) = 9.94, p < .001, respectively.




Figure 13. Reaction-time (RT) data for male and female targets on angry and happy faces.





There was a significant interaction effect between Emotion and Anxiety level, F(1,28) = 4.12, p = .05, ηp² = .13, indicating that happy and angry targets were detected at different speeds in the high and low anxious groups. To explore this interaction, the data were split into the two groups (low and high anxiety); paired samples t-tests revealed that angry faces were detected slower than happy faces in both the low and the high anxiety group, t(15) = 10.85, p < .001, and t(13) = 9.44, p = .001. In addition, to better understand the effect of emotion across anxiety, independent samples t-tests were conducted, but these failed to reach significance (t(28) = -.16, p = .87 for angry targets and t(28) = -.74, p = .47 for happy targets). However, as shown in Figure 14, the high anxious group seemed slightly faster at detecting happy faces than the low anxious participants, whereas the two groups seemed equally fast at detecting angry targets.

 


 



Figure 14. Reaction-time (RT) data for angry and happy targets in the low and high anxiety groups.




Mauchly's test indicated that the assumption of sphericity had been violated for the interaction of Age and Emotion, χ²(2) = 10.40, p = .006; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .76). There was a significant interaction effect between Age and Emotion, F(1.52, 42.43) = 5.30, p < .05, ηp² = .16. Paired samples t-tests revealed that happy targets were detected significantly faster than angry targets at all three ages (middle-aged, old, and young targets), t(29) = 11.89, p < .001, t(29) = 5.27, p < .001, and t(29) = 10.25, p < .001, respectively. In addition, young-angry targets were detected slower than middle-aged- and old-angry targets, t(29) = 4.19, p < .001, and t(29) = 2.63, p < .05. There was no difference between middle-aged- and old-angry targets (t(29) = .79, p = .44). Finally, no age differences were found for happy targets.


The other two-way interactions, between Age and Anxiety (F(2,56) = 2.46, p = .09) and between Age and Gender (F(2,56) = .18, p = .83), were not significant. Also, the three-way interactions among Age, Gender, and Anxiety (F(2,56) = 1.45, p = .24); Age, Emotion, and Anxiety (F(2,56), p = .61); Age, Gender, and Emotion (F(2,56) = 1.82, p = .17); and Gender, Emotion, and Anxiety (F(1,28) = .76, p = .39) were not significant. Finally, the interaction among Age, Gender, Emotion, and Anxiety did not reach significance (F(2,56) = 1.41, p = .25).

 

Accuracy


Main analyses. First, accurate responses (0-1) were averaged for each participant. These accuracy values from the two tasks were then analyzed and reported separately for all participants, regardless of their anxiety scores.


Age task. Mean accuracy from the age task was subjected to a 2 x 2 x 3 (age [old, young] x gender [female, male] x emotion [angry, happy, neutral]) repeated measures ANOVA. There was a main effect of Age approaching significance, F(1,39) = 3.92, p = .055, ηp² = .01, showing that participants made more errors when the target was young (M = .88, SE = .01) than old (M = .85, SE = .01).


Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 6.90, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .86). There was a significant main effect of Emotion, F(1.72, 66.89) = 7.81, p < .005, ηp² = .17. Paired samples t-tests revealed that happy targets were detected more accurately than angry and neutral targets, t(39) = -2.71, p = .01, and t(39) = 4.13, p < .001, respectively. There was no difference between angry and neutral targets (t(39) = .42, p = .68). There was no main effect of Gender (F(1,39) = 1.78, p = .19) and no interaction effect between Gender and Emotion (F(2,78) = 1.92, p = .15).


 



There was a significant interaction effect between Age and Gender, F(1,39) = 17.50, p < .001, ηp² = .31 (Figure 15). Paired samples t-tests revealed that participants made more errors when the target was old-female compared to old-male, t(39) = -3.37, p < .005, and they also made more errors when the target was old-male compared to young-male, t(39) = 4.31, p < .001. Neither the gender effect for young targets nor the age effect for female targets was significant (t(39) = 1.51, p = .14, and t(39) = -.36, p = .72, respectively).

 


 



Figure 15. Accuracy data for old and young targets on female and male faces.





Mauchly's test indicated that the assumption of sphericity had been violated for the interaction effect between Age and Emotion, χ²(2) = 10.31, p = .006; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .81). There was an interaction effect between Age and Emotion, F(1.62, 63.02) = 4.66, p < .05 (see Figure 16). Paired samples t-tests revealed that, for old faces, neutral targets were detected less accurately than happy and angry targets, t(39) = 2.87, p < .01, and t(39) = 2.07, p = .05, respectively. No accuracy difference was found between old-happy and old-angry targets (t(39) = -.49, p = .62). In contrast, participants made fewer errors when the target was a young happy rather than angry face, t(39) = -3.11, p < .005, and also when the target was a young happy rather than neutral face, t(39) = 3.38, p < .005. No accuracy difference was found between young-angry and young-neutral targets (t(39) = -.91, p = .37). In terms of the age effect on emotion, only a difference for angry targets was found, revealing that old-angry targets were detected more accurately than young-angry ones, t(39) = 2.84, p < .01.

 


 



Figure 16. Accuracy data for old and young targets on angry, happy, and neutral faces.


Mauchly's test indicated that the assumption of sphericity had been violated for the interaction among Age, Gender, and Emotion, χ²(2) = 7.09, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .85). There was a significant three-way interaction among Age, Gender, and Emotion, F(1.71, 66.65) = 4.36, p < .05, ηp² = .10. To explore this interaction, mean accuracy from the age task was subjected to a 2 x 3 (gender [female, male] x emotion [angry, happy, neutral]) repeated measures ANOVA for old faces and young faces separately.

Old targets. There was a significant main effect of Gender, F(1,39) = 11.36, p < .005, ηp² = .23, showing that participants were more accurate when the target was a male face (M = .86, SE = .017) compared to a female face (M = .91, SD = .014). Mauchly's test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 6.32, p < .05; therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .87). There was a significant main effect of Emotion, F(1.73, 67.64) = 3.41, p < .05, ηp² = .08. Paired samples t-tests revealed that neutral targets were detected less accurately than both happy and angry targets, t(39) = 2.87, p < .01, and t(39) = 2.07, p = .05, respectively. No accuracy difference was found between detecting angry- and happy-old targets (t(39) = -.50, p = .62) (see Figure 17). Also, there was no significant interaction effect between Gender and Emotion (F(2,78) = .32, p = .73).

 


 



Figure 17. Accuracy data for male- and female-old targets on angry, happy, and neutral faces. A single mean (e.g., old-male-happy) reflects participants’ actual mean accuracy on those trials.




Young targets. Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 9.20, p = .01. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .82). With this correction, there was a main effect of Emotion, F(1.65, 64.19) = 7.86, p < .05, ηp² = .17. Paired Samples t-tests revealed that happy targets were detected more accurately than both angry and neutral targets, t(39) = -3.11, p < .005 and t(39) = 3.38, p < .005, respectively. No accuracy difference was found between young-angry and young-neutral targets (t(39) = -.91, p = .37). There was no significant main effect of Gender for young targets. However, there was a significant interaction effect between Gender and Emotion, F(2,78) = 4.84, p = .01, ηp² = .11. Paired Samples t-tests revealed a significant emotion effect for young-male targets: participants were more accurate when detecting happy targets than both angry and neutral targets, t(39) = 3.53, p = .001 and t(39) = 5.12, p < .001, respectively. However, there was no emotion effect for young-female targets (t(39) = -1.74, p = .09 for angry and happy targets; t(39) = -1.21, p = .23 for angry and neutral targets; t(39) = .97, p = .34 for happy and neutral targets). Finally, there was a gender effect on neutral targets: participants detected female-neutral targets more accurately than male-neutral targets, t(39) = 4.75, p < .001. This gender effect was not present for happy and angry targets (t(39) = -1.26, p = .21 and t(39) = 1.34, p = .19, respectively; see Figure 18).

 


 



Figure 18. Accuracy data for female- and male-young targets on angry, happy, and neutral faces. A single mean (e.g., young-male-happy) reflects participants’ actual mean accuracy on those trials.




Emotion Task. Mean accuracy from the Emotion Task was subjected to a 2 x 2 x 3 (emotion [happy, angry] x gender [female, male] x age [young, middle, old]) repeated measures ANOVA. There was a significant main effect of Emotion, F(1, 39) = 20.90, p < .001, ηp² = .35, showing that angry targets (M = .97, SE = .007) were detected more accurately than happy targets (M = .93, SE = .010). The main effect of Gender approached significance (F(1,39) = 3.84, p = .057), suggesting that participants made slightly more errors when detecting male targets (M = .94, SE = .009) than female targets (M = .95, SE = .007). However, there was no significant main effect of Age (F(2,78) = 1.12, p = .33). Finally, none of the interaction effects reached significance (all ps > .10).


Anxiety Analyses. The same allocation process used for the RT analyses was applied to form the two anxiety groups. Accuracy from the two tasks was then analyzed separately.


Age Task. A 2 x 2 x 2 x 3 (anxiety [low, high] x age [old, young] x gender [female, male] x emotion [angry, happy, neutral]) mixed ANOVA was conducted to compare the accuracy of the two anxiety groups. The between-subjects effect was not significant (F(1,28) = .98, p = .33).
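As a sketch of the mixed part of the design: pingouin's mixed_anova handles one within-subject and one between-subjects factor at a time, so the example below covers only the Emotion x Anxiety slice of the full 2 x 2 x 2 x 3 model reported here; a full factorial mixed model would require other tooling. File and column names are again hypothetical.

# Mixed ANOVA: emotion varies within subjects, anxiety group between them.
import pandas as pd
import pingouin as pg

df = pd.read_csv("age_task_anxiety_long.csv")  # subject, anxiety, emotion, acc

aov = pg.mixed_anova(data=df, dv="acc", within="emotion",
                     between="anxiety", subject="subject", correction=True)
print(aov)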


The within-subjects effects showed a significant main effect of Emotion on accuracy, F(2,56) = 4.74, p = .01, ηp² = .14. According to Paired Samples t-tests, happy targets were detected more accurately than both angry and neutral targets, t(29) = -2.02, p = .05 and t(29) = 3.27, p < .005, respectively. In addition, the main effect of Age was also significant, F(1, 28) = 4.09, p = .05, ηp² = .13. Old targets (M = .89, SE = .017) were detected more accurately than young targets (M = .85, SE = .015). However, the main effect of Gender was not significant (F(1,28) = .62, p = .44).


Mauchly’s test indicated that the assumption of sphericity had been violated for the interaction effect between Age and Emotion, χ²(2) = 14.88, p = .001. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .70). There was a significant interaction effect between Age and Emotion, F(1.40, 39.34) = 7.34, p = .005, ηp² = .21. Paired Samples t-tests revealed that old-neutral targets were detected less accurately than old-angry targets, t(29) = 3.14, p < .005. However, there was no other emotion effect on old targets (t(29) = .68, p = .50 for angry and happy targets; t(29) = 1.97, p = .06 for happy and neutral targets). Also, young-happy targets were detected more accurately than both young-angry and young-neutral targets, t(29) = -2.93, p < .01 and t(29) = 3.16, p < .005, respectively. However, no difference was found between angry and neutral targets in young faces, t(29) = -.97, p = .34. An age difference was found only for angry targets, showing that old-angry targets were detected more accurately than young ones, t(29) = 3.30, p < .005. There was no age difference for happy (t(29) = -.93, p = .36) or neutral expressions (t(29) = 1.39, p = .18).




Figure 19. Accuracy data for old and young targets on angry, happy, and neutral faces.





There was also a significant interaction effect between Age and Gender, F(1, 28) = 13.01, p = .001, ηp² = .32. Paired Samples t-tests revealed that old-female targets were detected less accurately than old-male ones, t(28) = -2.62, p = .01. In contrast, there was no significant gender difference for young targets (t(28) = 1.26, p = .22). In addition, old-male targets were detected more accurately than young-male targets, t(28) = 3.76, p = .001. However, there was no significant age effect for female targets (t(28) = .09, p = .93).




Figure 20. Accuracy data for old and young targets on female and male faces.





Finally, there was a significant interaction effect among Age, Gender, and Emotion, F(2,56) = 3.76, p < .05, ηp² = .12. To explore this interaction, mean accuracy from the Age Task was subjected to 2 x 3 (gender [female, male] x emotion [angry, happy, neutral]) repeated measures ANOVAs for old faces and young faces separately.

Old targets. There was a significant main effect of Gender, F(1,29) = 6.84, p = .01, ηp² = .19, showing that old-male targets (M = .91, SE = .017) were detected more accurately than old-female targets (M = .86, SE = .020). In addition, Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 7.46, p < .05. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .81). There was a significant main effect of Emotion, F(1.62, 47.01) = 3.58, p < .05, ηp² = .11. Paired Samples t-tests revealed that neutral faces were detected less accurately than angry ones, t(29) = 3.14, p < .005. However, no difference was found between angry and happy faces (t(29) = .68, p = .50) or between happy and neutral faces (t(29) = 1.97, p = .06). No interaction effect between Gender and Emotion was found for old targets (F(2,58) = 2.04, p = .14).

 


 



Figure 21. Accuracy data for old-female and old-male targets on three emotional expressions. A single mean (e.g., old-female-angry) reflects the mean accuracy from that condition.




Young targets. Mauchly’s test indicated that the assumption of sphericity had been violated for the main effect of Emotion, χ²(2) = 8.43, p = .01. Therefore, degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = .79). With this correction, there was a significant main effect of Emotion, F(1.59, 46.03) = 7.09, p < .005, ηp² = .20. Paired Samples t-tests revealed that happy targets were detected more accurately than both angry and neutral ones, t(29) = -2.93, p < .01 and t(29) = 3.16, p < .005, respectively. However, no significant main effect of Gender was found (F(1,29) = 1.60, p = .22). In addition, the interaction effect between Gender and Emotion approached significance (F(2,58) = 2.17, p = .07).




Figure 22. Accuracy data for young-female and young-male targets on three emotional expressions. A single mean (e.g., young-female-angry) reflects the mean accuracy from that condition.

 



The other two-way interaction effects between Age and Anxiety (F(1,28) = .003, p = .96), Gender and Anxiety (F(1,28) = .19, p = .66), Emotion and Anxiety (F(2,56) = .82, p = .44), and Gender and Emotion (F(2,56) = .84, p = .44) were not significant. Also, none of the three-way interactions other than Age, Gender, and Emotion was significant (all ps > .32). Finally, the four-way interaction among Age, Gender, Emotion, and Anxiety did not reach significance (F(2,56) = .065, p = .94).


Emotion Task. A 2 x 3 x 2 x 2 (anxiety [low, high] x age [old, middle, young] x gender [female, male] x emotion [angry, happy]) mixed ANOVA was conducted to compare the accuracy of the two anxiety groups. The between-subjects effect was significant, F(1,28) = 9.45, p = .005, ηp² = .25, suggesting that the low and high anxious groups differed on some variables (see the next sub-section). The within-subjects effects showed a significant main effect of Emotion on accuracy, F(1,28) = 11.52, p < .005, ηp² = .29, suggesting that angry targets (M = .97, SE = .009) were detected more accurately than happy targets (M = .93, SE = .009). However, the main effects of Age and Gender were not significant (F(2,56) = .31, p = .74 and F(1,28) = .33, p = .57, respectively).


 


There was a significant interaction between Emotion and Anxiety, F(1,28) = 6.13, p < .05, ηp² = .18. To explore this interaction, the data were split into two groups, high and low anxiety. In the high anxiety group, angry faces were detected more accurately than happy faces, t(13) = 3.33, p = .005. However, in the low anxiety group, there was no significant difference between the emotional expressions (t(15) = .87, p = .40). In addition, independent samples t-tests were conducted to explore this interaction in more detail. These showed that happy targets were detected more accurately in the low anxiety group than in the high anxiety group, t(28) = 3.84, p = .001, while there was no difference on angry targets between the two groups (t(28) = 1.15, p = .26).

 


 



Figure 23. Accuracy data for angry and happy targets in the low and high anxiety groups.
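The follow-up tests for this interaction (paired comparisons within each anxiety group, then independent-samples comparisons between the groups) could be scripted as below; the wide-format file and its column names are assumptions for illustration.

# Decompose an Emotion x Anxiety interaction with SciPy t-tests.
import pandas as pd
from scipy import stats

df = pd.read_csv("emotion_task_anxiety.csv")  # columns: anxiety, angry, happy

# Paired t-test within each anxiety group: angry vs. happy accuracy.
for group, sub in df.groupby("anxiety"):
    res = stats.ttest_rel(sub["angry"], sub["happy"])
    print(f"{group}: angry vs happy, t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

# Independent-samples t-tests between groups, one per emotion.
low, high = df[df["anxiety"] == "low"], df[df["anxiety"] == "high"]
for emo in ["angry", "happy"]:
    res = stats.ttest_ind(low[emo], high[emo])
    print(f"{emo}: low vs high, t = {res.statistic:.2f}, p = {res.pvalue:.3f}")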


There was also a significant interaction among Gender, Emotion, and Anxiety, F(1,28) = 4.77, p < .05, ηp² = .15. To explore this interaction, the data were split into the two anxiety groups and a 2 x 2 repeated measures ANOVA was conducted for each. In the high anxiety group, the main effect of Emotion was significant, F(1,13) = 10.11, p < .01, ηp² = .44, suggesting that angry targets (M = 1.90, SE = .04) were detected more accurately than happy targets (M = 1.78, SE = .03). There was no significant main effect of Gender and no significant interaction between Gender and Emotion (F(1,13) = .11, p = .75 and F(1,13) = .22, p = .65, respectively). For the low anxiety group, no main or interaction effect reached significance (all ps > .45; but see Figure 24).




Figure 24. Accuracy data for female-angry, female-happy, male-angry, and male-happy targets in the low and high anxiety groups.

 

The other two-way interaction effects between Age and Anxiety (F(2,56) = 1.07, p = .35), Gender and Anxiety (F(1,28) = .22, p = .64), Age and Gender (F(2,56) = 1.66, p = .20), Age and Emotion (F(2,56) = .23, p = .80), and Gender and Emotion (F(1,28) = .08, p = .78) were not significant. Also, the three-way interactions among Age, Gender, and Anxiety (F(2,56) = .73, p = .49), Age, Emotion, and Anxiety (F(2,56) = .23, p = .80), and Age, Gender, and Emotion (F(2,56) = 1.56, p = .22) did not reach significance. Finally, the four-way interaction among Age, Gender, Emotion, and Anxiety was also non-significant (F(2,56) = 1.48, p = .24).


Between-subjects effect. To explore the significant between-subjects effect, two 3 x 2 x 2 (age [old, middle, young] x gender [female, male] x emotion [angry, happy]) repeated measures ANOVAs were conducted, one for each anxiety group.

For the high anxiety group, there was a significant main effect of Emotion, F(1,13) = 10.11, p < .01, ηp² = .44, as reported in the previous section, suggesting that angry targets (M = .90, SE = .002) were detected more accurately than happy targets (M = .78, SE = .003) (see Figure 25). For the low anxiety group, neither the main effects nor the interactions reached significance (all ps > .45; see Figure 26).




Figure 25. Accuracy data for the six conditions in the high anxiety group. A single mean is the actual mean accuracy from that condition.

 


 



Figure 26. Accuracy data for the six conditions in the low anxiety group. A single mean is the actual mean accuracy from that condition.





To better understand the between-subjects effect, Independent Samples t-tests were conducted. These analyses revealed that when the target had either an old- or young-female-happy identity, the high anxious group made more errors than low anxious individuals, t(28) = 3.94, p = .001 and t(28) = 2.91, p = .01, respectively. Also, when the target had a middle-aged-male-happy identity, high anxious participants made more errors than the low anxiety group, t(28) = 2.56, p < .05. In addition, the group difference on middle-aged-female-happy targets approached significance (t(28) = 1.93, p = .07), suggesting that these targets may have been detected less accurately in the high anxiety group. Detection of none of the other targets differed between the two groups (all ps > .10).




Discussion


General Discussion with Overall Sample

Consistent with research using the visual search paradigm with real facial photographs and heterogeneous crowds (e.g., Calvo & Nummenmaa, 2008; Juth et al., 2005; Öhman et al., 2010), the results revealed a general happy advantage over both angry and neutral facial expressions across all target-distracter combinations and irrespective of participants’ goal (i.e., emotional or non-emotional). This clearly suggests that emotional expressions capture attention even when they are entirely irrelevant to the task, consistent with previous findings (Hodsoll et al., 2011).

However, the overall happy advantage was somewhat diminished when emotion, gender, and age were considered together. For example, when the goal was detecting whether the discrepant face was a young or an old individual, participants were faster to detect young-happy faces than young-angry or young-neutral ones. However, this happy face advantage (i.e., over angry faces) was not present among old targets. Rather, angry and happy faces were detected similarly quickly, and both faster than neutral targets, among old faces. There seems to be an overall disadvantage for neutral targets, which is expected since all of the faces in the crowd were neutral.

Moreover, participants were slower in both tasks, and less accurate in the age task, when the angry target was a young rather than an old face. Considered together with the previous results on young targets, this may reflect the fact that the sample was comprised of young people (i.e., undergraduate students). We can assume that they mostly socialize with people of their own age and might expect more positive expressions (happy, smiling, friendly) from their peers. In contrast, viewing an individual of their own age posing an angry expression (i.e., threatening, unfriendly) might be an unexpected or undesirable event for them. Therefore, their expectations could make them respond more quickly and accurately to a smiling young face than to an angry young face. However, these assumptions need to be tested in samples of different ages. On the other hand, when the task was emotionally relevant, there was an overall happy advantage over angry targets across all ages. Furthermore, subsequent analyses revealed that the accuracy advantage of happy over angry expressions in young faces occurred only when the target was male, but not female. This suggests that the positivity bias in favour of young faces might occur only when participants were directed to search for young or old faces (i.e., in the age task) and when the target had a male identity. This joint influence of happy expression and young age was not present when age was task-irrelevant (i.e., in the emotion task).

In the age task, participants were more accurate at detecting old-male targets than young-male ones, and their accuracy was influenced by both gender and age: they made more errors when the targets were female than male among old faces. This gender effect was not the case for young targets. It could be argued that, because the sample was mostly composed of female participants, this could reflect a gender bias. However, these results confirmed and extended the findings of Juth and colleagues, who found better accuracy in recognizing the emotions of male than female faces while the gender of the participants had no significant effect (Juth et al., 2005, Experiment 4A). This makes it questionable whether the present findings can be attributed to the predominantly female sample. However, it should be noted that this gender effect in favour of faster detection of old-male targets over old-female ones did not occur when the task was emotionally relevant. In contrast, when the goal was searching for an emotional expression, gender mattered only for detecting happy expressions: participants were faster to find female targets than male ones, whereas detection of an angry expression did not differ between female and male targets. Similarly, Juth and colleagues found that female happy faces yielded better performance (Juth et al., 2005, Experiment 4A). These results are also consistent with previous findings suggesting a female-happy advantage (Öhman et al., 2010).


General Discussion with Anxiety Analyses

The main results from the anxiety analyses confirmed the hypothesis that, when the goal was not related to emotional expressions, low and high anxious individuals would differ slightly from each other in terms of the different emotional expressions. That is, low anxious participants showed a happy advantage over angry and neutral expressions and an angry advantage over neutral targets, whereas the high anxiety group showed only an angry advantage over neutral targets. This means the positivity bias (happy advantage) was not present in individuals with high anxiety, which is partly consistent with the expectation that high anxious individuals would be distracted by emotional stimuli (i.e., angry faces) more than low anxious people when the task was emotionally irrelevant. Indeed, both happy and angry targets were equally distracting for high anxious individuals, whereas only angry targets distracted low anxious individuals. This result suggests that high anxious individuals might have difficulty disengaging not only from threat, as previously suggested (e.g., Byrne & Eysenck, 1995), but also from positive emotional expressions when they have an entirely different goal at hand.

On the contrary, when the task was emotionally relevant, the interaction between emotion and anxiety revealed a somewhat different result that was harder to interpret. Since the post-hoc analyses of the interaction effect between Emotion and Anxiety did not reach significance, an interpretation can only be drawn from Figure 14. That is, the high anxious group seemed slightly slower than low anxious individuals when detecting a happy face, whereas the two groups seemed equally fast in detecting angry targets. This is consistent with previous research that has found facilitated attention to threat among high anxious individuals (Byrne & Eysenck, 1995; Gilboa-Schechtman et al., 1999; Juth et al., 2005; Koster et al., 2006). In addition, low anxious participants detected happy targets more accurately than the high anxiety group. Also, the high anxious group made more errors when detecting a happy face than an angry target, whereas there was no difference between the two emotional expressions in the low anxiety group. Considering the emotion and age tasks together, the search-efficiency difference between high and low anxious groups seems to be driven more by emotionally-irrelevant stimuli, which supports the idea of difficulties inhibiting the processing of threatening distracters (Eysenck et al., 2007). In addition, there was a different tendency on gender-based targets in the low and high anxiety groups in the emotion task. That is, high anxious individuals detected female targets faster than male ones, whereas the reverse was true for the low anxiety group. Also, low anxious participants were more accurate than the high anxiety group when detecting female-happy and middle-aged-male-happy targets (and slightly more accurate on middle-aged-female-happy targets as well). This result is difficult to interpret since it needs to be tested with male participants as well.

In addition, it was also interesting to find no main effect of age and no interaction between age and anxiety in the age task, given that the goal of the task was searching for young or old targets. Even in the previous analyses with the overall sample, there were interaction effects with age in both tasks. However, these age effects disappeared when anxiety was taken into account. This may suggest that even when the goal was emotionally irrelevant, emotional expressions had more influence than the participants’ task-relevant goal, especially in the sample comprising only low and high anxious individuals, confirming that emotional stimuli always attract attention (e.g., Fox et al., 2001; Mogg & Bradley, 1999; Öhman et al., 2001). In contrast, in terms of accuracy, and similar to the discussion of the overall sample above (i.e., young targets were detected more slowly than old ones), young targets posing an angry expression were detected less accurately than old targets. On the other hand, when the task was emotionally relevant, age mattered more for angry faces: participants detected angry targets of their own age more slowly than those of the other two age groups, again confirming an age bias, especially for angry faces.

Although no main effect of age was found, there was an interaction effect between age and emotion, confirming a strong happy advantage over angry faces for young targets. In addition, there was an angry advantage over neutral faces for old targets, while there was no speed difference between young-angry and young-neutral targets. This result also suggests an age bias in the anxiety sample: young participants could expect more positive emotional expressions from their peers and could perceive both angry and neutral expressions as equally hostile.

Although this study extends the existing literature on attention to emotional facial expressions, some limitations must be considered. First, matrix size was not varied, so the present design cannot address whether angry faces are detected by parallel or serial search. Future studies that manipulate matrix size or incorporate eye tracking may help address this question. Second, only whole faces were used, which leaves open the possibility that specific facial features (i.e., the mouth or eyes) may have disproportionately contributed to the effects shown here. Future studies that present features in isolation will likely be informative. Furthermore, there is a potential confound in the happy faces: all of them showed teeth when smiling, whereas none of the angry or neutral expressions were presented with teeth. Studies that eliminate this teeth confound will yield more valid results. Similarly, emotional intensity was not controlled. However, given that happy faces were presented with teeth while angry faces were not, one can argue that the emotional expressions used in the present study had greater ecological validity, since intense angry expressions (i.e., anger with exposed teeth) might be less common in everyday interactions (Hodsoll et al., 2011). Nevertheless, emotional intensity should be considered in future work. Finally, the presentation of stimuli in gray-scale may have somewhat limited ecological validity, but this approach was necessary to match the visual properties of the stimuli. These limitations notwithstanding, the present study offers an ecologically valid demonstration (especially through the use of real photographic stimuli) of a strong advantage for processing positive, relative to negative, environmental stimuli. Importantly, the target and distracter dimensions were always carried by the same face; it could be more informative if targets and distracters were presented separately. Finally, future research should test search efficiency for other emotional expressions (e.g., fearful or sad) in an emotionally-irrelevant task. Overall, the current study has the potential to expand findings on attentional bias to task-irrelevant emotional stimuli in both general and anxious populations.

 

Acknowledgement

The E-Prime presentation of the stimuli was programmed by Dr. Lies Notebaert from the Psychology Department of The University of Western Australia. I gratefully acknowledge her support.

References

Amir, N., Elias, J., Klumpp, H., & Przeworski, A. (2003). Attentional bias to threat in social phobia: Facilitated processing of threat or difficulty disengaging attention from threat? Behaviour Research and Therapy, 41(11), 1325-1335.


Baron, R. A. & Byrne, D. (1991). Social psychology: Understanding human interaction (6th edition). Boston; London: Allyn and Bacon.


Beck, A. T., & Clark, D. A. (1997). An information processing model of anxiety: Automatic and strategic processes. Behaviour Research and Therapy, 35(1), 49-58.


Becker, D. V., Anderson, U. S., Mortensen, C. R., Neufeld, S. L., & Neel, R. (2011). The face in the crowd effect unconfounded: Happy faces, not angry faces, are more efficiently detected in single- and multiple-target visual search tasks. Journal of Experimental Psychology: General, 140(4), 637-659.


Bradley, M. M., Codispoti, M., Cuthbert, B. N., & Lang, P. J. (2001). Emotion and motivation I: defensive and appetitive reactions in picture processing. Emotion, 1(3), 276.


Byrne, A. & Eysenck, M. W. (1995). Trait anxiety, anxious mood, and threat detection. Cognition & Emotion, 9(6), 549-562.


Calvo, M. & Nummenmaa, L. (2008). Detection of emotional faces: salient physical features guide effective visual search. Journal of Experimental Psychology: General, 137(3), 471-494.


Cisler, J. M. & Koster, E. H. (2010). Mechanisms of attentional biases towards threat in anxiety disorders: An integrative review. Clinical Psychology Review, 30(2), 203-216.


Derakshan, N. & Koster, E. H. W. (2010). Processing efficiency in anxiety: Evidence from eye-movements during visual search. Behaviour Research and Therapy, 48, 1180-1185.


Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63(6), 1004-1013.


Ebner, N. C., Riediger, M., & Lindenberger, U. (2010). FACES--A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior Research Methods, 42(1), 351-362.

Ekman, P., & Friesen, W. V. (1975). Unmasking the face. Englewood Cliffs, NJ: Prentice-Hall.

Ekman, P. & Friesen, W. V. (1976). The Pictures of Facial Affect, copy available from Paul Ekman, University of California, 401 Parnassus Avenue, San Francisco, CA 94143.


Eysenck, M. W. (1992). Anxiety: The cognitive perspective. Hove, UK: Erlbaum.

 


Eysenck, M. W., Derakshan, N., Santos, R., & Calvo, M. G. (2007). Anxiety and cognitive performance: attentional control theory. Emotion, 7(2), 336.


Frischen, A., Eastwood, J. D., & Smilek, D. (2008). Visual search for faces with emotional expressions. Psychological Bulletin, 134(5), 662-676.


Fox, E. & Damjanovic, L. (2006). The eyes are sufficient to produce a threat superiority effect. Emotion, 6(3), 534.


Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14(1), 61–92.


Fox, E., Russo, R., Bowles, R., & Dutton, K. (2001). Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General, 130(4), 681- 700.


Fox, E., Russo, R., & Dutton, K. (2002). Attentional bias for threat: Evidence for delayed disengagement from emotional faces. Cognition & Emotion, 16(3), 355-379.


Gilboa-Schechtman, E., Foa, E. B., & Amir, N. (1999). Attentional biases for facial expressions in social phobia: The face-in-the-crowd paradigm. Cognition & Emotion, 13(3), 305-318.


Goschke, T. (2000). Intentional reconfiguration and involuntary persistence in task set switching. In S. Monsell & J. Driver (Eds.), Control of cognitive processes (Attention & Performance, Vol. 18, pp. 331–355). Cambridge, MA: MIT Press.


Hahn, S., Carlson, C., Singer, S., & Gronlund, S. (2006). Aging and visual search: Automatic and controlled attentional bias to threat faces. Acta Psychologica, 123, 312-336.


Hahn, S. & Gronlund, S. D. (2007). Top-down guidance in visual search for facial expressions. Psychonomic bulletin & review, 14(1), 159-165.


Hansen, C. H. & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917-924.

Hodsoll, S., Viding, E., & Lavie, N. (2011). Attentional capture by irrelevant emotional distractor faces. Emotion, 11(2), 346-353.


Horstmann, G. (2007). Preattentive face processing: What do visual search experiments with schematic faces tell us? Visual Cognition, 15(7), 799-833.


Horstmann, G. (2009). Visual search for schematic affective faces: Stability and variability of search slopes with different instances. Cognition and Emotion, 23(2), 355-379.


Horstmann, G. & Bauland, A. (2006). Search asymmetries with real faces: testing the anger- superiority effect. Emotion, 6(2), 193.


Horstmann, G., Scharlau, I., & Ansorge, U. (2006). More efficient rejection of happy than of angry face distractors in visual search. Psychonomic Bulletin & Review, 13(6), 1067-1073.


Juth, P., Lundqvist, D., Karlsson, A., & Öhman, A. (2005). Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion, 5(4), 379-395.


Koster, E. H., Crombez, G., Verschuere, B., Van Damme, S., & Wiersema, J. R. (2006). Components of attentional bias to threat in high trait anxiety: Facilitated engagement, impaired disengagement, and attentional avoidance. Behaviour Research and Therapy, 44(12), 1757-1771.

 


Koster, E. H., Verschuere, B., Crombez, G., & Van Damme, S. (2005). Time-course of attention for threatening pictures in high and low trait anxiety. Behaviour Research and Therapy, 43(8), 1087-1098.


MacLeod, C., & Mathews, A. (1988). Anxiety and the allocation of attention to threat. The Quarterly Journal of Experimental Psychology, 40(4), 653-670.


Mogg, K., & Bradley, B. P. (1998). A cognitive-motivational analysis of anxiety. Behaviour Research and Therapy, 36(9), 809-848.


Mogg, K., Bradley, B. P., & Hallowell, N. (1994). Attentional bias to threat: Roles of trait anxiety, stressful events, and awareness. The Quarterly Journal of Experimental Psychology, 47(4), 841-864.


Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466-478.


Öhman, A., Juth, P., & Lundqvist, D. (2010) Finding the face in a crowd: Relationships between distractor redundancy, target emotion, and target gender. Cognition & Emotion, 24(7), 1216- 1228.


Pratto, F., & John, O. P. (1991). Automatic vigilance: The attention-grabbing power of negative social information. Journal of Personality and Social Psychology, 61, 380-391.


Purcell, D. G., Stewart, A. L., & Skov, R. B. (1996). It takes a confounded face to pop out of a crowd. Perception, 25, 1091-1108.


Richards, H. J., Benson, V., Donnelly, N., & Hadwin, J. A. (2014). Exploring the function of selective attention and hypervigilance for threat in anxiety. Clinical psychology review, 34(1), 1-13.


Rinck, M., Becker, E. S., Kellermann, J., & Roth, W. T. (2003). Selective attention in anxiety: Distraction and enhancement in visual search. Depression and Anxiety, 18(1), 18-28.


Rinck, M., Reinecke, A., Ellwart, T., Heuer, K., & Becker, E. S. (2005). Speeded detection and increased distraction in fear of spiders: Evidence from eye movements. Journal of Abnormal Psychology, 114(2), 235.


Rothermund, K., Voss, A., & Wentura, D. (2008). Counter-regulation in affective attentional biases: a basic mechanism that warrants flexibility in emotion and motivation. Emotion, 8(1), 34.


Salemink, E., van den Hout, M. A., & Kindt, M. (2007). Selective attention and threat: Quick orienting versus slow disengagement and two versions of the dot probe task. Behaviour Research and Therapy, 45(3), 607-615.


Schimmack, U. (2005). Attentional interference effects of emotional pictures: Threat, negativity, or arousal? Emotion, 5, 55-66.


Spielberger, C. D., Gorsuch, R. L., Lushene, R., Vagg, P. R., & Jacobs, G. A. (1983). Manual for the State-Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press.


Treisman, A. & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97–136.

 


Treisman, A., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16(3), 459.


Vogt, J., De Houwer, J., Koster, E. H. W., Van Damme, S., & Crombez, G. (2008). Allocation of spatial attention to emotional stimuli depends upon arousal and not valence. Emotion, 8, 880- 885.


White, M. (1995). Preattentive analysis of facial expressions of emotion. Cognition & Emotion, 9(5), 439–460.


Williams, M., Moss, S., Bradshaw, J., & Mattingley, J. (2005). Look at me, I'm smiling: Visual search for threatening and nonthreatening facial expressions. Visual Cognition, 12(1), 29-50.


Williams, J. M. G., Watts, F. N., MacLeod, C., & Mathews, A. (1997). Cognitive psychology and emotional disorders. Chichester, UK: Wiley.


Yiend, J. (2010). The effects of emotion on attention: A review of attentional processing of emotional information. Cognition and Emotion, 24(1), 3-47.


Yiend, J. & Mathews, A. (2001). Anxiety and attention to threatening pictures. The Quarterly Journal of Experimental Psychology: Section A, 54(3), 665-681.


Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2), 151.

 


Appendix A

Tables of Descriptive Statistics

Table 1

Descriptive Statistics for the main RT analyses of Age Task (N=40)


Target MEAN SD

Old-Female-Angry* 1629.03 506.77

Old-Female-Happy* 1667.05 394.07

Old-Female-Neutral* 1869.36 483.36

Old-Male-Angry* 1559.29 503.54

Old-Male-Happy* 1611.38 482.46

Old-Male-Neutral* 1831.49 541.50

Young-Female-Angry* 1960.83 489.80

Young-Female-Happy* 1601.13 479.04

Young-Female-Neutral* 1973.61 453.31

Young-Male-Angry* 1924.85 489.08

Young-Male-Happy* 1704.90 331.96

 Young-Male-Neutral* 1972.06 427.21

Old-Neutral 3700.85 1007.73

Young-Neutral 3945.67 857.91

Old-Angry 3188.32 928.91

Young-Angry 3885.68 902.39

Old-Happy 3278.43 816.15

Young-Happy 3306.03 729.14

Female-Angry 3589.86 881.62

Female-Happy 3268.18 708.80

Female-Neutral 3842.96 826.30

Male-Angry 3484.14 809.14

Male-Happy 3316.27 736.69

Male-Neutral 3803.55 847.57

Old-Male 5002.16 1385.83

Young-Male 5601.81 1084.29

Old-Female 5165.44 1209.08

Young-Female 5535.57 1221.32

Angry 1768.50 391.48

Happy 1646.11 338.74

Neutral 1911.63 411.65












Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 




Table 2

Descriptive Statistics for the main RT analyses of the Emotion Task (N=40)


Target MEAN SD

Middle-aged-Female-Angry* 1289.57 343.12

Middle-aged-Female-Happy* 994.86 260.74

Middle-aged-Male-Angry* 1227.70 290.33

Middle-aged-Male-Happy* 1078.31 300.44

Old-Female-Angry* 1230.92 372.61

Old-Female-Happy* 1032.29 340.56

Old-Male-Angry* 1217.13 428.87

Old-Male-Happy* 1081.76 291.28

Young-Female-Angry* 1352.78 450.87

Young-Female-Happy* 1037.96 318.88

Young-Male-Angry* 1346.28 388.52

Young-Male-Happy* 1056.81 344.09

Middle-aged-Angry 2517.27 626.89

Middle-aged-Happy 2073.17 551.78

Old-Angry 2448.05 761.03

Old-Happy 2114.06 607.48

Young-Angry 2699.07 795.25

Young-Happy 2094.77 641.54

Female-Angry 3873.27 1103.16

Female-Happy 3065.11 861.91

Male-Angry 3216.89 902.86

Male-Happy 3791.12 1039.15

Middle-aged 4590.44 1162.77

Old 4562.11 1331.43

Young 4793.84 1405.10

Angry 1768.50 391.48

Happy 1646.11 338.74

Females 6938.38 1917.82

Males 7008.00 1913.75

















Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 




Table 3

Descriptive statistics for the anxiety RT analyses of Age Task (N=30)


Low Anxious (N=16) High Anxious (N=14)


Target MEAN SD MEAN SD

Old-Female-Angry* 1776.10 507.26 1533.46 492.08

Old-Female-Happy* 1691.15 355.71 1654.99 494.07

Old-Female-Neutral* 1989.20 531.72 1795.27 509.12

Old-Male-Angry* 1635.40 538.02 1527.84 562.21

Old-Male-Happy* 1624.72 441.50 1595.22 629.26

Old-Male-Neutral* 1952.76 576.72 1751.93 573.35

Young-Female-Angry* 2015.64 530.00 1842.71 420.82

Young-Female-Happy* 1554.47 416.10 1690.61 565.78

Young-Female-Neutral* 2123.48 482.00 1768.43 366.22

Young-Male-Angry* 2061.42 479.22 1787.52 395.70

Young-Male-Happy* 1729.38 316.16 1706.56 392.44

Young-Male-Neutral* 2120.42 385.61 1790.32 415.81

Old-Neutral 3941.96 1102.99 3547.20 1062.68

Young-Neutral 4243.90 834.16 3558.75 771.04

Old-Angry 3411.50 963.13 3061.30 995.96

Young-Angry 4077.06 905.49 3630.22 768.67

Old-Happy 3315.86 756.10 3250.21 1058.48

Young-Happy 3283.86 666.69 3397.17 880.64

Female-Angry 3791.74 877.05 3376.17 844.65

Female-Happy 3245.62 649.40 3345.61 865.94

Female-Neutral 4112.68 894.62 3563.70 782.93

Male-Angry 3696.82 833.74 3315.36 778.74

Male-Happy 3354.10 655.86 3301.77 961.01

Male-Neutral 4073.18 822.73 3542.25 874.00

Old-Male 5212.88 1396.67 4874.99 1645.64

Young-Male 5911.22 1034.18 5284.39 1098.24

Old-Female 5456.45 1198.51 4983.73 1400.74

Young-Female 5693.60 1226.51 5301.75 1237.72

Angry 1872.14 386.06 1672.88 389.29

Happy 1649.93 305.30 1661.84 436.28

Neutral 2046.46 423.14 1776.49 409.94














Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 


Table 4

Descriptive statistics for the anxiety RT analyses of the Emotion Task (N=30)


Low Anxious (N=16) High Anxious (N=14)


Target MEAN SD MEAN SD

Middle-aged-Female-Angry* 1302.18 410.81 1298.93 315.69

Middle-aged-Female-Happy* 1006.67 305.17 1019.64 256.02

Middle-aged-Male-Angry* 1246.99 311.83 1215.74 293.98

Middle-aged-Male-Happy* 1059.32 353.85 1106.81 304.45

Old-Female-Angry* 1248.63 435.61 1243.81 370.83

Old-Female-Happy* 993.45 335.17 1099.96 426.36

Old-Male-Angry* 1145.97 367.38 1319.26 561.35

Old-Male-Happy* 1029.53 317.78 1167.86 310.31

Young-Female-Angry* 1381.54 500.40 1306.32 358.99

Young-Female-Happy* 1009.52 303.85 1089.39 340.06

Young-Male-Angry* 1316.33 366.49 1388.20 420.33

Young-Male-Happy* 1008.85 327.95 1138.65 395.90

Middle-aged-Angry 2549.17 716.73 2514.66 602.03

Middle-aged-Happy 2065.98 652.99 2126.44 548.93

Old-Angry 2394.60 783.98 2563.07 897.12

Old-Happy 2022.98 641.40 2267.82 719.00

Young-Angry 2697.86 822.04 2694.52 729.48

Young-Happy 2018.37 618.19 2228.04 713.17

Female-Angry 3932.34 1287.55 3849.06 1008.68

Female-Happy 3009.64 912.82 3208.99 971.09

Male-Angry 3709.29 967.53 3923.20 1215.70

Male-Happy 3097.70 978.60 3413.31 986.19

Middle-aged 4615.15 1350.47 4641.11 1143.11

Old 4417.58 1374.68 4830.89 1600.61

Young 4716.24 1418.96 4922.56 1418.23

Angry 1273.60 369.92 1295.38 362.54

Happy 1017.89 313.45 1103.72 324.25

Females 6941.98 2169.84 7058.05 1952.57

Males 6806.99 1926.90 7336.51 2185.76


















Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 


Table 5

Descriptive Statistics for the main accuracy analyses of Age Task (N=40)


Target MEAN SD

Old-Female-Angry* .86 .15

Old-Female-Happy* .88 .14

Old-Female-Neutral* .83 .12

Old-Male-Angry* .92 .12

Old-Male-Happy* .92 .12

Old-Male-Neutral* .88 .10

Young-Female-Angry* .83 .15

Young-Female-Happy* .89 .15

Young-Female-Neutral* .86 .09

Young-Male-Angry* .80 .16

Young-Male-Happy* .92 .14

Young-Male-Neutral* .81 .11

Old-Neutral 1.72 .19

Young-Neutral 1.67 .19

Old-Angry 1.78 .22

Young-Angry 1.63 .26

Old-Happy 1.77 .22

Young-Happy 1.81 .24

Old-Male 2.72 .26

Young-Male 2.53 .27

Old-Female 2.57 .33

Young-Female 2.59 .26

Angry 3.41 .37

Happy 3.61 .38

Neutral 3.39 .31
























Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 

Table 6

Descriptive Statistics for the main accuracy analyses of Emotion Task (N=40)


Target MEAN SD

Middle-aged-Female-Angry* .98 .04

Middle-aged-Female-Happy* .94 .06

Middle-aged-Male-Angry* .97 .04

Middle-aged-Male-Happy* .92 .06

Old-Female-Angry* .96 .08

Old-Female-Happy* .95 .08

Old-Male-Angry* .96 .07

Old-Male-Happy* .90 .10

Young-Female-Angry* .97 .05

Young-Female-Happy* .92 .11

Young-Male-Angry* .96 .09

Young-Male-Happy* .92 .13

Angry 5.79 .27

Happy 5.55 .36
































Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 


Table 7

Descriptive statistics for the anxiety accuracy analyses of the Age Task (N=30)


Low Anxious (N=16) High Anxious (N=14)


Target MEAN SD MEAN SD

Old-Female-Angry* .89 .11 .85 .18

Old-Female-Happy* .90 .14 .86 .15

Old-Female-Neutral* .85 .10 .83 .13

Old-Male-Angry* .94 .12 .95 .09

Old-Male-Happy* .92 .10 .88 .15

Old-Male-Neutral* .89 .09 .87 .12

Young-Female-Angry* .84 .18 .80 .14

Young-Female-Happy* .93 .13 .88 .16

Young-Female-Neutral* .86 .10 .86 .10

Young-Male-Angry* .79 .17 .80 .18

Young-Male-Happy* .95 .08 .88 .19

Young-Male-Neutral* .82 .13 .80 .12

Old-Neutral 1.73 .17 1.70 .22

Young-Neutral 1.68 .21 1.66 .20

Old-Angry 1.83 .20 1.79 .25

Young-Angry 1.63 .32 1.61 .25

Old-Happy 1.83 .20 1.73 .26

Young-Happy 1.88 .16 1.76 .30

Old-Female 2.64 .26 2.54 .40

Old-Male 2.75 .28 2.69 .30

Young-Female 2.63 .28 2.54 .26

Young-Male 2.56 .25 2.48 .37

Angry 1.63 .32 1.61 .25

Happy 1.88 .16 1.76 .30

Neutral 1.68 .21 1.66 .20






















Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 


Table 8

Descriptive statistics for the anxiety accuracy analyses of the Emotion Task (N=30)


Low Anxious (N=16) High Anxious (N=14)


Target MEAN SD MEAN SD

Middle-aged-Female-Angry* .98 .03 .97 .04

Middle-aged-Female-Happy* .97 .04 .92 .08

Middle-aged-Male-Angry* .98 .04 .96 .06

Middle-aged-Male-Happy* .95 .06 .88 .07

Old-Female-Angry* .96 .08 .95 .10

Old-Female-Happy* .98 .04 .88 .09

Old-Male-Angry* .98 .05 .96 .08

Old-Male-Happy* .95 .10 .91 .09

Young-Female-Angry* .98 .05 .97 .05

Young-Female-Happy* .97 .05 .86 .14

Young-Male-Angry* .98 .04 .93 .14

Young-Male-Happy* .97 .06 .91 .12

Female-Angry 2.92 .10 2.89 .17

Female-Happy 2.92 .09 2.66 .22

Male-Angry 2.94 .10 2.84 .23

Male-Happy 2.87 .17 2.71 .21

Angry 5.86 .17 5.74 .38

Happy 5.80 .23 5.38 .37





























Targets with "*" are the exact mean values from the raw data; the others are composite values created during the analysis process.

 

Appendix B Consent Form



Title of Study: Searching faces in a crowd




CONSENT FORM


I, ________, agree to participate in the study, Searching Faces in a Crowd, being conducted by [Anonymous ID: M115] under the supervision of Dr Julia Vogt and Dr Helen Dodd at The University of Reading. I have seen and read a copy of the Participant Information Sheet and have been given the opportunity to ask questions about the study, and these have been answered to my satisfaction. I understand that all personal information will remain confidential to the Investigator and that arrangements for the storage and eventual disposal of any identifiable material have been made clear to me. I understand that participation in this study is voluntary and that I can withdraw at any time without having to give an explanation.



I am happy to proceed with my participation.



Signature




Name (in capitals)




Date

 

Appendix C Information Sheet


Title of Study: Searching faces in a crowd


Information Sheet


Supervisors:

Dr Julia Vogt Email:

j.vogt@reading.ac.uk

Phone:

0118 378 5545

Dr Helen Dodd h.f.dodd@reading.ac.uk

0118 378 5285

Experimenters: [Anonymous ID: M115]


We would be grateful to you if you could assist us by participating in this study exploring how fast individuals can find a single face in a crowd of faces.


Your participation will take approximately 45 minutes, during which time you will complete a computer based attention task and a questionnaire about anxiety.


Your data will be kept confidential and securely stored, with only an anonymous number identifying it. Information linking that number to your name will be stored securely and separately from the data you provide us. All information collected for the project will be destroyed once a period of 60 months from the completion of the project has elapsed. Taking part in this study is completely voluntary; you may withdraw at any time without having to give any reason. Please feel free to ask any questions that you may have about this study at any point.


This application has been reviewed by the University Research Ethics Committee and has been given a favourable ethical opinion for conduct.


Thank you for your help.





[Anonymous ID: M115]

 

Appendix D Debrief Form



School of Psychology & Clinical Language Sciences,

Whiteknights Road,

Reading, RG6 6AL


Debriefing Form for Participation in a Research Study University of Reading

Searching faces in a crowd




Thank you for your participation in our study. Your participation is greatly appreciated.


Purpose of the Study:


We previously informed you that the purpose of the study was to investigate the process of searching faces in a crowd. In this study, we were specifically interested in how faces that differ in age and facial expression impact the speed of search. We are also interested in whether performance on the search task is related to responses on the questionnaire that we asked you to fill in.


We realize that some of the pictures viewed during the search task may have been a little uncomfortable to view. If you have any concerns following the task please let the research team know (contact details below).


Confidentiality:


You may decide that you do not want your data used in this research. If you would like your data removed from the study and permanently deleted please let the experimenter know before you leave. However, whether you agree or do not agree to have your data used for this study, you will still receive credit for your participation.



Useful Contact Information:

If you have any questions or concerns regarding this study, its purpose or procedures, or if you have a research-related problem, please feel free to contact:


Researcher: [Anonymous ID: M115]

Supervisors: Dr Julia Vogt – j.vogt@reading.ac.uk Dr Helen Dodd – h.f.dodd@reading.ac.uk

 

*** Once again, thank you for your participation in this study!***




Finally, we would appreciate it if you could answer the following questions.

What is the DV in this experiment?

a) Speed of Search
b) Errors
c) Visual Working Memory Capacity
d) Visual Cortex Activation

What is the IV in this experiment?

a) Age of Faces
b) Attractiveness of Faces
c) Age and Facial Expression of Faces
d) Shape of Faces

 

Appendix E

Debrief about Anxiety Questionnaires


Questionnaires about Anxiety


In this study you completed some questionnaires relating to feelings of anxiety. This leaflet, which is given to all participants involved in studies that use these questionnaires, provides you with some potential sources of support, should you feel it would be helpful to talk to someone.


If you would like to talk to someone, you may wish to consider:


□ The University of Reading Counselling Services

See online at http://www.reading.ac.uk/Counselling/oldindex.htm, or send an email to arrange an appointment to counselling@reading.ac.uk, or call in, either at:

The University Counselling Service, First Floor, The Health Centre, 9 Northcourt Avenue, Reading, RG2 7HE (Tel: (0118) 975 1823)


OR


The University Counselling Service Drop-in, First Floor, Carrington Building, Students Services Centre, Whiteknights RG6 6UA (Tel: (0118) 987 5123)


Your General Practitioner, who will be able to offer support or arrange for you to be referred to a counsellor.


Your Personal Tutor or the Principal Investigator of the study you have taken part in, either of whom will be able to offer you guidance about other sources of support.


A national organization, such as the Samaritans. You can call the Samaritans on 08457 90 90 90, or you can call in at the local branch which is located at 59a Cholmeley Road, Reading, RG1 3NB (local branch phone: 0118 926 6333).

