in the Netherlands, we had to stop data collection prematurely and ended up with 57 participants in total (56 participants: power of .879). For the analyses in this manuscript, we treated the clinical trait dimensions as continuous variables, thereby increasing the validity of the approach as well as statistical power (Altman & Royston, 2006).

Stimuli

Following the call for more naturalistic stimuli in research on emotion perception, we chose the FEEDTUM database (Wallhoff et al., 2006) as the source for our stimuli. This database encompasses videotaped spontaneous (i.e., non-instructed) reactions to video clips inducing the six basic emotions, as well as neutral control expressions. In the original study, all depicted individuals provided informed consent for the use of the videos for research purposes, including distribution and publication of the material. Permission to use the material under a CC BY license and to publish example images in scientific journals, such as in Fig. 1, was granted to the first author of this study by the creators of the database.

Based on the choice of stimuli in a previous study investigating facial mimicry and emotion recognition in depression (Zwick & Wolkenstein, 2017), we included facial expressions of anger, fear, happiness, sadness, and surprise, as well as neutral expressions. Disgust, a basic emotion that (next to surprise) is typically less investigated in studies on emotion recognition alterations in SAD and ASD (Bui et al., 2017; Uljarevic & Hamilton, 2013), was not included. For each facial expression, video clips of ten individuals (five females and five males) were selected based on the following decision pipeline: First, videos were judged on their quality, and blurry or shaky videos were excluded. Second, individuals wearing glasses or with hair in front of their eyes were excluded, as these features made their facial expressions more difficult to recognize. Third, all remaining video clips were evaluated with the automated facial expression recognition software FaceReader 7.1 (Noldus, 2014) to ensure that the emotion label provided by the database could also be detected in the video. After this selection procedure, the video clips were cut to a uniform length of 2 seconds (500 ms of neutral expression followed by 1,500 ms of the target expression). Lastly, the video clips were standardized by removing the original backgrounds in Adobe After Effects (Christiansen, 2018) and replacing them with a uniform gray background (RGB color code: 145, 145, 145). This resulted in a total of 60 two-second videos with a gray background, showing ten individuals (five males and five females) per facial expression.
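For illustration only, and not the procedure actually used (the original pipeline relied on Adobe After Effects and FaceReader), the trimming and background-flattening steps described above could be sketched in Python with moviepy. The sketch assumes the background-removed clips were exported with an alpha channel; the file names and frame rate are hypothetical.

```python
from moviepy.editor import VideoFileClip, ColorClip, CompositeVideoClip

GRAY = (145, 145, 145)   # background RGB value reported above
CLIP_LENGTH = 2.0        # seconds: 500 ms neutral + 1,500 ms target expression

def standardize_clip(in_path, out_path, fps=25):
    # Trim to a uniform 2-second length and composite the alpha-masked face
    # clip onto a uniform gray background of the same frame size.
    clip = VideoFileClip(in_path, has_mask=True).subclip(0, CLIP_LENGTH)
    background = ColorClip(size=clip.size, color=GRAY, duration=CLIP_LENGTH)
    CompositeVideoClip([background, clip]).write_videofile(out_path, fps=fps)

standardize_clip("anger_f01_alpha.mov", "anger_f01_gray.mp4")  # hypothetical files
```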