

Facial Mimicry and Metacognition in Facial Emotion Recognition

Procedure

Participants were brought to a quiet room, where they were given written and verbal instructions about the experimental procedure. After they had filled in the informed consent form, electrodes were attached to the participants' faces for the facial electromyography recordings (see Measurements section). During the tasks, participants were seated at a distance of 50 cm from a Philips screen (23.6″, resolution 1920 x 1080 pixels), on which the stimuli (720 x 480 pixels; average visual angle: 22.12° horizontal and 14.85° vertical) were presented using E-Prime 3.0 software (Psychology Software Tools, 2016). The grey background colour of all screens was set to the background colour of the stimuli (RGB colour code: 145, 145, 145).

The same 60 emotional facial expression videos were presented in random order in two consecutive tasks: a passive viewing task, during which the participants' facial muscle activity was recorded, and a facial emotion recognition task. The rationale behind using two separate tasks was to prevent participants' perception and facial mimicry responses in the passive viewing task from being biased by awareness of the possible emotion category labels (i.e., top-down modulation).

In the passive viewing task, participants were instructed only to look at the stimuli without performing any action. Each trial started with the presentation of a black fixation cross against a grey background for one second, followed by one of the 60 video stimuli for two seconds. The end of a trial was marked by a grey inter-trial interval (ITI) screen, which appeared for a jittered duration of either 5750, 6000, or 6250 ms. To account for the possibility of missing observations due to noisy data, participants viewed each of the 60 videos twice, in two separate blocks, resulting in 120 trials in total.
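The passive-viewing trial structure (1 s fixation, 2 s video, jittered ITI, each of the 60 videos once per block across two blocks) can be sketched as follows. This is an illustrative Python stand-in for the E-Prime 3.0 procedure actually used; the stimulus identifiers are placeholders, as the videos are not named in the text.

```python
import random

# Placeholder IDs; the actual 60 video files are not named in the text.
STIMULI = [f"stim_{i:02d}" for i in range(60)]
ITI_MS = (5750, 6000, 6250)          # jittered inter-trial intervals (ms)
FIXATION_MS, VIDEO_MS = 1000, 2000   # fixation cross and video durations (ms)

def build_block(stimuli, rng):
    """One block: every video exactly once, random order, jittered ITI."""
    order = stimuli[:]
    rng.shuffle(order)
    return [{"stimulus": s,
             "fixation_ms": FIXATION_MS,
             "video_ms": VIDEO_MS,
             "iti_ms": rng.choice(ITI_MS)}
            for s in order]

rng = random.Random(0)
# Two blocks of 60 trials each -> 120 trials in total.
trials = build_block(STIMULI, rng) + build_block(STIMULI, rng)

total_min = sum(t["fixation_ms"] + t["video_ms"] + t["iti_ms"]
                for t in trials) / 60_000
```

With a mean ITI of 6 s, the 120 trials alone take about 18 minutes, consistent with the reported task duration of around 20 minutes once the break and instructions are included.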
Between the blocks, participants could take a self-paced break. The passive viewing task lasted around 20 minutes in total. Afterwards, the electrodes were detached from the participants' faces and the experiment continued with the facial emotion recognition task. In this second task, the participants viewed all 60 video stimuli once again (thus three times in total) but were now instructed to answer questions about them. As in the passive viewing task, each trial started with a one-second fixation cross screen, after which one of the 60 video stimuli (2 s) was presented. Once the video disappeared, participants were asked to judge the displayed expression. More specifically, on the first question screen, they were asked to rate how representative the expression was of each of the six expression categories that could be displayed: anger, fear, happiness, neutral, sadness, and surprise ('To what degree does the expression relate to the emotions below?'). Each expression
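The first question screen thus collects one representativeness rating per category for every video, rather than a single forced choice. A minimal sketch of that response structure (the `get_rating` callback and the response format are hypothetical; the text does not specify how the on-screen scale was implemented):

```python
# The six categories a video could display, per the recognition task.
CATEGORIES = ("anger", "fear", "happiness", "neutral", "sadness", "surprise")

def rating_screen(stimulus, get_rating):
    """Collect one representativeness rating per category for one video.

    `get_rating` is a hypothetical callback standing in for the
    participant's response on the on-screen rating scale.
    """
    return {"stimulus": stimulus,
            "ratings": {cat: get_rating(cat) for cat in CATEGORIES}}

# Example: a participant who gives every category the lowest rating.
resp = rating_screen("stim_00", lambda cat: 0)
```

The key design point is that each trial yields a full rating profile over all six categories, which later allows graded (rather than all-or-none) recognition measures.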
                                