free exploration in one half of the data and confirmatory tests in the other half (Wagenmakers et al., 2012). As for the pupil size data, the SCL data, and the SKT data, separate analyses were performed for the different expression modalities. Further, data from the corrugator region and the zygomaticus region were analysed separately and, similarly to Achaibou et al. (2008), only two conditions were contrasted per test (i.e. one emotion category against neutral). Thus, for each of the 40 100-ms time bins and for each presented emotional expression, we fitted separate LMMs on the mean EMG activity (filtered and rectified, see preprocessing) of the corrugator and the zygomaticus, with emotion category as fixed effect and ID as random effect, on the training sample. If an emotion category differed significantly from neutral in a time bin (p < .05), the same model was tested on the data from the test sample. Only if the difference between the signal related to the emotional versus the neutral expression was significant in both the training and the test sample was the EMG signal regarded as affected by the presentation of the respective emotional expression within this time bin (a schematic sketch of this procedure is given below).

Results

Behavioural results (Analysis 1)

Descriptive statistics of the behavioural responses can be found in Table 1 in Online Resource 2. Contrary to expectations based on the stimulus validation studies (de Gelder & Van den Stock, 2011; Tottenham et al., 2009), recognition rates were lower for bodily expressions (M = 0.776, SD = 0.098) than for the prototypical facial expressions (M = 0.885, SD = 0.081). The recognition rates of the same bodily expressions were higher in the original validation study, but our results are in line with the means obtained by Kret, Stekelenburg et al. (2013).

Prototypical facial and bodily expressions of emotion. The model on accuracy in emotion recognition yielded significant main effects of emotion category, χ²(4) = 185.788, p < .001, and modality (body versus face), χ²(1) = 39.921, p < .001. Importantly, the significant interaction between emotion category and expression modality, χ²(4) = 203.438, p < .001, sheds more light on the interplay between the two variables affecting accuracy in emotion recognition (see Fig. 3a below and Table 3 in Online Resource 2). Overall, while emotions were better recognized compared to a neutral expression when expressed by the face, the opposite
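The per-bin confirmation logic described in the analysis paragraph above can be expressed compactly. The following Python sketch is illustrative only and is not the authors' implementation; it assumes a long-format data frame with hypothetical columns id (participant), bin (100-ms time bin), emotion (expression category), and emg (mean filtered, rectified EMG), and uses a statsmodels mixed model with a random intercept per participant.

```python
# Illustrative sketch only (assumed column names, not the authors' code):
# per-time-bin LMM contrasting one emotion category against neutral, with a
# random intercept per participant, confirmed on a held-out half of the data.
import pandas as pd
import statsmodels.formula.api as smf

ALPHA = 0.05  # significance threshold used in the text (p < .05)

def emotion_effect_p(df: pd.DataFrame, emotion: str) -> float:
    """P-value of the emotion-vs-neutral fixed effect within one time bin."""
    sub = df[df["emotion"].isin([emotion, "neutral"])].copy()
    sub["is_emotion"] = (sub["emotion"] == emotion).astype(int)
    # Fixed effect: emotion category; random effect: participant (column 'id')
    fit = smf.mixedlm("emg ~ is_emotion", data=sub, groups=sub["id"]).fit()
    return fit.pvalues["is_emotion"]

def confirmed_bins(train: pd.DataFrame, test: pd.DataFrame, emotion: str):
    """Time bins whose emotion-vs-neutral difference is significant in the
    training half and also replicates in the test half."""
    confirmed = []
    for t, train_bin in train.groupby("bin"):
        if emotion_effect_p(train_bin, emotion) < ALPHA:
            if emotion_effect_p(test[test["bin"] == t], emotion) < ALPHA:
                confirmed.append(t)
    return confirmed
```

In this sketch, separate calls would be made for the corrugator and zygomaticus signals and for each emotion category, mirroring the separate models described above.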