

iterations as well as the maximum number of iterations for the optimization step inside the optimization (msMaxIter) up to 5000, and the number of iterations for the EM algorithm (niterEM) as well as the maximum number of evaluations up to 1000. Since the model residuals were not normally distributed, we additionally applied clustered bootstrapping to estimate the confidence intervals of the coefficients. Thus, in addition to the parametric approach of determining the statistical significance of fixed effects with conditional F-tests and the marginal significance of fixed-effect coefficients with conditional t-tests, their respective non-parametric confidence intervals were calculated. Given the large number of statistical parameters, only the results of the F-tests and the interpretation of the analysis are reported in the text, whereas the t-statistics and the non-parametric confidence intervals can be found in Tables 1-6 in Online Resource 3. Based on previous findings (M. M. Bradley et al., 2008, 2017; Kosonogov et al., 2017; Lang et al., 1993), we additionally explored whether overall emotional intensity, rather than specific emotion-expression categories, could explain a large amount of the variation in the physiological signal changes (see Online Resource 3, Tables 10-12 and Fig. 2). Given that our stimuli were not controlled for global and local brightness and contrast, pupil-size changes related to emotional content may have been confounded in our analyses. For conciseness, these results are reported only in Online Resource 3 (Tables 7-9 and Fig. 1).

Facial EMG.
Since there was no empirical evidence to expect any exact shape of the two EMG signals throughout our stimulus presentation window (4 seconds), our analysis aimed to determine the parts of the signal in which a specific emotional expression differed significantly from the respective neutral expression. Here, we extended an approach by Achaibou and colleagues (2008), who tested for significant differences in EMG activity during stimulus presentation by calculating t-tests between activations related to angry versus happy facial expressions in 100-ms time bins. In contrast to their analysis, however, we (1) ran multilevel models instead of t-tests (including random effects and using the nlme package; Pinheiro et al., 2020), consistent with the other analyses reported here, (2) compared each emotion category (happy, angry, fearful, and sad) against neutral as a control condition, and (3) used a split-half approach (i.e., we first tested for effects in half of the sample [training set] and then validated the significant results in the other half [test set]). The two sets were matched by gender but, apart from that, randomly generated. This third adjustment was taken to allow for hypothesis
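The binning and split-half steps described above can be sketched as follows. This is a minimal Python illustration (the analysis itself was run in R with the nlme package); `gender_matched_split` and `binwise_difference` are hypothetical helpers, and a simple per-bin mean difference stands in for the per-bin multilevel models.

```python
import numpy as np

def gender_matched_split(participants, genders, rng):
    """Split participants into two halves matched on gender:
    within each gender, order is randomly permuted and the first
    half goes to the training set, the rest to the test set."""
    train, test = [], []
    for g in sorted(set(genders)):
        members = [p for p, gg in zip(participants, genders) if gg == g]
        order = rng.permutation(len(members))
        members = [members[i] for i in order]
        half = len(members) // 2
        train += members[:half]
        test += members[half:]
    return train, test

def binwise_difference(emg_emotion, emg_neutral, bin_len=10):
    """Mean emotion-minus-neutral difference per consecutive bin.
    At an assumed 100-Hz sampling rate, bin_len=10 samples gives
    100-ms bins over the 4-s window (400 samples -> 40 bins).
    Arrays are trials x samples; a plain mean difference stands in
    for the per-bin multilevel model."""
    n_bins = emg_emotion.shape[1] // bin_len
    return np.array([
        emg_emotion[:, b * bin_len:(b + 1) * bin_len].mean()
        - emg_neutral[:, b * bin_len:(b + 1) * bin_len].mean()
        for b in range(n_bins)
    ])
```

In this scheme, bins showing a significant emotion-versus-neutral difference in the training half would then be re-tested in the held-out test half.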
                                
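The clustered bootstrapping used earlier to obtain non-parametric confidence intervals for the fixed-effect coefficients can be sketched as follows. This is a simplified Python illustration (the paper refit nlme multilevel models per resample); `cluster_bootstrap_ci` and the OLS-slope stand-in are hypothetical. The key idea is that whole participants (clusters) are resampled with replacement, so within-participant dependence is preserved.

```python
import numpy as np

def cluster_bootstrap_ci(data, fit_fn, n_boot=1000, alpha=0.05, seed=0):
    """Percentile confidence interval via clustered (case) bootstrapping.
    `data` maps participant id -> (x, y) arrays; each resample draws
    participants with replacement, pools their data, and refits.
    (Sketch only: the paper refit multilevel models per resample.)"""
    rng = np.random.default_rng(seed)
    ids = list(data)
    stats = []
    for _ in range(n_boot):
        sample = rng.choice(ids, size=len(ids), replace=True)
        x = np.concatenate([data[i][0] for i in sample])
        y = np.concatenate([data[i][1] for i in sample])
        stats.append(fit_fn(x, y))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

def slope(x, y):
    # Stand-in for a fixed-effect coefficient: ordinary least-squares slope.
    return np.polyfit(x, y, 1)[0]
```

A coefficient is then judged non-parametrically significant when the resulting percentile interval excludes zero.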