Whereas frequentist methods (Wagenmakers, 2007) inform us about the credibility of the data given a hypothesis, Bayesian methods inform us about the credibility of our parameter values given the data that we observed. This is reflected in the different interpretations of frequentist and Bayesian confidence intervals: the former is a range of values that contains the true value in the long run, while the latter tells us which parameter values are most credible based on the data (Kruschke et al., 2012; McElreath, 2018). Furthermore, Bayesian methods allow for the inclusion of prior expectations in the model, are less prone to Type I errors, and are more robust in small and noisy samples (Makowski et al., 2019). Altogether, these reasons make Bayesian methods a useful tool for data analysis.

First, we investigated whether the attractiveness ratings that our subjects gave to the stimuli matched the categories that we used. To examine this question, we fitted a Bayesian mixed model with an ordinal dependent variable (attractiveness rating, 7 levels) and the interaction between Sex and Attractiveness Category as independent variables. Furthermore, we added random intercepts per subject and stimulus, and allowed the effect of attractiveness category to vary by subject by adding random slopes. We used regularizing Gaussian priors with M = 0 and SD = 1 for the fixed effects, default Student's t priors with 3 degrees of freedom for the thresholds, and default half Student's t priors with 3 degrees of freedom for the random effects and residual standard deviation.

To test our main hypothesis, we created a model with by-subject mean-centered RT as the dependent variable and the interaction between Condition (attractive vs. intermediate or unattractive vs. intermediate) and Probe Location (behind the intermediate or behind the (un)attractive stimulus) as independent variables. Furthermore, to explore the effects of Sex and Age, we created two more complex models that included the three-way interaction between Condition, Probe Location, and Sex or Age, respectively. All categorical fixed effects were sum-to-zero coded, and Age was z-transformed. In all models, we added random intercepts per subject and trial number (to control for order effects), and allowed the slopes of the interaction between Condition and Probe Location to vary by subject. We used regularizing Gaussian priors with M = 0 and SD = 5 for all fixed effects, a Gaussian prior with M = 0 and SD = 10 for the intercept, and weakly informative default half Student's t priors with 3 degrees of freedom for the random effects and residual standard deviation.

We used multiple measures to summarize the posterior distribution of each parameter: (1) the median estimate and the median absolute deviation (MAD) of this estimate, (2) the 89% credible interval (89% CI; McElreath, 2018), and (3) the probability of direction (pd). The 89% CI indicates the range within which the effect falls with 89% probability, while the pd indicates the proportion of the posterior distribution that has the same sign as the median (Makowski et al., 2019). We chose an 89% CI instead of the conventional 95% to reduce the likelihood that the CIs are interpreted as strict hypothesis tests (McElreath, 2018). Instead, the main goal of the credible intervals is to communicate the shape of the posterior distribution.

Furthermore, we used Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO-CV; Vehtari, Gelman, & Gabry, 2017) to compare the predictive accuracy of the more complex models that include Sex and Age, respectively, with that of the simpler model. Using PSIS-LOO-CV, we calculated the expected log predictive density (elpd_LOO), which quantifies predictive accuracy, for each model. Then, we calculated the difference in elpd_LOO (∆elpd_LOO) between models and the standard error of this difference. If ∆elpd_LOO is small (< 4) and the SE is large relative to the difference, this suggests that the models have similar predictive performance.

All models were run with 4 chains of 3,000 iterations each (500 warmup), resulting in a total posterior sample of 10,000 draws. Furthermore, we checked whether the models converged by inspecting trace plots and histograms and by computing the Gelman-Rubin diagnostic (Depaoli & van de Schoot, 2017). For all models, no indication of divergence was found.
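The text does not name the software, but the default Student's t priors with 3 degrees of freedom described above match the defaults of the R package brms, so the model specifications can be sketched under that assumption. The sketch below is illustrative rather than the authors' actual code; all object and column names (ratings, probe_data, rating, sex, attr_category, rt_c, condition, probe_location, subject, stimulus, trial, age) are hypothetical stand-ins.

```r
library(brms)

# Sum-to-zero coding for the (two-level) categorical predictors; z-scored Age
contrasts(probe_data$condition)      <- contr.sum(2)
contrasts(probe_data$probe_location) <- contr.sum(2)
probe_data$age_z <- as.numeric(scale(probe_data$age))

# Stimulus validation: ordinal (cumulative) mixed model for the 7-point rating
m_validation <- brm(
  rating ~ sex * attr_category +
    (1 + attr_category | subject) + (1 | stimulus),
  data   = ratings,
  family = cumulative(),
  # regularizing Gaussian priors on the fixed effects; thresholds and
  # random-effect SDs keep the brms Student-t(3) defaults described above
  prior  = prior(normal(0, 1), class = "b"),
  chains = 4, iter = 3000, warmup = 500   # 4 x 2500 = 10,000 draws
)

# Main model: by-subject mean-centered RT, Condition x Probe Location
m_main <- brm(
  rt_c ~ condition * probe_location +
    (1 + condition * probe_location | subject) + (1 | trial),
  data   = probe_data,
  prior  = c(prior(normal(0, 10), class = "Intercept"),
             prior(normal(0, 5),  class = "b")),
  chains = 4, iter = 3000, warmup = 500
)

# More complex models adding the three-way interaction with Sex or Age
# (newdata is required because the formula introduces a new variable)
m_sex <- update(m_main,
                formula. = rt_c ~ condition * probe_location * sex +
                  (1 + condition * probe_location | subject) + (1 | trial),
                newdata = probe_data)
m_age <- update(m_main,
                formula. = rt_c ~ condition * probe_location * age_z +
                  (1 + condition * probe_location | subject) + (1 | trial),
                newdata = probe_data)
```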
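Under the same assumed workflow, the posterior summaries, PSIS-LOO-CV comparison, and convergence checks correspond to calls such as the following (bayestestR implements the pd and MAD summaries of Makowski et al., 2019; the loo machinery implements Vehtari et al., 2017):

```r
library(bayestestR)

# (1) median and MAD, (2) 89% CI, (3) probability of direction
describe_posterior(m_main, centrality = "median", dispersion = TRUE,
                   ci = 0.89, test = "p_direction")

# PSIS-LOO-CV: elpd_LOO per model, then the elpd difference and its SE
loo_main <- loo(m_main)
loo_sex  <- loo(m_sex)
loo_age  <- loo(m_age)
loo_compare(loo_main, loo_sex)   # elpd_diff and se_diff columns
loo_compare(loo_main, loo_age)

# Convergence: trace plots / marginal posteriors, Gelman-Rubin (Rhat ~ 1.00)
plot(m_main)      # trace and density plots per parameter
summary(m_main)   # reports Rhat for every parameter
```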
Results

Validation of stimuli

The ordinal mixed model showed that subjects gave substantially higher attractiveness ratings to stimuli that were classified as attractive, and lower ratings to stimuli that were classified as unattractive (see Figure 2). This was the case for both women (∆estimate(attractive - intermediate) = 2.11 [0.30], 89% CI [1.63, 2.61], pd = 1.00; ∆estimate(unattractive - intermediate) = -1.45 [0.31], 89% CI [-1.94, -0.96], pd = 1.00) and men (∆estimate(attractive - intermediate) = 3.17 [0.59], 89% CI [2.22, 4.11], pd = 1.00; ∆estimate(unattractive - intermediate) = -1.73 [0.32], 89% CI [-2.25, -1.22], pd = 1.00).

Simple model

To test our main prediction that attractiveness would influence RT, we ran a Bayesian mixed model with by-subject mean-centered RT per trial as the dependent variable, and the interaction between Condition and Probe Location as independent variables (see Table 1). We found a robust interaction effect of Condition and Probe Location (see Figure 3): people reacted faster on trials in which the probe appeared behind an attractive face than on trials in which it appeared behind an intermediate face (median difference = 9.23 [2.21], 89% CI [5.67, 12.74], pd = 1.00), while the opposite pattern was found when unattractive faces were paired with intermediate faces (median difference = -6.92 [2.33], 89% CI [-10.56, -3.29], pd = .99).
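For completeness, contrasts like the median differences, 89% CIs, and pd values reported above can be recovered directly from posterior draws. A minimal sketch, continuing the hypothetical m_main and probe_data objects from the Methods sketch (the column pairing in the contrast is illustrative and must be matched to the rows of newd):

```r
# Expected RT per Condition x Probe Location cell, ignoring group-level effects
newd <- expand.grid(
  condition      = levels(probe_data$condition),
  probe_location = levels(probe_data$probe_location)
)
draws <- posterior_epred(m_main, newdata = newd, re_formula = NA)

# Probe-location contrast within one condition
# (columns of `draws` follow the rows of `newd`; check newd before indexing)
d <- draws[, 1] - draws[, 3]

median(d)                              # posterior median difference
mad(d)                                 # median absolute deviation
quantile(d, probs = c(0.055, 0.945))   # 89% equal-tailed credible interval
mean(sign(d) == sign(median(d)))       # probability of direction (pd)
```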