Page 97 - Emotions through the eyes of our closest living relatives- Exploring attentional and behavioral mechanisms

                                Emotions hold the attention of bonobos and humans
Data preparation
Because we used only one calibration per bonobo throughout the entire experiment rather than re-calibrating the bonobos before each experimental session, we checked, before analyzing the data, whether the raw fixation data per bonobo and per session reasonably matched the areas of the stimuli on the screen. We plotted all gaze data for each individual onto a mapping of our screen and the locations of the stimuli on it. For two apes, some sessions showed consistent leftward or rightward shifts in the gaze data relative to the position of the stimuli on the screen.
Using K-means clustering in a custom Python script, we established the difference between the gaze data collected by the eye tracker and the true centroids of the stimuli displayed on the left and right side of the screen. Based on these findings, we corrected 37/54 sessions for Monyama (average offset of +134 pixels) and 39/46 sessions for Zuani (average offset of -141 pixels) (see the supplements for more information on how we corrected these sessions).
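The offset estimation described above can be sketched as follows. This is a minimal illustration, not our actual script: the function name, stimulus coordinates, and simulated gaze data are all hypothetical, and a simple one-dimensional two-cluster K-means is used to recover the horizontal cluster centres.

```python
import numpy as np

def estimate_horizontal_offset(gaze_x, true_centroids, n_iter=50):
    """Cluster raw gaze x-coordinates into two groups (left/right stimulus)
    and compare the recovered cluster centres to the true stimulus centroids.
    Returns the mean horizontal offset in pixels (positive = gaze shifted right)."""
    gaze_x = np.asarray(gaze_x, dtype=float)
    true_x = np.array(sorted(true_centroids), dtype=float)
    centres = true_x.copy()  # initialise at the true centroids
    for _ in range(n_iter):
        # Assignment step: each sample goes to the nearest centre.
        labels = np.abs(gaze_x[:, None] - centres[None, :]).argmin(axis=1)
        # Update step: move each centre to the mean of its samples.
        for k in range(2):
            if np.any(labels == k):
                centres[k] = gaze_x[labels == k].mean()
    # The offset is the average displacement of the recovered centres.
    return float((centres - true_x).mean())

# Hypothetical example: stimuli centred at x = 400 and x = 1200 px,
# with the recorded gaze shifted roughly +130 px to the right.
rng = np.random.default_rng(0)
gaze = np.concatenate([rng.normal(400 + 130, 20, 200),
                       rng.normal(1200 + 130, 20, 200)])
offset = estimate_horizontal_offset(gaze, (400, 1200))
corrected = gaze - offset  # shift the raw gaze data back onto the stimuli
```

In a session flagged by the visual inspection, subtracting the estimated offset realigns the gaze data with the stimulus positions before ROI analysis.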
Next, two regions of interest (ROIs) were defined in Tobii Studio. We drew a 500x512-pixel rectangle around each stimulus (sized 500x430 pixels); the ROI was thus slightly taller than the stimulus to compensate for y-axis inaccuracies in the gaze data (Figure S3). Through Tobii Studio’s Statistics option, we extracted data on Total Fixation Duration per ROI using the Tobii Fixation Filter. Finally, after processing the Total Fixation Duration gaze data, we noticed 19 trials in which the total fixation duration exceeded 3 seconds (M = 4.47 s, SD = 1.09), possibly because Tobii registered a fixation that extended beyond the duration of the stimulus presentation. These isolated cases were removed from further analyses.
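The trial-exclusion step can be expressed as a simple filter. The trial records below are hypothetical stand-ins for the exported Tobii data; the only substantive rule is the 3-second stimulus-presentation ceiling from the text.

```python
# Hypothetical exported records: (trial_id, total_fixation_duration_s).
trials = [("t1", 2.4), ("t2", 4.47), ("t3", 1.8), ("t4", 3.6)]

# Fixation durations cannot validly exceed the stimulus presentation time.
STIMULUS_DURATION_S = 3.0

# Keep only trials whose total fixation duration fits the stimulus window;
# the rest are the isolated cases removed from further analyses.
valid = [t for t in trials if t[1] <= STIMULUS_DURATION_S]
removed = [t for t in trials if t[1] > STIMULUS_DURATION_S]
```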
Statistical analyses
We used Bayesian mixed modeling to assess support for our hypotheses. We were interested in the total looking duration to emotional stimuli across trials. Our dependent variable was therefore the proportional looking duration to emotional stimuli (based on Tobii Studio’s Total Fixation Duration; from here on: PLDemotion), calculated by dividing the looking duration to the target by the sum of the looking durations to the target and the distractor. The target was the emotional stimulus, and the distractor a neutral stimulus of the same species. A PLDemotion higher than 0.5 indicates a longer looking duration to the target.
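The dependent variable defined above can be written out directly; the function name and example durations here are illustrative, not part of the analysis pipeline.

```python
def pld_emotion(target_s, distractor_s):
    """Proportional looking duration to the emotional (target) stimulus:
    target / (target + distractor). Values above 0.5 indicate a longer
    looking duration to the emotional stimulus than to the neutral one."""
    total = target_s + distractor_s
    if total == 0:
        return None  # no valid fixations on either stimulus in this trial
    return target_s / total

# Example: 1.8 s on the emotional stimulus, 1.2 s on the neutral distractor
# yields a PLDemotion of about 0.6, i.e. a bias toward the emotional stimulus.
bias = pld_emotion(1.8, 1.2)
```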