Page 40 - Balancing between the present and the past

Chapter 2
field of history and participated voluntarily.1 The instruments’ content validity was tested with both expert panels. Furthermore, we performed a principal component analysis (PCA) and a reliability analysis using Cronbach’s alpha to explore the data structure and internal consistency of both instruments. Finally, we examined the predictive validity and calculated correlations between the scores of both instruments. To answer the third research question, we used the mean scores per category, plotted these against age, and calculated correlations between the students’ HPT scores and student characteristics (viz., age and educational level).
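To make the internal-consistency analysis concrete, the sketch below computes Cronbach’s alpha from a respondents-by-items score matrix. The data are hypothetical Likert-style responses invented for illustration; they are not the study’s data, and the function is a minimal implementation of the standard formula, not the software the authors used.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 students x 4 items on a 5-point scale
data = [[4, 5, 4, 5],
        [2, 3, 2, 3],
        [5, 5, 4, 4],
        [1, 2, 2, 1],
        [3, 3, 4, 3]]
print(round(cronbach_alpha(data), 2))  # → 0.95
```

Alpha rises when items covary strongly relative to their individual variances, which is why it serves as an index of internal consistency for a scale’s items.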
2.5 Results
The first two research questions focus on the reliability and validity of the instrument format developed by Hartmann and Hasselhorn (2008) when used in a different country, among a far larger and more heterogeneous student population and with a different historical topic. To answer both research questions, we looked at the instruments’ content validity, dimensionality (i.e., whether the three categories of each instrument form one or multiple factors), internal consistency, and predictive validity.
2.5.1 Content validity of both instruments
Eight teachers sorted the nine items of each instrument into the three categories (viz., the present-oriented perspective, the role of the historical agent, and historical contextualization) to confirm the categories’ and items’ face validity. A brief description of each category was provided, and the teachers were instructed to place each item in the appropriate category. For both instruments, we calculated the agreement among the eight experts using the jury alpha and Fleiss’s kappa, which we preferred to Cohen’s kappa because it accommodates more than two raters. Fleiss’s kappa values above .61 indicate substantial agreement; values above .81 indicate almost perfect agreement (Landis & Koch, 1977). For the Nazi Party instrument, the jury alpha was .96 and Fleiss’s kappa was .64. For the slavery instrument, the jury alpha was .98 and Fleiss’s kappa was .71.
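The sorting task maps directly onto the standard Fleiss’s kappa computation: each of the nine items is a row, each of the three categories a column, and each cell counts how many of the eight raters placed that item in that category. The sketch below implements the textbook formula on invented counts (not the study’s data) purely to show the mechanics.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss's kappa for an (n_items x n_categories) matrix of rater counts.

    Each row must sum to the same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    N, _ = counts.shape
    n = counts[0].sum()                        # raters per item
    p_j = counts.sum(axis=0) / (N * n)         # overall category proportions
    # Per-item observed agreement among the n raters
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                         # mean observed agreement
    P_e = np.sum(p_j ** 2)                     # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: 9 items sorted by 8 raters into 3 categories
counts = [[8, 0, 0], [7, 1, 0], [0, 8, 0],
          [1, 6, 1], [0, 0, 8], [8, 0, 0],
          [0, 7, 1], [2, 0, 6], [0, 1, 7]]
print(round(fleiss_kappa(counts), 2))  # → 0.73
```

On these invented counts the kappa falls in the “substantial agreement” band of Landis and Koch (1977), the same band as the .64 and .71 values reported above.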
1 The published article (Huijgen, Van Boxtel, Van de Grift, & Holthuis, 2014) stated that 10 teachers were randomly selected from a list of 52 teachers and had more than 10 years’ work experience. Instead, 10 teachers were asked to examine the content validity of both instruments. The results of two teachers were deleted due to procedural mistakes in completing the task. The eight remaining teachers varied in work experience from 2 to more than 30 years. The article also stated that 10 historians were randomly selected from a list of 44 historians. Instead, 10 historians were asked to complete the sorting task for both instruments to further examine their content validity.