
                                Chapter 3
 at a consensus (i.e. unguided focus group 1). The output of this unguided focus group served as input for the first discussion and data analysis between teacher A and the researcher (Research team in Figure 3.1), which led to several adjustments of the underlying elements. Two days later, the same four students were presented with the adjusted elements in a second unguided focus group, allowing them to validate our interpretation of the output of the first focus group in which they had taken part. According to Lincoln and Guba (1985), this kind of member checking increases the trustworthiness of qualitative research, and it led to several minor adjustments. Next, a different year 4 class in the same school was invited to individually write down their answer(s) to the single open question survey. Both teacher A and the researcher used the adjusted elements to individually code all the student answers. The third discussion and data analysis, which followed the comparison of the coding, led to a few more adjustments.
Round 2 was a repetition of Round 1, conducted at school B by teacher B and the researcher. Importantly, the input for this second group of four students was the list of adjusted elements from the research activities that took place at school A. This repetition of Round 1 was undertaken in order to increase validity and to reach conceptual saturation (Cohen, Manion, & Morrison, 2011).
In Round 3, teacher C invited all students in the upper years (n = 199) of school C to answer the single open question survey. In order to validate Interim model 2, teacher C was first trained by the researcher. The training consisted of an in-depth discussion of the theoretical foundation of the Comprehensive Approach, followed by a practice session in which the answers to the open question survey provided by the students from schools A and B were labelled according to the underlying practical elements of Interim model 2. After this training, the answers of the school C students were coded independently by teacher C and the researcher. Interrater reliability was established using Cohen’s kappa (.839), showing strong agreement. The discussion that followed led to several minor refinements in order to increase mutual exclusivity (where elements were too broadly defined) and exhaustiveness (where elements were too narrowly defined). To make sure that these final refinements would not negatively affect the reliability of the coding, teacher C and the researcher coded the answers again, which yielded a kappa of .923, again showing strong agreement.
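To make the interrater reliability check concrete, the sketch below computes Cohen’s kappa from the codes two raters assign to the same set of answers. The element labels and the example data are hypothetical, invented purely for illustration; they are not the actual codes from school C.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical codes
    to the same items: (p_o - p_e) / (1 - p_e)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each rater's marginal code frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes ("E1".."E3" = practical elements) assigned by the
# teacher and the researcher to the same ten student answers.
teacher = ["E1", "E2", "E1", "E3", "E2", "E1", "E3", "E2", "E1", "E2"]
researcher = ["E1", "E2", "E1", "E3", "E2", "E1", "E2", "E2", "E1", "E2"]
print(round(cohens_kappa(teacher, researcher), 3))  # prints 0.839
```

Because kappa corrects observed agreement for the agreement expected by chance, it is a stricter measure than raw percentage agreement, which is why it is the conventional choice for reporting coding reliability.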