In reflecting on our results, we emphasize three aspects in which our assessment instrument for Computational Science differs from existing assessment instruments. First, it situates computational modeling in the context of CS education, independent of any domain-specific context. Second, it covers the whole of the modeling cycle described in our framework, although we note that, unlike many other assessment instruments related to computational modeling, it does not deeply scrutinize the program code. Third, our instrument uses five-level rubrics based on the SOLO taxonomy (Biggs & Tang, 2011) to assess the elements of our modeling cycle framework.
We note that our assessment instrument aligns well with the suggestions regarding the assessment of CT put forward by Tang et al. (2020) after we had finished this research project. We discuss these suggestions in detail and observe that our work adheres to most of them. We contributed to creating more assessments for high school (as opposed to elementary and middle school). Our assessment focuses on the integration of CT and subject matter, centering on computational modeling and simulation to be used in another discipline in the context of scientific enquiry. We report the validity and reliability of the assessment. We view CT more broadly than programming or computing alone. To a high degree, we designed “CT assessments that can be applicable across platforms and devices”. Namely, even though we developed our assessment instrument to be used in the context of scientific enquiry when constructing and using agent-based models, with slight modifications it could be used with other computational models as well. Our assessment does adhere to the rest of this suggestion: “in order to compare students’ CT performance under varied conditions of intervention”.

Finally, when we consider our assessment as an instrument to be used by teachers in their daily teaching practice, we note that it does not follow the suggestion “to consider the concurrent use of qualitative measures collected by interviews, think-alouds, or focus groups to better understand students’ proficiency of CT”. In our first study (chapter 3), we scrutinized a number of qualitative measures for visible occurrences of the elements of our modeling framework. While our findings confirm that interviews with students (as well as close observations of student groups during their work) provide rich insights into students’ performance, understanding, and difficulties, and therefore serve well as research instruments, we chose not to include them in our assessment instrument because they are not feasible in everyday teaching practice. Additionally, we point out that our assessment instrument also aligns well