Students’ Understanding and Difficulties
Students report building their models iteratively in small steps and running them frequently to test them, intertwining verification with validation (Wilensky & Rand, 2015). They report encountering two types of errors:
1. Data omissions happen when students erroneously copy constants from their research data into their programs or implement them incorrectly. Student S2 reported that their program produced unexpected and improbable fluctuations in the number of people in their Mars colony. The error arose because, in an early version of the program, a tick represented a day in the life of a person but a month in the life of a potato plant. When this error was fixed, the number of people showed smaller, acceptable fluctuations.
2. Process omissions occur when the assumptions underlying a model lack sufficient detail. For example, student S5, when modeling Ohm’s law, observed in an early version of the model that the electrons “become stuck and could not go any further. I don’t think it works like this, so we had to change it.” They added a random component to the angle at which the electrons bounced off the atoms, which solved the problem (both fixes are sketched in code after this list).
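For illustration, the two fixes can be made concrete in a short sketch. The students worked in an agent-based environment rather than in Python, and the constants, function names, and the 30-degree spread below are assumptions chosen for the example, not values taken from the students’ models.

```python
import random

# --- Example 1: keep all agents on one shared clock ---
# In the early version, a tick meant a day for a colonist but a month
# for a potato plant; putting both on the same day-based clock removed
# the improbable population fluctuations.
DAYS_PER_TICK = 1
POTATO_GROWTH_DAYS = 90  # assumed growth period, for illustration only

def potato_is_ripe(age_in_ticks: int) -> bool:
    """A potato plant ripens after POTATO_GROWTH_DAYS days on the shared clock."""
    return age_in_ticks * DAYS_PER_TICK >= POTATO_GROWTH_DAYS

# --- Example 2: add a random component to the bounce angle ---
# Electrons that reflected at the exact incoming angle could get stuck
# between atoms; a small random perturbation lets them move on.
MAX_JITTER_DEG = 30  # assumed spread of the random component

def bounce_angle(incoming_heading_deg: float) -> float:
    """Reflect the electron's heading and perturb it by a random angle."""
    reflected = (incoming_heading_deg + 180.0) % 360.0
    jitter = random.uniform(-MAX_JITTER_DEG, MAX_JITTER_DEG)
    return (reflected + jitter) % 360.0
```

The point of the second function is simply that two electrons arriving at the same angle no longer leave at the same angle, so they cannot remain trapped between the same pair of atoms indefinitely.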
If a model is to be useful, it needs to be valid. Here we describe the techniques students employed to validate their models and report on the measures they took to improve their models’ validity.
During an iterative, cyclic process, the students used a twofold approach to establish the validity of their models. Either they relied on the correct construction of their models from appropriate assumptions and then reasoned about the models to draw conclusions about their validity, or they tested their models: they generated outcomes by varying the input parameters of their models, then observed and interpreted the models’ behavior.
Generating Outcomes
All the students who tested their models engaged in parameter sweeping, a technique in which a model’s parameters are systematically varied to generate outcomes, which the students then observed and interpreted.
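A minimal sketch of such a sweep is shown below, assuming, for illustration only, that a student’s model is reduced to a single function run_model with two numeric parameters; the parameter names, value ranges, and toy dynamics are not taken from the students’ work.

```python
import itertools

def run_model(colonists: int, potato_yield: float, ticks: int = 365) -> int:
    """Stand-in for a student's simulation: returns one outcome
    (a toy 'final population') for a single parameter setting."""
    population = float(colonists)
    for _ in range(ticks):
        # toy dynamics: the population grows when yield is above 1.0
        # and shrinks when it is below
        population *= 1.0 + 0.002 * (potato_yield - 1.0)
    return round(population)

# Systematically vary each parameter over a small set of values
# and record the outcome for every combination.
colonist_values = [10, 20, 50]
yield_values = [0.5, 1.0, 1.5]

for colonists, potato_yield in itertools.product(colonist_values, yield_values):
    outcome = run_model(colonists, potato_yield)
    print(f"colonists={colonists:3d}  yield={potato_yield:.1f}  final population={outcome}")
```

If the models are built in NetLogo, its BehaviorSpace tool supports this kind of systematic variation of parameters directly.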
Varying the parameters as a validation technique serves two purposes: to determine the influence of various parameter values on the model’s output (parameter variability, i.e., sensitivity analysis) and to determine whether the