pyMANGA Module Benchmarks

To test and verify modules in pyMANGA, we use benchmarks. These benchmarks serve two purposes: (i) to technically assess the functionality of pyMANGA modules, e.g., after code updates, and (ii) to compare pyMANGA outputs with other implementations, e.g., NetLogo models. Each contributor is therefore kindly asked to provide a benchmark for each proposed module. In the following, we explain the benchmark design and structure.
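A benchmark comparison of this kind can be sketched as a tolerance-based check of a new run against stored reference values. The helper below is a hypothetical illustration (not part of the pyMANGA API); in practice, the series would be read from pyMANGA's result files.

```python
import math

def outputs_match(new_run, reference, rel_tol=1e-6):
    """Return True if two result series agree element-wise within a
    relative tolerance. Hypothetical helper for illustration only."""
    if len(new_run) != len(reference):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol)
               for a, b in zip(new_run, reference))

# Example: stored benchmark values vs. output of an updated module
reference = [1.00, 1.25, 1.50]
new_run = [1.00, 1.2500001, 1.50]
print(outputs_match(new_run, reference))  # True
```

A strict equality check would fail on harmless floating-point differences between platforms, so a relative tolerance is the more robust choice for regression benchmarks.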