This title appears in the Scientific Report 2019.
How much do you trust a model? - Rigor in neuroscientific modeling and simulation through validation
| Field | Value |
|---|---|
| Personal Name(s) | Gutzen, Robin (corresponding author); von Papen, Michael; Trensch, Guido; Quaglio, Pietro; Grün, Sonja; Denker, Michael |
| Contributing Institute | JARA-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6); Computational and Systems Neuroscience (INM-6) |
| Imprint | 2019 |
| Conference | 3rd Brain Twitter Conference, 2019-03-14 |
| Document Type | Conference Presentation |
| Research Program | Helmholtz Analytics Framework; Human Brain Project Specific Grant Agreement 2; Theory, modelling and simulation |
BrainTC-127

Modeling and simulation of activity in neuronal networks is an essential part of modern neuroscience and a powerful vehicle for combining insights from experiments and theory into a coherent understanding of brain function. The only measure of how much trust we can place in a given model is how well it predicts the biological reality it aims to describe. Validation testing formalizes the comparison between measured and simulated data and quantifies their similarity. The resulting test scores characterize the model and determine its validity with respect to predictions concerning the experimental reference (Thacker et al., 2004).

However, it is sometimes useful to directly compare two models by means analogous to validation testing. Such direct comparisons are not constrained by the scarcity and specificity of experimental data. In contrast to validation, direct comparisons between two models cannot determine the descriptive power of a model with respect to reality. They can, however, be greatly beneficial for evaluating a model's consistency, its robustness to parameter variation, and directed improvements during model development (Gutzen et al., 2018).

In either scenario, several aspects must be considered. Any single validation test considers only a specific statistic of a certain aspect of a finitely sampled data set. To obtain a more complete and less biased evaluation, it is therefore necessary to apply multiple validation tests that take different aspects and statistical measures into account (Forrester & Senge, 1980). For example, in a neural network model the dynamics at the single-cell and network levels are not trivially related and should therefore be evaluated individually.

Here, we present a workflow and a software solution for performing such validation tests of activity at the network level: NetworkUnit (RRID:SCR_016543).
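As a minimal, hypothetical sketch of such a network-level validation test (illustrative only, not NetworkUnit's actual API), one could score the similarity between an experimental and a simulated data set by applying a two-sample Kolmogorov-Smirnov test to their firing-rate distributions; the synthetic spike trains below stand in for real recordings:

```python
# Hypothetical validation-test sketch: compare the firing-rate
# distributions of "experimental" and "simulated" spike trains with a
# two-sample Kolmogorov-Smirnov test. The data are synthetic
# placeholders; this is not the NetworkUnit package's API.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def firing_rates(spike_trains, t_stop):
    """Mean firing rate (spikes/s) of each spike train."""
    return np.array([len(st) / t_stop for st in spike_trains])

# 100 Poisson spike trains of 10 s each, with slightly different
# mean rates for the two data sets (5 Hz vs. 6 Hz).
t_stop = 10.0
experiment = [np.sort(rng.uniform(0, t_stop, rng.poisson(5 * t_stop)))
              for _ in range(100)]
simulation = [np.sort(rng.uniform(0, t_stop, rng.poisson(6 * t_stop)))
              for _ in range(100)]

# The KS statistic serves as the test score: 0 means identical
# empirical distributions, values near 1 mean maximal dissimilarity.
score, p_value = stats.ks_2samp(firing_rates(experiment, t_stop),
                                firing_rates(simulation, t_stop))
print(f"KS score: {score:.3f}, p-value: {p_value:.3g}")
```

The same pattern applies to a direct model-to-model comparison: only the second data set changes, while the statistic and the scoring stay fixed.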
This Python package builds on the open-source projects Elephant (RRID:SCR_003833) and SciUnit (RRID:SCR_014528).

To further formalize neuroscientific modeling and improve its reproducibility, it is beneficial to separate the model implementation from the simulation engine and to make use of available simulators. Notably, different simulators are not necessarily equivalent (especially when, e.g., neuromorphic systems serve as simulators (van Albada et al., 2018)), and even small differences in the numerics have been shown to influence the simulated model behavior (Trensch et al., 2018). To evaluate and control for such influences, validation techniques can analogously be applied to the quantitative comparison between simulators, as we also demonstrate.

Funding: EU Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreements No. 720270 and No. 785907 (HBP SGA1, SGA2); Helmholtz Association Initiative and Networking Fund, No. ZT-I-0003; HDS-LEE.
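The influence of numerics on simulated behavior can be illustrated with a toy example (parameter values are illustrative, not taken from any published model): integrating the same leaky integrate-and-fire neuron with two different time steps already changes its firing rate, a discrepancy that a simulator-comparison test would quantify.

```python
# Toy illustration of numerics affecting simulated behavior: the same
# leaky integrate-and-fire (LIF) neuron, integrated with forward Euler
# at two step sizes, produces different spike counts. All parameters
# are illustrative placeholders.
import numpy as np

def simulate_lif(dt, t_stop=1.0, tau=0.01, v_th=1.0, v_reset=0.0, i_ext=1.1):
    """Forward-Euler LIF integration; returns spike times in seconds."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_stop:
        v += dt * (-v + i_ext) / tau   # membrane update
        if v >= v_th:                  # threshold crossing -> spike
            v = v_reset
            spikes.append(t)
        t += dt
    return np.array(spikes)

coarse = simulate_lif(dt=1e-3)   # coarse time step
fine = simulate_lif(dt=1e-5)     # fine time step
# Identical model, different step sizes: the spike counts (and hence
# the firing rates) disagree.
print(len(coarse), len(fine))
```

In this sketch the coarse integration systematically distorts the inter-spike interval, which is exactly the kind of discrepancy that motivates applying validation techniques to simulator comparisons.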