This title appears in the Scientific Report 2018.
Please use the identifiers http://hdl.handle.net/2128/21216 and http://dx.doi.org/10.3389/fninf.2018.00090 in citations.
Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data
Personal Name(s): Gutzen, Robin (corresponding author); von Papen, Michael; Trensch, Guido; Quaglio, Pietro; Grün, Sonja; Denker, Michael
Contributing Institutes: Computational and Systems Neuroscience (INM-6); Jara-Institut Brain Structure-Function Relationships (INM-10); Computational and Systems Neuroscience (IAS-6)
Published in: Frontiers in Neuroinformatics, 12 (2018), p. 90
Imprint: Lausanne: Frontiers Research Foundation, 2018
PubMed ID: 30618696
DOI: 10.3389/fninf.2018.00090
Document Type: Journal Article
Research Programs: SimLab Neuroscience; Human Brain Project Specific Grant Agreement 2; Human Brain Project Specific Grant Agreement 1; Connectivity and Activity; Theory, Modelling and Simulation
Link: Full text (OpenAccess)
Abstract: Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires the validation of model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison of published models, which is a key element of enhancing the reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single-cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., sub.), this work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
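The abstract describes statistical test metrics that compare two network simulations on the level of population activity. The sketch below illustrates the general idea with a two-sample Kolmogorov-Smirnov test on firing-rate distributions; the function names, the synthetic Poisson data, and the significance threshold are illustrative assumptions for this example, not the API of the Python library the paper introduces.

```python
import numpy as np
from scipy import stats

def firing_rates(spike_trains, duration):
    # Firing rate (spikes/s) of each neuron, given its spike-time array
    # and the total simulation duration in seconds.
    return np.array([len(st) / duration for st in spike_trains])

def validate_rate_distributions(rates_a, rates_b, alpha=0.05):
    # Two-sample Kolmogorov-Smirnov test comparing the firing-rate
    # distributions of two network realizations. The test "passes"
    # (the null hypothesis of equal distributions is retained) when
    # the p-value is at least the significance level alpha.
    statistic, p_value = stats.ks_2samp(rates_a, rates_b)
    return statistic, p_value, p_value >= alpha

# Illustrative surrogate data: two independent networks of 100 neurons,
# each firing as a 10 Hz Poisson process over a 100 s recording.
rng = np.random.default_rng(seed=1)
duration = 100.0  # seconds
trains_a = [np.sort(rng.uniform(0, duration, rng.poisson(10 * duration)))
            for _ in range(100)]
trains_b = [np.sort(rng.uniform(0, duration, rng.poisson(10 * duration)))
            for _ in range(100)]

score, p, passed = validate_rate_distributions(
    firing_rates(trains_a, duration), firing_rates(trains_b, duration))
print(f"KS statistic = {score:.3f}, p = {p:.3f}, passed = {passed}")
```

Since both surrogate networks are drawn from the same rate distribution, the test will usually pass; replacing one set of spike trains with output from a second simulator (e.g., a SpiNNaker run) turns this into the kind of implementation-against-implementation comparison the study advocates.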