This title appears in the Scientific Report 2018.
Please use the identifier: http://dx.doi.org/10.3389/fninf.2018.00046 in citations.
Please use the identifier: http://hdl.handle.net/2128/19857 in citations.
Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models
Personal Name(s): Pauli, Robin (corresponding author); Weidel, Philipp; Kunkel, Susanne; Morrison, Abigail
Contributing Institute: Computational and Systems Neuroscience (INM-6); Jara-Institut Brain Structure-Function Relationships (INM-10); Computational and Systems Neuroscience (IAS-6)
Published in: Frontiers in Neuroinformatics, 12 (2018), Article 46
Imprint: Lausanne: Frontiers Research Foundation, 2018
DOI: 10.3389/fninf.2018.00046
PubMed ID: 30123121
Document Type: Journal Article
Research Program: DEEP - Extreme Scale Technologies; Human Brain Project Specific Grant Agreement 1; Theory, Modelling and Simulation; (Dys-)Function and Plasticity; Connectivity and Activity; Mathematical modelling of the emergence and suppression of pathological activity states in the basal ganglia-cortex loops
Link: Get full text (Open Access)
Abstract: Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., a wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.