This title appears in the Scientific Report 2023.
Please use the identifier http://dx.doi.org/10.3389/fnint.2023.935177 in citations.
Please use the identifier http://dx.doi.org/10.34734/FZJ-2023-02477 in citations.
Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning
Saved in:
Personal Name(s): Zajzon, Barna (corresponding author); Duarte, Renato; Morrison, Abigail
Contributing Institute: JARA-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6); Computational and Systems Neuroscience (INM-6)
Published in: Frontiers in Integrative Neuroscience, 17 (2023), art. 935177
Imprint: Lausanne: Frontiers Research Foundation, 2023
DOI: 10.3389/fnint.2023.935177
DOI: 10.34734/FZJ-2023-02477
Document Type: Journal Article
Research Program: Open-Access-Publikationskosten / 2022 - 2024 / Forschungszentrum Jülich (OAPKFZJ); Recurrence and stochasticity for neuro-inspired computation; Computational Principles
Link: OpenAccess
To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on these, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We demonstrate a limitation of the model, namely the hard-wiring of the sequence order in the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically plausible constraints.
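Reward-based plasticity rules of the kind the abstract describes are commonly formulated as three-factor updates: a synapse-local eligibility trace, set by recent pre/post activity, is converted into a weight change only when a global reward signal arrives. The sketch below is a minimal pure-Python illustration of that general scheme under stated assumptions; the function name, parameters, and decay dynamics are hypothetical and are not the specific rule used in the model discussed above.

```python
# Illustrative three-factor, reward-modulated update (hypothetical sketch):
# delta_w = lr * reward * eligibility, with eligibility traces decaying
# between learning steps. Only recently active synapses are changed.

def reward_update(weights, eligibility, reward, lr=0.1, decay=0.9):
    """One learning step: potentiate/depress eligible synapses by the reward,
    then decay all eligibility traces toward zero."""
    new_weights = [w + lr * reward * e for w, e in zip(weights, eligibility)]
    new_elig = [decay * e for e in eligibility]
    return new_weights, new_elig

weights = [0.5, 0.5]
elig = [1.0, 0.0]  # only synapse 0 was active shortly before the reward
weights, elig = reward_update(weights, elig, reward=1.0)
# synapse 0 is potentiated; synapse 1, with zero eligibility, is unchanged
```

Because the reward factor is global while eligibility is synapse-local, sequence-specific credit assignment falls out of which synapses were recently active, not of where the reward is delivered.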