This title appears in the Scientific Report 2021.
Please use the identifier: http://hdl.handle.net/2128/28017 in citations.
Please use the identifier: http://dx.doi.org/10.3389/fncom.2021.543872 in citations.
Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks
Personal Name(s): Weidel, Philipp; Duarte, Renato (Corresponding author); Morrison, Abigail
Contributing Institute: JARA-HPC; Jara-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6); Computational and Systems Neuroscience (INM-6)
Published in: Frontiers in Computational Neuroscience, 15 (2021), article 543872
Imprint: Lausanne: Frontiers Research Foundation, 2021
DOI: 10.3389/fncom.2021.543872
Document Type: Journal Article
Research Program: Computational Principles; Functional Neural Architectures; Theory, modelling and simulation; Neuromorphic Computing and Network Dynamics
Link: OpenAccess
Publication portal JuSER
Abstract: Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
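The two-stage idea in the abstract — unsupervised learning on the input projections producing distinguishable cluster activity, followed by reinforcement learning on the output projections — can be illustrated with a minimal rate-based sketch. This is not the paper's spiking implementation: the network sizes, the noisy one-hot input patterns, the competitive Hebbian rule (a rate-based stand-in for STDP), and the toy feature-to-action task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative assumptions, not values from the paper.
n_inputs, n_features, n_units, n_actions = 10, 4, 20, 2

def sample(k):
    """Noisy input around feature prototype k (here: a noisy one-hot pattern)."""
    x = np.eye(n_inputs)[k] + 0.1 * rng.normal(size=n_inputs)
    return x / np.linalg.norm(x)

# 1) Unsupervised learning on the input projections: a competitive Hebbian
#    rule pulls the winning unit's weights toward the current input, so
#    units self-organize to specialize on input features.
W_in = np.array([sample(rng.integers(n_features)) for _ in range(n_units)])
for _ in range(2000):
    x = sample(rng.integers(n_features))
    w = np.argmax(W_in @ x)                  # winner-take-all "cluster"
    W_in[w] += 0.05 * (x - W_in[w])
    W_in[w] /= np.linalg.norm(W_in[w])

def cluster_activity(x):
    """One-hot activity pattern of the representation layer."""
    return np.eye(n_units)[np.argmax(W_in @ x)]

# 2) Reinforcement learning on the output projections: a reward-modulated
#    (reward x presynaptic activity) update of the readout weights.
target = lambda k: k % n_actions             # arbitrary feature -> action task
W_out = np.zeros((n_actions, n_units))
for _ in range(3000):
    k = rng.integers(n_features)
    h = cluster_activity(sample(k))
    a = rng.integers(n_actions) if rng.random() < 0.1 else np.argmax(W_out @ h)
    r = 1.0 if a == target(k) else -1.0
    W_out[a] += 0.1 * r * h

# Greedy evaluation on fresh samples.
ks = rng.integers(n_features, size=200)
acc = np.mean([np.argmax(W_out @ cluster_activity(sample(k))) == target(k)
               for k in ks])
print(f"accuracy: {acc:.2f}")
```

Because the unsupervised stage yields clearly distinguishable (here, one-hot) activity patterns, the reward-modulated readout only has to associate each cluster with an action — the same division of labor the abstract ascribes to the full spiking model.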