This title appears in the Scientific Report 2021
Investigating the role of Chaos and characteristic time scales in Reservoir Computing
Personal Name(s): Schmidt, Marvin (Corresponding author)
Thesis Advisors: Pinna, Daniele / Zajzon, Barna / Mokrousov, Yuriy / Morrison, Abigail
Contributing Institutes: Computational and Systems Neuroscience (INM-6); Jara-Institut Brain Structure-Function Relationships (INM-10); Computational and Systems Neuroscience (IAS-6)
Imprint: 2021
Physical Description: 47 p.
Dissertation Note: Master's thesis, RWTH Aachen University, 2021
Document Type: Master Thesis
Research Program: Emerging NC Architectures Computational Principles
JuSER publications portal
Dynamical systems suited for Reservoir Computing (RC) should be able to both retain information for sufficiently long times and exhibit a rich representation of the input driving. However, selecting and tuning system parameters, as well as choosing a sufficient input encoding, has yet to be standardized as a procedure. This work attempts to make progress in this regard by focusing on the input and dynamical timescales in RC systems. Two qualitatively different models are studied: an adaptation of the Fermi-Pasta-Ulam-Tsingou model made suitable for Reservoir Computing, and sparsely connected networks of spiking excitatory/inhibitory neurons. By comparing input injection frequencies to system relaxation timescales, and measuring their effect on the degree of chaos in the dynamical system, a relationship is established between timescales and performance on a short-term-memory task and a parity-check task. We find that both systems rely on a close match between their relaxation timescales and the input frequency in order to memorize and make precise use of the most recent information in the input. This was consistent across both models, suggesting greater generalizability. Furthermore, we find that a high degree of chaos degrades memory in the Fermi-Pasta-Ulam-Tsingou model while simultaneously enhancing performance on the parity-check task, suggesting the edge of chaos as an optimal tradeoff. The networks of spiking neurons show similar performance on both tasks, suggesting that their nonlinear computations happen on a much faster timescale.
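The short-term-memory and parity-check benchmarks mentioned in the abstract are standard RC tasks with simple target definitions. As a minimal illustrative sketch (not the thesis's actual implementation, and with the delay parameter `k` chosen freely here), the target signals for a binary input stream can be generated as follows:

```python
import numpy as np

def memory_target(u, k):
    """Short-term-memory task: reproduce the input from k steps ago.

    Targets before step k are zero-padded, since no input exists yet.
    """
    y = np.zeros_like(u)
    y[k:] = u[:-k]
    return y

def parity_target(u, k):
    """Parity-check task: parity (XOR) of the most recent k binary inputs.

    Unlike pure recall, this target is a nonlinear function of the
    input history, which is why it probes computation rather than memory.
    """
    y = np.zeros_like(u)
    for t in range(k - 1, len(u)):
        y[t] = np.sum(u[t - k + 1 : t + 1]) % 2
    return y

# Example: a short random binary input stream.
rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=10)
print("input :", u)
print("memory:", memory_target(u, 2))
print("parity:", parity_target(u, 2))
```

In a full RC experiment, a linear readout would be trained on the reservoir state to reproduce these targets; the abstract's finding is that recall quality depends on matching relaxation timescales to input frequency, while parity performance benefits from stronger chaos.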