This title appears in the Scientific Report 2022.
Please use the identifier http://dx.doi.org/10.1186/s13662-022-03686-9 in citations.
Please use the identifier http://hdl.handle.net/2128/30882 in citations.
Path classification by stochastic linear recurrent neural networks
Personal Name(s): Boutaib, Youness (Corresponding author); Bartolomaeus, Wiebke; Nestler, Sandra; Rauhut, Holger (Last author)
Contributing Institute: Computational and Systems Neuroscience (INM-6); Jara-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6)
Published in: Advances in continuous and discrete models, 2022 (2022) 1, S. 13
Imprint: London: BioMed Central, 2022
DOI: 10.1186/s13662-022-03686-9
Document Type: Journal Article
Research Program: Recurrence and stochasticity for neuro-inspired computation; Emerging NC Architectures; Computational Principles; Neuroscientific Foundations
Link: OpenAccess
Publication portal: JuSER
Abstract: We investigate the functioning of a classifying biological neural network from the perspective of statistical learning theory, modelled, in a simplified setting, as a continuous-time stochastic recurrent neural network (RNN) with the identity activation function. In the purely stochastic (robust) regime, we give a generalisation error bound that holds with high probability, thus showing that the empirical risk minimiser is the best-in-class hypothesis. We show that RNNs retain a partial signature of the paths they are fed as the unique information exploited for training and classification tasks. We argue that these RNNs are easy to train and robust and support these observations with numerical experiments on both synthetic and real data. We also show a trade-off phenomenon between accuracy and robustness.
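The model class described in the abstract can be illustrated with a minimal sketch: a continuous-time linear RNN driven by an input path and additive noise, discretised with an Euler-Maruyama step, followed by a linear readout for binary classification. All dimensions, parameter values, and names below are illustrative assumptions for exposition, not the paper's actual setup or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) dimensions and step size.
d_in, d_h, n_steps, dt = 2, 8, 100, 0.01

# Random, untrained parameters of the linear RNN.
W = rng.normal(scale=0.3, size=(d_h, d_h))  # recurrent weights
U = rng.normal(size=(d_h, d_in))            # input weights
sigma = 0.1                                 # noise intensity
w_out = rng.normal(size=d_h)                # linear readout

def classify(path, noisy=True):
    """Euler-Maruyama discretisation of the linear SDE
    dh = (W h + U x(t)) dt + sigma dB_t, with identity activation,
    followed by the sign of a linear readout of the final state."""
    h = np.zeros(d_h)
    for x_t in path:
        noise = sigma * np.sqrt(dt) * rng.normal(size=d_h) if noisy else 0.0
        h = h + dt * (W @ h + U @ x_t) + noise
    return 1 if w_out @ h > 0 else -1

# Example: classify a smooth synthetic two-dimensional path.
t = np.linspace(0.0, 1.0, n_steps)
path = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
label = classify(path)
```

Setting `noisy=False` gives the deterministic regime; the paper's generalisation bound concerns the stochastic (robust) regime, which this toy loop only gestures at.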