This title appears in the Scientific Report 2022
Statistical decomposition of feed-forward neural networks: Transfer of information between correlation functions
Personal Name(s): Fischer, Kirsten (Corresponding author); René, Alexandre; Keup, Christian; Layer, Moritz; Dahmen, David; Helias, Moritz
Contributing Institute: JARA-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6); Computational and Systems Neuroscience (INM-6)
Imprint: 2022
Conference: INM IBI Retreat 2022, Jülich (Germany), 2022-10-18 to 2022-10-19
Document Type: Poster
Research Program: Advanced Computing Architectures; Theory of multi-scale neuronal networks; Transparent Deep Learning with Renormalized Flows; Emerging NC Architectures; Computational Principles; Recurrence and stochasticity for neuro-inspired computation
Uncovering principles of information processing in neural systems continues to be an active field of research. For the visual system it is well known that signals are processed in a hierarchical manner [1,2]. Feed-forward networks are commonly used models in machine learning that perform hierarchical computations. Here we study deep feed-forward networks with the aim of deducing general functional aspects of such systems. These networks implement mappings between probability distributions, where the probability distributions are iteratively transformed from layer to layer. We develop a formalism for expressing signal transformations in each layer as information transfers between different orders of correlation functions. We show that the processing within internal network layers is captured by correlations up to second order. In addition, we demonstrate how the input layer also extracts higher-order correlations from the data. Thus, by presenting different correlation orders in the input, we identify key statistics in the data. As a next step, we consider recurrent time-continuous networks, reminiscent of biological neuronal networks (NeuralODEs, [3]). We derive a Fokker-Planck equation describing the evolution of the probability distribution. This formulation allows us to study time-dependent information flow between different interaction terms. In summary, this work provides insights into functional principles of information processing in neural networks.

References
[1] Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106.
[2] Zhuang, C., Yan, S., Nayebi, A., Schrimpf, M., Frank, M. C., DiCarlo, J. J., & Yamins, D. L. (2021). Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3), e2014196118.
[3] Chen, R. T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31.
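The abstract's claim that internal-layer processing is captured by correlations up to second order can be made concrete with a minimal numerical sketch. The following is not the authors' code; it is an illustrative example, assuming a random feed-forward network with tanh nonlinearity, that propagates a batch of inputs layer by layer and estimates the first-order (mean) and second-order (covariance) correlation functions of the activations in each layer:

```python
# Illustrative sketch (not the authors' implementation): propagate inputs
# through a random feed-forward network and estimate the first- and
# second-order correlation functions (mean and covariance) per layer.
import numpy as np

rng = np.random.default_rng(0)

def forward_stats(x, widths, phi=np.tanh):
    """Return a list of (mean, covariance) of activations for each layer."""
    stats = []
    h = x
    for n_out in widths:
        n_in = h.shape[1]
        # Random weights with 1/sqrt(n_in) scaling, a common initialization.
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        h = phi(h @ W)                      # layer activation
        mu = h.mean(axis=0)                 # first-order correlation function
        C = np.cov(h, rowvar=False)         # second-order correlation function
        stats.append((mu, C))
    return stats

# Batch of 1000 inputs with nontrivial second-order statistics.
x = rng.multivariate_normal(np.zeros(8), 0.5 * np.eye(8) + 0.5, size=1000)
stats = forward_stats(x, widths=[16, 16, 4])
mu_last, C_last = stats[-1]
print(mu_last.shape, C_last.shape)  # (4,) (4, 4)
```

Tracking how the estimated mean and covariance change from layer to layer is the empirical counterpart of the layer-wise transformation of correlation functions described in the abstract; the decomposition formalism itself is developed analytically in the work.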