This title appears in the Scientific Report 2023.
Please use the identifier http://dx.doi.org/10.34734/FZJ-2023-02506 in citations.
Dendritic modulation for multitask representation learning in deep feedforward networks
Saved in:
Personal Name(s): Wybo, Willem (Corresponding author); Tran, Viet Anh Khoa; Tsai, Matthias; Illing, Bernd; Jordan, Jakob; Senn, Walter; Morrison, Abigail
Contributing Institute: Computational and Systems Neuroscience (INM-6); Jara-Institut Brain structure-function relationships (INM-10); Computational and Systems Neuroscience (IAS-6)
Imprint: 2023
DOI: 10.34734/FZJ-2023-02506
Conference: Cosyne 2023, Montreal (Canada), 2023-03-08 - 2023-03-16
Document Type: Poster
Research Program: Towards an integrated data science of complex natural systems; Human Brain Project Specific Grant Agreement 3; Human Brain Project Specific Grant Agreement 2; Human Brain Project Specific Grant Agreement 1; Computational Principles; Recurrence and stochasticity for neuro-inspired computation
Link: OpenAccess (Publikationsportal JuSER)
Feedforward sensory processing in the brain is generally construed as proceeding through a hierarchy of layers, each constructing increasingly abstract and invariant representations of sensory inputs. This interpretation is at odds with the observation that activity in sensory processing layers is heavily modulated by contextual signals, such as cross-modal information or internal mental states [1]. While it is tempting to assume that such modulations bias the feedforward processing pathway towards detection of relevant input features given a context, this induces a dependence on the contextual state in the hidden representations at any given layer. The next processing layer in the hierarchy thus has to be able to extract the relevant information for each possible context. For this reason, most machine learning approaches to multitask learning apply task-specific output networks to context-independent representations of the inputs, generated by a shared trunk network.

Here, we show that a network motif in which a layer of modulated hidden neurons targets an output neuron through task-independent feedforward weights solves multitask learning problems, and that this motif can be implemented with biophysically realistic neurons that receive context-modulating synaptic inputs on dendritic branches. The dendritic synapses in this motif evolve according to a Hebbian plasticity rule modulated by a global error signal. We then embed such a motif in each layer of a deep feedforward network, where it generates task-modulated representations of sensory inputs. To learn the feedforward weights to the next layer in the network, we apply a contrastive learning objective that predicts whether representations originate from different inputs or from different task-modulations of the same input. This self-supervised approach results in deep representation learning of feedforward weights that accommodate a multitude of contexts, without relying on error backpropagation between layers.
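To make the abstract's ingredients concrete, the following NumPy sketch illustrates (i) a hidden layer whose feedforward drive is gated by a context-dependent dendritic modulation, (ii) a Hebbian-style update of the dendritic weights scaled by a global scalar error, and (iii) a pairwise contrastive objective on two representations. All names, dimensions, the multiplicative gating, and the gradient-style form of the update are illustrative assumptions for exposition, not the authors' actual model or plasticity rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): n_in inputs, n_hid modulated hidden
# units, n_ctx contexts, each supplying one dendritic modulation vector.
n_in, n_hid, n_ctx = 20, 50, 3

W = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))   # shared feedforward weights
M = rng.normal(0, 0.1, (n_ctx, n_hid))                # dendritic (contextual) weights
w_out = rng.normal(0, 1 / np.sqrt(n_hid), n_hid)      # task-independent readout

def hidden(x, ctx):
    """Hidden activity: somatic feedforward drive gated multiplicatively by a
    context-dependent dendritic modulation (one common modelling choice)."""
    return np.maximum(0.0, W @ x) * (1.0 + np.tanh(M[ctx]))

def hebbian_dendritic_update(x, ctx, target, lr=0.01):
    """Dendritic plasticity: a local Hebbian-like term (presynaptic drive
    times modulation sensitivity) scaled by a single global error signal."""
    h = hidden(x, ctx)
    err = target - w_out @ h                  # global scalar error
    drive = np.maximum(0.0, W @ x)            # local presynaptic-like factor
    # Error-modulated Hebbian change of the context's dendritic weights.
    M[ctx] += lr * err * w_out * drive * (1.0 - np.tanh(M[ctx]) ** 2)

def contrastive_loss(h_a, h_b, same_input, temperature=0.1):
    """Binary contrastive objective on a pair of representations: predict
    whether they are two task-modulations of the same input (pull together)
    or stem from different inputs (push apart)."""
    sim = h_a @ h_b / (np.linalg.norm(h_a) * np.linalg.norm(h_b) + 1e-9)
    p = 1.0 / (1.0 + np.exp(-sim / temperature))
    y = 1.0 if same_input else 0.0
    return -(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))
```

Repeatedly calling `hebbian_dendritic_update` on a fixed input shrinks the global error for that context while leaving the shared feedforward weights `W` untouched; in the abstract's scheme, `W` would instead be shaped layer-by-layer by the contrastive objective, without backpropagating errors between layers.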