This title appears in the Scientific Report 2017.
What computations are ‘brain-inspired’? - A view on neural information processing, functionality and learning
Personal Name(s): Jitsev, Jenia (Corresponding author)
Contributing Institute: Jülich Supercomputing Center (JSC)
Imprint: 2017
Conference: 3rd International Workshop on Brain-inspired Computing, Cetraro (Italy), 2017-06-12 to 2017-06-16
Document Type: Talk (non-conference)
Research Program: Supercomputing and Modelling for the Human Brain; Data-Intensive Science and Federated Computing
There has been a long tradition of casting models of information processing that arrange elementary generic computing units in multiple stacked layers, performing cascades of transformations on incoming input, as neurally inspired or brain-like. However, what justifies calling a particular model neurally inspired is quite arbitrary and inconsistent. Often, a very limited set of properties of biological neural networks, such as their hierarchical processing organization or the spiking of single neurons, is taken to back up the claim of neural plausibility, while a vast range of other presumably relevant properties is ignored entirely, e.g. the diversity of single-cell neuronal dynamics, short-term synaptic plasticity, or signal processing in active dendrites, to name only a few. The same concern applies to architectural features of brain networks, such as the various loops through subcortical structures, e.g. the thalamus or basal ganglia, and to fundamental modes of brain operation such as sleep. Moreover, adding clearly biologically implausible features to such network models in order to enforce a desired functionality further obscures the terminology of brain-inspired computation. Furthermore, when certain brain-like functionality is claimed, the inputs and tasks on which the networks demonstrate their capabilities often have a very narrow and artificial character that is implausible in the real-world settings where nervous systems operate. Here, to provide a perspective toward a consistent framework for building neurally inspired information processing models, I put forward the view that, in the face of daunting neural diversity on the one hand and the still quite limited techniques for recording large-scale activity across multiple spatial scales of the brain on the other, it is necessary to establish a solid foundation for the functional and computational essence of brain phenomenology before attempting to construct full-scale neural network models.
Working out this functional and computational essence means specifying the generic type of problems a brain has to solve in a natural environment, together with the type of computations involved and the type of complex real-world input available to the brain. Error-driven learning, defined within a closed sensorimotor loop of forming and correcting predictions about sensory input and the hidden variables most likely causing it, is one candidate framework for establishing such a generic functional description of brain-like information processing. Only then will it become possible to interpret different neurophysiological observations of the biological neural substrate properly and to arrive at basic canonical models in which both neural and functional properties reflect the principles of brain-like information processing to a satisfactory degree.
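The closed-loop, error-driven learning mentioned above can be sketched in miniature. The following toy example is an illustration added here, not part of the original abstract: a single predictive unit repeatedly forms a prediction of its sensory input from a hidden cause, and corrects its internal weight purely from the resulting prediction error (a simple delta rule). The generative model and all parameter choices are assumptions made for the sketch.

```python
import numpy as np

# Toy sketch (not from the talk): error-driven learning of a linear
# prediction of sensory input generated by a hidden cause.
rng = np.random.default_rng(0)

true_w = 2.5   # hidden generative weight, unknown to the learner
w = 0.0        # learner's internal estimate, shaped only by errors
lr = 0.05      # learning rate for the error-driven correction

for _ in range(500):
    cause = rng.uniform(-1.0, 1.0)                     # hidden variable
    sensory = true_w * cause + 0.01 * rng.standard_normal()
    prediction = w * cause                             # forward prediction
    error = sensory - prediction                       # prediction error
    w += lr * error * cause                            # delta-rule update

print(w)  # estimate converges toward the hidden generative weight
```

The point of the sketch is the loop structure itself: the learner never sees `true_w` directly; it only predicts, observes the mismatch, and corrects, which is the generic closed-loop scheme the abstract proposes as a candidate functional description.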