This title appears in the Scientific Report 2023.
Please use the identifier http://dx.doi.org/10.34734/FZJ-2023-03600 in citations.
Gradient-Free Optimization of Artificial and Biological Networks using Learning to Learn
| Personal Name(s): | Yegenoglu, Alper (Corresponding author) |
|---|---|
| Contributing Institute: | Jülich Supercomputing Center; JSC |
| Imprint: | Jülich : Forschungszentrum Jülich GmbH Zentralbibliothek, Verlag, 2023 |
| Physical Description: | II, 136 pages |
| Dissertation Note: | Dissertation, RWTH Aachen University, 2023 |
| ISBN: | 978-3-95806-719-6 |
| DOI: | 10.34734/FZJ-2023-03600 |
| Document Type: | Book / Dissertation / PhD Thesis |
| Research Program: | ohne Topic (no topic assigned) |
| Series Title: | Schriften des Forschungszentrums Jülich, IAS Series, Vol. 55 |
| Link: | OpenAccess; Publikationsportal JuSER |
Understanding intelligence and how it allows humans to learn, make decisions, and form memories is a long-standing quest in neuroscience. Our brain is formed by networks of neurons and other cells; however, it is not clear how those networks are trained to solve specific tasks. In machine learning and artificial intelligence, it is common to train and optimize neural networks with gradient descent and backpropagation. How to transfer this optimization strategy to biological, spiking neural networks (SNNs) is still a matter of research. Due to the binary communication scheme between the neurons of an SNN via spikes, a direct application of gradient descent and backpropagation is not possible without further approximations. In my work, I present gradient-free optimization techniques that are directly applicable to artificial and biological neural networks. I utilize metaheuristics, such as genetic algorithms and the ensemble Kalman filter, to optimize network parameters and train networks to solve specific tasks. The optimization is embedded into the concept of meta-learning, or learning to learn. The learning-to-learn concept consists of a two-loop optimization procedure: in the first, inner loop, the algorithm or network is trained on a family of tasks, and in the second, outer loop, the hyper-parameters and parameters of the network are optimized.
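The two-loop procedure described in the abstract can be sketched in a minimal, self-contained form. The sketch below is an illustrative assumption, not the thesis's actual implementation: the task family, the hill-climbing inner loop, the population size, and the mutation scale are all hypothetical choices made for brevity. A genetic algorithm serves as the gradient-free outer-loop metaheuristic that tunes a single hyper-parameter (the inner learner's step size) across a family of tasks.

```python
import random

random.seed(42)

def sample_task():
    """A task is a 1-D quadratic f(x) = (x - c)^2 with a random optimum c."""
    c = random.uniform(-5.0, 5.0)
    return lambda x: (x - c) ** 2

def inner_loop(step_size, task, n_steps=30):
    """Inner loop: a gradient-free hill climber 'learns' one task.

    Proposes Gaussian perturbations and keeps only improvements,
    so no gradient information is ever required.
    """
    x = 0.0
    best = task(x)
    for _ in range(n_steps):
        candidate = x + random.gauss(0.0, step_size)
        if task(candidate) < best:
            x, best = candidate, task(candidate)
    return best  # final loss on this task

def fitness(step_size, n_tasks=10):
    """Average final loss over a family of freshly sampled tasks."""
    return sum(inner_loop(step_size, sample_task())
               for _ in range(n_tasks)) / n_tasks

def outer_loop(n_generations=20, pop_size=12):
    """Outer loop: a genetic algorithm over the hyper-parameter.

    Selection keeps the better half of the population; mutation
    perturbs the survivors to produce the next generation.
    """
    population = [random.uniform(0.01, 3.0) for _ in range(pop_size)]
    for _ in range(n_generations):
        ranked = sorted(population, key=fitness)
        parents = ranked[: pop_size // 2]                       # selection
        children = [max(1e-3, p + random.gauss(0.0, 0.1))       # mutation
                    for p in parents]
        population = parents + children
    return sorted(population, key=fitness)[0]

best_step = outer_loop()
print(f"best inner-loop step size: {best_step:.3f}")
```

Any population-based metaheuristic could replace the genetic algorithm in `outer_loop` (for instance, an ensemble-Kalman-filter update over the population), which is what makes the two-loop scheme attractive for networks where backpropagation is unavailable.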