Feedback Alignment methods for training neural networks

Abstract

In order to compute weight updates, backpropagation uses complete knowledge of the downstream weights in the network. In [1] it is shown that this requirement can be significantly relaxed by using fixed random weight matrices to transport error signals in the feedback path. The resulting algorithm, termed Feedback Alignment, performs similarly to backpropagation on several machine learning tasks. In [2] it is shown that even a direct feedback path from the final error to each individual layer results in the learning of useful features and competitive performance compared to backpropagation. In this talk I will briefly introduce the method, explain how learning arises, and discuss experimental results. The talk is based on the following works:

[1] T. P. Lillicrap, D. Cownden, D. B. Tweed, C. J. Akerman, Random feedback weights support learning in deep neural networks.

[2] A. Nøkland, Direct Feedback Alignment Provides Learning in Deep Neural Networks.
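To make the mechanism concrete, below is a minimal NumPy sketch of the idea from [1], assuming a two-layer tanh regression network on toy data; the names (e.g. `B_fb`) and hyperparameters are illustrative, not from the papers. The only change relative to backpropagation is that the error is sent backwards through a fixed random matrix instead of the transpose of the forward weights. (With a single hidden layer, Feedback Alignment and the Direct Feedback Alignment of [2] coincide.)

```python
# Minimal sketch of Feedback Alignment, assuming a two-layer tanh network
# trained on a toy regression task; details are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) on [-2, 2]
X = rng.uniform(-2, 2, size=(256, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 32, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
B_fb = rng.normal(0, 0.5, (n_out, n_hid))  # fixed random feedback matrix, never trained

lr = 0.05
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2

    # Output error (gradient of the mean-squared error at the output)
    e = y_hat - Y

    # Backpropagation would use W2.T here; Feedback Alignment transports
    # the error through the fixed random matrix B_fb instead.
    delta_h = (e @ B_fb) * (1 - h ** 2)  # tanh' = 1 - tanh^2

    # Local weight updates, same form as in backpropagation
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```

In this sketch the forward weights gradually "align" with the fixed feedback matrix, which is the effect analysed in [1] that makes the random feedback path deliver useful teaching signals.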

Date
Mar 3, 2021 4:00 PM
Event
Intelligent Systems group seminar, Groningen, March 2021
Location
Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence
Nijenborgh 9, 9700 AG Groningen

I was inspired by a talk by Timothy Lillicrap, which you can find below.

Michiel Straat
Postdoc in Machine Learning

My research interests include Machine Learning, Computational Intelligence and Statistical Physics of Learning.