In order to compute weight updates, backpropagation uses complete knowledge of the downstream weights in the network. In [1] it is shown that this requirement can be significantly relaxed by using fixed random weight matrices to transport error signals in the feedback path. The resulting algorithm, termed Feedback Alignment, performs similarly to backpropagation on several machine learning tasks. In [2] it is shown that even a direct feedback path from the final error to each individual layer yields useful features and competitive performance compared to backpropagation. In this talk I will briefly introduce the method, explain how learning arises, and discuss experimental results. The talk is based on:
[1] T. P. Lillicrap, D. Cownden, D. B. Tweed, C. J. Akerman, Random feedback weights support learning in deep neural networks
[2] A. Nøkland, Direct Feedback Alignment Provides Learning in Deep Neural Networks
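To make the mechanism concrete, here is a minimal NumPy sketch of the feedback-alignment update on a network with a single hidden layer (in this one-hidden-layer case, Feedback Alignment and the direct feedback of [2] coincide; with more hidden layers, [2] projects the output error to each layer through its own fixed random matrix). The layer sizes, tanh nonlinearity, squared-error loss, and toy data are illustrative choices of mine, not taken from [1] or [2].

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 32, 2

# Forward weights (learned) and a fixed random feedback matrix B.
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_out, n_hid))   # fixed; never updated

lr = 0.05
x = rng.normal(size=(n_in, 64))           # toy input batch
y = rng.normal(size=(n_out, 64))          # toy targets

for step in range(200):
    # Forward pass
    h1 = np.tanh(W1 @ x)
    y_hat = W2 @ h1

    # Output error (gradient of the squared-error loss w.r.t. y_hat)
    e = y_hat - y

    # Feedback alignment: transport the error with the fixed random B.T;
    # backpropagation would use W2.T here instead.
    delta1 = (B.T @ e) * (1.0 - h1**2)    # tanh derivative

    # Weight updates (averaged over the batch)
    W2 -= lr * e @ h1.T / x.shape[1]
    W1 -= lr * delta1 @ x.T / x.shape[1]
```

The only change relative to backpropagation is the single line computing `delta1`: the transpose of the forward weight matrix is replaced by a fixed random matrix, which is what [1] argues is sufficient for the forward weights to "align" with the feedback and for learning to proceed.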
I was inspired by a talk by Timothy Lillicrap, which you can find below.