Notable achievements:
  * 1986: Deep Neural Networks were first used for my [[#mphil86|MPhil thesis]]. These had the form o_i = f(\sum_{j=n}^{i-1} w_{ij} o_j) and so were very deep, having only one unit per layer, yet trained well because they had skip connections stretching back to the input.
  * 1987: Real Time Recurrent Learning was first published in [[https://papers.nips.cc/paper/42-static-and-dynamic-error-propagation-networks-with-application-to-speech-coding|Static and dynamic error propagation networks with application to speech coding]] and later by R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks", Neural Computation, 1(2):270–280, June 1989, ISSN 0899-7667, and by Barak Pearlmutter.
Full list: