arXiv:1802.01528 [cs.LG]

The Matrix Calculus You Need For Deep Learning

Terence Parr, Jeremy Howard

Published 2018-02-05, Version 1

This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks. We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep networks in practice; rather, this material is for those who are already familiar with the basics of neural networks and wish to deepen their understanding of the underlying math. Don't worry if you get stuck at some point along the way; just go back, reread the previous section, and try writing down and working through some examples. And if you're still stuck, we're happy to answer your questions in the Theory category at forums.fast.ai. Note: there is a reference section at the end of the paper summarizing all the key matrix calculus rules and terminology discussed here.
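
As a concrete taste of the kind of derivative the paper teaches you to derive, here is a minimal sketch in Python. It is our illustration, not code or an example taken from the abstract: the single-neuron function y = max(0, w·x + b), the use of numpy, and all variable names below are assumptions chosen for demonstration. The chain-rule gradient worked out by hand is checked against central finite differences.

    import numpy as np

    # Hypothetical single-neuron example (an assumption, not from the
    # abstract): y = max(0, w.x + b), an affine step followed by a ReLU.
    rng = np.random.default_rng(0)
    w = rng.normal(size=5)
    x = rng.normal(size=5)
    b = 1.0

    def y(w):
        # Scalar output of the neuron for a given weight vector.
        return max(0.0, float(w @ x) + b)

    # Chain rule by hand: dy/dw = x when w.x + b > 0, else the zero vector.
    grad_analytic = x if (w @ x + b) > 0 else np.zeros_like(x)

    # Central finite differences as an independent check of each partial.
    eps = 1e-6
    grad_numeric = np.array([
        (y(w + eps * e) - y(w - eps * e)) / (2 * eps)
        for e in np.eye(len(w))
    ])

    assert np.allclose(grad_analytic, grad_numeric, atol=1e-4)
    print("analytic gradient:", grad_analytic)

Agreement between the two gradients is the standard sanity check for a hand-derived gradient; the same chain-rule pattern scales up to the vector and matrix derivatives the paper develops.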

Comments: PDF version of the mobile/web-friendly version at http://parrt.cs.usfca.edu/doc/matrix-calculus/index.html
Categories: cs.LG, stat.ML
Related articles:
arXiv:1801.07648 [cs.LG] (Published 2018-01-23)
Clustering with Deep Learning: Taxonomy and New Methods
arXiv:1705.03341 [cs.LG] (Published 2017-05-09)
Stable Architectures for Deep Neural Networks
arXiv:1710.09513 [cs.LG] (Published 2017-10-26)
Maximum Principle Based Algorithms for Deep Learning