Hadi Ghauch: Large-scale training for deep neural networks
This talk will complement some of the lectures in the course by combining large-scale learning and deep neural networks (DNNs). We will start by discussing some challenges in optimizing DNNs, namely the complex loss surface, ill-conditioning, etc. We will then review some state-of-the-art training methods for DNNs, such as backpropagation (review), stochastic gradient descent (review), and adaptive learning-rate methods: RMSProp, AdaGrad, and Adam.
This talk was part of the Workshop on Fundamentals of Machine Learning Over Networks.
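Since the abstract only names the optimizers covered, here is a minimal NumPy sketch of the standard SGD and Adam update rules for orientation. The update formulas follow the published algorithms (Adam per Kingma & Ba, 2014); the hyperparameter values and the toy quadratic objective are illustrative assumptions, not material from the talk itself.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain stochastic gradient descent: step against the gradient."""
    return w - lr * grad

def adam_step(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first/second moment estimates
    give a per-parameter adaptive step size."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad       # first moment (running mean of grads)
    v = beta2 * v + (1 - beta2) * grad**2    # second moment (running uncentered variance)
    m_hat = m / (1 - beta1**t)               # bias correction for zero initialization
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)

# Toy usage (assumed example): minimize f(w) = ||w||^2, so grad = 2w.
w_sgd = np.array([5.0, -3.0])
w_adam = np.array([5.0, -3.0])
state = (0.0, 0.0, 0)
for _ in range(1000):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd, lr=0.1)
    w_adam, state = adam_step(w_adam, 2 * w_adam, state, lr=0.1)
print(w_sgd, w_adam)  # both converge near the origin
```

The contrast illustrates the talk's theme: SGD applies one global step size, while Adam rescales each coordinate by its gradient history, which helps on ill-conditioned loss surfaces.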