Theoretical Deep Learning. The Information Bottleneck method. Part 2

In this class we continue discussing how the information bottleneck framework can be used to study neural networks. In particular, we learn how to prevent NNs from overfitting by introducing a specific penalty term into the loss function, and we reveal an objective very similar to the evidence lower bound (ELBO) from Bayesian statistics.
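As a minimal sketch of the idea (assuming the variational information bottleneck formulation of Alemi et al., which may differ from the exact derivation given in class): the penalty is a KL term that upper-bounds $\beta \, I(T;X)$, yielding a loss of the form $\mathbb{E}[-\log q(y\mid t)] + \beta \, \mathrm{KL}\big(p(t\mid x)\,\|\,r(t)\big)$, which mirrors the negative ELBO. The snippet below assumes a stochastic encoder that outputs a diagonal Gaussian $(\mu, \log\sigma^2)$; the names `mu`, `log_var`, and the value of `beta` are illustrative, not taken from the lecture.

```python
import torch
import torch.nn.functional as F

def vib_loss(logits, targets, mu, log_var, beta=1e-3):
    """Variational information bottleneck loss (sketch).

    logits:  predictions decoded from the stochastic representation t
    targets: class labels
    mu, log_var: parameters of the encoder distribution p(t|x) = N(mu, diag(exp(log_var)))
    beta:    trade-off between prediction and compression
    """
    # Prediction term: analogous to the ELBO's expected log-likelihood E_q[log p(y|t)].
    ce = F.cross_entropy(logits, targets)
    # Compression penalty: closed-form KL( N(mu, sigma^2) || N(0, I) ),
    # an upper bound on I(T; X); analogous to the ELBO's KL regularizer.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1).mean()
    return ce + beta * kl
```

With `beta = 0` this reduces to ordinary cross-entropy training; increasing `beta` compresses the representation more aggressively, which is the overfitting-prevention mechanism discussed above.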