Banach Space Representer Theorems for Neural Networks
Prof. Robert D. Nowak, University of Wisconsin-Madison
Abstract: This talk presents a variational framework for understanding the properties of functions learned by neural networks fit to data. The framework is based on total variation semi-norms defined in the Radon domain, which is naturally suited to the analysis of neural activation functions (ridge functions). Finding a function that fits a dataset while having a small semi-norm is posed as an infinite-dimensional variational optimization. We derive a representer theorem showing that finite-width neural networks are solutions to the variational problem. The representer theorem is reminiscent of the classical reproducing kernel Hilbert space representer theorem, but we show that neural networks are solutions in a non-Hilbertian Banach space. While the learning problems are posed in an infinite-dimensional function space, similar to kernel methods, they can be recast as finite-dimensional neural network training problems. These neural network training problems are regularized with variants of the well-known weight decay and path-norm regularizers.
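To make the setup concrete, here is a rough LaTeX sketch of the flavor of the variational problem and of the form the representer theorem takes, following the related Parhi-Nowak paper; the exact operators, constants, and the affine null-space term are assumptions on my part, not quoted from the talk.

    % Sketch (notation assumed): data fitting plus a Radon-domain TV semi-norm
    \min_{f} \; \sum_{i=1}^{N} \ell\bigl(y_i, f(x_i)\bigr) \;+\; \lambda\,\|f\|_{\mathcal{R}\mathrm{TV}},
    \qquad
    \|f\|_{\mathcal{R}\mathrm{TV}} \;=\; \bigl\|\partial_t^{2}\,\Lambda^{d-1}\,\mathcal{R} f\bigr\|_{\mathcal{M}(\mathbb{S}^{d-1}\times\mathbb{R})},

    % where \mathcal{R} is the Radon transform, \Lambda^{d-1} a ramp filter,
    % and \mathcal{M} a space of finite Radon measures. The representer theorem
    % asserts that some solution is a finite-width ridge network:
    f(x) \;=\; \sum_{k=1}^{K} v_k\,\sigma\bigl(w_k^{\top}x - b_k\bigr) \;+\; u^{\top}x + c,
    \qquad K \le N,

with sigma the ReLU. For a network of this form the semi-norm reduces to the path norm sum_k |v_k| ||w_k||_2, which is the bridge between the infinite-dimensional problem and ordinary weight-decay training.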
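And a minimal numerical sketch of the finite-dimensional recast mentioned at the end of the abstract: training a single-hidden-layer ReLU network with weight decay applied to the input and output weights only (biases unregularized, affine skip term omitted for brevity). All names, hyperparameters, and the toy data below are illustrative assumptions, not code from the talk.

    # Hypothetical sketch of the finite-dimensional training problem.
    # At the per-neuron rescaling optimum (v_k, w_k) -> (v_k/a, a*w_k), the
    # quadratic penalty below equals the path norm sum_k |v_k| * ||w_k||_2.
    import torch

    torch.manual_seed(0)
    N, d, width, lam = 64, 2, 100, 1e-3

    X = torch.randn(N, d)
    y = torch.sin(X.sum(dim=1, keepdim=True))             # toy regression target

    W = torch.randn(width, d, requires_grad=True)         # input weights w_k
    b = torch.zeros(width, requires_grad=True)            # biases b_k (unregularized)
    v = (torch.randn(width, 1) / width).requires_grad_()  # output weights v_k

    opt = torch.optim.Adam([W, b, v], lr=1e-2)
    for step in range(2000):
        opt.zero_grad()
        hidden = torch.relu(X @ W.t() - b)   # sigma(w_k^T x - b_k), shape (N, width)
        pred = hidden @ v                    # network output, shape (N, 1)
        fit = ((pred - y) ** 2).mean()
        # weight decay on W and v only, matching the variational regularizer
        reg = 0.5 * lam * (W.pow(2).sum() + v.pow(2).sum())
        (fit + reg).backward()
        opt.step()

    path_norm = (v.detach().abs().squeeze() * W.detach().norm(dim=1)).sum()
    print(f"mse={fit.item():.4f}  path_norm={path_norm.item():.4f}")

The design point this illustrates is the one the abstract emphasizes: although the optimization is posed over an infinite-dimensional Banach space, the learner one actually runs is a standard finite-width network trained with a weight-decay-style penalty.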