Feature Learning in Infinite-Width Neural Networks
Speaker: Greg Yang
Affiliation: Microsoft
Abstract: As its width tends to infinity, a deep neural network's behavior under gradient descent can become simplified and predictable (e.g. given by the Neural Tangent Kernel (NTK)), if it is parametrized appropriately (e.g. the NTK parametrization). However, we show that the standard and NTK parametrizations of a neural network do not admit infinite-width limits that can learn representations (i.e. features), which is crucial for pretraining and transfer learning such as with BERT. We propose simple modifications to the standard parametrization to allow for feature learning in the limit. Using the *Tensor Programs* technique, we derive explicit formulas for such limits. On Word2Vec and few-shot learning on Omniglot via MAML, two canonical tasks that rely crucially on feature learning, we compute these limits exactly. We find that they outperform both NTK baselines and finite-width networks, with the latter approaching the infinite-width feature-learning performance as width increases.
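The parametrizations discussed in the abstract differ in how multipliers scale with the width n. The sketch below, assuming PyTorch and an illustrative two-layer MLP (the class name ScaledMLP, the 1/sqrt(fan-in) hidden multiplier, and the 1/n output multiplier are assumptions for illustration, not the talk's exact prescription), contrasts an NTK-style output multiplier of 1/sqrt(n) with a feature-learning-style output multiplier of 1/n while keeping weight entries initialized at N(0, 1).

```python
import torch
import torch.nn as nn


class ScaledMLP(nn.Module):
    """Two-layer MLP with explicit width-dependent multipliers in the
    forward pass, so all weights can be initialized from a fixed N(0, 1).
    Illustrative sketch only; not the exact parametrization from the talk."""

    def __init__(self, d_in: int, width: int, d_out: int, output_mult: float):
        super().__init__()
        # Width-independent N(0, 1) initialization; width dependence is
        # carried entirely by the explicit multipliers below.
        self.w1 = nn.Parameter(torch.randn(width, d_in))
        self.w2 = nn.Parameter(torch.randn(d_out, width))
        self.output_mult = output_mult  # ~1/sqrt(n) NTK-style, ~1/n feature-learning-style

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hidden layer scaled by 1/sqrt(fan_in), as in the NTK parametrization.
        h = torch.relu(x @ self.w1.T / self.w1.shape[1] ** 0.5)
        # Output layer scaled by the chosen width-dependent multiplier.
        return h @ self.w2.T * self.output_mult


width = 1024
ntk_net = ScaledMLP(10, width, 1, output_mult=width ** -0.5)  # NTK-style scaling
fl_net = ScaledMLP(10, width, 1, output_mult=1.0 / width)     # feature-learning-style scaling

x = torch.randn(8, 10)
print(ntk_net(x).shape, fl_net(x).shape)  # both: torch.Size([8, 1])
```

In this sketch, only the output multiplier changes between the two networks; per-layer learning-rate scaling, which the limit analysis also depends on, is omitted for brevity.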