FNet: Mixing Tokens with Fourier Transforms (Machine Learning Research Paper Explained)
#fnet #attention #fourier
Do we even need Attention? FNets completely drop the Attention mechanism in favor of a simple Fourier transform. They perform almost as well as Transformers, while drastically reducing parameter count, as well as compute and memory requirements. This highlights that a good token mixing heuristic could be as valuable as a learned attention matrix.
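To make the mixing idea concrete, here is a minimal PyTorch sketch of one FNet-style encoder block (my own illustration, not the authors' official code; layer sizes are arbitrary): the attention sublayer is replaced by a parameter-free 2D FFT over the sequence and hidden dimensions, keeping only the real part.

```python
import torch
import torch.nn as nn

class FourierMixing(nn.Module):
    """Parameter-free token mixing: a 2D FFT over the sequence and hidden
    dimensions, keeping only the real part (the mixing used in FNet)."""
    def forward(self, x):
        # x: (batch, seq_len, hidden); fft2 acts on the last two dims
        return torch.fft.fft2(x).real

class FNetBlock(nn.Module):
    """Sketch of one encoder block: standard Transformer layout, with the
    attention sublayer swapped for FourierMixing. Sizes are illustrative."""
    def __init__(self, hidden=256, ff_dim=1024):
        super().__init__()
        self.mixing = FourierMixing()
        self.norm1 = nn.LayerNorm(hidden)
        self.ff = nn.Sequential(
            nn.Linear(hidden, ff_dim), nn.GELU(), nn.Linear(ff_dim, hidden)
        )
        self.norm2 = nn.LayerNorm(hidden)

    def forward(self, x):
        x = self.norm1(x + self.mixing(x))  # residual + norm, as in a Transformer
        return self.norm2(x + self.ff(x))   # standard feed-forward sublayer

x = torch.randn(2, 128, 256)  # (batch, tokens, hidden)
print(FNetBlock()(x).shape)   # torch.Size([2, 128, 256])
```

Note that FourierMixing has no learnable parameters at all, which is where the parameter and compute savings come from.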
OUTLINE:
0:00 - Intro & Overview
0:45 - Giving up on Attention
5:00 - FNet Architecture
9:00 - Going deeper into the Fourier Transform
11:20 - The Importance of Mixing
22:20 - Experimental Results
33:00 - Conclusions & Comments
Paper:
ADDENDUM:
Of course, I completely forgot to discuss the connection between Fourier transforms and convolutions, and that FNet's token mixing might be interpreted as a convolution with a very large (global) kernel.
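For the curious, the connection is the convolution theorem: pointwise multiplication in the Fourier domain corresponds to circular convolution over the token dimension, so a fixed Fourier mixing step behaves like a convolution whose kernel spans the whole sequence. A quick numerical check (toy sizes, my own example):

```python
import torch

n = 8
x = torch.randn(n)  # a toy 1-D "token" signal
k = torch.randn(n)  # a toy kernel of the same length

# circular convolution computed directly: y[i] = sum_j x[j] * k[(i - j) mod n]
direct = torch.stack(
    [sum(x[j] * k[(i - j) % n] for j in range(n)) for i in range(n)]
)

# the same convolution via the convolution theorem: multiply in Fourier space
via_fft = torch.fft.ifft(torch.fft.fft(x) * torch.fft.fft(k)).real

print(torch.allclose(direct, via_fft, atol=1e-4))  # True
```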
Abstract:
We show that Transformer encoder architectures can be massively sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens.
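The abstract's phrase "simple linear transformations" can be made concrete: the FFT along the token dimension is exactly multiplication by the fixed (complex) DFT matrix, i.e., an unparameterized linear mixer. A small check (sizes are arbitrary):

```python
import torch

n, d = 6, 4  # toy sequence length and hidden size
x = torch.randn(n, d, dtype=torch.cfloat)

# DFT matrix W[r, c] = exp(-2*pi*i*r*c / n)
rows = torch.arange(n).reshape(n, 1)
cols = torch.arange(n).reshape(1, n)
W = torch.exp(-2j * torch.pi * rows * cols / n).to(torch.cfloat)

# FFT over the token dimension == multiplying by the fixed matrix W
print(torch.allclose(torch.fft.fft(x, dim=0), W @ x, atol=1e-4))  # True
```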