The Future of Natural Language Processing

Transfer Learning in Natural Language Processing (NLP): open questions, current trends, limits, and future directions.

A talk by Thomas Wolf (Science Lead at HuggingFace). The slides walk through interesting papers and research directions from late 2019 and early 2020 on:

- model size and computational efficiency,
- out-of-domain generalization and model evaluation,
- fine-tuning and sample efficiency,
- common sense and inductive biases.