Multitask Prompted Training Enables Zero-shot Task Generalization (Explained)
Can zero-shot generalization instead be directly induced by explicit multitask learning? Watch the video to find out!
0:00 - Intro
2:14 - Prompted training format
5:52 - Measuring generalization to unseen tasks
8:45 - Held-out tasks
10:45 - The future of NLP
11:48 - Model
12:17 - Experiment results
Connect
LinkedIn
Twitter
email edwindeeplearning@
Paper
Code
Abstract
Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks. It has been hypothesized that this is a consequence of implicit multitask learning in language model training. Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping general natural language tasks into a human-readable prompted form.
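As a rough illustration of what "mapping a natural language task into a human-readable prompted form" might look like, here is a minimal Python sketch. The template wording, field names, and label mapping below are illustrative assumptions, not the actual prompt templates or code from the paper's repository.

```python
# Illustrative sketch (not the paper's code): converting a raw supervised NLI
# example into a human-readable prompted, text-to-text (input, target) pair.
# Field names, template wording, and label order are hypothetical.

def apply_prompt(example: dict) -> tuple[str, str]:
    """Turn a raw NLI example into an (input, target) text pair."""
    input_text = (
        f"{example['premise']}\n"
        f"Question: does this imply that \"{example['hypothesis']}\"? "
        "Yes, no, or maybe?"
    )
    # Assumed label convention: 0 = entailment, 1 = neutral, 2 = contradiction
    target_text = ["Yes", "Maybe", "No"][example["label"]]
    return input_text, target_text

# Example usage with a made-up NLI-style record:
example = {
    "premise": "A soccer game with multiple males playing.",
    "hypothesis": "Some men are playing a sport.",
    "label": 0,
}
print(apply_prompt(example))
```

Training on many datasets rendered this way, each with several differently worded prompts, is the explicit multitask setup the paper uses to test zero-shot generalization to held-out tasks.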