Training a token classification model

Last week we covered how to use the tokenizers library to get our data into a state where we can train on token classification tasks. This week, we'll pick up where we left off and go over how to train a model and use it for inference on a Named Entity Recognition (NER) task. We'll cover the various strategies you can use when labeling your tokens, how to score the results, and the metrics you'll likely want to use. The meetup series is presented by Wayde Gilliam and Sanyam Bhutani as part of the W&B meetup series.
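As a preview of the labeling strategies mentioned above, here is a minimal sketch of one common approach: aligning word-level NER labels to subword tokens, labeling only the first subword of each word and masking the rest with -100 so the loss function ignores them. The `word_ids` list below is a hypothetical stand-in for what a subword tokenizer's `word_ids()` method returns (`None` marks special tokens); the exact labels and tokenization are illustrative assumptions, not output from the talk.

```python
def align_labels(word_labels, word_ids, label_all_subwords=False):
    """Map word-level labels onto subword tokens.

    Special tokens get -100 (ignored by the loss); continuation
    subwords either repeat the word's label or are masked with -100,
    depending on the chosen strategy.
    """
    aligned = []
    previous = None
    for wid in word_ids:
        if wid is None:                  # special token, e.g. [CLS] / [SEP]
            aligned.append(-100)
        elif wid != previous:            # first subword of a new word
            aligned.append(word_labels[wid])
        else:                            # continuation subword
            aligned.append(word_labels[wid] if label_all_subwords else -100)
        previous = wid
    return aligned

# Hypothetical example: "Sarah lives in Berlin"
# word-level label ids: B-PER=1, O=0, O=0, B-LOC=3
word_labels = [1, 0, 0, 3]
# hypothetical tokenization: [CLS] Sa ##rah lives in Ber ##lin [SEP]
word_ids = [None, 0, 0, 1, 2, 3, 3, None]
print(align_labels(word_labels, word_ids))
# [-100, 1, -100, 0, 0, 3, -100, -100]
```

Passing `label_all_subwords=True` instead propagates each word's label to every one of its subwords, the other common strategy discussed for this task.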