Machine Learning Explainability & Bias Detection with Watson OpenScale

So you’ve built a model. It’s deployed. Now what? How do you know whether it’s performing well? How do you keep track of its predictions? Better yet, how do you explain them? In this video, you’ll learn how to do exactly that using Watson OpenScale. In 20-ish minutes, I’ll walk you through how to leverage Watson OpenScale for machine learning explainability, debiasing, and drift detection.

In this video you’ll learn how to:
1. Set Up Watson OpenScale
2. View Model Performance Metrics like Accuracy, AUC, and Precision
3. Debias Machine Learning Predictions
4. Explain and Interpret Machine Learning Model Predictions

Links Mentioned
IBM Cloud Register:
Watson OpenScale:

Chapters
0:00 - Start
0:27 - Explainer
1:26 - How it Works
2:03 - Set Up Watson OpenScale
6:21 - Evaluating Model Performance
12:30 - Mitigating and Detecting Bias in ML Models
14:39 - Explaining and Interpreting Predictions
17:09 - What-If Scenario
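To give a taste of the math behind the bias-detection step, here is a minimal, self-contained sketch of the disparate impact ratio, the kind of fairness metric Watson OpenScale reports when monitoring a deployment. This is not the OpenScale SDK; the function, the loan-approval data, and the group names are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative sketch (NOT the OpenScale API): the disparate impact ratio
# compares how often a monitored group receives the favorable outcome
# versus a reference group. A ratio well below 1.0 (commonly below 0.8)
# is a typical signal of potential bias.

def disparate_impact(predictions, groups, favorable=1,
                     monitored="female", reference="male"):
    """Ratio of favorable-outcome rates: monitored group / reference group."""
    def favorable_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(1 for p in outcomes if p == favorable) / len(outcomes)
    return favorable_rate(monitored) / favorable_rate(reference)

# Hypothetical loan-approval predictions (1 = approved) with a gender attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["female", "female", "male", "male", "female",
          "male", "female", "male", "male", "female"]

ratio = disparate_impact(preds, groups)
print(f"Disparate impact ratio: {ratio:.2f}")  # → 0.50, well below 0.8
```

In this toy data the monitored group is approved 40% of the time versus 80% for the reference group, so the ratio is 0.5 and a fairness monitor would flag it; OpenScale automates this kind of check continuously on live scoring data.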