Executive briefing explaining machine learning models

"ML methods have been causing a revolution in several fields, including science and technology, finance, healthcare, cybersecurity, etc. For instance, ML can identify objects in images, perform language translation, enable web search, perform medical diagnosis, classify fraudulent transactions-...


Bibliographic Details
Main Author: Taly, Ankur
Format: eBook
Language: English
Published: [Place of publication not identified]: O'Reilly Media, 2019
Collection: O'Reilly (collection details: see MPG.ReNa)
Description
Summary:"ML methods have been causing a revolution in several fields, including science and technology, finance, healthcare, cybersecurity, etc. For instance, ML can identify objects in images, perform language translation, enable web search, perform medical diagnosis, classify fraudulent transactions--all with surprising accuracy. Unfortunately, much of this progress has come with ML models, especially ones based on deep neural networks, getting more complex and opaque. An overarching question that arises is why the model made its prediction. This question is of importance to developers in debugging (mis- )predictions, evaluators in assessing the robustness and fairness of the model, and end users in deciding whether they can trust the model. Ankur Taly (Fiddler) explores the problem of understanding individual predictions by attributing them to input features--a problem that's received a lot of attention in the last couple of years. Ankur details an attribution method called integrated gradients that's applicable to a variety of deep neural networks (object recognition, text categorization, machine translation, etc.) and is backed by an axiomatic justification, and he covers applications of the method to debug model predictions, increase model transparency, and assess model robustness. He also dives into a classic result from cooperative game theory called the Shapley values, which has recently been extensively applied to explaining predictions made by nondifferentiable models such as decision trees, random forests, gradient-boosted trees, etc. Time permitting, you'll get a sneak peak of the Fiddler platform and how it incorporates several of these techniques to demystify models. This session is from the 2019 O'Reilly Artificial Intelligence Conference in San Jose, CA."--Resource description page
Item Description:Title from title screen (viewed July 22, 2020)
Physical Description:1 streaming video file (30 min., 33 sec.)