Interpretable Machine Learning Applications: Part 2

Offered by
In this Guided Project, you will:
90-120 minutes
Beginner
No download needed
Split-screen video
English
Desktop only

By the end of this project, you will be able to develop interpretable machine learning applications that explain individual predictions rather than the behavior of the prediction model as a whole. This will be done via the well-known Local Interpretable Model-agnostic Explanations (LIME) method as a machine learning interpretation and explanation model. In particular, you will learn how to go beyond the development and use of machine learning (ML) models, such as regression classifiers, by adding explainability and interpretation aspects for individual predictions.

In this sense, the project will boost your career as an ML developer and modeler, in that you will be able to explain and justify the behavior of your ML model. The project will also benefit your career as a decision-maker in an executive position interested in deploying trusted and accountable ML applications.

This guided project primarily targets data scientists and machine learning modelers who wish to enhance their machine learning applications with explanation components for the predictions being made. It also targets executive planners within business companies and public organizations interested in using machine learning applications for automating, or informing, human decision making, not as a 'black box', but with some insight into the behavior of the machine learning classifier.

Note: This guided project based course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
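To give a feel for what explaining an individual prediction with LIME looks like, here is a minimal sketch in Python using the open-source lime and scikit-learn packages. The breast-cancer dataset and logistic-regression classifier are illustrative stand-ins only, not the project's actual data or model, and the parameter choices (such as num_features=5) are arbitrary assumptions for the example.

```python
# Minimal sketch: explain one individual prediction with LIME.
# Assumes `pip install lime scikit-learn`; dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Load a toy binary-classification dataset and fit a simple regression classifier.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42
)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Build a LIME explainer over the training data. LIME is model-agnostic:
# it only needs a function that returns class probabilities.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single test instance rather than the model as a whole.
explanation = explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists the features that most influenced this one prediction, each with a signed weight, which is the kind of per-prediction justification the project builds toward.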

Skills you will develop

  • Machine Learning Regression Classifiers

  • Programming in Python

  • Performance analysis of prediction models (see the sketch after this list)

  • Interpretable and Explainable Models
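For the performance-analysis skill above, the following is a minimal sketch of how a fitted classifier is typically evaluated before its predictions are explained. It assumes scikit-learn; the dataset, classifier, and split parameters are illustrative assumptions, not the project's actual setup.

```python
# Minimal sketch: performance analysis of a prediction model with scikit-learn.
# Dataset and classifier are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Standard summaries: overall accuracy, per-class precision/recall/F1,
# and the confusion matrix of true vs. predicted labels.
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```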

Learn step-by-step

In a video that plays in a split screen with your work area, your instructor will walk you through these steps:

How Guided Projects work

Your workspace is a cloud desktop right in your browser, no download required

In a split-screen video, your instructor guides you step-by-step

Frequently asked questions