Explainable AI (XAI) with Python
Easy Way to Learn XAI
What you’ll learn
- The value of XAI in the modern world.
- Differences between glass-box, white-box, and black-box ML models.
- Categorization of XAI methods by their scope, model agnosticism, data type, and explanation method.
- The trade-off between model accuracy and interpretability.
- InterpretML, Microsoft's software package for generating explanations of ML models.
- The need for counterfactual and contrastive explanations.
- The workings and mathematical models of XAI techniques such as LIME, SHAP, DiCE, LRP, and counterfactual and contrastive explanations.
- The use of XAI techniques such as LIME, SHAP, DiCE, and LRP to explain black-box models on tabular, textual, and image datasets.
- Using Google’s What-If Tool to inspect data points and generate counterfactuals.
Requirements
- No programming skills are required. Everything you need to apply XAI is taught in the course.
Description
XAI with Python
This course provides a detailed overview of recent developments in Explainable AI (XAI). Every day we become more dependent on artificial intelligence models, and it is becoming correspondingly more important to explain how and why an AI system makes a particular decision.
Recent regulations have also made it more important to explain and defend the decisions made by AI systems. This course shows how to use Python to explain AI models and build safe, reliable AI systems.
This course explains the workings and the underlying mathematics of LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). It discusses why counterfactual and contrastive explanations are important and how they work, and covers techniques such as Diverse Counterfactual Explanations (DiCE) for generating actionable counterfactuals.
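To give a flavor of the mathematics behind SHAP, here is a minimal sketch that computes exact Shapley values for a toy two-feature model. The linear model and the all-zero baseline are assumptions for illustration only; the shap library itself uses much faster approximations of this sum:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values: the weighted average marginal contribution
    of each feature over all subsets, with absent features replaced by
    their baseline value."""
    n = len(x)

    def v(subset):
        # Value function: predict with features in `subset` taken from x,
        # all other features taken from the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy model (an assumption for illustration): a simple linear scorer.
model = lambda z: 3 * z[0] + 2 * z[1]
print(shapley_values(model, [1.0, 1.0], [0.0, 0.0]))  # [3.0, 2.0]
```

Because the toy model is linear and the baseline is zero, each feature's Shapley value equals its own contribution, which makes the result easy to verify by hand.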
Google’s What-If Tool (WIT) is covered for exploring AI fairness and building visual explanations. The course also presents the LRP (Layer-wise Relevance Propagation) method for generating explanations of neural network predictions.
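The core idea of LRP can be sketched in a few lines: the network's output score is redistributed backwards, layer by layer, in proportion to each unit's contribution. The tiny two-layer ReLU network and its weights below are assumptions chosen for illustration, using LRP's epsilon rule:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    # W is a list of rows: output[k] = sum_j W[k][j] * v[j]
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def lrp_layer(a, W, R_out, eps=1e-9):
    """Epsilon-rule LRP: redistribute the output relevances R_out to the
    layer's inputs a, in proportion to each input's contribution a_j*w_kj."""
    z = matvec(W, a)                 # pre-activations of this layer
    R_in = [0.0] * len(a)
    for k, Rk in enumerate(R_out):
        denom = z[k] + eps * (1 if z[k] >= 0 else -1)
        for j, aj in enumerate(a):
            R_in[j] += aj * W[k][j] / denom * Rk
    return R_in

# Toy 2-2-1 ReLU network (weights are illustrative assumptions).
W1 = [[1.0, -1.0], [0.5, 0.5]]   # input -> hidden
W2 = [[1.0, 2.0]]                # hidden -> output
x = [2.0, 1.0]
h = relu(matvec(W1, x))          # forward pass: hidden activations
y = matvec(W2, h)                # network output
R = lrp_layer(h, W2, y)          # relevance of hidden units
R = lrp_layer(x, W1, R)          # relevance of the input features
print(y, R)
```

The small eps term stabilizes near-zero denominators; a useful sanity check is that relevance is conserved, i.e. sum(R) stays approximately equal to the network output at every layer.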
In this course, you will learn how to use Python to build AI systems that are interpretable, explainable, and trustworthy. The course works through many examples to show why explainable techniques matter in critical applications.
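As a concrete illustration of the counterfactual explanations mentioned above, the sketch below brute-forces the closest input (in L1 distance) that flips a toy classifier's decision. The classifier, the search grid, and the distance metric are all assumptions for illustration; DiCE instead solves an optimization problem to produce diverse, actionable counterfactuals:

```python
from itertools import product

def counterfactual(predict, x, target, step=0.1, max_r=2.0):
    """Exhaustive grid search for the closest candidate (L1 distance)
    that the classifier assigns to `target` -- a toy stand-in for the
    optimization-based search that DiCE performs."""
    n = round(max_r / step)
    grid = [i * step for i in range(-n, n + 1)]
    best, best_d = None, float("inf")
    for delta in product(grid, repeat=len(x)):
        cand = [xi + di for xi, di in zip(x, delta)]
        d = sum(abs(di) for di in delta)        # how far we moved
        if d < best_d and predict(cand) == target:
            best, best_d = cand, d
    return best, best_d

# Toy classifier (an assumption): approve when 2*income + savings > 3.
clf = lambda z: int(2 * z[0] + z[1] > 3)
x = [1.0, 0.0]                   # currently rejected: clf(x) == 0
cf, dist = counterfactual(clf, x, target=1)
print(cf, dist)
```

The returned point answers the question a counterfactual explanation poses: "what is the smallest change to this input that would have changed the decision?"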
There are many hands-on sessions in which students learn to run the code and apply it to their own AI models. The datasets and code used to practice the different XAI techniques are provided to the students.
Who this course is for:
- Students taking a Machine Learning or Artificial Intelligence course.
- Students who want to work with AI.
- Beginner Python programmers who already know a little about machine learning libraries.
- Researchers who already build AI models in Python and want to apply the latest explainable AI techniques to generate explanations of their models.
- Data analysts and data scientists who want to learn explainable AI tools and techniques for their Python machine learning models.