In this tutorial, we'll learn how to create a Responsible AI (RAI) dashboard with its Python SDK. In the last tutorial, we trained a model to predict diabetes patient hospital readmission; here we will use Azure Machine Learning's Responsible AI dashboard to analyze that model and identify issues. Just as with software application development, machine learning models need to be debugged for errors and inaccuracies.

The RAI dashboard is built on the latest open-source tools developed by leading academic institutions and organizations, including Microsoft: ErrorAnalysis, InterpretML, Fairlearn, DiCE, and EconML. These tools have been instrumental in helping data scientists and AI developers better understand model behavior and discover and mitigate undesirable issues in AI models. They assist the debugging process by analyzing whether and why a model has made a mistake, whether and why the model is unfair to some groups of people compared to others, which data features contribute to the overall error rates and predictions, and how to explore alternative outcomes of a model through counterfactuals, assessing "What if?" scenarios in which some features of a datapoint are changed.

We will show you how to use the dashboard's components, such as Model Overview, Error Analysis, Data Explorer, Fairness Assessment, Model Interpretability, Causal analysis, and Counterfactuals, to discover and solve issues with the model or data. Furthermore, we'll show how data scientists and stakeholders can better communicate a model's performance and behavior by using the RAI scorecard, a generated PDF summary report of the insights gained from assessing and mitigating issues in the model.