07/14/2024
By Amir Asrzad
Name: Amir Asrzad
Date: Monday, July 29th, 2024
Time: 1 – 2:30 p.m.
Location: Virtual (Zoom link)
Thesis/Dissertation Title: Credibility and Risk in Counterfactual Explanations for AI Model Predictions
Committee Members:
Xiao-bai (Bob) Li (chair), Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
Julie Zhang, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
Amit Deokar, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
Shakil Quayes, Ph.D., Department of Economics, College of Fine Arts, Humanities & Social Sciences, UMass Lowell
Abstract:
This dissertation proposal focuses on explaining the risks and uncertainties surrounding AI predictions. The dissertation presents three studies: (1) Counterfactual Explanations for Incorrect Predictions Made by AI Models, (2) Risk-Sensitive Counterfactual Explanations for AI Model Predictions, and (3) Leveraging Explainable AI for Robust Decision-Making: Explaining the Uncertainty Surrounding Missing Data.
Artificial intelligence (AI) and deep learning techniques excel at making accurate predictions for complex problems. However, the lack of transparency in these black-box models presents significant challenges. Explainable AI (XAI) aims to address these challenges by developing methods that produce explanations meaningful to humans. Counterfactual explanation is one of the most promising XAI methods.

The first essay (Chapter 1) centers on the problem of incorrect predictions made by black-box models, which must be explained appropriately. In this essay, a new counterfactual explanation method is developed to explain cases misclassified by black-box models. The proposed method takes a counterfactual explanation approach, building a decision tree to find the best counterfactual examples for explanations; incorrect predictions are rectified using a trust score measure.

The second essay (Chapter 2) focuses on the risk of inadequate or misleading explanations offered by XAI methods, which can cause mistrust of and lack of confidence in AI technology. In this essay, a novel method is proposed to provide risk-sensitive counterfactual explanations for AI model predictions. The proposed method generates robust counterfactuals to mitigate the risk of weak, inadequate counterfactuals, and vigilant counterfactuals to reduce the risk of non-responsive ones.

The third essay (Chapter 3) deals with the application of XAI to the problem of missing values. The explanation process often stops once an explanation is generated and rarely leads to actionable steps. In this essay, the applicability of counterfactual explanations, as a potent XAI method, is demonstrated on the widespread and challenging problem of missing data in decision-making. A counterfactual explanation method is proposed that allows users to take action based on both missing and non-missing features, encouraging them to report the missing ones.
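As background for attendees less familiar with XAI, the minimal Python sketch below illustrates the core idea of a counterfactual explanation: given an instance that a black-box model classifies one way, find a small feature change that flips the prediction. This is not the method proposed in the dissertation; the synthetic data, the random-forest model, and the greedy single-feature search are all illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for a black-box decision model (e.g., loan approval).
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    def one_feature_counterfactual(model, x, max_shift=3.0, n_steps=60):
        """Return (feature index, counterfactual, |change|) for the smallest
        single-feature perturbation found that flips the predicted label."""
        original = model.predict(x.reshape(1, -1))[0]
        best = None
        # Try perturbations ordered from smallest to largest magnitude.
        deltas = sorted(np.linspace(-max_shift, max_shift, n_steps), key=abs)
        for j in range(x.size):
            for delta in deltas:
                cf = x.copy()
                cf[j] += delta
                if model.predict(cf.reshape(1, -1))[0] != original:
                    if best is None or abs(delta) < best[2]:
                        best = (j, cf, abs(delta))
                    break  # smallest flip for this feature found
        return best

    result = one_feature_counterfactual(model, X[0])
    if result is not None:
        j, _, cost = result
        print(f"Changing feature {j} by {cost:.2f} flips the prediction.")

The dissertation's methods go well beyond this sketch, for example by building a decision tree to select counterfactual examples, rectifying misclassifications with a trust score, accounting for the risk that a counterfactual is weak or non-responsive, and handling missing features.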
All interested students and faculty members are invited to attend.