03/27/2026
By Nolan Talaei

The Manning School of Business, Department of Operations and Information Systems, invites you to attend a doctoral dissertation defense by Nolan M. Talaei on "Advances in Explainable Artificial Intelligence and Data Science: Methodological Developments and Practical Applications."

Candidate Name: Nolan M. Talaei
Degree: Doctoral
Defense Date: Friday, April 10, 2026
Time: 10 a.m. – noon EDT
Location: Via Zoom
Thesis/Dissertation Title: Advances in Explainable Artificial Intelligence and Data Science: Methodological Developments and Practical Applications

Committee:

  • Advisor: Asil Oztekin, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
  • Member: Luvai Motiwalla, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell
  • Member: Hongwei (Harry) Zhu, Ph.D., Department of Operations & Information Systems, Manning School of Business, UMass Lowell

Brief Abstract:

As AI becomes a central force in business, understanding how these models make decisions is increasingly important. Without sufficient transparency, users find it difficult to trust or rely on model outputs. Explainability is therefore crucial for making AI systems more interpretable, accountable, and ultimately more useful in real-world settings. This dissertation presents multiple studies on advances in Explainable Artificial Intelligence (XAI) and Data Science, highlighting both methodological developments and practical applications in decision-making.

Chapter 1 develops a method for explaining latent labels. It introduces an ensemble explanation framework that combines supervised and unsupervised learning to produce clear feature-importance scores for latent targets, improving interpretability over existing methods.

Chapter 2 presents a counterfactual explanation approach: a new technique designed to generate sparse, diverse, plausible, and high-fidelity counterfactuals that offer actionable, understandable insight into what-if scenarios and into how model predictions can be shifted toward a favorable outcome.

Chapter 3 introduces an adversarial framework for explaining model behavior. It uses adversarial perturbations to analyze robustness and sensitivity, employing a surrogate neural network to generate structured, gradient-guided modifications to the input. By examining how small, targeted changes to input features affect model predictions, the approach reveals how a model behaves near its decision boundaries and treats sensitivity to input variation as an explanatory signal.

Together, these studies aim to advance both the methodological foundations and the practical relevance of explainable AI, offering novel approaches that improve how complex models are understood, evaluated, and used in decision-making.
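To give a flavor of the Chapter 1 idea for readers unfamiliar with this style of analysis, here is a minimal Python sketch of explaining latent labels with a supervised surrogate: cluster assignments stand in for latent targets, and a classifier fit to them yields feature-importance scores. The dataset and models are illustrative stand-ins, not the dissertation's ensemble framework.

```python
# Illustrative sketch only: explain latent cluster labels by fitting a
# supervised surrogate model and reading off its feature importances.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

X, _ = load_iris(return_X_y=True)

# Unsupervised step: derive latent labels via clustering.
latent_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised step: fit a surrogate classifier to the latent labels.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(X, latent_labels)

# Feature-importance scores for the latent targets.
for name, score in zip(load_iris().feature_names, surrogate.feature_importances_):
    print(f"{name}: {score:.3f}")
```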
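Similarly, a minimal sketch in the spirit of Chapter 2's counterfactual explanations: greedily nudging one feature at a time until the prediction flips, which keeps the counterfactual sparse and close to the original input. This single-feature grid search is a deliberately simple stand-in, not the dissertation's technique.

```python
# Illustrative sketch only: find a sparse counterfactual by changing
# one feature at a time until the model's prediction flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
target = 1 - model.predict(x.reshape(1, -1))[0]  # the opposite ("favorable") class

best = None  # (feature index, counterfactual, |delta|)
deltas = sorted(np.linspace(-3.0, 3.0, 61), key=abs)  # try small changes first
for j in range(x.shape[0]):  # one feature at a time keeps the change sparse
    for delta in deltas:
        cand = x.copy()
        cand[j] += delta
        if model.predict(cand.reshape(1, -1))[0] == target:
            if best is None or abs(delta) < best[2]:
                best = (j, cand, abs(delta))
            break  # smallest |delta| for this feature found

if best is not None:
    j, cf, d = best
    print(f"prediction flips by moving feature {j} by {d:.2f}")
```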
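Finally, a minimal sketch in the spirit of Chapter 3: a small surrogate network is trained, the gradient of its loss with respect to an input gives a per-feature sensitivity signal, and an FGSM-style step (a standard gradient-sign perturbation, named here as a simple stand-in) shows how a small, targeted change can push the input across a decision boundary. The synthetic data and architecture are assumptions for illustration only.

```python
# Illustrative sketch only: gradient-guided input perturbation via a
# surrogate network; gradient magnitudes act as a sensitivity signal.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 5)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()  # synthetic labels

# Surrogate network standing in for the black-box model.
surrogate = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(surrogate(X), y).backward()
    opt.step()

# Gradient of the loss w.r.t. one input reveals per-feature sensitivity.
x = X[0:1].clone().requires_grad_(True)
loss_fn(surrogate(x), y[0:1]).backward()
print("per-feature sensitivity:", x.grad.abs().squeeze().tolist())

# Small, targeted FGSM-style step toward the decision boundary.
x_adv = (x + 0.1 * x.grad.sign()).detach()
print("prediction before/after:",
      surrogate(x).argmax(1).item(), surrogate(x_adv).argmax(1).item())
```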

All interested faculty members and students are invited to attend.