Institutional Scholarship

Quantifying Uncertainty in Shapley-value-based Explanations for Machine Learning Models


dc.contributor.advisor Friedler, Sorelle
dc.contributor.author Li, Ruiming
dc.date.accessioned 2021-07-12T11:57:41Z
dc.date.available 2021-07-12T11:57:41Z
dc.date.issued 2021
dc.description.abstract In this thesis, I study a game-theory-inspired approach to explaining how the features in the input data affect the output of machine learning models, without access to the models' internal workings. This approach assigns each feature a number, its Shapley value, indicating that feature's contribution to the model output, but it has several limitations and sources of uncertainty. I investigate three sources of uncertainty in Shapley values and a method to quantify each: Shapley Residuals, which capture information missing from the game-theoretic representation; the mean standard error, which quantifies the sampling error in Shapley value estimation; and Bayesian SHAP, which calculates the statistical variation in the SHAP explanation model. I aim to decompose the cause of each type of error and to evaluate their combined effect on the trustworthiness of Shapley explanations for real-life models. My goal is to make machine learning models more interpretable to humans, so that we can gain meaningful knowledge from them.
dc.description.sponsorship Haverford College. Department of Computer Science
dc.language.iso eng
dc.title Quantifying Uncertainty in Shapley-value-based Explanations for Machine Learning Models
dc.type Thesis
dc.rights.access Dark Archive until 2022-01-01, afterwards Open Access.
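The Shapley attribution the abstract describes, and the sampling uncertainty of its Monte Carlo estimate, can be sketched in a few lines. This is a minimal illustration on a hypothetical toy value function (`value` below is an invented stand-in for a model evaluated on a feature coalition, not any model from the thesis):

```python
from itertools import combinations
from math import factorial
import random

def value(S):
    """Hypothetical coalition value function over features {0, 1, 2}.

    Stands in for a model's expected output when only the features in S
    are known; the interaction term makes the attribution non-additive.
    """
    v = 0.0
    if 0 in S:
        v += 1.0
    if 1 in S:
        v += 2.0
    if 0 in S and 2 in S:
        v += 0.5  # interaction between features 0 and 2
    return v

def exact_shapley(n, value):
    """Exact Shapley value of each feature by enumerating all coalitions."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

def sampled_shapley(n, value, i, m=2000, seed=0):
    """Monte Carlo estimate of feature i's Shapley value, with its
    standard error -- the sampling uncertainty the abstract refers to."""
    rng = random.Random(seed)
    feats = list(range(n))
    samples = []
    for _ in range(m):
        perm = feats[:]
        rng.shuffle(perm)
        S = set(perm[:perm.index(i)])  # features preceding i in the permutation
        samples.append(value(S | {i}) - value(S))
    mean = sum(samples) / m
    var = sum((x - mean) ** 2 for x in samples) / (m - 1)
    return mean, (var / m) ** 0.5
```

For this toy game the exact values are phi = [1.25, 2.0, 0.25], which sum to value({0, 1, 2}) = 3.5 (the efficiency property); the sampled estimate of phi_0 comes with a standard error that shrinks as the number of sampled permutations grows.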





