Model feature importance scores should reflect recourse

Date
2024
Department
Haverford College. Department of Computer Science
Type
Thesis
Language
eng
Access Restrictions
Bi-College users only until 01/01/2025; open access thereafter
Abstract
Because AI is used to make decisions that affect humans in domains such as loan applications, healthcare, and criminal justice, it is crucial that model deployers can explain those decisions. Existing methods for explaining complex classifiers fall short because they offer no actionable path forward. The popular explanation methods LIME and SHAP measure feature importance, but they do not provide recourse: an actionable step someone can take to change an adverse classification. In the context of loan applications, recourse means recommending actions a previously denied applicant can take to be granted a loan. This thesis catalogs the shortcomings of state-of-the-art explanation methods when their feature importance scores are turned into actions. We show that manufacturing recourse from LIME's and SHAP's outputs is insufficient, and that a new feature ranking system is therefore needed. We propose a new metric, r_ij, for recommending actions to people who received an adverse label.
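
The gap the abstract describes can be seen in a few lines. The sketch below is not from the thesis: the model, the synthetic data, and the feature names are hypothetical, and it assumes the shap and scikit-learn libraries. It computes SHAP importance scores for one denied applicant; the scores attribute the denial to features but contain no recommended action.

# Hypothetical illustration: SHAP scores explain a denial
# but are not recourse (no action recommendation).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = loan granted, 0 = denied

model = RandomForestClassifier(random_state=0).fit(X, y)

applicant = X[:1]  # one denied applicant
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(applicant)
# Older shap versions return a list per class; newer ones a single array.
scores = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Importance scores say which features drove the denial...
for name, score in zip(["income", "debt", "num_accounts"], scores):
    print(f"{name}: {score:+.3f}")
# ...but a score like "debt: -0.41" does not say what change would
# flip the label (e.g. "reduce debt by $2,000").

A negative score here says that the feature pushed the prediction toward denial; it does not say how much the applicant would have to change it to be approved, which is the question the proposed r_ij metric addresses.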