Research Paper Critiques SHAP and Non-Symbolic XAI Methods for Lacking Rigor
A research paper critiques the lack of rigor of the non-symbolic methods used to explain complex machine learning models, a problem that is particularly acute in high-stakes applications. The paper specifically targets the widespread adoption of Shapley values in explainable artificial intelligence (XAI), citing the tool SHAP as a prominent example of this problematic approach. For roughly a decade, such non-symbolic techniques have been the dominant choice for model explanation. The work surveys ongoing efforts to employ rigorous symbolic methods for XAI as a viable alternative, focusing on the specific task of assigning relative feature importance. The authors present this shift as necessary because current methods can mislead human decision-makers. The paper is hosted on arXiv, a repository for sharing scientific research, under its computer science and artificial intelligence categories.
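For background (this formula is standard game theory, not quoted from the paper or its summary), the Shapley value underlying SHAP-style attributions assigns each feature its average marginal contribution over all subsets of the other features:

```latex
% Shapley value of feature i, for a characteristic function v over feature set N
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

In XAI, v(S) is typically taken to be the model's expected output when the features in S are fixed to their observed values; the paper's criticism targets the use of this quantity for feature importance, not the formula's internal arithmetic.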
Key facts
- The paper critiques non-symbolic explainable AI methods for lacking rigor.
- It identifies the adoption of Shapley values in XAI as a prime example of this lack of rigor.
- The tool SHAP is highlighted as a ubiquitous example of this approach (see the usage sketch after this list).
- Non-symbolic methods have been the dominant choice for explaining ML models for about a decade.
- The lack of rigor is especially problematic in high-stakes uses of machine learning.
- The paper surveys efforts to use rigorous symbolic methods as an alternative.
- The focus of these alternative methods is on assigning relative feature importance.
- The paper is available on the arXiv repository.
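As a purely illustrative sketch of the approach being critiqued (the code below is not from the paper; the model, dataset, and parameters are arbitrary choices), this is how the shap package is commonly used to compute per-feature attributions:

```python
# A minimal sketch, assuming the shap and scikit-learn packages are installed.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a small tree ensemble on a standard benchmark dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley-value attributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])

# Each attribution distributes a prediction across the input features;
# the paper's thesis is that rankings derived from these numbers can
# mislead a human decision-maker.
print(np.asarray(shap_values).shape)
```

The exact output shape varies across shap versions (per-class arrays versus a single stacked array), which is incidental here; the relevant point is that SHAP reduces an explanation to one number per feature, and it is the rigor of exactly this kind of attribution that the paper calls into question.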
Entities
Institutions
- arXiv