Please Stop Explaining Black Box Models for High Stakes Decisions

Author: Cynthia Rudin

Longer version of a paper from the NIPS 2018 Workshop on Critiquing and Correcting Trends in Machine Learning; also expands on the NSF Statistics at a Crossroads Webinar
License: CC BY-SA 4.0

Abstract: Black box models are now being used for high stakes decision-making throughout society. The practice of trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: designing models that are inherently interpretable.

Submitted to arXiv on 26 Nov. 2018
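A minimal sketch of the abstract's proposed way forward, assuming scikit-learn: instead of fitting a black box and attaching a post-hoc explainer, one fits a model whose full decision logic can be read directly. The dataset, model class, and depth limit below are illustrative assumptions, not choices made in the paper; Rudin's own work covers a range of inherently interpretable model classes.

```python
# Sketch (not from the paper): fit an inherently interpretable model directly,
# rather than explaining a black box after the fact.
# Assumes scikit-learn; dataset and max_depth are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow decision tree is one example of an inherently interpretable model:
# its complete decision rules are human-readable, so no separate explanation
# method is needed to understand why it makes a given prediction.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.3f}")
print(export_text(model, feature_names=list(data.feature_names)))
```

Printing the tree with export_text exposes the entire model, which is the point of the contrast: the explanation is the model itself, not an approximation of it.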
