by Olivia Seow, Dave Ludgin, and John Liu
Abstract
A primary barrier to using artificial intelligence is the lack of a proper understanding of how the system works, the kind of understanding that enables trust. To confront this, explainable visualizations offer a scaffold for bringing transparency to users. To provide this transparency equitably, explainable visualizations should balance completeness and interpretability.