ClarityAI is a Python package that gives machine learning practitioners a set of interpretability methods for making their CNN models more transparent and explainable. Currently, ClarityAI can compute attention maps and saliency maps.
For a brain tumor MRI scan, the attention maps that ClarityAI generates for different layers of a CNN could look like this:
These maps show which regions of the image the CNN focuses on at each layer, helping to explain its predictions.
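ClarityAI's own API for attention maps is documented in the wiki linked below. As a rough illustration of the underlying idea only, here is a minimal plain-PyTorch sketch that captures a layer's feature maps with a forward hook and averages them into a coarse attention map. The ResNet-18 backbone and the `layer4` stage are stand-in assumptions for this sketch, not part of ClarityAI:

```python
import torch
import torchvision.models as models

# Stand-in CNN (assumption for this sketch); substitute your own trained model.
model = models.resnet18(weights=None)
model.eval()

activations = {}

def save_activation(name):
    # Forward hook that stores the layer's output feature maps.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register the hook on one convolutional stage of the network.
model.layer4.register_forward_hook(save_activation("layer4"))

image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed MRI scan
with torch.no_grad():
    model(image)

# Average the feature maps across channels to get a coarse attention map,
# which can then be upsampled and overlaid on the input image.
attention_map = activations["layer4"].mean(dim=1).squeeze(0)  # shape: (7, 7)
```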
Similarly, here is what the saliency map for another MRI scan could look like:
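Again as an illustration of the general technique rather than ClarityAI's actual implementation, a vanilla-gradient saliency map can be computed by backpropagating the top class score to the input pixels. The model and input shapes below are assumptions:

```python
import torch
import torchvision.models as models

# Stand-in CNN (assumption for this sketch); substitute your own trained model.
model = models.resnet18(weights=None)
model.eval()

# Placeholder for a preprocessed scan; gradients are tracked on the input.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

scores = model(image)

# Backpropagate the highest class score to the input pixels.
scores[0].max().backward()

# Per-pixel saliency: maximum absolute gradient across channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape: (224, 224)
```

Bright regions of `saliency` mark the pixels whose change would most affect the prediction, which is what the saliency map visualizes.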
You can install ClarityAI using pip:
```bash
pip install ClarityAI==1.0.0
```
For detailed instructions, please refer to our wiki:
- Documentation for attention map generation can be found here
- Documentation for saliency map generation can be found here
ClarityAI is designed to help users quickly integrate interpretability methods into their personal projects. However, ClarityAI is a tool meant to assist users, not to replace their own judgment about the interpretability and ethical use of their ML models.
Please also note that ClarityAI is a package created for fun and educational purposes! The Python ecosystem already has several popular interpretability libraries that are much better designed and maintained, such as SHAP, LIME, Yellowbrick, and InterpretML.