ClarityAI

ClarityAI is a Python package designed to empower machine learning practitioners with a range of interpretability methods to enhance the transparency and explainability of their CNN models. Currently, ClarityAI can calculate attention and saliency maps.

Examples

For a brain tumor MRI scan, the attention maps that ClarityAI generates for different layers of a CNN could look like this:

[Figure: Attention Map for MRI Scan]

These maps show which parts of the image the CNN focuses on, which helps explain the model's predictions.
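
ClarityAI's own API is documented in the wiki linked below. Purely as background, here is a minimal sketch of how per-layer activation maps, the basis for attention-style visualizations, can be extracted from a CNN using PyTorch forward hooks; the model and layer names are illustrative and not part of ClarityAI.

```python
# Background sketch (not ClarityAI's API): extracting a per-layer
# activation map from a pretrained CNN with a PyTorch forward hook.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an intermediate convolutional stage of the network.
model.layer3.register_forward_hook(save_activation("layer3"))

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed MRI scan
with torch.no_grad():
    model(image)

# Average across channels to get a single 2D map, then upsample it
# to the input resolution so it can be overlaid on the scan.
fmap = activations["layer3"].mean(dim=1, keepdim=True)
attention_map = F.interpolate(fmap, size=image.shape[-2:], mode="bilinear")
print(attention_map.shape)  # torch.Size([1, 1, 224, 224])
```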

Similarly, here is what a saliency map for another MRI scan could look like:

[Figure: Saliency Map for MRI Scan]
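
Again as background only: the classic way to compute such a saliency map is to take the gradient of the predicted class score with respect to the input pixels. Below is a minimal PyTorch sketch, illustrative rather than ClarityAI's actual implementation.

```python
# Background sketch (not ClarityAI's API): a vanilla gradient
# saliency map (Simonyan et al., 2014) for a CNN classifier.
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # preprocessed input
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Backpropagate the top class score down to the input pixels.
scores[0, top_class].backward()

# Saliency = largest absolute gradient across the color channels;
# bright pixels are those the prediction is most sensitive to.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```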

Installation + Usage

You can install ClarityAI using pip:

pip install ClarityAI==1.0.0

For detailed instructions, please refer to our wiki.

Features

  • Documentation for attention map generation can be found here
  • Documentation for saliency map generation can be found here

Limitations

ClarityAI is designed to help users quickly integrate interpretability methods into their personal projects. However, ClarityAI is just a tool meant to assist users, not to replace their own judgment about the interpretability and ethical use of their ML models.

Please also note that ClarityAI was created for fun and educational purposes! Several popular interpretability libraries in the Python ecosystem, such as SHAP, LIME, Yellowbrick, and InterpretML, are far better designed and maintained.
