In general, interpretable models are easy for humans to understand because their decision-making process or parameters can be inspected directly (e.g., decision trees and logistic regression). Explainable models are too complex for humans to understand directly, so third-party methods must be applied to explain their behavior.
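To make the distinction concrete, here is a minimal sketch of an interpretable model: a hand-written decision tree whose decision process is the code itself. The function name, features, and thresholds are all hypothetical, standing in for a fitted decision tree.

```python
# An interpretable model: each decision step can be read directly.
# (Hypothetical toy tree; a fitted decision tree has the same readable form.)
def approve_loan(income, debt_ratio):
    if income > 50_000:
        if debt_ratio < 0.4:
            return "approve"
        return "review"
    return "deny"

print(approve_loan(60_000, 0.3))  # -> approve
print(approve_loan(40_000, 0.1))  # -> deny
```

A deep neural network making the same decision would offer no such readable path from inputs to output, which is why it needs separate explanation methods.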

Techniques to understand explainable models

  • Global: feature importance scores.
  • Local: LIME, SHAP values.

Trade-off between Accuracy and Interpretability

  • Simple models offer more interpretability but usually less accuracy.
  • Feature engineering can help simple models reach accuracy comparable to that of complex models.
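The second point can be sketched with the classic XOR problem: plain logistic regression (an interpretable model) cannot separate XOR, but adding one engineered feature, the product x1*x2, makes it linearly separable while the model stays interpretable. The tiny gradient-descent trainer below is an assumption for self-containment; any logistic regression implementation would do.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=2.0, epochs=20000):
    """Logistic regression via full-batch gradient descent (minimal sketch)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for row, t in zip(X, y):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b) - t
            for i, xi in enumerate(row):
                gw[i] += err * xi
            gb += err
        w = [wi - lr * gi / len(X) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def accuracy(X, y, w, b):
    preds = [1 if sigmoid(sum(wi * xi for wi, xi in zip(w, row)) + b) > 0.5 else 0
             for row in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# XOR is not linearly separable, so raw logistic regression fails.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
w, b = train_logreg(X, y)
acc_raw = accuracy(X, y, w, b)

# Engineered feature x1*x2 makes XOR linearly separable.
X_eng = [row + [row[0] * row[1]] for row in X]
w, b = train_logreg(X_eng, y)
acc_eng = accuracy(X_eng, y, w, b)

print("raw features:", acc_raw)   # stuck at 0.5
print("with x1*x2 :", acc_eng)    # reaches 1.0
```

The learned coefficients on x1, x2, and x1*x2 remain directly inspectable, so the model keeps its interpretability while matching what a more complex model would achieve on this task.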