In general, interpretable models are ones a human can understand directly by inspecting their decision-making process or parameters (for example, decision trees and logistic regression). Explainable models are too complicated for a human to understand directly, so third-party (post-hoc) methods must be used to explain their predictions. These explanation methods come in two scopes:
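As a minimal sketch of what "inspecting the parameters" means for an interpretable model, a logistic regression exposes its decision process through its coefficients. The dataset here is a stand-in; any tabular task works the same way:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder dataset; substitute any tabular classification task.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# An interpretable model: standardized logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# The decision process is visible in the parameters themselves:
# each coefficient is the change in log-odds per standard deviation
# of that feature.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name:25s} {w:+.3f}")
```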
- Global: feature importance scores summarize how much each feature matters to the model as a whole.
- Local: LIME and SHAP values attribute one specific prediction to its input features (see the sketch after this list).
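A minimal sketch of both scopes on a tree ensemble, assuming the `scikit-learn` and `shap` packages are installed; the dataset is again a placeholder:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global: permutation importance gives one score per feature for the
# whole model -- how much accuracy drops when that feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:25s} {result.importances_mean[i]:.4f}")

# Local: SHAP values attribute a single prediction to the features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])
# Depending on the shap version, this is a list with one array per class
# or a single 3-D array; either way it holds per-feature contributions
# that sum (with the base value) to this one prediction.
print(shap_values)
```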
- Simple models offer more interpretability but typically less accuracy.
- Feature engineering can help simple models reach accuracy comparable to complex models, as sketched below.
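One hedged illustration of that last point: feeding pairwise interaction terms into the same simple linear model can close part of the gap to a tree ensemble. The dataset and the exact scores are placeholders; whether the gap actually closes depends on the task:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Baseline: a plain logistic regression (simple, interpretable).
simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))

# Feature engineering: add pairwise interaction terms before the same
# simple model. The model stays linear in its (expanded) inputs.
engineered = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    LogisticRegression(max_iter=5000),
)

# Reference: a more complex, less directly interpretable model.
forest = RandomForestClassifier(n_estimators=200, random_state=0)

for name, est in [("simple", simple), ("engineered", engineered),
                  ("random forest", forest)]:
    score = cross_val_score(est, X, y, cv=5).mean()
    print(f"{name:15s} accuracy ~ {score:.3f}")
```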