# Classifier Calibration

A survey on how to assess and improve predicted class probabilities

> **Note**
> This content is deprecated and has been moved to the repository classifier-calibration.git

- Peter Flach, University of Bristol, UK, [email protected], www.cs.bris.ac.uk/~flach/
- Miquel Perello-Nieto, University of Bristol, UK, [email protected], https://www.perellonieto.com/
- Hao Song, University of Bristol, UK, [email protected]
- Meelis Kull, University of Tartu, Estonia, [email protected]
- Telmo Silva Filho, Federal University of Paraiba, Brazil, [email protected]

## Tools

We are developing a Python library, PyCalib, with tools to evaluate the calibration of models. PyCalib has its own documentation page and can be installed from the Python Package Index (PyPI) with `pip install pycalib`.
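To illustrate the kind of analysis the survey and PyCalib address (assessing and improving predicted class probabilities), here is a minimal sketch. It deliberately uses standard scikit-learn rather than PyCalib's own API, and the binned error it prints is a simplified, unweighted approximation of the expected calibration error.

```python
# Illustrative sketch only: assess and improve calibration with scikit-learn.
# This does not use PyCalib's API; all names below are standard scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV, calibration_curve

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Uncalibrated model: tree ensembles often produce poorly calibrated scores.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc calibration with cross-validated isotonic regression.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

# Reliability curves: calibrated probabilities should lie closer to the diagonal.
for name, model in [("uncalibrated", clf), ("isotonic", calibrated)]:
    prob = model.predict_proba(X_test)[:, 1]
    frac_pos, mean_pred = calibration_curve(y_test, prob, n_bins=10)
    # Crude, unweighted binned calibration error (the proper ECE weights bins
    # by the number of test points they contain).
    approx_ece = np.mean(np.abs(frac_pos - mean_pred))
    print(f"{name}: approx. calibration error = {approx_ece:.3f}")
```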

## Citation

This work has been published in the Machine Learning journal. If you want to reference it, please use the following citation.

```bibtex
@Article{SilvaFilho2023,
  author  = {Silva Filho, Telmo
             and Song, Hao
             and Perello-Nieto, Miquel
             and Santos-Rodriguez, Raul
             and Kull, Meelis
             and Flach, Peter},
  title   = {Classifier calibration: a survey on how to assess and improve predicted class probabilities},
  journal = {Machine Learning},
  year    = {2023},
  month   = {May},
  day     = {16},
  issn    = {1573-0565},
  doi     = {10.1007/s10994-023-06336-7},
  url     = {https://doi.org/10.1007/s10994-023-06336-7}
}
```