If we look at log_loss and brier_score_loss independently as indicators of calibration quality (as described here), then it looks like we should be able to swap out pygam with scikit-learn's IsotonicRegression and either maintain or improve performance.
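For context, here is a minimal sketch (not causalml code) of how log_loss and brier_score_loss could be used to compare raw vs. isotonic-calibrated scores. The data and variable names below are synthetic and purely illustrative:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import log_loss, brier_score_loss

rng = np.random.default_rng(42)

# Synthetic treatment assignment plus deliberately distorted propensity scores
n = 10_000
true_p = rng.uniform(0.1, 0.9, size=n)
treatment = rng.binomial(1, true_p)
raw_ps = np.clip(true_p ** 2 + rng.normal(0.0, 0.05, size=n), 0.01, 0.99)

# Isotonic regression learns a monotone map from raw scores to calibrated probabilities
iso = IsotonicRegression(out_of_bounds="clip", y_min=0.0, y_max=1.0)
calibrated_ps = iso.fit_transform(raw_ps, treatment)

# Lower is better for both metrics; clip to avoid log(0) inside log_loss
for name, ps in [("raw", raw_ps), ("isotonic", calibrated_ps)]:
    ps = np.clip(ps, 1e-6, 1 - 1e-6)
    print(f"{name:>8}: log_loss={log_loss(treatment, ps):.4f}  "
          f"brier={brier_score_loss(treatment, ps):.4f}")
```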

Is your feature request related to a problem? Please describe.
pygam pins scipy < 1.12, which blocks the use of numpy 2.x and Cython 3.x.

Describe the solution you'd like
Currently, pygam is used in propensity.calibrate(). We can evaluate alternative calibration methods, e.g., scikit-learn's probability calibration (a rough sketch follows below).
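A replacement built on scikit-learn's IsotonicRegression might look like the sketch below. This assumes, as in the current pygam-based implementation, that calibrate() takes raw propensity scores and a binary treatment indicator and returns calibrated scores; the function name calibrate_isotonic is hypothetical.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression


def calibrate_isotonic(ps: np.ndarray, treatment: np.ndarray) -> np.ndarray:
    """Illustrative isotonic-regression-based calibration of propensity scores.

    Args:
        ps: raw propensity scores in [0, 1].
        treatment: binary treatment indicator (0/1), same length as ps.

    Returns:
        Calibrated propensity scores in [0, 1].
    """
    iso = IsotonicRegression(out_of_bounds="clip", y_min=0.0, y_max=1.0)
    return iso.fit_transform(ps, treatment)
```

Note that scikit-learn's other calibration tool, CalibratedClassifierCV, wraps a fitted classifier rather than precomputed scores, so a score-level method like IsotonicRegression is closer to a drop-in for calibrate().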

Describe alternatives you've considered
Alternatively, we can work on pygam directly to see if we can make it work with scipy >= 1.12.

Additional context
N/A