I know I can generate an accuracy score like in your example file:

from sklearn.metrics import roc_auc_score

model.fit(X, y)
preds = model.predict_proba(X_test)[:, 1]
print("auc test 2, auc %f" % roc_auc_score(y_test, preds))
But for the purpose of comparing different stacks of classifiers to each other, wouldn't it be much better to use the CV score that is already calculated when pystacknet runs? It shows the score for each CV fold in the output, but does not give an overall average across all the folds at the end. Isn't that average the accuracy I am looking for if I want, for example, to see whether my pystacknet result is better than using the base classifier by itself?

If I am trying to decide which classifiers are best to put into pystacknet, wouldn't that CV score be the best way to compare the results of different pystacknet tests (with each test using a different list of classifiers)?
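For reference, a minimal sketch of one way to get such an overall average, pending a built-in option: wrap the model in your own outer cross-validation loop and average the per-fold AUCs yourself. This is an assumption, not pystacknet's own API; the StackNetClassifier arguments follow the README, and the models list and estimators here are placeholders to replace with your own stack.

# Sketch: outer CV loop around StackNetClassifier, averaging per-fold AUCs.
# X and y are assumed to be numpy arrays, as in the pystacknet example file.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from pystacknet.pystacknet import StackNetClassifier

models = [
    # level 1: base classifiers (placeholder list to swap per test)
    [RandomForestClassifier(n_estimators=100, random_state=1),
     LogisticRegression(random_state=1)],
    # level 2: meta classifier
    [LogisticRegression(random_state=1)],
]

fold_scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for train_idx, test_idx in skf.split(X, y):
    # Refit the whole stack on each outer training fold
    model = StackNetClassifier(models, metric="auc", folds=4,
                               restacking=False, use_retraining=True,
                               use_proba=True, random_state=1, verbose=0)
    model.fit(X[train_idx], y[train_idx])
    preds = model.predict_proba(X[test_idx])[:, 1]
    fold_scores.append(roc_auc_score(y[test_idx], preds))

# One overall number to compare different stacks against each other
print("mean CV auc %f (+/- %f)" % (np.mean(fold_scores), np.std(fold_scores)))

The same loop can be rerun with a different models list per test, so each candidate stack (or a base classifier on its own) gets a single comparable mean score rather than a list of fold scores.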