
Generating An Accuracy Score #2

Open
impulsecorp opened this issue Sep 26, 2018 · 0 comments
impulsecorp commented Sep 26, 2018

I know I can generate an accuracy score as in your example file:

from sklearn.metrics import roc_auc_score

model.fit(X, y)
preds = model.predict_proba(X_test)[:, 1]  # probability of the positive class
print("auc test 2, auc %f" % roc_auc_score(y_test, preds))

but for the purpose of comparing different stacks of classifiers to each other, wouldn't it be much better to use the CV score that is already calculated when pystacknet runs? It shows the score for each CV fold in the output, but does not give an overall average across the folds at the end. Isn't that average the score I am looking for if I want to see, for example, whether my pystacknet result is better than using the base classifier by itself?
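
A minimal sketch of what I mean (the mean_cv_auc helper is hypothetical, and model stands for any estimator with the usual fit/predict_proba interface, which would include pystacknet's StackNetClassifier; X and y are numpy arrays):

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def mean_cv_auc(model, X, y, n_splits=4, seed=0):
    # Score each fold the way the per-fold output does, then average,
    # so different stacks can be compared on a single number.
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, valid_idx in skf.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        preds = model.predict_proba(X[valid_idx])[:, 1]
        scores.append(roc_auc_score(y[valid_idx], preds))
    return float(np.mean(scores))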

If I am trying to decide which classifiers are best to put into pystacknet, wouldn't that CV score be the best way to compare the results of different pystacknet tests (with each test using a different list of classifiers)?
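
For example, something like this (a sketch only; the constructor arguments are my assumption based on the pystacknet README, and the two candidate model lists are placeholders, not recommendations):

from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from pystacknet.pystacknet import StackNetClassifier

# Each candidate is a two-level models list: base classifiers, then a meta learner.
candidates = {
    "rf_stack": [[RandomForestClassifier()], [LogisticRegression()]],
    "et_stack": [[ExtraTreesClassifier()], [LogisticRegression()]],
}
for name, models in candidates.items():
    stack = StackNetClassifier(models, metric="auc", folds=4,
                               use_proba=True, random_state=0)
    print(name, mean_cv_auc(stack, X, y))  # helper from the sketch above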

impulsecorp changed the title from "Generrating An Accuracy Score" to "Generating An Accuracy Score" on Sep 26, 2018