
[Question] Calculation of precision in KPrecision #18

Open
wei-ann-Github opened this issue Mar 7, 2024 · 0 comments

@wei-ann-Github

Referring to this line:

precision = 1.0 * num_common / len(prediction_tokens)

num_common is a count of unique overlapping tokens, but the denominator used for precision is len(prediction_tokens), and prediction_tokens does not appear to contain only unique tokens.
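To make the asymmetry concrete, here is a minimal sketch (using hypothetical token lists, not the repository's actual code) comparing the current denominator with a set-based one:

```python
# Hypothetical example tokens, for illustration only.
prediction_tokens = ["the", "cat", "sat", "the", "mat"]
reference_tokens = ["the", "cat", "sat", "on", "a", "mat"]

# Unique overlapping tokens, as described above.
num_common = len(set(prediction_tokens) & set(reference_tokens))  # 4

# Current formula: unique overlap over ALL prediction tokens (duplicates counted).
precision_current = 1.0 * num_common / len(prediction_tokens)        # 4 / 5 = 0.8

# Alternative: unique overlap over UNIQUE prediction tokens.
precision_set = 1.0 * num_common / len(set(prediction_tokens))       # 4 / 4 = 1.0

print(precision_current, precision_set)
```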

May I know:

  1. Is there a reason for counting only unique tokens in the numerator but all tokens in the denominator when calculating precision?
  2. Would using len(set(prediction_tokens)) as the denominator affect the metric's correlation with human judgement? If so, would it be more or less correlated?