
Benchmark containing undefined datasets cannot open #553

Open
qjiang002 opened this issue Dec 5, 2022 · 1 comment


@qjiang002
Collaborator

Issue from PR #540

Load this benchmark: https://dev.explainaboard.inspiredco.ai/benchmark?id=globalbench_ner

I got this error when loading this benchmark:

[0]   File "/Users/jiangqi/Desktop/Capstone/explainaboard_web/backend/src/gen/explainaboard_web/impl/db_utils/benchmark_db_utils.py", line 187, in <setcomp>
[0]     (x.dataset.dataset_name, x.dataset.sub_dataset, x.dataset.split)
[0] AttributeError: 'NoneType' object has no attribute 'dataset_name'

This happens because the benchmark tries to find all NER systems with 'system_query': {'task_name': 'named-entity-recognition'}, but some systems use undefined/custom datasets, so their dataset is None. One way to deal with this is to ignore systems with undefined datasets in the benchmark.
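A minimal sketch of the proposed fix: skip systems whose dataset is None before building the set of (dataset_name, sub_dataset, split) tuples, which avoids the AttributeError in the traceback above. The class and field names here are illustrative stand-ins, not the actual explainaboard_web models.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical simplified models mirroring the fields used in the
# set comprehension from benchmark_db_utils.py.
@dataclass(frozen=True)
class Dataset:
    dataset_name: str
    sub_dataset: Optional[str]
    split: str

@dataclass
class System:
    dataset: Optional[Dataset]  # None for custom/undefined datasets

def dataset_tuples(systems: list) -> set:
    # Filtering on `x.dataset is not None` prevents the
    # "'NoneType' object has no attribute 'dataset_name'" error
    # when a system has no registered dataset.
    return {
        (x.dataset.dataset_name, x.dataset.sub_dataset, x.dataset.split)
        for x in systems
        if x.dataset is not None
    }
```

With this guard, systems with custom datasets are simply excluded from the benchmark's dataset set rather than crashing the query.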

@neubig
Contributor

neubig commented Dec 5, 2022

Yes, custom datasets should be ignored in benchmarks.
