
custom task_type and evaluation_metrics - test-task errors #29

Open
drndr opened this issue Aug 3, 2023 · 0 comments

drndr commented Aug 3, 2023

Hi,

When creating a custom task.py, the test_task.py script fails unless an already-existing option is used in the task_type and evaluation_metrics fields of the config file.

The task_type issue can be solved by adding a new enum value to the TaskType class in the api.py script.
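For reference, a minimal sketch of that workaround, assuming TaskType is a standard Python Enum; the existing member names below are illustrative placeholders rather than the actual values in api.py, and only the last line is the addition:

```python
# Workaround sketch: add a new member to the (assumed) TaskType enum in api.py.
from enum import Enum

class TaskType(Enum):
    FREE_FORM = "free_form"              # existing value (illustrative)
    MULTIPLE_CHOICE = "multiple_choice"  # existing value (illustrative)
    CUSTOM = "custom"                    # new value added for the custom task
```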

The evaluation_metrics issue is harder: EvaluationMetricConfig raises an AssertionError if 'hf_id' or 'best_score' is None. Therefore, if no HuggingFace metric exists for a custom task (which is the case in our submission), there is no way to add a new metric.
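To illustrate the problem, here is a hedged sketch of the kind of validation described above. This is not the actual EvaluationMetricConfig implementation from api.py, only an assumed stand-in showing why a metric without an 'hf_id' cannot be configured:

```python
# Illustrative sketch only: an assumed stand-in for EvaluationMetricConfig,
# reproducing the validation behaviour described in this issue.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluationMetricConfig:
    hf_id: Optional[str] = None         # HuggingFace metric id
    best_score: Optional[float] = None  # best achievable score for the metric

    def validate(self) -> None:
        # The asserts below model the failing check: both fields must be set.
        assert self.hf_id is not None, "hf_id must not be None"
        assert self.best_score is not None, "best_score must not be None"

# A custom metric with no HuggingFace counterpart cannot pass the check:
try:
    EvaluationMetricConfig(hf_id=None, best_score=1.0).validate()
except AssertionError as err:
    print(f"AssertionError: {err}")
```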
