This repository has been archived by the owner on Jul 23, 2024. It is now read-only.
When creating a custom task.py, the test_task.py script fails unless the task_type and evaluation_metrics fields of the config file are set to one of the pre-existing options.
The task_type issue can be solved by adding a new enum value to the TaskType class in the api.py script.
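A minimal sketch of that workaround, assuming TaskType is a standard Python Enum in api.py (the existing member names and the new value here are illustrative, not the actual list):

```python
# api.py (sketch; existing members shown are illustrative)
from enum import Enum

class TaskType(Enum):
    # ... task types already shipped with the framework ...
    MULTIPLE_CHOICE = "multiple_choice"
    FREE_FORM = "free_form"
    # Hypothetical new value so a custom task's config validates:
    CUSTOM_GENERATION = "custom_generation"
```

The custom task's config file would then set task_type to the new value (e.g. "custom_generation") and pass the existing check.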
The evaluation_metrics issue is harder: EvaluationMetricConfig raises an AssertionError if 'hf_id' or 'best_score' is None. Therefore, if there is no Hugging Face metric for a custom task (which is the case in our submission), there is no way to add a new metric.
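For context, the failing check presumably looks something like the following (a reconstruction from the observed error, not the actual source); relaxing it, e.g. allowing hf_id to be None when the task supplies its own metric implementation, would make fully custom metrics possible:

```python
# Sketch of the validation that triggers the AssertionError
# (reconstructed from the error's behavior, not copied from the repo).
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluationMetricConfig:
    hf_id: Optional[str] = None       # id of a Hugging Face metric
    best_score: Optional[float] = None

    def __post_init__(self):
        # Current behavior: both fields must be set, so a metric with
        # no Hugging Face counterpart cannot be configured at all.
        assert self.hf_id is not None, "hf_id must be provided"
        assert self.best_score is not None, "best_score must be provided"
```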