Simple API to load and evaluate datasets based on tasks and input data #1327
Comments
See lines 132 to 139 in 582d96f, which show the produce API.
I did not even think about produce. I'll check. It has to be more prominently documented (currently it's under "Production and Inference").
I tried:

```python
# Set up question answer pairs in a dictionary
data = [ ... ]
test_dataset = produce(data, task="tasks.qa.open", template="templates.qa.open.title")
```

But the output dataset does not contain a references field, so it cannot be evaluated.
You are right. I forgot it was meant for inference. We should document it properly.
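For anyone trying to reproduce that snippet, a minimal self-contained version might look like the following. The example pairs and the "question"/"answers" field names are illustrative assumptions (they have to match whatever fields tasks.qa.open actually defines); the produce call itself is copied from the comment above.

```python
from unitxt import produce

# Hypothetical QA pairs; the "question"/"answers" field names are assumed
# here and must match the fields defined by tasks.qa.open.
data = [
    {"question": "What is the capital of France?", "answers": ["Paris"]},
    {"question": "Who wrote Hamlet?", "answers": ["William Shakespeare"]},
]

# produce() renders the instances into model-ready inputs for inference.
# As noted in the comment above, its output carries no "references" field,
# so it cannot be passed directly to evaluation.
test_dataset = produce(data, task="tasks.qa.open", template="templates.qa.open.title")
print(test_dataset[0])
```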
Original issue description:

We need a simple way for a user to load a Unitxt dataset based on existing files or data structures. This is required for ilab (@Roni-Friedman) and also for other use cases.
I thought that a dataframe might be a common interface that everyone knows:
```python
load_dataset(task="tasks.classification.multi_class", test_set=pd.read_csv("test.csv"))
load_dataset(task="tasks.classification.multi_class", test_set=pd.read_csv("test.csv"), train_set=pd.read_csv("train.csv"))
```
And for evaluation:

```python
evaluate(predictions, task="tasks.classification.multi_class", test_set=pd.read_csv("test.csv"))
```
The assumption is that the fields of the input dataframe are the same as the task's fields, and a clear error message will be presented if they are not.
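To make the proposed behavior concrete, here is a rough sketch of the field check under those assumptions. Nothing here exists in Unitxt today: load_dataset_from_dataframes, the hard-coded field set for tasks.classification.multi_class, and the error wording are all illustrative.

```python
from typing import Optional

import pandas as pd

# Hypothetical: the fields each task expects. A real implementation would
# look these up from the task definition (e.g. tasks.classification.multi_class)
# instead of hard-coding them; this set is an illustrative guess.
TASK_FIELDS = {
    "tasks.classification.multi_class": {"text", "classes", "label"},
}

def load_dataset_from_dataframes(
    task: str,
    test_set: pd.DataFrame,
    train_set: Optional[pd.DataFrame] = None,
):
    """Sketch of the proposed API: check that each dataframe's columns match
    the task's fields, and fail with a clear error message if they do not."""
    expected = TASK_FIELDS[task]
    for name, df in (("test_set", test_set), ("train_set", train_set)):
        if df is None:
            continue
        missing = expected - set(df.columns)
        if missing:
            raise ValueError(
                f"{name} is missing fields {sorted(missing)} required by "
                f"task '{task}': expected columns {sorted(expected)}, "
                f"got {sorted(df.columns)}."
            )
    # From here a real implementation would build the Unitxt dataset
    # (splits, templates, etc.) from the validated dataframes.
    return {"test": test_set, "train": train_set}

# Usage mirroring the proposal above:
# dataset = load_dataset_from_dataframes(
#     task="tasks.classification.multi_class",
#     test_set=pd.read_csv("test.csv"),
# )
```

The point of the sketch is only the error-reporting contract: mismatched columns should fail fast, naming the task, the expected fields, and the columns actually found.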