From 0a0261740b29e01d6a571f2f5353a98d4e58bdaa Mon Sep 17 00:00:00 2001
From: Kenan Arslanbay <66200735+kenanarslanbay@users.noreply.github.com>
Date: Sat, 20 Apr 2024 14:27:44 +0200
Subject: [PATCH] Update README.md

Update roberta inference notebook url

Signed-off-by: Kenan Arslanbay <66200735+kenanarslanbay@users.noreply.github.com>
---
 validated/text/machine_comprehension/roberta/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/validated/text/machine_comprehension/roberta/README.md b/validated/text/machine_comprehension/roberta/README.md
index 83c3dfb3d..a7db95830 100644
--- a/validated/text/machine_comprehension/roberta/README.md
+++ b/validated/text/machine_comprehension/roberta/README.md
@@ -29,7 +29,7 @@ Official tool from HuggingFace that can be used to convert transformers models t
 ## Inference
 We used [ONNX Runtime](https://github.com/microsoft/onnxruntime) to perform the inference.
 
-Tutorial for running inference for RoBERTa-SequenceClassification model using onnxruntime can be found in the [inference](dependencies/roberta-inference.ipynb) notebook.
+Tutorial for running inference for RoBERTa-SequenceClassification model using onnxruntime can be found in the [inference](https://github.com/onnx/models/blob/main/validated/text/machine_comprehension/roberta/dependencies/roberta-sequence-classification-inference.ipynb) notebook.
 
 ### Input
 input_ids: Indices of input tokens in the vocabulary. It's a int64 tensor of dynamic shape (batch_size, sequence_length). Text tokenized by RobertaTokenizer.
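
For context on the README section this patch touches, below is a minimal sketch of running the RoBERTa-SequenceClassification ONNX model with onnxruntime, matching the Input description quoted in the diff (int64 input_ids of shape (batch_size, sequence_length), tokenized by RobertaTokenizer). The model filename, example text, and label interpretation are assumptions for illustration only; the linked notebook remains the authoritative walkthrough.

```python
# Minimal sketch (assumptions noted): onnxruntime inference for RoBERTa-SequenceClassification.
import numpy as np
import onnxruntime as ort
from transformers import RobertaTokenizer

# Assumed local model filename; the actual file/path depends on how the model was downloaded.
session = ort.InferenceSession("roberta-sequence-classification-9.onnx")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

text = "This film is so good"  # example input, not from the patch
# input_ids: int64 tensor of dynamic shape (batch_size, sequence_length), per the README's Input section.
input_ids = np.array(tokenizer.encode(text, add_special_tokens=True), dtype=np.int64)[None, :]

# Look up the graph's input name at runtime rather than hard-coding it.
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: input_ids})[0]
print(int(np.argmax(logits)))  # predicted class index; label mapping depends on the model's training setup
```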