
Judge before UseCachedDecoderOut #431

Merged: 1 commit into k2-fsa:master from the judge_before_UseCachedDecoderOut branch on Nov 17, 2023

Conversation

HieDean (Contributor) commented on Nov 17, 2023:

When a cached decoder_out exists for the entire batch, use it:

if (is_batch_decoder_out_cached) {
  auto &r = result->front();
  std::vector<int64_t> decoder_out_shape =
      r.decoder_out.GetTensorTypeAndShapeInfo().GetShape();
  decoder_out = Ort::Value::CreateTensor<float>(
      model_->Allocator(), decoder_out_shape.data(), decoder_out_shape.size());
csukuangfj (Collaborator) commented on the last line above:

This line only allocates memory for decoder_out but it does not initialize it.

Please use Cat() (see TEST(Cat, Test3DTensorsDim0) in the tests) to concatenate the r.decoder_out tensors along the batch dimension.
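For illustration, a minimal sketch of what the suggested fix could look like, assuming Cat() has the signature from sherpa-onnx's cat.h, Ort::Value Cat(OrtAllocator *allocator, const std::vector<const Ort::Value *> &values, int32_t dim); this is a hedged sketch, not the exact code merged in this PR:

if (is_batch_decoder_out_cached) {
  // Collect each stream's cached decoder_out so that every row of the
  // batched tensor is initialized, instead of only allocating memory.
  std::vector<const Ort::Value *> all_decoder_out;
  all_decoder_out.reserve(result->size());
  for (const auto &r : *result) {
    all_decoder_out.push_back(&r.decoder_out);
  }
  // Concatenate along dim 0, the batch dimension.
  decoder_out = Cat(model_->Allocator(), all_decoder_out, /*dim=*/0);
}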

HieDean (Contributor, PR author) replied:

Sorry for my mistake. Cat() seems to be a more suitable option.

csukuangfj (Collaborator) commented:

By the way, could you follow the existing pattern, e.g.,

  View(&offset),
  View(&required_cache_size_tensor_),

to replace Clone() with View() in the rest of the code, in a separate PR?
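For context, a hedged sketch of the difference between the two helpers, with signatures assumed from sherpa-onnx's onnx-utils.h (Clone() takes an allocator, View() does not); here r stands for a mutable decoding result:

// Clone() allocates fresh memory and deep-copies the tensor's data.
Ort::Value copied = Clone(model_->Allocator(), &r.decoder_out);

// View() wraps the existing buffer without copying, so it is cheaper
// whenever the source tensor outlives the view.
Ort::Value viewed = View(&r.decoder_out);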

HieDean force-pushed the judge_before_UseCachedDecoderOut branch from 87e9e58 to 6352006 on November 17, 2023 at 03:11.
csukuangfj (Collaborator) commented:

Thank you for your first contribution!

csukuangfj merged commit 1a6a41e into k2-fsa:master on Nov 17, 2023. 135 of 145 checks passed.