Revert #579 #581

Merged (1 commit) on Apr 17, 2024
models/lung_nodule_ct_detection/configs/metadata.json (2 additions, 1 deletion)

@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.6.4",
+    "version": "0.6.5",
     "changelog": {
+        "0.6.5": "remove notes for trt_export in readme",
         "0.6.4": "add notes for trt_export in readme",
         "0.6.3": "add load_pretrain flag for infer",
         "0.6.2": "add checkpoint loader for infer",
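The metadata change above follows the model-zoo convention of bumping the patch version and prepending a matching changelog entry whenever a bundle changes. A minimal sketch of that bump in Python (the helper name is illustrative, not a MONAI API):

```python
def bump_bundle_metadata(meta: dict, new_version: str, note: str) -> dict:
    """Return a copy of bundle metadata with the version bumped and a
    matching changelog entry prepended (hypothetical helper, not MONAI)."""
    updated = dict(meta)
    updated["version"] = new_version
    # Put the new entry first so the changelog stays newest-first.
    updated["changelog"] = {new_version: note, **meta.get("changelog", {})}
    return updated

meta = {
    "version": "0.6.4",
    "changelog": {"0.6.4": "add notes for trt_export in readme"},
}
new = bump_bundle_metadata(meta, "0.6.5", "remove notes for trt_export in readme")
print(new["version"])          # 0.6.5
print(list(new["changelog"]))  # newest entry first
```

The original dict is left untouched, so the change can be reviewed (or reverted, as in this PR) before writing the file back to disk.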
models/lung_nodule_ct_detection/docs/README.md (0 additions, 2 deletions)

@@ -130,8 +130,6 @@ It is possible that your inference dataset should set `"affine_lps_to_ras": fals
 python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision <fp32/fp16> --input_shape "[1, 1, 512, 512, 192]" --use_onnx "True" --use_trace "True" --onnx_output_names "['output_0', 'output_1', 'output_2', 'output_3', 'output_4', 'output_5']" --network_def#use_list_output "True"
 ```
-
-Note that if you're using a container based on [PyTorch 24.03](nvcr.io/nvidia/pytorch:24.03-py3), and the size of your input exceeds (432, 432, 152), the TensorRT export might fail. In such cases, it would be necessary for users to manually adjust the input_shape downwards. Keep in mind that minimizing the input_shape could potentially impact performance. Hence, always reassess the model's performance after making such adjustments to validate if it continues to meet your requirements.
 
 #### Execute inference with the TensorRT model
 
 ```
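The note deleted above advised shrinking `input_shape` when TensorRT export fails on large inputs. For a rough sense of how much smaller the suggested fallback is, a back-of-envelope comparison of element counts (a sketch for scale only, not part of the bundle):

```python
from math import prod

# Shapes from the README under discussion: the documented export shape
# and the smaller shape the removed note suggested as a fallback.
export_shape = (1, 1, 512, 512, 192)
fallback_shape = (1, 1, 432, 432, 152)

def voxels(shape):
    """Number of elements in an input tensor of this shape."""
    return prod(shape)

print(voxels(export_shape))    # 50331648
print(voxels(fallback_shape))  # 28366848
# The fallback input is roughly 56% of the documented volume.
print(round(voxels(fallback_shape) / voxels(export_shape), 2))  # 0.56
```

Halving the input volume can meaningfully reduce export-time memory pressure, which is why the note paired the smaller shape with a reminder to re-validate detection performance afterwards.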