Revert #579 (#581)
Reverts #579, following the revert of Project-MONAI/MONAI#7647.

### Status
**Ready/Work in progress/Hold**

### Please ensure all the checkboxes are checked:
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [x] Codeformat tests passed locally by running `./runtests.sh
--codeformat`.
- [ ] In-line docstrings updated.
- [ ] Update `version` and `changelog` in `metadata.json` if changing an
existing bundle.
- [ ] Please ensure the naming rules in config files meet our
requirements (please refer to: `CONTRIBUTING.md`).
- [ ] Ensure versions of packages such as `monai`, `pytorch` and `numpy`
are correct in `metadata.json`.
- [ ] Descriptions should be consistent with the content, such as
`eval_metrics` of the provided weights and TorchScript modules.
- [ ] Files larger than 25MB are excluded and replaced by providing
download links in `large_file.yml`.
- [ ] Avoid paths that contain personal information within config
files (e.g., do not use `/home/your_name/` for `"bundle_root"`).
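The "update `version` and `changelog` in `metadata.json`" checklist item corresponds to the diff below. A minimal sketch of that edit in Python, assuming only the `version` and `changelog` fields seen in this bundle's `metadata.json` (the `bump_metadata` helper is illustrative, not part of MONAI):

```python
import json

def bump_metadata(metadata: dict, new_version: str, note: str) -> dict:
    """Return a copy with a bumped version and a prepended changelog entry."""
    updated = dict(metadata)
    updated["version"] = new_version
    # Changelog maps version -> description; newest entry goes first.
    updated["changelog"] = {new_version: note, **metadata.get("changelog", {})}
    return updated

# Values taken from this commit's diff to metadata.json.
metadata = {
    "version": "0.6.4",
    "changelog": {"0.6.4": "add notes for trt_export in readme"},
}
updated = bump_metadata(metadata, "0.6.5", "remove notes for trt_export in readme")
print(json.dumps(updated, indent=2))
```

In practice the same change is made by editing `models/lung_nodule_ct_detection/configs/metadata.json` directly, as this commit does.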

Signed-off-by: YunLiu <[email protected]>
KumoLiu authored Apr 17, 2024
1 parent 04eef67 commit 0e1b0ed
Showing 2 changed files with 2 additions and 3 deletions.
3 changes: 2 additions & 1 deletion — models/lung_nodule_ct_detection/configs/metadata.json

```diff
@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.6.4",
+    "version": "0.6.5",
     "changelog": {
+        "0.6.5": "remove notes for trt_export in readme",
         "0.6.4": "add notes for trt_export in readme",
         "0.6.3": "add load_pretrain flag for infer",
         "0.6.2": "add checkpoint loader for infer",
```
2 changes: 0 additions & 2 deletions — models/lung_nodule_ct_detection/docs/README.md

````diff
@@ -130,8 +130,6 @@ It is possible that your inference dataset should set `"affine_lps_to_ras": fals
 python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision <fp32/fp16> --input_shape "[1, 1, 512, 512, 192]" --use_onnx "True" --use_trace "True" --onnx_output_names "['output_0', 'output_1', 'output_2', 'output_3', 'output_4', 'output_5']" --network_def#use_list_output "True"
 ```
 
-Note that if you're using a container based on [PyTorch 24.03](nvcr.io/nvidia/pytorch:24.03-py3), and the size of your input exceeds (432, 432, 152), the TensorRT export might fail. In such cases, it would be necessary for users to manually adjust the input_shape downwards. Keep in mind that minimizing the input_shape could potentially impact performance. Hence, always reassess the model's performance after making such adjustments to validate if it continues to meet your requirements.
-
 #### Execute inference with the TensorRT model
 
 ```
````
