diff --git a/models/lung_nodule_ct_detection/configs/metadata.json b/models/lung_nodule_ct_detection/configs/metadata.json
index b8986728..8e484df7 100644
--- a/models/lung_nodule_ct_detection/configs/metadata.json
+++ b/models/lung_nodule_ct_detection/configs/metadata.json
@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.6.4",
+    "version": "0.6.5",
     "changelog": {
+        "0.6.5": "remove notes for trt_export in readme",
         "0.6.4": "add notes for trt_export in readme",
         "0.6.3": "add load_pretrain flag for infer",
         "0.6.2": "add checkpoint loader for infer",
diff --git a/models/lung_nodule_ct_detection/docs/README.md b/models/lung_nodule_ct_detection/docs/README.md
index 0e5c05ac..ea888ae5 100644
--- a/models/lung_nodule_ct_detection/docs/README.md
+++ b/models/lung_nodule_ct_detection/docs/README.md
@@ -130,8 +130,6 @@ It is possible that your inference dataset should set `"affine_lps_to_ras": fals
 python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision --input_shape "[1, 1, 512, 512, 192]" --use_onnx "True" --use_trace "True" --onnx_output_names "['output_0', 'output_1', 'output_2', 'output_3', 'output_4', 'output_5']" --network_def#use_list_output "True"
 ```
 
-Note that if you're using a container based on [PyTorch 24.03](nvcr.io/nvidia/pytorch:24.03-py3), and the size of your input exceeds (432, 432, 152), the TensorRT export might fail. In such cases, it would be necessary for users to manually adjust the input_shape downwards. Keep in mind that minimizing the input_shape could potentially impact performance. Hence, always reassess the model's performance after making such adjustments to validate if it continues to meet your requirements.
-
 #### Execute inference with the TensorRT model
 
 ```