Setting all encoder parameters to FP16 leads to overflow in the LayerNorm layers. We suggest enabling FP16 mode while forcing the LayerNorm layers to run in FP32 precision; we have verified this approach. I hope this addresses your question. Please feel free to reach out if you have additional concerns.
mkdir -p assets/export_models/efficientvit_sam/tensorrt/
Export Encoder
trtexec --onnx=assets/export_models/efficientvit_sam/onnx/efficientvit_sam_xl1_encoder.onnx --minShapes=input_image:1x3x1024x1024 --optShapes=input_image:4x3x1024x1024 --maxShapes=input_image:4x3x1024x1024 --saveEngine=assets/export_models/efficientvit_sam/tensorrt/efficientvit_sam_xl1_encoder.engine
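Following the suggestion above, here is a minimal sketch (not the repository's official export script) of building the same encoder engine with FP16 enabled while forcing LayerNorm layers back to FP32 via the TensorRT Python API. It assumes TensorRT 8.x and that the exported ONNX graph names its LayerNorm layers with a "LayerNorm"/"layer_norm" substring; adjust the name filter to match your model.

```python
import tensorrt as trt

# Paths taken from the export commands above.
ONNX_PATH = "assets/export_models/efficientvit_sam/onnx/efficientvit_sam_xl1_encoder.onnx"
ENGINE_PATH = "assets/export_models/efficientvit_sam/tensorrt/efficientvit_sam_xl1_encoder.engine"

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the encoder ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)                        # enable FP16 globally
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)  # honor per-layer precision overrides

# Dynamic-batch profile matching the trtexec shapes above (1..4 x 3 x 1024 x 1024).
profile = builder.create_optimization_profile()
profile.set_shape("input_image", (1, 3, 1024, 1024), (4, 3, 1024, 1024), (4, 3, 1024, 1024))
config.add_optimization_profile(profile)

# Pin layers whose names suggest LayerNorm to FP32 to avoid FP16 overflow.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if "LayerNorm" in layer.name or "layer_norm" in layer.name:
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

engine_bytes = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(engine_bytes)
```

The rest of the network runs in FP16, so only the numerically sensitive normalization layers pay the FP32 cost; how layer names appear after ONNX export may vary, so inspect `network.get_layer(i).name` if the filter matches nothing.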
When deploying the TensorRT model, FP16 mode is not specified when the encoder engine is built. Have you tested the encoder in FP16 mode, and are the results correct? For the FP16 results reported in the paper, was the encoder run as an FP16 model, or was only the decoder run in FP16?