The segment anything tool uses the Segment Anything Model (SAM). This folder contains the ONNX files that represent SAM and are used by the browser to compute segmentations.
SAM is provided by Facebook in the form of a PyTorch model: a `.pth` file.
There are three model types. From largest to smallest, they are `vit_h`, `vit_l`, and `vit_b`. Using `vit_b` is advised, as it is smaller and faster, at the cost of lower-quality segmentations.
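As a quick sanity check, the downloaded checkpoint can be loaded in Python before any conversion. This is a minimal sketch, assuming `segment-anything` is installed (see the install step below) and that the checkpoint file is named `checkpoint.pth`:

```python
# Load the .pth checkpoint with the official segment-anything package;
# this fails with a state-dict error if the checkpoint does not match
# the chosen model type.
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="checkpoint.pth")
print(sum(p.numel() for p in sam.parameters()))  # roughly 91M for vit_b
```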
Facebook provides a script to convert the decoder to ONNX, but not the encoder. The maintainers have refused to merge PRs adding this feature. You could use the script from one of those PRs, but the solution chosen here is to use `samexporter`.
- Create a folder that will contain the conversion script and the original `checkpoint.pth` model downloaded in the previous step.
```sh
mkdir temp
cd temp
```
- Clone the `samexporter` and `segment-anything` repos.
```sh
git clone git@github.com:vietanhdev/samexporter.git
git clone git@github.com:facebookresearch/segment-anything.git
```
- Install segment-anything with pip, along with the dependencies of `samexporter` (tested in a virtual environment with Python 3.11.5).
```sh
pip install -e ./segment-anything
pip install torchvision==0.16.1 onnx==1.15.0 onnxruntime==1.15.1 timm==0.9.12
```
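A quick way to confirm the install worked is to import the pinned packages and print their versions; a small sketch:

```python
# Verify that segment-anything and the pinned dependencies import cleanly.
import segment_anything
import torchvision, onnx, onnxruntime, timm

print(torchvision.__version__)   # expect 0.16.1
print(onnx.__version__)          # expect 1.15.0
print(onnxruntime.__version__)   # expect 1.15.1
print(timm.__version__)          # expect 0.9.12
```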
- Go into the `samexporter` folder and run the commands to export the encoder and the decoder (do not use quantization).
```sh
cd samexporter
python -m samexporter.export_encoder --checkpoint ../checkpoint.pth --output ../encoder.onnx --model-type vit_b
python -m samexporter.export_decoder --checkpoint ../checkpoint.pth --output ../decoder.onnx --model-type vit_b --return-single-mask
```
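The exported files can then be inspected with `onnxruntime` before they are copied into the project; loading them in an inference session validates the graphs and shows the input/output signatures the browser will have to match. A minimal sketch, run from the `samexporter` folder:

```python
# Open each exported model in an ONNX Runtime session; this validates
# the graph and prints the expected input/output names and shapes.
import onnxruntime as ort

for path in ("../encoder.onnx", "../decoder.onnx"):
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path)
    for tensor in session.get_inputs():
        print("  input: ", tensor.name, tensor.shape, tensor.type)
    for tensor in session.get_outputs():
        print("  output:", tensor.name, tensor.shape, tensor.type)
```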
- Copy the encoder and decoder to the right location in the project (for now, the only model available is `vit_b`).
```sh
cd $NIMUS_IMAGE_DIR
cp encoder.onnx public/onnx-models/sam/$MODEL_NAME
cp decoder.onnx public/onnx-models/sam/$MODEL_NAME
```
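As a final check, the deployed files can be validated with the ONNX checker; a sketch, assuming `$MODEL_NAME` resolves to a directory under `public/onnx-models/sam/` (substitute the actual directory name):

```python
# Run the official ONNX checker on the deployed files.
# MODEL_NAME below is a placeholder for the actual $MODEL_NAME directory.
import onnx

for name in ("encoder.onnx", "decoder.onnx"):
    model = onnx.load(f"public/onnx-models/sam/MODEL_NAME/{name}")
    onnx.checker.check_model(model)
    print(name, "passed the ONNX checker")
```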