A framework for pre- and intra-operative visual fusion (PIVF) in augmented reality laparoscopic partial nephrectomy (AR-LPN). It uses prior knowledge to train a deep learning network to distinguish 2D rendering results from different viewpoints; the information about each viewpoint is stored in a rendering probe.
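As a rough illustration of the idea (not the repository's actual network), a viewpoint classifier over rendered views might look like the sketch below. The backbone choice (`resnet18`), the number of probes, and the input size are assumptions for illustration only.

```python
# Illustrative sketch: a CNN that classifies which rendering probe
# (viewpoint) a 2D render came from. Backbone and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ViewpointClassifier(nn.Module):
    def __init__(self, num_probes: int):
        super().__init__()
        # Standard ResNet-18 backbone; the final layer predicts the probe index.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_probes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W) rendered images -> (batch, num_probes) logits
        return self.backbone(x)

if __name__ == "__main__":
    model = ViewpointClassifier(num_probes=64)
    dummy = torch.randn(2, 3, 224, 224)
    print(model(dummy).shape)  # torch.Size([2, 64])
```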
Place the following data for each case into the directory specified in `paths.py`:
- 2D intra-operative images
- 3D pre-operative mesh model (an `.obj` or `.gltf` file; it can be generated from the 3D volume images with `volume_to_mesh.py`, see the sketch after this list)
- segmentation labels of the intra-operative images
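For a rough idea of how a 3D volume can be turned into an `.obj` mesh (not the actual implementation of `volume_to_mesh.py`), a marching-cubes sketch is shown below. The input path, iso-level, and the use of `nibabel`/`scikit-image` are assumptions.

```python
# Illustrative sketch only: extract a surface mesh from a 3D volume with
# marching cubes and write it as a Wavefront .obj file. File names, the
# iso-level, and the libraries are assumptions, not the repository's
# actual volume_to_mesh.py.
import nibabel as nib
from skimage import measure

def volume_to_obj(volume_path: str, obj_path: str, level: float = 0.5) -> None:
    volume = nib.load(volume_path).get_fdata()           # 3D scalar volume
    verts, faces, _, _ = measure.marching_cubes(volume, level=level)
    with open(obj_path, "w") as f:
        for v in verts:                                   # vertex lines
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for face in faces + 1:                            # .obj faces are 1-indexed
            f.write(f"f {face[0]} {face[1]} {face[2]}\n")

if __name__ == "__main__":
    volume_to_obj("case1_volume.nii.gz", "case1_mesh.obj")
```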
```bash
bash ./fast_run.sh
```
This command will perform the following steps:
- Install the dependencies.
- Generate mesh models from the volume images.
- Generate probes: run `python probe.py` to generate probes surrounding the 3D mesh model (see the viewpoint-sampling sketch after this list).
- Train the model: run `python train.py`.
- Perform the fusion: run `python fusion.py`.
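As an illustration of the probe-generation idea (not the actual `probe.py`), one way to place rendering probes around a mesh is to sample camera positions on a sphere centered at the mesh and point each camera at the centroid. The probe count, radius, and use of a Fibonacci lattice below are assumptions.

```python
# Illustrative sketch: sample candidate probe viewpoints on a sphere
# surrounding the mesh. Radius, count, and sampling scheme are assumptions;
# the repository's probe.py may differ.
import numpy as np

def fibonacci_sphere(n: int) -> np.ndarray:
    """Roughly uniform directions on the unit sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0))           # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi * i), r * np.sin(phi * i), z], axis=1)

def probe_positions(mesh_vertices: np.ndarray, n_probes: int = 64,
                    radius_scale: float = 2.0) -> np.ndarray:
    """Camera centers on a sphere around the mesh, all looking at its centroid."""
    center = mesh_vertices.mean(axis=0)
    radius = radius_scale * np.linalg.norm(mesh_vertices - center, axis=1).max()
    return center + radius * fibonacci_sphere(n_probes)

if __name__ == "__main__":
    verts = np.random.rand(1000, 3)               # stand-in for mesh vertices
    print(probe_positions(verts, n_probes=8))
```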
Demo videos:
- Case 1: `case1.mp4`
- Case 4