This repository contains the source code for the VMV 2024 paper:
SVDAG Compression for Segmentation Volume Path Tracing
Mirco Werner†, Max Piochowiak†, and Carsten Dachsbacher
† joint first authors
Abstract: Many visualization techniques exist for interactive exploration of segmentation volumes; however, photorealistic renderings are usually computed using slow offline techniques. We present a novel compression technique for segmentation volumes which enables interactive path tracing-based visualization for datasets up to hundreds of gigabytes: For every label, we create a grid of fixed-size axis-aligned bounding boxes (AABBs) which covers the occupied voxels. For each AABB, we first construct a sparse voxel octree (SVO) representing the contained voxels of the respective label, and then build a sparse voxel directed acyclic graph (SVDAG) identifying identical sub-trees across all SVOs; the lowest tree levels are stored as an occupancy bit-field. As a last step, we build a bounding volume hierarchy for the AABBs as a spatial indexing structure. Our representation solves a compression rate limitation of related SVDAG works as labels only need to be stored along with each AABB and not in the graph encoding of their shape. Our compression is GPU-friendly as hardware ray tracing efficiently finds AABB intersections which we then traverse using a custom accelerated SVDAG traversal. Our method is able to path-trace a 113 GB volume on a consumer-grade GPU at 1 sample per pixel with up to 32 bounces at 108 FPS in a lossless representation, or at up to 1017 FPS when using dynamic level of detail.
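The layout described in the abstract can be pictured as a single pool of deduplicated SVDAG nodes shared by many labeled AABBs. The C++ sketch below is only an illustration of that idea under assumed names and field widths (`LabeledAABB`, `SVDAGNode`, a 64-bit 4×4×4 leaf mask); it is not the repository's actual binary format.

```cpp
// Illustrative layout sketch (assumed names/widths, not the real on-disk format).
#include <cstdint>
#include <vector>

// One fixed-size axis-aligned bounding box of a label's voxel grid.
// The label lives here, outside the SVDAG, so geometrically identical subtrees
// of *different* labels can still be deduplicated.
struct LabeledAABB {
    uint32_t labelId;      // segmentation label covered by this box
    uint32_t svdagRoot;    // index of the root node in the shared SVDAG node pool
    float    lower[3];     // world-space box bounds fed to the BVH builder
    float    upper[3];
};

// Interior SVDAG node: up to 8 children referenced by index into the shared pool.
struct SVDAGNode {
    uint32_t children[8];  // child indices, or a sentinel for "empty"
};

struct CompressedVolume {
    std::vector<LabeledAABB> boxes;          // leaves of the hardware BVH
    std::vector<SVDAGNode>   nodePool;       // deduplicated subtrees, shared by all boxes
    std::vector<uint64_t>    occupancyField; // 4x4x4 bit masks replacing the lowest levels
};
```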
`SegmentationVolumes/resources/shaders/trace` contains the GLSL shaders to traverse our proposed data structure:
- `svdag_occupancy_field`: custom traversal of the SVDAG and occupancy field inside an AABB (sketched below); called by `intersect` when an intersection with an AABB is found during the hardware-accelerated BVH traversal
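For orientation, here is a simplified CPU-side sketch of the lookup such a traversal has to resolve inside one AABB: descend the SVDAG by octant and finish in an occupancy bit mask. The real shader walks a ray through the structure instead of querying single voxels, and the 4×4×4/64-bit leaf size as well as all names (`voxelOccupied`, `EMPTY`) are assumptions, not the shader's actual code.

```cpp
// Simplified voxel lookup through an SVDAG with occupancy-bit-field leaves (assumed layout).
#include <cstdint>
#include <vector>

constexpr uint32_t EMPTY = 0xFFFFFFFFu;  // sentinel for an unoccupied child

struct SVDAGNode { uint32_t children[8]; };

// levels = number of interior SVDAG levels above the 4x4x4 leaf masks,
// i.e. the AABB covers (4 << levels) voxels per axis.
bool voxelOccupied(const std::vector<SVDAGNode>& nodes,
                   const std::vector<uint64_t>& occupancy,
                   uint32_t root, uint32_t levels,
                   uint32_t x, uint32_t y, uint32_t z) {
    uint32_t node = root;
    for (uint32_t level = levels; level > 0; --level) {
        // The two lowest bits per axis are covered by the leaf mask, so the
        // octant bit for this level sits at position level + 1.
        uint32_t shift = level + 1;
        uint32_t octant = ((x >> shift) & 1u)
                        | (((y >> shift) & 1u) << 1)
                        | (((z >> shift) & 1u) << 2);
        node = nodes[node].children[octant];  // at the last interior level this
        if (node == EMPTY) return false;      // index refers into `occupancy`
    }
    // Lowest levels: 4x4x4 voxels packed into one 64-bit occupancy mask.
    uint32_t bit = (x & 3u) | ((y & 3u) << 2) | ((z & 3u) << 4);
    return (occupancy[node] >> bit) & 1u;
}
```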
Other files worth mentioning:
- main and rendering logic: `main`, `SegmentationVolumes`
- converter that builds our compressed format (data structure) from the raw data: `SegmentationVolumeConverter`
- SVO builder: `Octree`
- SVDAG builder: `DAG` (see the deduplication sketch after this list)
- SVDAG builder (GPU variant): `DAGGPU` and passes in `SegmentationVolumes/resources/shaders/builder`
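As a rough idea of what an SVDAG builder does, the sketch below merges identical subtrees bottom-up with a hash map over child index tuples. It is a minimal illustration under assumed names (`DAGBuilder`, `addNode`); the repository's `Octree`/`DAG`/`DAGGPU` builders are more involved, and the GPU variant runs in shader passes.

```cpp
// Minimal SVO-to-SVDAG deduplication sketch (assumed names, not the repository's classes).
#include <array>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct SVDAGNode { std::array<uint32_t, 8> children; };

struct ChildKeyHash {
    std::size_t operator()(const std::array<uint32_t, 8>& c) const {
        std::size_t h = 14695981039346656037ull;          // FNV-1a over the child indices
        for (uint32_t v : c) { h ^= v; h *= 1099511628211ull; }
        return h;
    }
};

class DAGBuilder {
public:
    // Returns the index of the (possibly shared) node with these children,
    // inserting it into the pool only if no identical subtree exists yet.
    uint32_t addNode(const std::array<uint32_t, 8>& children) {
        auto it = unique_.find(children);
        if (it != unique_.end()) return it->second;        // identical subtree found: reuse it
        uint32_t index = static_cast<uint32_t>(pool_.size());
        pool_.push_back({children});
        unique_.emplace(children, index);
        return index;
    }
    const std::vector<SVDAGNode>& nodes() const { return pool_; }

private:
    std::vector<SVDAGNode> pool_;                          // shared node pool of the SVDAG
    std::unordered_map<std::array<uint32_t, 8>, uint32_t, ChildKeyHash> unique_;
};
```

Because labels are stored per AABB rather than in the nodes, boxes of different labels whose shapes coincide can point at the same subtree indices in this pool.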
Requirements:
- Vulkan 1.3
- GPU with ray tracing support
- OpenMP
- TBB

Other dependencies are included as submodules in `SegmentationVolumes/lib/` and `VkRaven/lib/`.
Tested on Linux with GCC 14.1.1 on NVIDIA RTX 3070 and NVIDIA RTX 4070 Ti SUPER GPUs.
Windows is not officially supported; however, we got it to run using MinGW.
```bash
# clone
git clone --recursive git@github.com:MircoWerner/SegmentationVolumeCompression.git
cd SegmentationVolumeCompression

# download test dataset
mkdir -p data/mouse
cd data/mouse
curl https://l4dense2019.brain.mpg.de/webdav/dendrites.hdf5 --output dendrites.hdf5
curl https://l4dense2019.brain.mpg.de/webdav/mapped-segmentation-volume/x0y0z0.hdf5 --output x0y0z0.hdf5 # you can download as many volume parts from the mouse cortex as you wish
cd ../..

# build
cd SegmentationVolumes
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make

# convert test dataset
./segmentationvolumes ../../data mouse --convert # this will take a while

# run
./segmentationvolumes ../../data mouse
```
To add your own dataset:
- Create a data folder. Inside, create a folder for the dataset (e.g., mouse) and place the raw data files in it.
- Write a converter similar to `MouseConverter` to extract non-empty voxels and labels from the raw data files (see the sketch after the commands below).
- Add the converter to the `main` file.
- Write a scene/dataset file similar to `mouse`.

```bash
./segmentationvolumes /path/to/data yourdatasetname --convert
./segmentationvolumes /path/to/data yourdatasetname
```
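As a hedged illustration of the converter's job (not the interface the repository's converters actually implement), the following sketch collects the non-empty voxels of every label from a dense raw segmentation volume; names such as `extractLabeledVoxels` and the convention that label 0 means empty are assumptions.

```cpp
// Hypothetical extraction step in the spirit of a dataset converter:
// gather occupied voxels per label from a dense raw volume.
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Voxel { uint32_t x, y, z; };

// Walk a dense raw segmentation volume (label 0 assumed empty) and collect the
// occupied voxels of every label. These per-label voxel lists are what the
// compression stage turns into AABB grids, SVOs, and finally the shared SVDAG.
std::unordered_map<uint64_t, std::vector<Voxel>>
extractLabeledVoxels(const std::vector<uint64_t>& raw,
                     uint32_t dimX, uint32_t dimY, uint32_t dimZ) {
    std::unordered_map<uint64_t, std::vector<Voxel>> voxelsPerLabel;
    for (uint32_t z = 0; z < dimZ; ++z)
        for (uint32_t y = 0; y < dimY; ++y)
            for (uint32_t x = 0; x < dimX; ++x) {
                uint64_t label = raw[(static_cast<std::size_t>(z) * dimY + y) * dimX + x];
                if (label != 0) voxelsPerLabel[label].push_back({x, y, z});
            }
    return voxelsPerLabel;
}
```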
```
Usage: segmentationvolumes [--help] [--version] [--rayquery] [--evaluate] [--convert] data scene

Positional arguments:
  data           path to the data folder (e.g. /path/to/data/) [required]
  scene          name of the scene (in resources/scenes/<name>.xml) [required]

Optional arguments:
  -h, --help     shows help message and exits
  -v, --version  prints version information and exits
  --rayquery     use ray query in compute shader instead of ray tracing pipeline
  --evaluate     perform evaluation on given scene (measure rendering performance and store rendered image)
  --convert      perform conversion from raw data to compressed format
```
Controls:
- `WASD` to move the camera
- `Space` or `E` to move the camera up
- `Shift` or `Q` to move the camera down
- Hold `Ctrl` to move faster
- Hold `Right Mouse Button` to look around
- `Esc` to close the application
We would like to thank the creators and authors of the datasets and libraries we used:
- Cells: ROSENBAUER, J., BERGHOFF, M., and SCHUG, A. “Emerging Tumor Development by Simulating Single-cell Events”. bioRxiv (2020). DOI: 10.1101/2020.08.24.264150
- C.Elegans: WITVLIET, D., MULCAHY, B., MITCHELL, J. K., et al. “Connectomes across development reveal principles of brain maturation”. Nature 596.7871 (2021), 257–261. DOI: 10.1038/s41586-021-03778-8
- Mouse: MOTTA, A., BERNING, M., BOERGENS, K. M., et al. “Dense connectomic reconstruction in layer 4 of the somatosensory cortex”. Science 366.6469 (2019), eaay3134. DOI: 10.1126/science.aay3134
- argsparse, glfw, glm, imgui, pugixml, SPIRV-Reflect, stb, tinygltf, tinyobjloader, and HighFive.