A contact solver for physics-based simulations involving shells, solids, and rods. All made by ZOZO. Published in ACM Transactions on Graphics (TOG).
- Robust: Contact resolutions are penetration-free. No snagging intersections.
- Scalable: Extreme cases exceed 150 million contacts, not just one million.
- Cache Efficient: Everything runs on the GPU in single precision. No double precision.
- Inextensible: Cloth never stretches beyond strict upper bounds, such as 1%.
- Physically Accurate: Our deformable solver is driven by the Finite Element Method.
- Massively Parallel: Both the contact and elasticity solvers run on the GPU.
- Docker Sealed: Everything is designed to work out of the box.
- JupyterLab Included: Open your browser and run examples right away [Video].
- Stay Clean: You can remove all traces after use.
- Open: We have opted for the Apache v2.0 license.
- Main video [Video]
- Additional video examples [Directory]
- Presentation videos [Short][Long]
- Main paper [PDF][Hindsight]
- Supplementary PDF [PDF]
- Supplementary scripts [Directory]
- Singular-value eigenanalysis [Markdown]
- A modern NVIDIA GPU (Turing or newer); a quick way to check is sketched below.
- A Docker environment (see below).
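If you are unsure whether your GPU qualifies, here is a quick host-side check (a convenience sketch, not part of the required steps); Turing covers the GeForce RTX 20 / GTX 16 series, the Tesla T4, and anything newer:

# check the GPU model and driver version on the host (requires the NVIDIA driver)
nvidia-smi --query-gpu=name,driver_version --format=csv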
- (2024.12.18) Added a frictional contact example: armadillo sliding on the slope [Video].
- (2024.12.18) Added a hindsight noting that the tilt angle was not $30^\circ$, but rather $26.57^\circ$.
- (2024.12.16) Removed thrust dependencies to fix runtime errors for driver version 560.94 [Issue Link].
Our frontend is accessible through a browser using our built-in JupyterLab interface. Everything is set up when you open it for the first time. Results can be interactively viewed through the browser and exported as needed.
This allows you to interact with the simulator on your laptop while the actual simulation runs on a remote headless server over the internet. This means you don't have to buy hardware; you can rent it at vast.ai or RunPod for less than $1 per hour. For example, this [Video] was recorded on a vast.ai instance. The experience is good!
Here's an example of draping five sheets over a sphere with two corners pinned. Please look into the examples directory for more examples.
# import our frontend
from frontend import App
# make an app with the label "drape"
app = App("drape", renew=True)
# create a square mesh resolution 128 spanning the xz plane
V, F = app.mesh.square(res=128, ex=[1,0,0], ey=[0,0,1])
# add to the asset and name it "sheet"
app.asset.add.tri("sheet", V, F)
# create an icosphere mesh radius 0.5 and 5 subdivisions
V, F = app.mesh.icosphere(r=0.5, subdiv_count=5)
# add to the asset and name it "sphere"
app.asset.add.tri("sphere", V, F)
# create a scene "five-sheets"
scene = app.scene.create("five-sheets")
# define gap between sheets
gap = 0.01
for i in range(5):
    # add a sheet to the scene
    obj = scene.add("sheet")
    # pick the two vertices farthest along directions [1,0,-1] and [-1,0,-1]
    corner = obj.grab([1, 0, -1]) + obj.grab([-1, 0, -1])
    # place it with a vertical offset and pin the corners
    obj.at(0, gap * i, 0).pin(corner)
    # set the fiber directions required for the Baraff-Witkin model
    obj.direction([1, 0, 0], [0, 0, 1])
# add a sphere mesh at a lower position and set it to a static collider
scene.add("sphere").at(0, -0.5 - gap, 0).pin()
# compile the scene and report stats
fixed = scene.build().report()
# interactively preview the built scene (image left)
fixed.preview()
# set simulation parameter(s)
param = app.session.param()
param.set("dt", 0.01)
# create a new session with a name
session = app.session.create("dt-001").init(fixed)
# start the simulation and live-preview the results (image right)
session.start(param).preview()
# also show streaming logs
session.stream()
# or interactively view the animation sequences
session.animate()
# export all simulated frames (downloadable from the file browser)
session.export_animation(f"export/{session.info.name}")
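As an optional follow-up (a sketch, not part of the original example), the same calls demonstrated above can be reused to launch a second session with a smaller time step and compare it against "dt-001"; the session name "dt-0005" is just an illustrative label:

# sketch: rerun the same scene with a smaller time step for comparison
param = app.session.param()
param.set("dt", 0.005)
# create a second session from the same compiled scene
session_small = app.session.create("dt-0005").init(fixed)
# run it, preview the results, and export the frames
session_small.start(param).preview()
session_small.export_animation(f"export/{session_small.info.name}")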
| woven | stack | trampoline | needle |
|:---:|:---:|:---:|:---:|
| cards | codim | hang | trapped |
| domino | noodle | drape | quintuple |
| ribbon | curtain | fishingknot | friction |
Not all examples are ready yet, but they will be added and updated one by one. The author is actively working on it.
All the steps below are verified to run without errors via automated GitHub Actions (see `.github/workflows/getting-started.yml`).
The tested runner is the Ubuntu NVIDIA GPU-Optimized Image for AI and HPC with an NVIDIA Tesla T4 (16 GB VRAM) and driver version 550.127.05. This is not a self-hosted runner, meaning that each time the runner launches, all environments are fresh.
We provide uninterrupted recorded installation videos (Windows [Video], Linux [Video], and vast.ai [Video]) to reduce stress during the installation process. We encourage you to check them out to get a sense of how things go and how long each step takes.
To get the ball rolling, we'll configure a Docker environment to minimize any trouble that may hit you.
Note
If you wish to install our solver on a headless remote machine, SSH into the server with port forwarding using the following command:
ssh -L 8080:localhost:8080 user@remote_server_address
This port will be used to access the frontend afterward. The two port numbers `8080` must match the value we set for `$MY_WEB_PORT` below.
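If you connect to the server repeatedly, the same forwarding can be kept in an SSH config entry. This is only a convenience sketch; the alias ppf-remote is hypothetical, and user / remote_server_address are the placeholders from the command above:

# sketch: equivalent entry in ~/.ssh/config, so that "ssh ppf-remote" sets up the forwarding
Host ppf-remote
    HostName remote_server_address
    User user
    LocalForward 8080 localhost:8080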
First, install the CUDA Toolkit [Link] along with the driver on your host system. Next, follow the instructions below specific to the operating system running on the host.
Install the latest version of Docker Desktop [Link] on the host computer. You may need to log out or reboot after the installation. After logging back in, launch Docker Desktop to ensure that Docker is running. Then, create a container by running the following Docker command in PowerShell:
$MY_WEB_PORT = 8080 # Port number for JupyterLab web browsing
$MY_TIME_ZONE = "Asia/Tokyo" # Your time zone
$MY_CONTAINER_NAME = "ppf-contact-solver" # Container name
docker run -it `
--gpus all `
-p ${MY_WEB_PORT}:8080 `
-e TERM `
-e TZ=$MY_TIME_ZONE `
-e LANG=en_US.UTF-8 `
--hostname ppf-dev `
--name $MY_CONTAINER_NAME `
-e NVIDIA_DRIVER_CAPABILITIES="graphics,compute,utility" `
nvidia/cuda:11.8.0-devel-ubuntu22.04
Windows users do not need to install the NVIDIA Container Toolkit.
Linux users will also need to install Docker on their system. Please refer to the installation guide [Link]. Also, install the NVIDIA Container Toolkit by following the guide [Link]. Then, create a container by running the following Docker command:
MY_WEB_PORT=8080 # Port number for JupyterLab web browsing
MY_TIME_ZONE=Asia/Tokyo # Your time zone
MY_CONTAINER_NAME=ppf-contact-solver # Container name
docker run -it \
--gpus all \
-p $MY_WEB_PORT:8080 \
-e TERM -e TZ=$MY_TIME_ZONE \
-e LANG=en_US.UTF-8 \
--hostname ppf-dev \
--name $MY_CONTAINER_NAME \
-e NVIDIA_DRIVER_CAPABILITIES=graphics,compute,utility \
nvidia/cuda:11.8.0-devel-ubuntu22.04
At the end of the output, you should see:
root@ppf-dev:/#
From here on, all commands will happen in the container, not on your host. Next, we'll make sure that an NVIDIA driver is visible from the Docker container. Try this:
nvidia-smi
If successful, this will respond with something like this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:C1:00.0 Off | Off |
| 64% 51C P2 188W / 450W | 4899MiB / 24564MiB | 91% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Note
If an error occurs, ensure that `nvidia-smi` is working on your host. For Linux users, make sure the NVIDIA Container Toolkit is properly installed. If the issue persists, try running `sudo service docker restart` on your host to resolve it.
Please confirm that your GPU is listed here. Now let's get the installation started. No worries; all the commands below only disturb things inside the container, so your host environment stays clean. First, install the following packages:
apt update
apt install -y git python3
Next, clone our repository:
git clone https://github.com/st-tech/ppf-contact-solver.git
Move into the `ppf-contact-solver` directory and let `warmup.py` do all the rest:
cd ppf-contact-solver
python3 warmup.py
Note
If you're suspicious, you can look around `warmup.py` before you proceed. Run `less warmup.py`, scroll all the way to the bottom, and hit `q` to quit.
Now we're set. Let's kick off the compilation!
source "$HOME/.cargo/env"
cargo build --release
Be patient; this takes some time. If the last line says
Finished `release` profile [optimized] target(s) in ...
We're done! Start our frontend by running
python3 warmup.py jupyter
and now you can access our JupyterLab frontend from http://localhost:8080 in your browser.
The port number `8080` is the one we set for `$MY_WEB_PORT`.
Enjoy!
To remove all traces, simply stop the container and delete it. Be aware that all simulation data will also be lost. Back up any important data if needed.
docker stop $MY_CONTAINER_NAME
docker rm $MY_CONTAINER_NAME
Note
If you wish to completely wipe what we've done here, you may also need to purge the Docker image by running:
docker rmi $(docker images | grep 'nvidia/cuda' | grep '11.8.0-devel-ubuntu22.04' | awk '{print $3}')
but don't do this if you still need it.
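If you want to double-check first (an optional safeguard, not part of the original steps), you can list any containers that still reference the image before removing it:

# list containers (running or stopped) based on this image; the list should be empty
docker ps -a --filter ancestor=nvidia/cuda:11.8.0-devel-ubuntu22.04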
Running on vast.ai
The exact same steps above should work (see `.github/workflows/getting-started-vast.yml`), except that you'll need to create a Docker template. Here's one:
- Image Path/Tag: `nvidia/cuda:11.8.0-devel-ubuntu22.04`
- Docker Options: `-e TZ=Asia/Tokyo -p 8080:8080` (your time zone, of course)
- Make sure to select "Run interactive shell server, SSH".
- When connecting via SSH, make sure to include `-L 8080:localhost:8080` in the command.
- For a better experience, choose a geographically nearby server with a high connection speed.
- Also, make sure to allocate a large disk space, such as 64GB.
Running on RunPod
You can deploy our solver on a RunPod instance. To do this, we need to select an official RunPod Docker image instead. Here's how:
- Container Image: `runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04`
- Expose HTTP Ports: empty
- Expose TCP Ports: `22`
- When connecting via SSH, make sure to include `-L 8080:localhost:8080` in the command.
- For a better experience, choose a geographically nearby server with a high connection speed.
- Also, make sure to allocate a large disk space, such as 64GB.
- Make sure to select "SSH Terminal Access".
- Deselect "Start Jupyter Notebook".
This project is licensed under the Apache v2.0 license.
The author would like to thank ZOZO, Inc. for allowing him to work on this topic as part of his main workload. The author also extends thanks to the teams in the IP department for permitting the publication of our technical work and the release of our code, as well as to many others for assisting with the internal paperwork required for publication.
@article{Ando2024CB,
author = {Ando, Ryoichi},
title = {A Cubic Barrier with Elasticity-Inclusive Dynamic Stiffness},
year = {2024},
issue_date = {December 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {43},
number = {6},
issn = {0730-0301},
url = {https://doi.org/10.1145/3687908},
doi = {10.1145/3687908},
journal = {ACM Trans. Graph.},
month = nov,
articleno = {224},
numpages = {13},
keywords = {collision, contact}
}
It should be emphasized that this work was strongly inspired by IPC (Incremental Potential Contact). The author kindly encourages citing their original work as well.