ZOZO's Contact Solver 🫢

A contact solver for physics-based simulations involving πŸ‘š shells, πŸͺ΅ solids and πŸͺ’ rods. All made by ZOZO. Published in ACM Transactions on Graphics (TOG).

solver logo

✨ Highlights

  • πŸ’ͺ Robust: Contact resolutions are penetration-free. No snagging intersections.
  • ⏲ Scalable: An extreme case resolves more than 150 million contacts, not just one million.
  • 🚲 Cache Efficient: Everything runs on the GPU in single precision; no double precision is used.
  • πŸ₯Ό Inextensible: Cloth never stretches beyond a strict upper bound, such as 1%.
  • πŸ“ Physically Accurate: Our deformable solver is driven by the Finite Element Method.
  • πŸš€ Massively Parallel: Both contact and elasticity solvers are run on the GPU.
  • 🐳 Docker Sealed: Everything is designed to work out of the box.
  • 🌐 JupyterLab Included: Open your browser and run examples right away [Video].
  • ✨ Stay Clean: You can remove all traces after use.
  • πŸ‘Œ Open: We have opted for the Apache v2.0 license.

πŸŽ“ Technical Materials

⚑️ Requirements

  • πŸ”₯ A modern NVIDIA GPU (Turing or newer).
  • 🐳 A Docker environment (see below).

πŸ“ Change History

  • (2024.12.18) Added a frictional contact example: armadillo sliding on the slope [Video].
  • (2024.12.18) Added a note clarifying that the tilt angle was not $30^\circ$, but rather $26.57^\circ$.
  • (2024.12.16) Removed Thrust dependencies to fix runtime errors with driver version 560.94 [Issue Link].

🐍 How To Use

Our frontend is accessible through 🌐 a browser using our built-in JupyterLab 🐍 interface. Everything is set up when you open it for the first time. Results can be interactively viewed through the browser and exported as needed.

This allows you to interact with the simulator on your πŸ’» laptop while the actual simulation runs on a remote headless server over 🌍 the internet. This means that you don't have to buy βš™οΈ hardware, but can rent it at vast.ai or RunPod for less than πŸ’΅ $1 per hour. For example, this [Video] was recorded on a vast.ai instance. The experience is πŸ‘ good!

Here's an example of draping five sheets over a sphere with two corners pinned. Please look into the examples directory for more.

# import our frontend
from frontend import App

# make an app with the label "drape"
app = App("drape", renew=True)

# create a square mesh with resolution 128 spanning the xz plane
V, F = app.mesh.square(res=128, ex=[1,0,0], ey=[0,0,1])

# add to the asset and name it "sheet"
app.asset.add.tri("sheet", V, F)

# create an icosphere mesh with radius 0.5 and 5 subdivisions
V, F = app.mesh.icosphere(r=0.5, subdiv_count=5)

# add to the asset and name it "sphere"
app.asset.add.tri("sphere", V, F)

# create a scene "five-sheets"
scene = app.scene.create("five-sheets")

# define gap between sheets
gap = 0.01

for i in range(5):
    
    # add a sheet to the scene
    obj = scene.add("sheet")

    # pick the two vertices farthest in the directions [1,0,-1] and [-1,0,-1]
    corner = obj.grab([1, 0, -1]) + obj.grab([-1, 0, -1])

    # place it with a vertical offset and pin the corners
    obj.at(0, gap * i, 0).pin(corner)

    # set fiber directions required for the Baraff-Witkin model
    obj.direction([1, 0, 0], [0, 0, 1])

# add a sphere mesh at a lower position and set it to a static collider
scene.add("sphere").at(0, -0.5 - gap, 0).pin()

# compile the scene and report stats
fixed = scene.build().report()

# interactively preview the built scene (image left)
fixed.preview()

# set simulation parameter(s)
param = app.session.param()
param.set("dt", 0.01)

# create a new session with a name
session = app.session.create("dt-001").init(fixed)

# start the simulation and live-preview the results (image right)
session.start(param).preview()

# also show streaming logs
session.stream()

# or interactively view the animation sequences
session.animate()

# export all simulated frames (downloadable from the file browser)
session.export_animation(f"export/{session.info.name}")

drape
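
As a quick follow-up, here is a minimal sketch of how one might run a second session on the same built scene with a different time step and export both runs for comparison. It only reuses the calls shown in the example above; the dt value 0.005 and the session name "dt-0005" are illustrative choices, not part of the original example.

# a sketch, not part of the original example: rerun with a smaller time step
# (assumes the app and the built scene `fixed` from the example above)
param = app.session.param()
param.set("dt", 0.005)  # illustrative value

# create another session and run it on the same fixed scene
session_small = app.session.create("dt-0005").init(fixed)
session_small.start(param).preview()

# export the frames of this run next to the first one
session_small.export_animation(f"export/{session_small.info.name}")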

πŸ–ΌοΈ Catalogue

woven Β· stack Β· trampoline Β· needle Β· cards Β· codim Β· hang Β· trapped Β· domino Β· noodle Β· drape Β· quintuple Β· ribbon Β· curtain Β· fishingknot Β· friction

Not all examples are ready yet, but they will be added and updated one by one. The author is actively working on them.

πŸ’¨ Getting Started

πŸ› οΈ All the steps below are verified to run without errors via automated GitHub Actions βš™οΈ (see .github/workflows/getting-started.yml ).

The tested πŸš€ runner is the Ubuntu NVIDIA GPU-Optimized Image for AI and HPC with an NVIDIA Tesla T4 (16 GB VRAM) and driver version 550.127.05. This is not a self-hosted runner, meaning that each time the runner launches, all environments are 🌱 fresh.

πŸŽ₯ Installation Videos

We provide uninterrupted recorded installation videos (πŸͺŸ Windows [Video], 🐧 Linux [Video] and ☁ vast.ai [Video]) to reduce stress 😣 during the installation process. We encourage you to πŸ‘€ check them out to get a sense of how things go ⏳ and how long ⏱️ each step takes.

🐳 Installing Docker

To get the ball ⚽ rolling, we'll configure a Docker environment 🐳 to minimize any trouble 🀯 that might πŸ₯Š hit you.

Note

If you wish to install our solver on a headless remote machine, SSH into the server with port forwarding using the following command:

ssh -L 8080:localhost:8080 user@remote_server_address

This port will be used to access the frontend afterward. Both port numbers (8080) must match the value we set for $MY_WEB_PORT below.

First, install the CUDA Toolkit [Link] along with the driver on your host system. Next, follow the instructions below specific to the operating system running on the host.

πŸͺŸ Windows

Install the latest version of Docker Desktop [Link] on the host computer. You may need to log out or reboot after the installation. After logging back in, launch Docker Desktop to ensure that Docker is running. Then, create a container πŸ“¦ by running the following Docker command in PowerShell:

$MY_WEB_PORT = 8080  # Port number for JupyterLab web browsing
$MY_TIME_ZONE = "Asia/Tokyo"  # Your time zone
$MY_CONTAINER_NAME = "ppf-contact-solver"  # Container name

docker run -it `
    --gpus all `
    -p ${MY_WEB_PORT}:8080 `
    -e TERM `
    -e TZ=$MY_TIME_ZONE `
    -e LANG=en_US.UTF-8 `
    --hostname ppf-dev `
    --name $MY_CONTAINER_NAME `
    -e NVIDIA_DRIVER_CAPABILITIES="graphics,compute,utility" `
    nvidia/cuda:11.8.0-devel-ubuntu22.04

Windows users do not need to install the NVIDIA Container Toolkit.

🐧 Linux

Linux users will also need to install Docker πŸ‹ on their system. Please refer to the installation guide [Link]. Also, install the NVIDIA Container Toolkit by following the guide [Link]. Then, create a container πŸ“¦ by running the following Docker command:

MY_WEB_PORT=8080  # Port number for JupyterLab web browsing
MY_TIME_ZONE=Asia/Tokyo  # Your time zone
MY_CONTAINER_NAME=ppf-contact-solver  # Container name

docker run -it \
    --gpus all \
    -p $MY_WEB_PORT:8080 \
    -e TERM -e TZ=$MY_TIME_ZONE \
    -e LANG=en_US.UTF-8 \
    --hostname ppf-dev \
    --name $MY_CONTAINER_NAME \
    -e NVIDIA_DRIVER_CAPABILITIES=graphics,compute,utility \
    nvidia/cuda:11.8.0-devel-ubuntu22.04

πŸͺŸπŸ§ Both Systems

At the end of the output, you should see:

root@ppf-dev:/#

From here on, all commands will happen in the πŸ“¦ container, not on your host. Next, we'll make sure that an NVIDIA driver is visible from the Docker container. Try this:

nvidia-smi

If successful, this will return something like the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:C1:00.0 Off |                  Off |
| 64%   51C    P2   188W / 450W |   4899MiB / 24564MiB |     91%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Note

If an error occurs πŸ₯΅, ensure that nvidia-smi is working on your host. For Linux users, make sure the NVIDIA Container Toolkit is properly installed. If the issue persists, try running sudo service docker restart on your host to resolve it.

Please confirm that your GPU is listed here. Now let's get the installation started. No worries πŸ€™; all the commands below only affect the container, so your host environment stays clean ✨. First, install the following packages:

apt update
apt install -y git python3

Next, clone our repository:

git clone https://github.com/st-tech/ppf-contact-solver.git

Move into the ppf-contact-solver directory and let warmup.py do the rest πŸ’€:

cd ppf-contact-solver
python3 warmup.py

Note

If you’re suspicious, you can look around warmup.py before you proceed. Run less warmup.py, scroll all the way to the bottom, and hit q to quit.

Now we're set. Let's kick off the compilation! πŸƒ

source "$HOME/.cargo/env"
cargo build --release

Be patient; this takes some time... ⏰⏰ If the last line says

Finished `release` profile [optimized] target(s) in ...

We're done! πŸŽ‰ Start our frontend by running:

python3 warmup.py jupyter

and now you can access our JupyterLab frontend at http://localhost:8080 in your 🌐 browser. The port number 8080 is the one we set for $MY_WEB_PORT. Enjoy! πŸ˜„

🧹 Cleaning Up

To remove all traces, simply stop πŸ›‘ the container and ❌ delete it. Be aware that all simulation data will also be lost. Back up any important data if needed.

docker stop $MY_CONTAINER_NAME
docker rm $MY_CONTAINER_NAME

Note

If you wish to completely wipe what we've done here, you may also need to purge the Docker image by running:

docker rmi $(docker images | grep 'nvidia/cuda' | grep '11.8.0-devel-ubuntu22.04' | awk '{print $3}')

but don't do this if you still need it.

☁ Running on vast.ai

The exact same steps above should work (see .github/workflows/getting-started-vast.yml), except that you'll need to create a Docker template. Here's one:

  • Image Path/Tag: nvidia/cuda:11.8.0-devel-ubuntu22.04
  • Docker Options: -e TZ=Asia/Tokyo -p 8080:8080 (Your time zone, of course)
  • Make sure to select βœ… Run interactive shell server, SSH.
  • When connecting via SSH, make sure to include -L 8080:localhost:8080 in the command.
  • For a better experience, choose a geographically nearby server with a high connection speed.
  • Also, make sure to allocate a large disk space, such as 64GB.

vast template

vast diskspace

πŸ“¦ Running on RunPod

You can deploy our solver on a RunPod instance. To do this, you need to select an official RunPod Docker image instead. Here's how:

  • Container Image: runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
  • Expose HTTP Ports: Empty
  • Expose TCP Ports: 22
  • When connecting via SSH, make sure to include -L 8080:localhost:8080 in the command.
  • For a better experience, choose a geographically nearby server with a high connection speed.
  • Also, make sure to allocate a large disk space, such as 64GB.
  • βœ… Make sure to select SSH Terminal Access
  • ❌ Deselect Start Jupyter Notebook

runpod template

runpod deploy

πŸ“ƒ License

πŸ“ This project is licensed under Apache v2.0 license.

πŸ™ Acknowledgements

The author would like to thank ZOZO, Inc. for allowing him to work on this topic as part of his main workload. The author also extends thanks to the teams in the IP department for permitting the publication of our technical work and the release of our code, as well as to many others for assisting with the internal paperwork required for publication.

πŸ–‹ Citation

@article{Ando2024CB,
  author = {Ando, Ryoichi},
  title = {A Cubic Barrier with Elasticity-Inclusive Dynamic Stiffness},
  year = {2024},
  issue_date = {December 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {43},
  number = {6},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3687908},
  doi = {10.1145/3687908},
  journal = {ACM Trans. Graph.},
  month = nov,
  articleno = {224},
  numpages = {13},
  keywords = {collision, contact}
}

It should be emphasized that this work was strongly inspired by IPC. The author kindly encourages citing their original work as well.
