diff --git a/.DS_Store b/.DS_Store
index e6769cf..c8e417b 100644
Binary files a/.DS_Store and b/.DS_Store differ
diff --git a/files/.DS_Store b/files/.DS_Store
index 6078d29..292d887 100644
Binary files a/files/.DS_Store and b/files/.DS_Store differ
diff --git a/files/paper.pdf b/files/paper.pdf
index 35621ff..3da919f 100644
Binary files a/files/paper.pdf and b/files/paper.pdf differ
diff --git a/index.html b/index.html
index 3f0a637..44e6520 100644
--- a/index.html
+++ b/index.html
@@ -126,7 +126,7 @@

Peiye Zhuang1
-Chuang Gan2,4
+Chuang Gan2
Evangelos Kalogerakis2,3
@@ -145,15 +145,14 @@

1 Snap Inc
2 UMass Amherst
3 TU Crete
-4 MIT-IBM Watson AI Lab

-[Paper]
-[Arxiv (soon)]
-[Code (soon)]
+[Paper]
+[Arxiv (soon)]
+[Code (soon)]
[BibTeX]

@@ -232,7 +231,7 @@

Abstract

Motivation

-
+

Existing motion prediction methods struggle with short-term, sparse predictions and often fail to deliver accurate 3D motion estimates, while optimization-based approaches require substantial time to process a single video. Ours is the first method capable of efficiently tracking every pixel in 3D space over hundreds of frames from monocular videos, and achieves
@@ -502,8 +501,8 @@

Detailed quantitative results can be found in our
Acknowledgements: We borrow this template from MonST3R and 4Real. The tracking visualization is inspired by CoTracker. The camera pose visualization tool is borrowed from MonST3R. We sincerely thank the authors for the open-source code.
-

+