This repository was created for the sole purpose of documenting a working method for installing the CMU Perceptual Computing Lab's OpenPose, commit 1d95a1a67f543ac76e6519941a3cfea55e2d5743 (openpose/master at the moment of creation), in CPU_ONLY mode with the linked Caffe version on a macOS computer with an M1 chip.
**Update 2024/06/05:** The Python API can also be built without any further modification. Just check the `BUILD_PYTHON` box when you open the CMake GUI.
Caffe's documentation requests a Python 2 version, but the usual environment managers on a Mac with an M1 chip do not support setting a 2.x version, leaving only the 3.x option. This requires changes to some files. Additionally, the CMake files provided in the OpenPose repository (including Caffe's) did not come with C++ compiler definitions and flags suitable for a smooth installation. Failing to find a suitable C++ version while compiling produced various errors and time-consuming repairs during the build process. The highest C++ version required was C++17, so that version was set as the default for the complete build.
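As an illustration, pinning the standard for a whole CMake build is typically done near the top of the relevant `CMakeLists.txt`. This is a sketch of the idea; the exact placement and variables in this repo's commits may differ:

```cmake
# Sketch: force C++17 for every target in the build,
# instead of letting each target fall back to an older default.
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
```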
Below is a list of the common errors encountered by the community and experienced by the author. The changes made in the last commit e0e5833 (this repo/master) prevent these errors and guarantee a smooth build of OpenPose and Caffe.
- `C++ versions less than C++14 are not supported`
- `'random_shuffle': is not a member of 'std'`
- `no member/type names 'xx' in namespace 'yy'`
- `Could NOT find vecLib (missing: vecLib_INCLUDE_DIR)`
- `'cblas.h' file not found #include <cblas.h>`
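For example, the `random_shuffle` error comes from `std::random_shuffle` being removed in C++17, so affected Caffe call sites have to migrate to `std::shuffle` with an explicit random engine. A minimal sketch of the replacement (the function name here is illustrative, not taken from Caffe's sources):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Pre-C++17 code could write:
//   std::random_shuffle(idx.begin(), idx.end());
// Under C++17 this fails with "'random_shuffle': is not a member of 'std'".
// The replacement, std::shuffle, takes a random engine explicitly:
std::vector<int> shuffled_indices(std::size_t n) {
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0); // fill with 0, 1, ..., n-1
    std::mt19937 gen(std::random_device{}());
    std::shuffle(idx.begin(), idx.end(), gen);
    return idx;
}
```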
Disclaimer:
This repository is a copy of the OpenPose repository together with its Caffe git submodule, so I do not own the content uploaded here. Only the changes made in the commits signed by me (franzcrs), including commit e0e5833 (this repo/master), are of my authorship.
- Create a virtual environment. This assumes you already have a Python environment manager; I work with miniforge3, following the steps here

```
conda create -n openpose python=3.9
conda activate openpose
```
- Download the models from an alternative source
- Open a terminal in the folder you wish and clone this repository

```
git clone https://github.com/franzcrs/openpose-with-caffe-for-MacM1
```

- Copy the models into the corresponding folders (face, hand, pose) inside the repository's `models/` folder
- Install the Xcode Command Line Tools and Homebrew if you haven't already

```
xcode-select --install
bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew update
```
- Install the CMake cask

```
brew install --cask cmake
```
- Enter the cloned repo and install the required dependencies with the modified install_deps.sh

```
cd openpose-with-caffe-for-MacM1
bash scripts/osx/install_deps.sh
```
- Create a build folder

```
mkdir build
cd build
```

- Run the CMake GUI

```
cmake-gui ..
```
- Make sure the folder paths point to the repository folder and the build folder. The image is an example.
- Make sure `BUILD_CAFFE` is checked and `GPU_MODE` is set to `CPU_ONLY`. Optionally, you can check `BUILD_PYTHON` to build the Python API (tested). Click Configure. Then, with the default options, click Finish. The monitor should show:

```
Configuring done (x.xs)
```
- Click Generate. The monitor should show:

```
Generating done (x.xs)
```
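If you prefer the command line over the GUI, the same Configure and Generate steps can presumably be expressed with the variable names shown in the GUI checkboxes. A sketch, run from the build folder (not verified against this repo's exact CMake cache):

```
cmake -DBUILD_CAFFE=ON -DGPU_MODE=CPU_ONLY -DBUILD_PYTHON=ON ..
```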
- Finally, run make. (In case of error, you may want to enable verbose output by adding `VERBOSE=1` at the end of the command.)

```
make -j`sysctl -n hw.logicalcpu`
```
- The installation should finish by showing in the terminal:

```
[100%] Built target openpose_wrapper
Built target openpose_lib
```
- Test by using the webcam's real-time video

```
cd ..
./build/examples/openpose/openpose.bin
```
OpenPose represents the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (in total 135 keypoints) on single images.
It is authored by Ginés Hidalgo, Zhe Cao, Tomas Simon, Shih-En Wei, Yaadhav Raaj, Hanbyul Joo, and Yaser Sheikh. It is maintained by Ginés Hidalgo and Yaadhav Raaj. OpenPose would not be possible without the CMU Panoptic Studio dataset. We would also like to thank all the people who have helped OpenPose in any way.
Authors Ginés Hidalgo (left) and Hanbyul Joo (right) in front of the CMU Panoptic Studio
Testing OpenPose: (Left) Crazy Uptown Funk flashmob in Sydney video sequence. (Center and right) Authors Ginés Hidalgo and Tomas Simon testing face and hands
Tianyi Zhao testing the OpenPose 3D Module
Tianyi Zhao and Ginés Hidalgo testing the OpenPose Unity Plugin
We show an inference time comparison between the 3 available pose estimation libraries (same hardware and conditions): OpenPose, Alpha-Pose (fast Pytorch version), and Mask R-CNN. The OpenPose runtime is constant, while the runtimes of Alpha-Pose and Mask R-CNN grow linearly with the number of people. More details here.
Main Functionality:
- 2D real-time multi-person keypoint detection:
- 15, 18 or 25-keypoint body/foot keypoint estimation, including 6 foot keypoints. Runtime invariant to number of detected people.
- 2x21-keypoint hand keypoint estimation. Runtime depends on number of detected people. See OpenPose Training for a runtime invariant alternative.
- 70-keypoint face keypoint estimation. Runtime depends on number of detected people. See OpenPose Training for a runtime invariant alternative.
- 3D real-time single-person keypoint detection:
- 3D triangulation from multiple single views.
- Synchronization of Flir cameras handled.
- Compatible with Flir/Point Grey cameras.
- Calibration toolbox: Estimation of distortion, intrinsic, and extrinsic camera parameters.
- Single-person tracking for further speedup or visual smoothing.
Input: Image, video, webcam, Flir/Point Grey, IP camera, and support to add your own custom input source (e.g., depth camera).
Output: Basic image + keypoint display/saving (PNG, JPG, AVI, ...), keypoint saving (JSON, XML, YML, ...), keypoints as array class, and support to add your own custom output code (e.g., some fancy UI).
OS: Ubuntu (20, 18, 16, 14), Windows (10, 8), Mac OSX, Nvidia TX2.
Hardware compatibility: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.
Usage Alternatives:
- Command-line demo for built-in functionality.
- C++ API and Python API for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.
For further details, check the major released features and release notes docs.
- OpenPose training code
- OpenPose foot dataset
- OpenPose Unity Plugin
- OpenPose papers published in IEEE TPAMI and CVPR. Cite them in your publications if OpenPose helps your research! (Links and more details in the Citation section below).
If you want to use OpenPose without installing or writing any code, simply download and use the latest Windows portable version of OpenPose!
Otherwise, you could build OpenPose from source. See the installation doc for all the alternatives.
Simply use the OpenPose Demo from your favorite command-line tool (e.g., Windows PowerShell or Ubuntu Terminal). E.g., this example runs OpenPose on your webcam and displays the body keypoints:
```
# Ubuntu
./build/examples/openpose/openpose.bin
```

```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi
```
You can also add any of the available flags in any order. E.g., the following example runs on a video (`--video {PATH}`), enables face (`--face`) and hands (`--hand`), and saves the output keypoints to JSON files on disk (`--write_json {PATH}`).
```
# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/
```

```
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
```
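On a CPU-only build like the one described in this repository, inference at the default network resolution is slow; a common mitigation is lowering `--net_resolution` (a standard OpenPose flag; the value below is only an example of the speed/accuracy trade-off, not a recommendation from the authors):

```
# Smaller network input -> faster inference, lower accuracy
./build/examples/openpose/openpose.bin --net_resolution -1x256
```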
Optionally, you can also extend OpenPose's functionality from its Python and C++ APIs. After installing OpenPose, check its official doc for a quick overview of all the alternatives and tutorials.
Our library is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...
- Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPose.
- Want to add/show some cool functionality/demo/project made on top of OpenPose. We can add your project link to our Community-based Projects section or even integrate it with OpenPose!
Please cite these papers in your publications if OpenPose helps your research. All of OpenPose is based on OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, while the hand and face detectors also use Hand Keypoint Detection in Single Images using Multiview Bootstrapping (the face detector was trained using the same procedure as the hand detector).
```
@article{8765346,
  author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2019}
}

@inproceedings{simon2017hand,
  author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
  year = {2017}
}

@inproceedings{cao2017realtime,
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2017}
}

@inproceedings{wei2016cpm,
  author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Convolutional pose machines},
  year = {2016}
}
```
Paper links:
- OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
- Hand Keypoint Detection in Single Images using Multiview Bootstrapping
- Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
- Convolutional Pose Machines
OpenPose is freely available for non-commercial use and may be redistributed under these conditions. Please see the license for further details. Interested in a commercial license? Check this FlintBox link. For commercial queries, use the Contact section from the FlintBox link and also send a copy of that message to Yaser Sheikh.