zju3dv/EasyMocap

Make human motion capture easier.

EasyMocap is an open-source toolbox for markerless human motion capture and novel view synthesis from RGB videos. This project provides many motion-capture demos in different settings.


Core features

Multiple views of a single person


This is the basic code for fitting SMPL[^loper2015]/SMPL+H[^romero2017]/SMPL-X[^pavlakos2019]/MANO[^romero2017] model to capture body+hand+face poses from multiple views.
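Pipelines like this typically detect 2D keypoints in every view and triangulate them into 3D before (or while) fitting the body model. The sketch below shows generic direct-linear-transform (DLT) triangulation; it is an illustration under assumed camera conventions, not code from this repository.

```python
import numpy as np

def triangulate_point(projections, points_2d):
    """Triangulate one 3D point from N calibrated views via linear DLT.

    projections: list of 3x4 projection matrices P = K @ [R | t]
    points_2d:   list of (x, y) pixel observations, one per view
    """
    A = []
    for P, (x, y) in zip(projections, points_2d):
        # Each observation gives two linear constraints on the homogeneous point X
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Running this per joint over all frames yields a 3D skeleton that the SMPL parameters can then be optimized against.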



Videos are from ZJU-MoCap, with 23 calibrated and synchronized cameras.

Captured with 8 cameras.

Internet video

This part is the basic code for fitting SMPL[^loper2015] with 2D keypoint estimation[^cao2018][^hrnet] and CNN initialization[^kolotouros2019].
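SMPLify-style monocular fitting minimizes a confidence-weighted 2D reprojection error between the model's posed joints and the detected keypoints, usually under a weak-perspective camera. The snippet below is a minimal sketch of such an objective; the names and the camera model are illustrative, not this repository's API.

```python
import numpy as np

def reprojection_loss(joints_3d, keypoints_2d, conf, scale, trans):
    """Confidence-weighted 2D reprojection loss under a weak-perspective camera.

    joints_3d:    (J, 3) joints produced by the body model at the current pose
    keypoints_2d: (J, 2) detected 2D keypoints (e.g. from OpenPose)
    conf:         (J,)   detection confidences, used as per-joint weights
    scale, trans: weak-perspective camera: pixel = scale * X[:2] + trans
    """
    projected = scale * joints_3d[:, :2] + trans  # drop depth, then scale and shift
    residual = projected - keypoints_2d
    return float(np.sum(conf[:, None] * residual ** 2))
```

An optimizer then searches over pose, shape, and camera parameters to drive this loss down, with the CNN prediction providing the starting point.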


The raw video is from YouTube.

Internet video with a mirror



The raw video is from YouTube.

Multiple Internet videos with a specific action (Coming soon)



Internet videos of Roger Federer's serve

Multiple views of multiple people



Captured with 8 consumer cameras

Novel view synthesis from sparse views



Novel view synthesis for challenging motion (coming soon)

Novel view synthesis for human interaction

ZJU-MoCap

Together with our proposed methods, we release two large human-motion datasets: LightStage and Mirrored-Human. See the website for more details.

If you would like to download the ZJU-MoCap dataset, please sign the agreement and email it to Qing Shuai (s_q@zju.edu.cn), cc'ing Xiaowei Zhou (xwzhou@zju.edu.cn), to request the download link.


LightStage: captured with LightStage system

Mirrored-Human: collected from the Internet

Many works have achieved wonderful results based on our datasets.

Other features

3D Realtime visualization


Camera calibration


Calibration for intrinsic and extrinsic parameters
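Intrinsics (the focal lengths and principal point, collected in the matrix K) and extrinsics (a rotation R and translation t) together define how a world point maps to pixels; the calibration tool estimates exactly these quantities. A minimal forward projection, with made-up parameter values, looks like:

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a world point to pixel coordinates with a pinhole camera."""
    X_cam = R @ X_world + t  # extrinsics: world frame -> camera frame
    x = K @ X_cam            # intrinsics: camera frame -> homogeneous pixels
    return x[:2] / x[2]      # perspective divide

# Example intrinsics: fx = fy = 1000 px, principal point at (960, 540)
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
```

Calibration inverts this relation: given images of a known pattern, it solves for the K, R, and t that best explain the observed pixels.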

Annotator


Annotator for bounding box, keypoints and mask

Updates

  • 11/03/2022: Support MultiNeuralBody.
  • 12/25/2021: Support the MediaPipe keypoint detector.
  • 08/09/2021: Add a Colab demo here.
  • 06/28/2021: The multi-view multi-person part is released!
  • 06/10/2021: The real-time 3D visualization part is released!
  • 04/11/2021: The calibration tool and the annotator are released.
  • 04/11/2021: The Mirrored-Human part is released.

Installation

See documentation for more instructions.

Acknowledgements

Here are the great works this project is built upon:

  • The SMPL model and layers are from the MPII SMPL-X project.
  • Some functions are borrowed from SPIN, VIBE, and SMPLify-X.
  • The method for fitting the 3D skeleton and SMPL model is similar to SMPLify-X (with a 3D keypoint loss) and TotalCapture (without using point clouds).
  • We integrate some easy-to-use functions from previous great work:
    • easymocap/estimator/mediapipe_wrapper.py: MediaPipe
    • easymocap/estimator/SPIN: an SMPL estimator[^kolotouros2019]
    • easymocap/estimator/YOLOv4: an object detector[^bochkovskiy2020]
    • easymocap/estimator/HRNet: a 2D human pose estimator[^hrnet]

Contact

Please open an issue if you have any questions. We appreciate all contributions to improve our project.

Contributor

EasyMocap is built by researchers from the 3D vision group of Zhejiang University: Qing Shuai, Qi Fang, Junting Dong, Sida Peng, Di Huang, Hujun Bao, and Xiaowei Zhou.

We would like to thank Wenduo Feng, Di Huang, Yuji Chen, Hao Xu, Qing Shuai, Qi Fang, Ting Xie, Junting Dong, Sida Peng and Xiaopeng Ji, who are the performers in the sample data. We would also like to thank all the people who have helped EasyMocap in any way.

Citation

This project is part of our works iMocap, Mirrored-Human, mvpose, Neural Body, MultiNeuralBody, and ENeRF.

Please consider citing these works if you find this repo useful for your projects.

@misc{easymocap,
  title={EasyMoCap - Make human motion capture easier.},
  howpublished={GitHub},
  year={2021},
  url={https://github.com/zju3dv/EasyMocap}
}

@inproceedings{shuai2022multinb,
  title={Novel View Synthesis of Human Interactions from Sparse Multi-view Videos},
  author={Shuai, Qing and Geng, Chen and Fang, Qi and Peng, Sida and Shen, Wenhao and Zhou, Xiaowei and Bao, Hujun},
  booktitle={SIGGRAPH Conference Proceedings},
  year={2022}
}

@inproceedings{lin2022efficient,
  title={Efficient Neural Radiance Fields for Interactive Free-viewpoint Video},
  author={Lin, Haotong and Peng, Sida and Xu, Zhen and Yan, Yunzhi and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={SIGGRAPH Asia Conference Proceedings},
  year={2022}
}

@article{dong2021fast,
  title={Fast and Robust Multi-Person 3D Pose Estimation and Tracking from Multiple Views},
  author={Dong, Junting and Fang, Qi and Jiang, Wen and Yang, Yurou and Bao, Hujun and Zhou, Xiaowei},
  journal={T-PAMI},
  year={2021}
}

@inproceedings{dong2020motion,
  title={Motion capture from internet videos},
  author={Dong, Junting and Shuai, Qing and Zhang, Yuanqing and Liu, Xian and Zhou, Xiaowei and Bao, Hujun},
  booktitle={European Conference on Computer Vision},
  pages={210--227},
  year={2020},
  organization={Springer}
}

@inproceedings{peng2021neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}

@inproceedings{fang2021mirrored,
  title={Reconstructing 3D Human Pose by Watching Humans in the Mirror},
  author={Fang, Qi and Shuai, Qing and Dong, Junting and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}

[^loper2015]: Loper, Matthew, et al. "SMPL: A skinned multi-person linear model." ACM transactions on graphics (TOG) 34.6 (2015): 1-16.

[^romero2017]: Romero, Javier, Dimitrios Tzionas, and Michael J. Black. "Embodied hands: Modeling and capturing hands and bodies together." ACM Transactions on Graphics (ToG) 36.6 (2017): 1-17.

[^pavlakos2019]: Pavlakos, Georgios, et al. "Expressive body capture: 3d hands, face, and body from a single image." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[^cao2018]: Cao, Z., Hidalgo, G., Simon, T., Wei, S.E., Sheikh, Y.: Openpose: real-time multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008 (2018)

[^kolotouros2019]: Kolotouros, Nikos, et al. "Learning to reconstruct 3D human pose and shape via model-fitting in the loop." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019

[^bochkovskiy2020]: Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "Yolov4: Optimal speed and accuracy of object detection." arXiv preprint arXiv:2004.10934 (2020).

[^hrnet]: Sun, Ke, et al. "Deep high-resolution representation learning for human pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
