v1.1.0
Released: 2023-07-04 21:35:15
New Datasets
We are glad to support 3 new datasets:
- (CVPR 2023) Human-Art
- (CVPR 2022) Animal Kingdom
- (AAAI 2020) LaPa
(CVPR 2023) Human-Art
Human-Art is a large-scale dataset that targets multi-scenario human-centric tasks to bridge the gap between natural and artificial scenes.
Contents of Human-Art:
- 50,000 images covering human figures in 20 scenarios (5 natural scenarios, 3 3D artificial scenarios, and 12 2D artificial scenarios)
- Human-centric annotations: human bounding boxes, 21 2D human keypoints, human self-contact keypoints, and description text
- Baseline human detector and human pose estimator trained jointly on MSCOCO and Human-Art
Models trained on Human-Art:
Thanks @juxuan27 for helping with the integration of Human-Art!
(CVPR 2022) Animal Kingdom
Animal Kingdom provides multiple annotated tasks to enable a more thorough understanding of natural animal behaviors.
Results comparison:
| Arch | Input Size | PCK(0.05) (Ours) | PCK(0.05) (Official Repo) | PCK(0.05) (Paper) |
| --- | --- | --- | --- | --- |
| P1_hrnet_w32 | 256x256 | 0.6323 | 0.6342 | 0.6606 |
| P2_hrnet_w32 | 256x256 | 0.3741 | 0.3726 | 0.3930 |
| P3_mammals_hrnet_w32 | 256x256 | 0.5710 | 0.5719 | 0.6159 |
| P3_amphibians_hrnet_w32 | 256x256 | 0.5358 | 0.5432 | 0.5674 |
| P3_reptiles_hrnet_w32 | 256x256 | 0.5100 | 0.5000 | 0.5606 |
| P3_birds_hrnet_w32 | 256x256 | 0.7671 | 0.7636 | 0.7735 |
| P3_fishes_hrnet_w32 | 256x256 | 0.6406 | 0.6360 | 0.6825 |
For more details, see this page.
Thanks @Dominic23331 for helping with the integration of Animal Kingdom!
(AAAI 2020) LaPa
The Landmark guided face Parsing dataset (LaPa) consists of more than 22,000 facial images with rich variations in expression, pose, and occlusion. Each image is provided with an 11-category pixel-level label map and 106-point landmarks.
Supported by @Tau-J
New Config Type
MMEngine introduced the pure Python style configuration file:
- Support navigating to base configuration file in IDE
- Support navigating to base variable in IDE
- Support navigating to source code of class in IDE
- Support inheriting two configuration files containing the same field
- Load configuration files without extra third-party dependencies
Refer to the tutorial for more detailed usage.
We provide some examples here. The new config type is also supported for YOLOX-Pose here. Feel free to try this new feature and give us your feedback!
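As a quick illustration, here is a hypothetical sketch of what a pure Python style config looks like; the base-config path and field names below are illustrative, so check the MMEngine tutorial for the exact conventions:

```python
# Hypothetical sketch of an MMEngine pure-Python-style config.
# Base configs are imported with read_base() instead of string _base_
# paths, so an IDE can jump straight to their definitions.
from mmengine.config import read_base

with read_base():
    # illustrative base config path
    from .._base_.default_runtime import *  # noqa: F401,F403

# Classes are referenced directly instead of via type='ClassName'
# strings, enabling jump-to-source in the IDE.
from torch.optim import AdamW

optim_wrapper = dict(optimizer=dict(type=AdamW, lr=5e-4))
```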
Improved RTMPose
We combined public datasets and released more powerful RTMPose models:
- 17-kpt and 26-kpt body models
- 21-kpt hand models
- 106-kpt face models
List of examples to deploy RTMPose:
- RTMPose-Deploy @HW140701 @Dominic23331
- RTMPose-Deploy is a C++ code example for local deployment of RTMPose.
- RTMPose inference with ONNXRuntime (Python) @IRONICBo
- This example shows how to run RTMPose inference with ONNXRuntime in Python.
- PoseTracker Android Demo
- PoseTracker Android Demo Prototype based on mmdeploy.
Check out this page for more details.
Supported by @Tau-J
3D Pose Lifter Refactoring
We have migrated SimpleBaseline3D and VideoPose3D into MMPose v1.1.0. Users can easily run inference with the Inferencer or the body3d demo.
Below is an example of using the Inferencer to predict 3D poses:

```shell
python demo/inferencer_demo.py tests/data/coco/000000000785.jpg \
    --pose3d human3d --vis-out-dir vis_results/human3d \
    --rebase-keypoint-height
```
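The `--rebase-keypoint-height` flag rebases the predicted keypoint heights so the pose rests on the ground plane, which stabilizes the 3D visualization. Conceptually (a simplified, hypothetical sketch, not the library's exact implementation):

```python
# Hypothetical sketch of keypoint-height rebasing: shift each pose so
# its lowest joint sits at z = 0 (the ground plane).
def rebase_height(keypoints_3d):
    """keypoints_3d: list of (x, y, z) tuples for one pose."""
    min_z = min(z for _, _, z in keypoints_3d)
    return [(x, y, z - min_z) for x, y, z in keypoints_3d]

print(rebase_height([(0.0, 0.0, 1.5), (0.2, 0.1, -0.5)]))
# -> [(0.0, 0.0, 2.0), (0.2, 0.1, 0.0)]
```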
Video result:
Supported by @LareinaM
Inference Speed-up & Webcam Inference
We have made a lot of improvements to our demo scripts:
- Much higher inference speed
- OpenCV-backend visualizer
- All demos support inference with webcam
Taking `topdown_demo_with_mmdet.py` as an example, you can run inference with a webcam by specifying `--input webcam`:
```shell
# inference with webcam
python demo/topdown_demo_with_mmdet.py \
    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
    https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
    projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
    --input webcam \
    --show
```
Supported by @Ben-Louis and @LareinaM
New Contributors
- @xin-li-67 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2205
- @irexyc made their first contribution in https://github.com/open-mmlab/mmpose/pull/2216
- @lu-minous made their first contribution in https://github.com/open-mmlab/mmpose/pull/2225
- @FishBigOcean made their first contribution in https://github.com/open-mmlab/mmpose/pull/2286
- @ATang0729 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2201
- @HW140701 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2316
- @IRONICBo made their first contribution in https://github.com/open-mmlab/mmpose/pull/2323
- @shuheilocale made their first contribution in https://github.com/open-mmlab/mmpose/pull/2340
- @Dominic23331 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2139
- @notplus made their first contribution in https://github.com/open-mmlab/mmpose/pull/2365
- @juxuan27 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2304
- @610265158 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2366
- @CescMessi made their first contribution in https://github.com/open-mmlab/mmpose/pull/2385
- @huangjiyi made their first contribution in https://github.com/open-mmlab/mmpose/pull/2467
- @Billccx made their first contribution in https://github.com/open-mmlab/mmpose/pull/2417
- @mareksubocz made their first contribution in https://github.com/open-mmlab/mmpose/pull/2474
Full Changelog: https://github.com/open-mmlab/mmpose/compare/v1.0.0...v1.1.0