This repository was forked from edvardHua/PoseEstimationForMobile when the original repository was closed.
The edvardHua/PoseEstimationForMobile repository has been reopened! I'll maintain this fork separately.
This repository currently implements the Hourglass model using TensorFlow 2.0 with the Keras API.
- Goals
- Getting Started
- Results
- Converting To Mobile Model
- Tuning
- Details
- TODO
- Related Projects
- Acknowledgements
- Reference
- Contributing
- License
- Easy to train
- Easy to use the model on a mobile device
Create a new environment.
```
conda create -n {env_name} python={python_version} anaconda
# in my case
# conda create -n mpe-env-tf2-alpha0 python=3.7 anaconda
```
Activate the environment.
```
source activate {env_name}
# in my case
# source activate mpe-env-tf2-alpha0
```
```
cd {tf2-mobile-pose-estimation_path}
pip install -r requirements.txt
pip install git+https://github.com/philferriere/cocoapi.git@2929bd2ef6b451054755dfd7ceb09278f935f7ad#subdirectory=PythonAPI
```
Download the original COCO dataset.
The downloader script below fetches and unpacks the required COCO datasets. Replace COCO_DATASET_PATH with the path the current version of the repository expects; you can find that path in train.py.
Warning: your system needs approximately 40 GB of free space for the datasets.
```
python downloader.py --download-path=COCO_DATASET_PATH
```
In order to use the project, you have to:
- Prepare the dataset (the ai_challenger dataset) and unzip it.
- Run the training using:
```
python train.py \
--dataset_config config/dataset/coco_single_person_only-gpu.cfg \
--experiment_config config/training/coco_single_experiment01-cpm-sg4-gpu.cfg
```
| Dataset Name | Download | Size | Number of Images (train/valid) | Number of Keypoints | Note |
| --- | --- | --- | --- | --- | --- |
| ai challenge | google drive | 2GB | 22k/1.5k | 14 | default dataset of this repo |
| coco single person only | google drive | 4GB | 25k/1k | 17 | filtered from the COCO 2017 keypoint dataset to images that contain only one person (see the sketch below) |
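The "only one person" filtering described in the table above could be reproduced with pycocotools along these lines. This is a hedged sketch, not the repository's actual filtering code; the annotation path is an assumption, so point it at your own COCO_DATASET_PATH.

```python
from pycocotools.coco import COCO

# Assumed annotation path; adjust to your COCO_DATASET_PATH.
coco = COCO("COCO_DATASET_PATH/annotations/person_keypoints_train2017.json")
person_cat = coco.getCatIds(catNms=['person'])

# Keep only images with exactly one person instance.
single_person_img_ids = [
    img_id
    for img_id in coco.getImgIds(catIds=person_cat)
    if len(coco.getAnnIds(imgIds=img_id, catIds=person_cat)) == 1
]
print(f"{len(single_person_img_ids)} single-person images")
```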
- ai challenge's keypoint names:
['top_head', 'neck', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle']
- coco's keypoint names:
['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle']
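For reference, here are the two orderings above as Python constants with a channel-index lookup. The assumption that the model's output heatmap channels follow these orderings is mine, not stated by the repository.

```python
# Keypoint orderings copied from the lists above.
AI_CHALLENGER_KEYPOINTS = [
    'top_head', 'neck', 'left_shoulder', 'right_shoulder',
    'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist',
    'left_hip', 'right_hip', 'left_knee', 'right_knee',
    'left_ankle', 'right_ankle',
]
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle',
]

# e.g. the heatmap channel for the left wrist in each dataset:
print(AI_CHALLENGER_KEYPOINTS.index('left_wrist'))  # 6
print(COCO_KEYPOINTS.index('left_wrist'))           # 9
```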
| Model Name | Backbone | Stage or Depth | PCKh@0.5 | Size | Total Epochs | Total Training Time | Note |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNetV2 based CPM | cpm-b0 | Stage 1 | .. | .. | .. | .. | Default CPM |
| MobileNetV2 based CPM | cpm-b0 | Stage 2 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 3 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 4 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 5 | .. | .. | .. | .. | |
| MobileNetV2 based Hourglass | hg-b0 | Depth 4 | .. | .. | .. | .. | Default Hourglass |
| Model Name | Backbone | Stage or Depth | OKS | Size | Total Epochs | Total Training Time | Note |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNetV2 based CPM | cpm-b0 | Stage 1 | .. | .. | .. | .. | Default CPM |
| MobileNetV2 based CPM | cpm-b0 | Stage 2 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 3 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 4 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 5 | .. | .. | .. | .. | |
| MobileNetV2 based Hourglass | hg-b0 | Depth 4 | .. | .. | .. | .. | Default Hourglass |
During training, a TFLite model is exported at each evaluation step.
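Should you need to convert a saved_model to TFLite manually, a minimal sketch using TensorFlow's standard converter follows; the paths are assumptions based on the outputs/ layout shown in the Details section.

```python
import tensorflow as tf

# Assumed path, following the outputs/ layout shown in the Details section.
saved_model_dir = "outputs/20200312-sp-ai_challenger/saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: weight quantization
tflite_model = converter.convert()

with open("pose_model.tflite", "wb") as f:
    f.write(tflite_model)
```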
Check the convert_to_coreml.py script. The converted .mlmodel supports iOS 14+.
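The conversion presumably looks something like the following coremltools call. This is a hedged sketch, not the script's exact contents; see convert_to_coreml.py for the repository's real conversion flow, and note the path is an assumption.

```python
import coremltools as ct

# Assumed path, following the outputs/ layout shown in the Details section.
mlmodel = ct.convert(
    "outputs/20200312-sp-ai_challenger/saved_model",
    source="tensorflow",
    minimum_deployment_target=ct.target.iOS14,  # matches the iOS 14+ note above
)
mlmodel.save("PoseEstimation.mlmodel")
```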
This section will be moved to a separate .md file.
```
tf2-mobile-pose-estimation
├── config
│   ├── model_config.py
│   └── train_config.py
├── data_loader
│   ├── data_loader.py
│   ├── dataset_augment.py
│   ├── dataset_prepare.py
│   └── pose_image_processor.py
├── models
│   ├── common.py
│   ├── mobilenet.py
│   ├── mobilenetv2.py
│   ├── mobilenetv3.py
│   ├── resnet.py
│   ├── resneta.py
│   ├── resnetd.py
│   ├── senet.py
│   ├── simplepose_coco.py
│   └── simpleposemobile_coco.py
├── train.py          - the main training script
├── common.py
├── requirements.txt
└── outputs           - this folder is generated automatically when training starts
    ├── 20200312-sp-ai_challenger
    │   ├── saved_model
    │   └── image_results
    ├── 20200312-sp-ai_challenger
    └── ...
```
```
My SSD
└── datasets          - this folder contains the datasets of the project
    └── ai_challenger
        ├── train.json
        ├── valid.json
        ├── train
        └── valid
```
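A saved_model produced under outputs/ can be loaded back for a quick sanity check. This is a minimal sketch; the run folder name and input shape are assumptions, so inspect the model's signature for the real shape.

```python
import numpy as np
import tensorflow as tf

# Assumed run folder; substitute your own training output directory.
model = tf.saved_model.load("outputs/20200312-sp-ai_challenger/saved_model")
infer = model.signatures["serving_default"]

# Assumed input shape; check infer.structured_input_signature to confirm.
dummy = np.zeros((1, 192, 192, 3), dtype=np.float32)
outputs = infer(tf.constant(dummy))
print({name: tensor.shape for name, tensor in outputs.items()})
```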
- Save the model as a saved_model
- Convert the model (saved_model) to a TFLite model (.tflite)
- Convert the model (saved_model) to a Core ML model (.mlmodel)
- Run the model on iOS
- Release 1.0 models
- Support distributed GPU training
- Make a demo GIF running on a mobile device
- Run the model on Android
[1] Paper of Convolutional Pose Machines
[2] Paper of Stacked Hourglass
[3] Paper of MobileNet V2
[4] Repository PoseEstimation-CoreML
[5] Repository of tf-pose-estimation
[6] Developer guide of TensorFlow Lite
[7] Mace documentation
- tucan9389/PoseEstimation-CoreML
- tucan9389/PoseEstimation-TFLiteSwift (Preparing...)
- tucan9389/KeypointAnnotation
- osmr/imgclsmob
- edvardHua/PoseEstimationForMobile
- jwkanggist/tf-tiny-pose-estimation
- dongseokYang/Body-Pose-Estimation-Android-gpu
This section will be moved to a separate .md file.
Any contributions are welcome, including improvements to the project itself.