
💃 Mobile 2D Single Person (Or Your Own Object) Pose Estimation for TensorFlow 2.0

This repository was forked from edvardHua/PoseEstimationForMobile when the original repository was closed.
The edvardHua/PoseEstimationForMobile repository has since been reopened; I'll maintain this fork separately. 👍

This repository currently implements the Hourglass model using TensorFlow 2.0 with the Keras API.
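For orientation, an hourglass stage is an encoder-decoder that downsamples the feature map, upsamples it back, and adds skip connections between matching resolutions. The sketch below is a minimal, hedged illustration of that pattern with the TensorFlow 2 Keras functional API; the layer counts, channel widths, input size, and the MobileNetV2 backbone used by the models in this repository all differ.

# Minimal sketch of one hourglass-style stage (illustration only, not the
# repository's actual architecture).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def hourglass_stage(x, filters=64, depth=4):
    skips = []
    # Encoder: halve the resolution `depth` times, keeping a skip at each level.
    for _ in range(depth):
        skips.append(conv_block(x, filters))
        x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, filters)
    # Decoder: upsample and add the skip connection from the matching resolution.
    for skip in reversed(skips):
        x = layers.UpSampling2D(2)(x)
        x = layers.Add()([x, skip])
        x = conv_block(x, filters)
    return x

inputs = layers.Input(shape=(192, 192, 3))   # input size is an assumption
x = conv_block(inputs, 64)
x = hourglass_stage(x, filters=64, depth=4)
# One heatmap channel per keypoint: 14 for ai_challenger, 17 for COCO.
heatmaps = layers.Conv2D(14, 1)(x)
model = tf.keras.Model(inputs, heatmaps)
model.summary()

Stacking several such stages is roughly what the Stage/Depth settings in the results tables below refer to.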


Goals

  • 📚 Easy to train
  • 🏃‍♂️ Easy to use the model on a mobile device

Getting Started

Install Anaconda (~10 min)

Create Virtual Environment (~2 min)

Create a new environment.

conda create -n {env_name} python={python_version} anaconda
# in my case
# conda create -n mpe-env-tf2-alpha0 python=3.7 anaconda

Activate the environment.

source activate {env_name}
# in my case
# source activate mpe-env-tf2-alpha0

Install the requirements (~1 min)

cd {tf2-mobile-pose-estimation_path}
pip install -r requirements.txt
pip install git+https://github.com/philferriere/cocoapi.git@2929bd2ef6b451054755dfd7ceb09278f935f7ad#subdirectory=PythonAPI

Download original COCO dataset

downloader.py is a script that helps you download and unpack the required COCO datasets. Replace COCO_DATASET_PATH with the path used by the current version of the repository; you can check the expected path in train.py.

Warning: your system needs approximately 40 GB of free space for the datasets.

python downloader.py --download-path=COCO_DATASET_PATH
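Once the download finishes, you can sanity-check the annotations with the COCO API installed in the requirements step. This is a small, hedged example: the annotation file name follows the standard COCO 2017 layout, and the directory names that downloader.py actually creates under COCO_DATASET_PATH may differ.

# Sanity check of the downloaded COCO keypoint annotations using pycocotools.
# NOTE: the path assumes the standard COCO 2017 layout; adjust it to match
# what downloader.py creates on your machine.
from pycocotools.coco import COCO

ann_file = "COCO_DATASET_PATH/annotations/person_keypoints_train2017.json"
coco = COCO(ann_file)

person_cat_id = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_cat_id])
print("images containing people:", len(img_ids))

# Count images with exactly one annotated person (the "single person" subset).
single_person = sum(
    1 for img_id in img_ids
    if len(coco.getAnnIds(imgIds=[img_id], catIds=[person_cat_id], iscrowd=False)) == 1)
print("images with exactly one annotated person:", single_person)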

Run The Project

In order to use the project you have to:

  1. Prepare the dataset (e.g. the ai_challenger dataset) and unzip it.
  2. Run the model using:
python train.py \
--dataset_config config/dataset/coco_single_person_only-gpu.cfg \
--experiment_config config/training/coco_single_experiment01-cpm-sg4-gpu.cfg

Compatible Datasets

| Dataset Name | Download | Size | Number of Images (train/valid) | Number of Keypoints | Note |
| --- | --- | --- | --- | --- | --- |
| ai challenge | google drive | 2 GB | 22k / 1.5k | 14 | default dataset of this repo |
| coco single person only | google drive | 4 GB | 25k / 1k | 17 | filtered from the COCO 2017 keypoint dataset to keep only images that contain a single person |
  • ai challenge's keypoint names: ['top_head', 'neck', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle']
  • coco's keypoint names: ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle']
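The keypoint list of the chosen dataset determines how many heatmap channels the model has to predict, so the dataset config and the model head must agree. A small illustrative check (these variable names are not taken from the repository):

# Illustrative only: the number of output heatmap channels must equal the
# number of keypoints defined by the chosen dataset.
AI_CHALLENGER_KEYPOINTS = [
    'top_head', 'neck', 'left_shoulder', 'right_shoulder', 'left_elbow',
    'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle']
COCO_KEYPOINTS = [
    'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
    'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
    'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
    'left_knee', 'right_knee', 'left_ankle', 'right_ankle']

num_heatmap_channels = len(COCO_KEYPOINTS)   # 17 for COCO, 14 for ai_challenger
assert len(AI_CHALLENGER_KEYPOINTS) == 14 and len(COCO_KEYPOINTS) == 17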

Results

AI Challenge Dataset

| Model Name | Backbone | Stage or Depth | PCKh@0.5 | Size | Total Epochs | Total Training Time | Note |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNetV2 based CPM | cpm-b0 | Stage 1 | .. | .. | .. | .. | Default CPM |
| MobileNetV2 based CPM | cpm-b0 | Stage 2 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 3 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 4 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 5 | .. | .. | .. | .. | |
| MobileNetV2 based Hourglass | hg-b0 | Depth 4 | .. | .. | .. | .. | Default Hourglass |

COCO Single Person Only Dataset

| Model Name | Backbone | Stage or Depth | OKS | Size | Total Epochs | Total Training Time | Note |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MobileNetV2 based CPM | cpm-b0 | Stage 1 | .. | .. | .. | .. | Default CPM |
| MobileNetV2 based CPM | cpm-b0 | Stage 2 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 3 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 4 | .. | .. | .. | .. | |
| MobileNetV2 based CPM | cpm-b0 | Stage 5 | .. | .. | .. | .. | |
| MobileNetV2 based Hourglass | hg-b0 | Depth 4 | .. | .. | .. | .. | Default Hourglass |

Converting To Mobile Model

TensorFlow Lite

When you train the model, TFLite models (.tflite) are exported at each evaluation step.
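If you want to convert an exported saved_model by hand, the standard TensorFlow Lite converter also works. A minimal sketch, assuming the outputs/ layout shown in the Folder Structure section below; the path is an example, not a fixed location.

# Convert an exported saved_model to a .tflite file (path is an example).
import tensorflow as tf

saved_model_dir = "outputs/20200312-sp-ai_challenger/saved_model"
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional weight quantization

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)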

Core ML

Check the convert_to_coreml.py script. The converted .mlmodel supports iOS 14+.
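If you prefer to run the conversion manually instead of through convert_to_coreml.py, coremltools can convert the exported saved_model directly. A hedged sketch: the saved_model path and the 192x192 input shape are illustrative assumptions, not values taken from this repository.

# Convert an exported saved_model to Core ML with coremltools (iOS 14 target).
# The path and input shape are assumptions; match them to your trained model.
import coremltools as ct

saved_model_dir = "outputs/20200312-sp-ai_challenger/saved_model"
mlmodel = ct.convert(
    saved_model_dir,
    source="tensorflow",
    inputs=[ct.ImageType(shape=(1, 192, 192, 3), scale=1.0 / 255.0)],
    minimum_deployment_target=ct.target.iOS14,
)
mlmodel.save("model.mlmodel")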

Details

This section will be moved to a separate .md file.

Folder Structure

tf2-mobile-pose-estimation
├── config
|   ├── model_config.py
|   └── train_config.py
├── data_loader
|   ├── data_loader.py
|   ├── dataset_augment.py
|   ├── dataset_prepare.py
|   └── pose_image_processor.py
├── models
|   ├── common.py
|   ├── mobilenet.py
|   ├── mobilenetv2.py
|   ├── mobilenetv3.py
|   ├── resnet.py
|   ├── resneta.py
|   ├── resnetd.py
|   ├── senet.py
|   ├── simplepose_coco.py
|   └── simpleposemobile_coco.py
├── train.py            - the main training script
├── common.py
├── requirements.txt
└── outputs             - this folder is generated automatically when training starts
    ├── 20200312-sp-ai_challenger
    |   ├── saved_model
    |   └── image_results
    └── 20200312-sp-ai_challenger
        └── ...

My SSD
└── datasets            - this folder contains the datasets for the project
    └── ai_challenger
        ├── train.json
        ├── valid.json
        ├── train
        └── valid

TODO

  • Save the model as a saved_model
  • Convert the model (saved_model) to a TFLite model (.tflite)
  • Convert the model (saved_model) to a Core ML model (.mlmodel)
  • Run the model on iOS
  • Release 1.0 models
  • Support distributed GPU training
  • Make a demo GIF of the model running on a mobile device
  • Run the model on Android

Reference

[1] Paper: Convolutional Pose Machines
[2] Paper: Stacked Hourglass Networks
[3] Paper: MobileNetV2
[4] Repository: PoseEstimation-CoreML
[5] Repository: tf-pose-estimation
[6] TensorFlow Lite developer guide
[7] MACE documentation

Related Projects

Other Pose Estimation Projects

Contributing

This section will be moved to a separate .md file.

Any contributions are welcome, including improvements to the project.

License

Apache License 2.0