
Installation

Requirements

The following are implied by the commands in this guide (a summary, not an exhaustive list):

- Linux (with apt-get) or Windows; an Nvidia GPU with CUDA installed under /usr/local/cuda
- Python 3.6
- PyTorch 1.1.0 and torchvision 0.3.0
- Cython and libyaml-dev (build-time dependencies)

Code installation

(Recommended) Install with conda

Install conda (Anaconda or Miniconda) first if you do not already have it.

# 1. Create a conda virtual environment.
conda create -n alphapose python=3.6 -y
conda activate alphapose

# 2. Install PyTorch
conda install pytorch==1.1.0 torchvision==0.3.0

# 3. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose

# 4. Install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
python -m pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop
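
Before building, you can optionally verify the environment (a quick check, not part of the official steps): PyTorch and torchvision should report the pinned versions and CUDA should be visible.

# Verify PyTorch/torchvision versions and that CUDA sees the GPU
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
# Expected output: 1.1.0 0.3.0 True  (False usually means a CUDA/driver problem)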

Install with pip

# 1. Install PyTorch
pip3 install torch==1.1.0 torchvision==0.3.0

# 2. Get AlphaPose
git clone https://github.com/MVIG-SJTU/AlphaPose.git
cd AlphaPose

# 3. Install
export PATH=/usr/local/cuda/bin/:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
pip install cython
sudo apt-get install libyaml-dev
python setup.py build develop --user
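
As a quick smoke test of either install method (assuming the package imports as alphapose, which matches the repository layout shown in the dataset section below), check that the module resolves:

# Confirm the editable install is importable
python -c "import alphapose; print(alphapose.__file__)"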

Windows

Windows users should install Visual Studio because of a known build issue. If you do not want to install Visual Studio but still want to use AlphaPose, you can refer to our previous version, which does not require Visual Studio.

For Windows users: if you encounter errors with PyYAML, you can download and install it manually from https://pyyaml.org/wiki/PyYAML. Also make sure that a Windows C++ build tool such as Visual Studio 2015+ or Visual C++ 2015+ is installed, as it is required for training.
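
As a workaround sketch (assuming pip can reach PyPI and the MSVC build tools are already installed), PyYAML can usually be installed directly from pip, and you can confirm the compiler is visible from a Developer Command Prompt or PowerShell:

# Install PyYAML from PyPI instead of the manual download
pip install pyyaml

# Check that the MSVC compiler driver (cl.exe) is on PATH
where.exe cl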

Models

  1. Download the object detection model manually: yolov3-spp.weights (Google Drive | Baidu pan). Place it into detector/yolo/data.

  2. For pose tracking, download the object tracking model manually: JDE-1088x608-uncertainty (Google Drive | Baidu pan). Place it into detector/tracker/data.

  3. Download our pose models. Place them into pretrained_models. All models and details are available in our Model Zoo.
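
After downloading, a quick sanity check of the expected locations (the exact tracker and pose-model file names depend on the downloads, so they are not spelled out here):

# Verify the weights ended up where AlphaPose expects them
ls detector/yolo/data/yolov3-spp.weights
ls detector/tracker/data/          # the JDE tracking model goes here
ls pretrained_models/              # your chosen pose model(s) from the Model Zoo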

Prepare dataset (optional)

MSCOCO

If you want to train the model yourself, please download the data from MSCOCO (train2017 and val2017). Download and extract it under ./data so that the layout looks like this:

|-- json
|-- exp
|-- alphapose
|-- configs
|-- test
|-- data
`-- |-- coco
    `-- |-- annotations
        |   |-- person_keypoints_train2017.json
        |   `-- person_keypoints_val2017.json
        |-- train2017
        |   |-- 000000000009.jpg
        |   |-- 000000000025.jpg
        |   |-- 000000000030.jpg
        |   |-- ... 
        `-- val2017
            |-- 000000000139.jpg
            |-- 000000000285.jpg
            |-- 000000000632.jpg
            |-- ... 
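
For reference, a download sketch using the standard cocodataset.org archive URLs (verify they are still current; train2017 is roughly 18 GB):

mkdir -p data/coco && cd data/coco
# Images and keypoint annotations from the official MSCOCO server
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -q train2017.zip && unzip -q val2017.zip && unzip -q annotations_trainval2017.zip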

MPII

Please download the images from MPII. We also provide the annotations in JSON format [annot_mpii.zip]. Download and extract them under ./data so that the layout looks like this:

|-- data
`-- |-- mpii
    `-- |-- annot_mpii.json
        `-- images
            |-- 027457270.jpg
            |-- 036645665.jpg
            |-- 045572740.jpg
            |-- ...
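
Similarly, a sketch for fetching the MPII images (the tarball URL below is the one commonly published on the MPII site; confirm it is current, and annot_mpii.zip still comes from the link above):

mkdir -p data/mpii && cd data/mpii
# MPII images (~12 GB); the archive extracts into an images/ directory
wget https://datasets.d2.mpi-inf.mpg.de/andriluka14cvpr/mpii_human_pose_v1.tar.gz
tar -xzf mpii_human_pose_v1.tar.gz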