Official implementation in Python. Paper: https://arxiv.org/abs/1904.04189
conda create --name ute --file requirements.txt
conda activate ute
install PyTorch 1.0 appropriate for your machine
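As a rough sketch, one way to install it into the conda environment (the exact command depends on your CUDA setup; see pytorch.org for the variant matching your machine):

```
conda install pytorch=1.0 -c pytorch
```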
one file per video
# rows = # frames in video
# columns = dimensionality of frame-wise features
To extract frame-wise features, use improved dense trajectories (this step can be substituted by something else).
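A minimal sketch of the expected on-disk format, assuming placeholder file names and sizes (rows = frames, columns = feature dimensions; match the extension to --ext and the width to --feature_dim):

```python
import numpy as np

# one file per video; placeholder shape: 500 frames x 64-dim features
n_frames, feature_dim = 500, 64
features = np.random.rand(n_frames, feature_dim).astype(np.float32)

np.save('video_name.npy', features)      # for --ext='npy'
np.savetxt('video_name.txt', features)   # for --ext='txt'
```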
--dataset_root=/path/to/your/root/data/folder/
--action='coffee' # dataset dependent, see below
# set feature extension and dimensionality
--ext = 'txt | tar.gz | npy'
--feature_dim=64
# default is 'mlp', set 'nothing' for no embedding (just raw features)
--model_name='mlp'
# resume training
--resume=True | False
--load_model=True | False
# if load_model == True then specify
--loaded_model_name='name_of_your_model'
# if the dataset has a background class (like YTI), set to True
--bg=False
Set parameters as command-line arguments for the script or update them in the corresponding utility file; see the example invocation below.
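For illustration, a Breakfast training run with the parameters above might look like the following (paths and values are placeholders, and the exact set of flags each script accepts is defined in the corresponding *_utils file):

```
python data_utils/BF_utils/bf_train.py \
    --dataset_root=/path/to/your/root/data/folder/ \
    --action='coffee' \
    --ext='txt' \
    --feature_dim=64 \
    --model_name='mlp' \
    --resume=False \
    --load_model=False \
    --bg=False
```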
For each dataset, create a separate folder (specify its path via --dataset_root) with the following inner folder structure (a sketch for creating this layout follows the list):
features/
groundTruth/
mapping/
models/
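A minimal sketch for creating this layout (the dataset_root path is a placeholder):

```python
import os

dataset_root = '/path/to/your/root/data/folder/'  # placeholder path
for sub in ('features', 'groundTruth', 'mapping', 'models'):
    os.makedirs(os.path.join(dataset_root, sub), exist_ok=True)
```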
During testing, several folders are created; by default they are stored at --dataset_root (change --output_dir if necessary):
segmentation/
likelihood/
logs/
- Breakfast features link
- Breakfast ground-truth link
- pretrained models link
- actions: 'coffee', 'cereals', 'tea', 'milk', 'juice', 'sandwich', 'scrambledegg', 'friedegg', 'salat', 'pancake'
Use 'all' to run the test on all actions in series.
- log files for bf_test.py (all) and bf_global.py (test): link
# to test pretrained models
python data_utils/BF_utils/bf_test.py
# to train models from scratch
python data_utils/BF_utils/bf_train.py
# to test / train global pipeline
python data_utils/BF_utils/bf_global.py
Comments on the global pipeline: a pretrained model is available for the setting K=10, K'=5. To switch between test and train mode, use the parameter 'load_model' (see the example below).
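A hedged example of switching modes (the model name is a placeholder; check bf_global.py for the exact parameter handling):

```
# test with the pretrained global model (K=10, K'=5)
python data_utils/BF_utils/bf_global.py --load_model=True --loaded_model_name='name_of_your_model'

# train the global pipeline from scratch
python data_utils/BF_utils/bf_global.py --load_model=False
```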
- YouTube Instructions features link
- YouTube Instructions ground-truth link
- pretrained models link
- actions: 'changing_tire', 'coffee', 'jump_car', 'cpr', 'repot'
Use 'all' to run the test on all actions in series.
# to test pretrained models
python data_utils/YTI_utils/yti_test.py
# to train models from scratch
python data_utils/YTI_utils/yti_train.py
For the 50Salads dataset there is only one model, since all videos are variations of the same salad-making activity and there are no other activity classes.
# to test pretrained models
python data_utils/FS_utils/fs_test.py
# to train models from scratch
python data_utils/FS_utils/fs_train.py
# to test pretrained model
python data_utils/dummy_utils/dummy_test.py
# to train model from scratch
python data_utils/dummy_utils/dummy_train.py
See the folders
dummy_data/
data_utils/dummy_utils/
and modify them with respect to your own data.