This code is for the following paper.
- Unsupervised Co-activity Detection from Multiple Videos using Absorbing Markov Chain,
Donghun Yeo, Bohyung Han, Joon Hee Han, AAAI'16
- Create a directory <input_root>/<folder_name>/
- Put the videos in this directory.
- We recommend using .avi files. To use a different video format, edit "Codes/Initialization/UT_make_gt.m".
- Please note that the videos are ordered by their file names.
- Put the ground-truth information in <input_root>/<folder_name>/<folder_name>_GT.txt (an example layout is shown below).
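For example, with hypothetical values <input_root> = ./data and <folder_name> = handshake, the folder would contain:

./data/handshake/video01.avi
./data/handshake/video02.avi
./data/handshake/handshake_GT.txt

(The video file names here are illustrative.)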
Here is an example of a GT.txt file:
1 1 470
2 1 750
3 198 500
3 681 1281
- Each line consists of 1) the video number, 2) the start frame, and 3) the end frame of one co-activity instance in the video.
- If a video contains multiple instances of the co-activity, put each instance on its own line, as in the last two lines of the example (a parsing sketch follows this list).
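A minimal MATLAB sketch of reading such a GT file, assuming the hypothetical file name handshake_GT.txt (this helper is for illustration only and is not part of the repository):

```matlab
% Read a GT file into an N-by-3 matrix: [video_number, start_frame, end_frame].
gt = dlmread('handshake_GT.txt');   % hypothetical file name
for i = 1:size(gt, 1)
    fprintf('video %d: co-activity from frame %d to %d\n', ...
            gt(i, 1), gt(i, 2), gt(i, 3));
end
```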
- The original videos used in the experiments of the paper are available at the following link.
- Download: YouTube co-activity dataset
To run this code, you should use the following feature extraction code.
- Download: Feature extraction link
- Edit CoActDiscovery_Feature_Extraction_Youtube.m with your own input video path <input_root>, then run it.
- Run the ".bat" files, the output of running CoActDiscovery_Feature_Extraction_Youtube.m., to get the features.
This code translates the information about the input, the ground truth, and the extracted features into ".mat" files.
Edit and run this code with your own directories.
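As a rough illustration of what this translation amounts to (the actual variable and file names used by the initialization code may differ):

```matlab
% Hypothetical sketch: bundle the ground truth into a .mat file.
video_info.gt         = dlmread('handshake_GT.txt');   % [video, start, end]
video_info.num_videos = max(video_info.gt(:, 1));
save('handshake_info.mat', 'video_info');
```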
This code returns the co-activity frames of the input videos, together with the precision, recall, and F-measure (see the evaluation sketch below). Edit and run this code with your own directories.
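A minimal sketch of frame-level precision, recall, and F-measure for one video, assuming hypothetical logical frame masks (this is an illustration, not the repository's evaluation code):

```matlab
% Frame-level precision, recall, and F-measure for one video.
% The masks below are hypothetical; in practice pred comes from the
% detector output and gt from the GT.txt annotations.
num_frames = 750;
gt   = false(1, num_frames); gt(1:470)    = true;  % ground-truth frames
pred = false(1, num_frames); pred(50:500) = true;  % detected frames

tp        = sum(pred & gt);                        % true-positive frames
precision = tp / max(sum(pred), 1);
recall    = tp / max(sum(gt), 1);
f_measure = 2 * precision * recall / max(precision + recall, eps);
fprintf('P = %.3f, R = %.3f, F = %.3f\n', precision, recall, f_measure);
```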