ry-jojo/CoActivity

Co-Activity Detection from Multiple Videos using Absorbing Markov Chain

This code is for the following paper.

Step 1 - Prepare input videos and ground truth file

  1. Make a directory <input_root>/<folder_name>/
  2. Put the videos in this directory.
  • We recommend using .avi files. If you want a different format, edit "Codes/Initialization/UT_make_gt.m".
  3. Please note that the videos are ordered by their file names.
  4. Put the ground-truth information in <input_root>/<folder_name>/<folder_name>_GT.txt
    Here is an example of a GT.txt file.

1 1 470
2 1 750
3 198 500
3 681 1281
  • Each line consists of 1) the video number, 2) the start frame, and 3) the end frame of the co-activity in that video.
  • If a video contains multiple instances of the co-activity, put each instance on its own line, as in the last two lines of the example.
  5. The original videos used in the experiments of the paper are available at the following link

Step 2 - Feature Extraction

To run this code, you first need features extracted with the following feature-extraction code.

  • Download: Feature extraction link
    1. Edit and run CoActDiscovery_Feature_Extraction_Youtube.m with your own input video path <input_root>.
    2. Run the ".bat" files produced by CoActDiscovery_Feature_Extraction_Youtube.m to get the features.

Step 3 - Run UT_Initial_Step.m

This code converts the input and ground-truth information and the extracted features into ".mat" files.
Edit and run this code with your own directories.

Step 4 - Run UT_RUN_THIS.m

This code returns the co-activity frames of the input videos, along with precision, recall, and F-measure. Edit and run this code with your own directories.
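To make the reported metrics concrete, here is a hypothetical sketch of frame-level precision, recall, and F-measure computed from predicted and ground-truth intervals. This is one plausible reading of the evaluation; the exact protocol used by UT_RUN_THIS.m may differ, and all names here are illustrative.

```python
# Hypothetical frame-level evaluation for one video: expand predicted and
# ground-truth (start, end) intervals into frame sets, then compare them.
# This is an assumption about the metric, not the repository's MATLAB code.

def frames(intervals):
    """Expand inclusive (start, end) frame intervals into a set of frame indices."""
    s = set()
    for start, end in intervals:
        s.update(range(start, end + 1))
    return s

def prf(predicted, ground_truth):
    """Return (precision, recall, F-measure) over frames of one video."""
    pred, gt = frames(predicted), frames(ground_truth)
    tp = len(pred & gt)  # frames correctly labeled as co-activity
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

For example, a prediction of frames 1-100 against ground truth 51-150 overlaps on 50 of 100 frames each way, giving precision, recall, and F-measure of 0.5.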
