How can I find the original real images? #3
Comments
Hi, in the original paper they used the FaceForensics++ real videos as their training data. FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods. These are videos, but the model only takes 2D images as input rather than 3D video sequences; that is why you need a tool to extract the individual frames from the videos. In this case, I use ffmpeg. You can download the FF++ dataset from their GitHub. I've pasted the link in the readme. Good luck!
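The frame-extraction step described above can be sketched as a small helper that builds the ffmpeg command line for one video. This is a minimal sketch, not the repository's actual preprocessing script; the file paths and output naming pattern are assumptions for illustration:

```python
import subprocess
from pathlib import Path

def extract_frames_cmd(video_path, out_dir, fps=None):
    """Build an ffmpeg command that dumps a video's frames as PNGs.

    fps=None keeps every frame; fps=N samples N frames per second.
    Output files are numbered 00001.png, 00002.png, ... (pattern is
    an illustrative choice, not mandated by the paper).
    """
    cmd = ["ffmpeg", "-i", str(video_path)]
    if fps is not None:
        # The fps video filter resamples the frame rate before output.
        cmd += ["-vf", f"fps={fps}"]
    cmd += [str(Path(out_dir) / "%05d.png")]
    return cmd

# Example: extract one frame per second from a hypothetical FF++ video.
cmd = extract_frames_cmd("000.mp4", "frames/000", fps=1)
# subprocess.run(cmd, check=True)  # requires ffmpeg on your PATH
```

The run itself is left commented out since it needs ffmpeg installed and the output directory created beforehand.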
Thank you for your reply! I think I understand a little better now. However, I think the FF++ video sequences that have been manipulated are not the real videos. What is the meaning of "real" here?
Hi, "real" means the videos or images have not been deepfaked or otherwise manipulated. https://github.com/ondyari/FaceForensics/tree/master/dataset
Thank you! I found the original_sequences folder. Sorry, but I don't know how many 2D images are needed as training input for I2G, or what size the images should be.
Hi, the dataset size is not specified in the paper; however, FF++ provides train/val/test splits for each pair of videos (real/manipulated). I assume they used the FF++ train split as their training data.
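Selecting training videos by split could look something like the sketch below. It assumes the FF++ repository's split files are JSON lists of [target, source] video-id pairs; check the format of your actual download, as this layout is an assumption here:

```python
import json

def load_split_ids(split_path):
    """Read an FF++-style split file and return the flat set of
    video ids it covers.

    Assumes the file is a JSON list of [target, source] id pairs,
    e.g. [["000", "003"], ["001", "002"], ...] -- verify against
    the split files in your FaceForensics download.
    """
    with open(split_path) as f:
        pairs = json.load(f)
    ids = set()
    for target, source in pairs:
        ids.update((target, source))
    return ids

# Hypothetical usage once the dataset is downloaded:
# train_ids = load_split_ids("FaceForensics/dataset/splits/train.json")
```

You would then extract frames only from the original_sequences videos whose ids appear in the train split.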
Excuse me, could you please tell me how I can find the original real images? How can I use FaceForensics++? And I don't understand why I need to extract frames from the videos (with ffmpeg) as the first step of preprocessing. Thank you!