
How can I find the original real images? #3

Open
Wbiscuits opened this issue May 3, 2022 · 5 comments

@Wbiscuits

Excuse me, could you please tell me how I can find the original real images? How can I use FaceForensics++? Also, I don't understand why I need to extract frames from the videos (with ffmpeg) as the first step of preprocessing. Thank you!

@jtchen0528
Owner

jtchen0528 commented May 3, 2022

Hi,

In the original paper, they used FaceForensics++ real videos as their training data.
In Section 4.1 (Preprocessing), they state:

> For each raw video frame, face crops are detected and tracked by using [26] and landmarks are detected by public toolbox [4].
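The exact tools the paper cites ([26] for face tracking, [4] for landmarks) aren't specified here, but as a rough sketch of that step, dlib can stand in for both (the 68-point shape predictor model must be downloaded separately from dlib's model zoo; the frame path below is a placeholder):

```python
import cv2
import dlib

# Stand-ins for the paper's face tracker [26] and landmark toolbox [4] (an assumption, not the paper's setup).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # download separately

frame = cv2.imread("frames/000/frame_00001.png")  # placeholder path to an extracted frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray, 1):
    # Face crop from the frame (clamped to image bounds).
    x1, y1 = max(face.left(), 0), max(face.top(), 0)
    crop = frame[y1:face.bottom(), x1:face.right()]
    # 68 (x, y) landmark coordinates for this face.
    shape = predictor(gray, face)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```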

FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods. The dataset consists of videos, but the model only takes 2D images as input, not video sequences. That is why you need a tool to extract individual frames from the videos; in this case, I use ffmpeg.
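For example, a minimal frame-extraction sketch (assuming ffmpeg is installed and on PATH; the input path and the 1-fps sampling rate are placeholders, not values prescribed by the paper or this repo):

```python
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: int = 1) -> None:
    """Dump frames from one video as numbered PNGs using ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         str(Path(out_dir) / "frame_%05d.png")],
        check=True,
    )

# Example call with a placeholder path into the FF++ download:
extract_frames("original_sequences/youtube/c23/videos/000.mp4", "frames/000")
```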

You can download the FF++ dataset from their GitHub. I've pasted the link in the README.

Good luck!

@Wbiscuits
Author

Thank you for your reply! I think I understand a little better now. However, I think the FF++ video sequences that have been manipulated are not the real videos. What is the meaning of "real" here?

@jtchen0528
Owner

Hi,

Real means the videos or images have not been deepfaked or manipulated.
I believe FF++ contains real (pristine) videos in the dataset:

https://github.com/ondyari/FaceForensics/tree/master/dataset
under the original_sequences folder.
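Assuming the standard FF++ download layout (pristine videos under original_sequences/youtube/&lt;compression&gt;/videos/, with compression raw, c23, or c40), something like this lists the real videos; the c23 path is just an example:

```python
from pathlib import Path

# Assumed FF++ layout after download; adjust the compression level if you downloaded raw or c40.
real_videos = sorted(Path("original_sequences/youtube/c23/videos").glob("*.mp4"))
print(f"found {len(real_videos)} real (pristine) videos")
```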

@Wbiscuits
Author

> Hi,
>
> Real means the videos or images have not been deepfaked or manipulated. I believe FF++ contains real (pristine) videos in the dataset:
>
> https://github.com/ondyari/FaceForensics/tree/master/dataset under the original_sequences folder.

Thank you! I found the original_sequences folder. I'm sorry, but I don't know how many 2D images are needed for training as input to I2G, or what the size of the images should be.

@jtchen0528
Owner

Hi,

The dataset size is not specified in the paper. However, FF++ provides train/val/test splits for each pair of videos (real/manipulated), so I assume they used the FF++ train split as their training data.
Image size: 256×256 (Section 4.1 of the paper). I use the same size setting.
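For reference, a minimal sketch of how the split files and the image size could be used. It assumes you copied a split file from the FaceForensics repo (e.g. dataset/splits/train.json, a JSON list of video-ID pairs) and that face crops sit in a local frames/ folder; all paths and filenames are placeholders.

```python
import json
from pathlib import Path

import cv2  # opencv-python

# Load the (assumed) FF++ train split: a JSON list of video-ID pairs, e.g. [["720", "672"], ...].
with open("splits/train.json") as f:
    train_pairs = json.load(f)
train_ids = {vid for pair in train_pairs for vid in pair}
print(f"{len(train_ids)} real training videos")

# Resize one face crop to the 256x256 input size from Section 4.1.
Path("train_data").mkdir(exist_ok=True)
crop = cv2.imread("frames/000/frame_00001.png")
crop = cv2.resize(crop, (256, 256), interpolation=cv2.INTER_LINEAR)
cv2.imwrite("train_data/000_00001.png", crop)
```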
