FID measures the similarity between two datasets of images. It was shown to correlate well with human judgement of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.
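The Fréchet distance between the two fitted Gaussians reduces to a closed-form expression, d² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy/SciPy sketch of this calculation (mirroring the formula used by pytorch-fid; the function name and the `eps` stabilizer are illustrative choices, not part of any particular library API):

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Frechet distance between two multivariate Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; may be complex-valued numerically.
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if not np.isfinite(covmean).all():
        # Singular product: nudge the diagonals for numerical stability.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean)
```

In practice `mu` and `sigma` are the mean and covariance of Inception pool features over each image set; identical statistics give a distance of 0.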
References
- https://github.com/mseitzer/pytorch-fid
- GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
- Are GANs Created Equal? A Large-Scale Study
Usually, we put the downloaded Inception network feature statistics (used for calculating FID) in `basicsr/metrics`.
⏬ Baidu Netdisk: metrics data
⏬ Google Drive: metrics data
| File Name | Dataset | Image Shape | Sample Numbers |
| :--- | :--- | :--- | :--- |
| inception_FFHQ_256-0948f50d.pth | FFHQ | 256 x 256 | 50,000 |
| inception_FFHQ_512-f7b384ab.pth | FFHQ | 512 x 512 | 50,000 |
| inception_FFHQ_1024-75f195dc.pth | FFHQ | 1024 x 1024 | 50,000 |
| inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth | FFHQ | 256 x 256 | 50,000 |
- All the FFHQ inception feature statistics are calculated on images resized to 299 x 299.
- `inception_FFHQ_256_stylegan2_pytorch-abba9d31.pth` is converted from the statistics in stylegan2-pytorch.