# OSCAR-Net: Object-centric Scene Graph Attention for Image Attribution

This repo contains the demo code to run our OSCAR-Net model.

See our main website for project highlights.

We also provide the dataset IDs (50 MB) for the 4.7 million stock images from Adobe.

See the Adobe APIs for how to retrieve the images from these IDs.
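As a rough illustration, the sketch below shows how one might look up a single image by its ID with the Adobe Stock Search API using `requests`. The endpoint, header names, and query parameters are assumptions based on Adobe's public API documentation and are not part of this repo; verify them against Adobe's docs before use.

```python
# Minimal sketch (not part of this repo) of looking up one stock image by ID via the
# Adobe Stock Search API. Endpoint, headers, and parameter names are assumptions based
# on Adobe's public documentation -- check their docs before relying on them.
import requests

API_KEY = "YOUR_ADOBE_API_KEY"  # hypothetical placeholder; obtain a key from the Adobe Developer Console

def fetch_stock_metadata(media_id: int) -> dict:
    """Return the search result record for a single Adobe Stock media ID."""
    resp = requests.get(
        "https://stock.adobe.io/Rest/Media/1/Search/Files",
        headers={"x-api-key": API_KEY, "x-product": "OSCAR-Net-demo"},
        params={
            "search_parameters[media_id]": media_id,
            "result_columns[]": ["id", "title", "thumbnail_url"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(fetch_stock_metadata(123456789))  # hypothetical media ID
```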

## Requirements

Nvidia driver >= 418.39
setuptools >= 41.0.0
h5py >= 2.9.0
h5py-cache >= 1.0
opencv-python >= 4.2.0
pandas >= 0.24.1
scikit-image >= 0.15.0
tqdm >= 4.43.0
reportlab >= 3.5.23
numpy >= 1.16.4
scipy >= 1.4.1
requests >= 2.22.0
cython

You will also need a GPU with compute capability >= 3.0 and at least 8 GB of GPU memory.
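To confirm your GPU meets these requirements, a quick check like the one below can help. It assumes a PyTorch install (the weights ship as a `.pt` file), which is not listed in the requirements above, so treat it purely as an illustrative snippet:

```python
# Illustrative GPU sanity check (assumes PyTorch is installed; torch is not in the list above).
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)          # e.g. (7, 5) for a Turing GPU
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"Compute capability: {major}.{minor}, memory: {total_gb:.1f} GB")
else:
    print("No CUDA-capable GPU detected.")
```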

## Download

Download the weight zip from here and extract its contents into the project's weight directory (i.e., replace the existing weight directory).

Optional: a Dockerfile is provided.
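If you go the container route, a typical build-and-run sequence might look like the following. The oscar-net image tag is an arbitrary choice, --gpus all requires the NVIDIA Container Toolkit, and whether the demo runs directly inside the container depends on what the provided Dockerfile copies and sets as its working directory:

docker build -t oscar-net .
docker run --gpus all --rm -it oscar-net python demo.py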

Run inference on an image:

python inference.py -i examples/original.jpg -w weight/best.pt

This should output a 64-bit hash code.

## Run the demo

python demo.py

This demo loads an original image (docs/examples/original.jpg), a benign-transformed version (docs/examples/benign.jpg), and a manipulated version (docs/examples/manipulated.jpg) of that image, then compares the Hamming distances of the original-benign and original-manipulated pairs.

The output should look like this:

Hamming (original.jpg, benign.jpg): 3
Hamming (original.jpg, manipulated.jpg): 22
(Side-by-side example images: Original, Benign transform, Manipulated.)
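For reference, the Hamming distance between two 64-bit hash codes is simply the number of bit positions at which they differ, which can be computed by XOR-ing the codes and counting set bits. The sketch below is illustrative only; the hash values are made up and the function is not part of demo.py:

```python
# Illustrative Hamming-distance computation between two 64-bit hash codes.
# The hash values below are made up for demonstration; demo.py computes its own.

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions at which two 64-bit integers differ."""
    return bin((a ^ b) & 0xFFFFFFFFFFFFFFFF).count("1")

original_hash = 0xA3F1_9C02_7E55_D410  # hypothetical hash for original.jpg
benign_hash   = 0xA3F1_9C02_7E55_D417  # hypothetical hash for benign.jpg

print(hamming_distance(original_hash, benign_hash))  # small distance -> likely a benign transform
```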