
Training loss is huge #3

Open
Primadual opened this issue Jan 4, 2022 · 1 comment

@Primadual

Hi, thanks for the code. I am training the model on a small subset of the CASIA-WebFace dataset, which was originally used for FaceNet. However, after 100 epochs the training loss only comes down from 21 to 14.09. I am using your training code as is and the dataset as is. Do you know whether I need to preprocess the data in some way? Also, when I run prediction on two different face images, I get very similar values. Here is the code I use for prediction:

import tensorflow as tf
from PIL import Image
from tensorflow import keras
import numpy as np

gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
    
model = tf.keras.models.load_model('saved_model/FaceVerify')

# Check its architecture
model.summary()


# Load images
image_1 = "img/2_001.jpg"
try:
    image_1 = Image.open(image_1)
except Exception:
    print('Image_1 Open Error! Try again!')

image_2 = "img/1_002.jpg"
try:
    image_2 = Image.open(image_2)
except Exception:
    print('Image_2 Open Error! Try again!')


def letterbox_image(image, size):
    # Note: this relies on the module-level `input_shape` defined below.
    if input_shape[-1] == 1:
        image = image.convert("RGB")
    iw, ih = image.size
    w, h = size
    scale = min(w / iw, h / ih)
    nw = int(iw * scale)
    nh = int(ih * scale)

    # Resize while keeping the aspect ratio, then pad with grey to the target size
    image = image.resize((nw, nh), Image.BICUBIC)
    new_image = Image.new('RGB', size, (128, 128, 128))
    new_image.paste(image, ((w - nw) // 2, (h - nh) // 2))
    if input_shape[-1] == 1:
        new_image = new_image.convert("L")
    return new_image


# Resize the input image without distortion
input_shape = [250, 250, 3]
image_1 = letterbox_image(image_1, [input_shape[1], input_shape[0]])
image_2 = letterbox_image(image_2, [input_shape[1], input_shape[0]])

# Normalize the picture
image_1 = np.asarray(image_1).astype(np.float64) / 255
image_2 = np.asarray(image_2).astype(np.float64) / 255

# Add the batch dimension before it can be put into the network to predict
photo1 = np.expand_dims(image_1, 0)
photo2 = np.expand_dims(image_2, 0)

# The picture is transmitted to the network for prediction
output1 = model.predict(photo1)
output2 = model.predict(photo2)

# Calculate the Euclidean (L2) distance between the two embeddings
l1 = np.sqrt(np.sum(np.square(output1 - output2), axis=-1))
print("L2 distance = %5.3f" % l1.item())
@harxish
Owner

harxish commented Jul 17, 2022

How small is the subset?
