Hi, thanks for the code. I am training the model on a small subset of the CASIA-WebFace dataset that was originally used for FaceNet. However, after 100 epochs the training loss only comes down from 21 to 14.09! I am using your training code as-is and the dataset as-is. Do you know if I need to preprocess the data somehow? Also, when I predict on two different face images I get very similar outputs. Here is the code I use for prediction:
import numpy as np
import tensorflow as tf
from PIL import Image

# Allow GPU memory to grow on demand
gpus = tf.config.experimental.list_physical_devices(device_type='GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

model = tf.keras.models.load_model('saved_model/FaceVerify')
# Check its architecture
model.summary()

# Expected network input size: height, width, channels
input_shape = [250, 250, 3]

def letterbox_image(image, size):
    # Resize the image to fit inside `size` while keeping its aspect
    # ratio, padding the remainder with gray (128, 128, 128)
    if input_shape[-1] == 1:
        image = image.convert("RGB")
    iw, ih = image.size
    w, h = size
    scale = min(w / iw, h / ih)
    nw = int(iw * scale)
    nh = int(ih * scale)
    image = image.resize((nw, nh), Image.BICUBIC)
    new_image = Image.new('RGB', size, (128, 128, 128))
    new_image.paste(image, ((w - nw) // 2, (h - nh) // 2))
    if input_shape[-1] == 1:
        new_image = new_image.convert("L")
    return new_image

# Load images
try:
    image_1 = Image.open("img/2_001.jpg")
except IOError:
    raise SystemExit('Image_1 Open Error! Try again!')
try:
    image_2 = Image.open("img/1_002.jpg")
except IOError:
    raise SystemExit('Image_2 Open Error! Try again!')

# Resize the input images without distortion
image_1 = letterbox_image(image_1, [input_shape[1], input_shape[0]])
image_2 = letterbox_image(image_2, [input_shape[1], input_shape[0]])

# Normalize the pixel values to [0, 1]
image_1 = np.asarray(image_1).astype(np.float64) / 255
image_2 = np.asarray(image_2).astype(np.float64) / 255

# Add the batch dimension before feeding the images to the network
photo1 = np.expand_dims(image_1, 0)
photo2 = np.expand_dims(image_2, 0)

# Run both images through the network to get their embeddings
output1 = model.predict(photo1)
output2 = model.predict(photo2)

# Euclidean (L2) distance between the two embeddings
# (note: this is a distance, not a probability; smaller means more similar)
l1 = np.sqrt(np.sum(np.square(output1 - output2), axis=-1))
print("distance = %5.3f" % l1[0])
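Since the value computed above is an L2 distance between embeddings rather than a probability, a same/different-person decision is usually made by thresholding it. A minimal sketch of that step, assuming the model outputs one embedding vector per image; the threshold value 1.1 is a hypothetical placeholder and would need to be tuned on a validation set for this particular model:

```python
import numpy as np

def is_same_person(emb1, emb2, threshold=1.1):
    # L2 distance between two embedding vectors; a distance below
    # the threshold is treated as the same identity.
    dist = np.sqrt(np.sum(np.square(emb1 - emb2)))
    return dist < threshold, dist

# Toy embeddings for illustration only
a = np.array([0.1, 0.2, 0.3])
b = np.array([0.1, 0.2, 0.3])
c = np.array([0.9, -0.8, 0.7])

same, d_ab = is_same_person(a, b)   # identical vectors -> distance 0 -> match
diff, d_ac = is_same_person(a, c)   # distant vectors -> no match
print(same, diff)
```

If a well-trained model still produces near-identical embeddings for different people, the distance will sit below any reasonable threshold for every pair, which is consistent with the training loss not having converged.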