When I trained and tested on the CRAG dataset, my final Dice score of 0.8737 was close to the 0.8831 reported in the paper. However, my IoU was only 0.779, which differs considerably from the paper's 0.8841. Have you encountered this situation? Below is my evaluation code:
import numpy as np
import albumentations as A
from PIL import Image
import matplotlib.pyplot as plt
import os
import csv
from medpy.metric.binary import dc, jc
def resize_mask(mask_path, target_size=(1536, 1536)):
    # Load a mask and resize it to the common evaluation size.
    mask = Image.open(mask_path)
    transform = A.Compose([
        A.Resize(target_size[1], target_size[0])  # A.Resize takes (height, width)
    ])
    mask_np = np.array(mask)
    mask_resized = transform(image=mask_np)["image"]
    return mask_resized
def calculate_metrics_for_folders(folder1, folder2):
    # Pair predictions and ground-truth masks by file name and average the
    # per-image scores. (The loop body is a reconstruction; the snippet as
    # posted is truncated after the two list initializations.)
    iou_values = []
    dice_values = []
    for file_name in sorted(os.listdir(folder1)):
        pred = resize_mask(os.path.join(folder1, file_name)) > 0
        gt = resize_mask(os.path.join(folder2, file_name)) > 0
        dice_values.append(dc(pred, gt))
        iou_values.append(jc(pred, gt))
    return np.mean(dice_values), np.mean(iou_values)

if __name__ == "__main__":
    folder1 = "test_result/CRAG"
    folder2 = "dataset/merged/mask"
    mean_dice, mean_iou = calculate_metrics_for_folders(folder1, folder2)
    print(f"Dice: {mean_dice:.4f}  IoU: {mean_iou:.4f}")