"""
This script shows how to generate synthetic images with narrowed intensity distributions (e.g. T1-weighted scans) and
at a specific resolution. All the arguments shown here can be used in the training function.
These parameters were not explained in the previous tutorials as they were not used for the training of SynthSeg.
Specifically, this script generates 5 examples of training data simulating 3mm axial T1 scans, which have been resampled
to 1mm resolution to be segmented.
Contrast-specificity is achieved by now imposing Gaussian priors (instead of uniform) over the GMM parameters.
Resolution-specificity is achieved by first blurring and downsampling to the simulated LR. The data will then be
upsampled back to HR, so that the downstream network is trained to segment at HR. This upsampling step mimics the
process that will happen at test time.
"""
import os
import numpy as np
from ext.lab2im import utils
from SynthSeg.brain_generator import BrainGenerator
# script parameters
n_examples = 5 # number of examples to generate in this script
result_dir = './generated_examples' # folder where examples will be saved
# paths to training label maps and label lists
path_label_map = '../../data/training_label_maps'
generation_labels = '../../data/labels_classes_priors/generation_labels.npy'
output_labels = '../../data/labels_classes_priors/segmentation_labels.npy'
n_neutral_labels = 18
output_shape = 160
# ---------- GMM sampling parameters ----------
# Here we use Gaussian priors to control the means and standard deviations of the GMM.
prior_distributions = 'normal'
# Here we still regroup labels into classes of similar tissue types:
# Example: (continuing the example of tutorial 1) generation_labels = [0, 24, 507, 2, 3, 4, 17, 25, 41, 42, 43, 53, 57]
# generation_classes = [0, 1, 2, 3, 4, 5, 4, 6, 3, 4, 5, 4, 6]
# Note that structures with right/left labels are now associated with the same class.
generation_classes = '../../data/labels_classes_priors/generation_classes_contrast_specific.npy'
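# Illustration only (not used below, hypothetical file name): the .npy file loaded above simply stores an integer
# array like the example in the comments, and could be created with numpy, e.g.:
# np.save('./my_generation_classes.npy', np.array([0, 1, 2, 3, 4, 5, 4, 6, 3, 4, 5, 4, 6]))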
# We specify here the hyperparameters governing the prior distribution of the GMM.
# As these prior distributions are Gaussians, they are each controlled by a mean and a standard deviation.
# Therefore, the numpy array pointed to by prior_means is of size (2, K), where K is the total number of classes
# specified in generation_classes. The first row of prior_means corresponds to the means of the Gaussian priors, and
# the second row corresponds to their standard deviations.
#
# Example: (continuing the previous one) prior_means = np.array([[0, 30, 80, 110, 95, 40, 70],
#                                                                 [0, 10, 50, 15, 10, 15, 30]])
# This means that the intensities of labels 3 and 17, which are both in class 4, will be drawn from the same Gaussian
# distribution, whose mean will itself be sampled from the prior with index 4 in prior_means, i.e. N(95, 10).
# Here is the complete table of correspondence for this example:
# mean of Gaussian for label 0 drawn from N(0,0)=0
# mean of Gaussian for label 24 drawn from N(30,10)
# mean of Gaussian for label 507 drawn from N(80,50)
# mean of Gaussian for labels 2 and 41 drawn from N(110,15)
# mean of Gaussian for labels 3, 17, 42, 53 drawn from N(95,10)
# mean of Gaussian for labels 4 and 43 drawn from N(40,15)
# mean of Gaussian for labels 25 and 57 drawn from N(70,30)
# These hyperparameters were estimated with the function SynthSeg/estimate_priors.py/build_intensity_stats()
prior_means = '../../data/labels_classes_priors/prior_means_t1.npy'
# same as for prior_means, but for the standard deviations of the GMM.
prior_stds = '../../data/labels_classes_priors/prior_stds_t1.npy'
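# Illustration only (not used below, hypothetical file name): the two .npy files above are simply saved numpy arrays
# of shape (2, K). For the 7-class example given in the comments, prior_means could be built directly as follows;
# prior_stds follows exactly the same (2, K) layout.
example_prior_means = np.array([[0., 30., 80., 110., 95., 40., 70.],   # row 0: means of the Gaussian priors
                                [0., 10., 50., 15., 10., 15., 30.]])   # row 1: stds of the Gaussian priors
# np.save('./my_prior_means_t1.npy', example_prior_means)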
# ---------- Resolution parameters ----------
# here we aim to synthesise data at a specific resolution, so we no longer randomise it!
randomise_res = False
blur_range = 1.03  # randomise the blurring kernel: its std is multiplied by a factor sampled in [1/blur_range, blur_range]
# blurring/downsampling parameters
# We specify here the slice spacing/thickness that we want the synthetic scans to mimic. The axes refer to the *RAS*
# axes, as all the provided data (label maps and images) will be automatically aligned to those axes during training.
# RAS refers to Right-left/Anterior-posterior/Superior-inferior axes, i.e. sagittal/coronal/axial directions.
data_res = np.array([1., 1., 3.]) # slice spacing i.e. resolution to mimic
thickness = np.array([1., 1., 3.]) # slice thickness
# Because we have a large gap between the resolution at which we sample the GMM (1mm iso) and the LR we want to
# simulate, we decide here to downsample the Gaussian image to LR. If downsampled, the data will then be upsampled back
# to the target HR resolution (the one of the training label maps by default). This downsampling/upsampling step
# enables us to reproduce the process that will happen at test time: real LR scans will be upsampled to HR, and run
# through the network to obtain the HR segmentation.
downsample = True
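# Illustration only (not used by this script): with the RAS convention above, simulating another acquisition direction
# only changes which axis carries the low resolution, e.g.:
# data_res = thickness = np.array([1., 3., 1.])   # 3mm coronal scans (thick slices along the anterior-posterior axis)
# data_res = thickness = np.array([3., 1., 1.])   # 3mm sagittal scans (thick slices along the right-left axis)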
# ------------------------------------------------------ Generate ------------------------------------------------------
# instantiate BrainGenerator object
brain_generator = BrainGenerator(labels_dir=path_label_map,
                                 generation_labels=generation_labels,
                                 output_labels=output_labels,
                                 n_neutral_labels=n_neutral_labels,
                                 output_shape=output_shape,
                                 prior_distributions=prior_distributions,
                                 generation_classes=generation_classes,
                                 prior_means=prior_means,
                                 prior_stds=prior_stds,
                                 randomise_res=randomise_res,
                                 data_res=data_res,
                                 thickness=thickness,
                                 downsample=downsample,
                                 blur_range=blur_range)
for n in range(n_examples):
    # generate new image and corresponding labels
    im, lab = brain_generator.generate_brain()
    # save output image and label map
    utils.save_volume(im, brain_generator.aff, brain_generator.header,
                      os.path.join(result_dir, 'image_t1_%s.nii.gz' % n))
    utils.save_volume(lab, brain_generator.aff, brain_generator.header,
                      os.path.join(result_dir, 'labels_t1_%s.nii.gz' % n))