Analysis code for the paper "RF shimming in the cervical spinal cord at 7T"


Daniel Papp1, Kyle M. Gilbert2,3, Gaspard Cereza1, Alexandre D’Astous1, Nibardo Lopez-Rios1, Mathieu Boudreau1, Marcus Couch4, Pedram Yazdanbakhsh5, Robert L. Barry6,7,8, Eva Alonso Ortiz1, Julien Cohen-Adad1,9,10


1 NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; 2 Centre for Functional and Metabolic Mapping, The University of Western Ontario, London, ON, Canada; 3 Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada; 4 Siemens Healthcare Limited, Montreal, QC, Canada; 5 McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; 6 Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; 7 Harvard Medical School, Boston, MA, USA; 8 Harvard-Massachusetts Institute of Technology Health Sciences & Technology, Cambridge, MA, USA; 9 Mila - Quebec AI Institute, Montreal, QC, Canada; 10 Functional Neuroimaging Unit, Centre de recherche de l'Institut universitaire de gériatrie de Montréal, QC, Canada


Summary


Data were collected from five participants at two 7T sites using a custom 8Tx/20Rx parallel transmission (pTx) coil. We explored two RF shimming approaches from an MRI vendor and four from an open-source toolbox, showcasing their ability to enhance transmit field and signal homogeneity along the cervical spinal cord.


The results indicate significant improvements in B1+ efficiency and cerebrospinal fluid/spinal cord signal ratio across various RF shimming conditions compared to the vendor-based methods.


The study’s findings highlight the potential of RF shimming to advance 7T MRI’s clinical utility for central nervous system imaging by enabling more homogeneous and efficient spinal cord imaging. Additionally, the research incorporates a reproducible Jupyter Notebook, enhancing the study’s transparency and facilitating peer verification. By also making the data openly accessible on OpenNeuro, we ensure that the scientific community can further explore, validate, and build upon our findings, contributing to the collective advancement in high-resolution imaging techniques.

[Image: _images/featured.png]

Figure 1. Overview of the RF shimming procedure. The top panel shows the RF coil used for the experiments, alongside the Tx coil geometry and the electromagnetic simulation results (on the Gustav model) yielding the CP mode used for this coil. The bottom panel shows the RF shimming procedure (with approximate durations). First, GRE and tfl_rfmap scans are acquired (4min30s). Second, these images are transferred via ethernet socket from the MRI console to a separate laptop running Shimming Toolbox and SCT. Third, the spinal cord is automatically segmented to produce a mask that is resampled into the space of the individual coil magnitude and phase images of the tfl_rfmap scan (~5s). Fourth, the RF shim weights are calculated according to the defined constraints for each shim scenario (1min total).
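To make the shim-weight step above concrete, here is a minimal, hypothetical sketch of the quantity being optimized: the per-channel complex B1+ maps are combined with complex weights, and a cost such as the coefficient of variation (the "CoV" scenario) is evaluated inside the spinal cord mask. The array shapes, random data, and function names are illustrative assumptions; the actual optimization (constraints, solver, SAR handling) is performed by Shimming Toolbox.

import numpy as np

def combine_channels(weights, b1_per_channel):
    """Complex-weighted combination of per-channel B1+ maps of shape (n_channels, x, y, z)."""
    return np.tensordot(weights, b1_per_channel, axes=1)  # -> (x, y, z)

def cov_cost(weights, b1_per_channel, mask):
    """Coefficient of variation of |B1+| inside the shimming mask (lower = more homogeneous)."""
    b1_combined = np.abs(combine_channels(weights, b1_per_channel))
    values = b1_combined[mask]
    return values.std() / values.mean()

# Toy example with random per-channel maps for an 8-channel Tx coil (illustration only)
rng = np.random.default_rng(0)
b1 = rng.standard_normal((8, 64, 64, 16)) + 1j * rng.standard_normal((8, 64, 64, 16))
mask = np.zeros((64, 64, 16), dtype=bool)
mask[28:36, 28:36, :] = True
cp_weights = np.exp(1j * 2 * np.pi * np.arange(8) / 8)  # CP-like incremental phases
print(f"CoV inside the mask with CP-like weights: {cov_cost(cp_weights, b1, mask):.3f}")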

Statement of Need


Advancing the development of 7T MRI for spinal cord imaging is crucial for the enhanced diagnosis and monitoring of various neurodegenerative diseases [Kearney et al., 2015] and traumas [David et al., 2019]. However, a significant challenge at this field strength is the transmit field inhomogeneity [Collins et al., 2005, Ibrahim et al., 2001, Röschmann, 1987, Yang et al., 2002]. Such inhomogeneity is particularly problematic for imaging the small, deep anatomical structures of the cervical spinal cord, as it can cause uneven signal intensity and elevate the local specific absorption rate (SAR), compromising image quality. This multi-site study explores several radiofrequency (RF) shimming techniques in the cervical spinal cord at 7T.


1     |     Data


The data can be downloaded from: https://openneuro.org/datasets/ds004906


The structure of the input dataset is as follows (JSON sidecars are not listed for clarity):

ds004906
├── CHANGES
├── README
├── dataset_description.json
├── participants.json
├── participants.tsv
├── sub-01
│   ├── anat
│   │   ├── sub-01_acq-CP_T1w.nii.gz
│   │   ├── sub-01_acq-CP_T2starw.nii.gz
│   │   ├── sub-01_acq-CoV_T1w.nii.gz
│   │   ├── sub-01_acq-CoV_T2starw.nii.gz
│   │   ├── sub-01_acq-SAReff_T2starw.nii.gz
│   │   ├── sub-01_acq-patient_T2starw.nii.gz
│   │   ├── sub-01_acq-phase_T2starw.nii.gz
│   │   ├── sub-01_acq-target_T2starw.nii.gz
│   │   ├── sub-01_acq-volume_T2starw.nii.gz
│   └── fmap
│       ├── sub-01_acq-anatCP_TB1TFL.nii.gz
│       ├── sub-01_acq-anatCoV_TB1TFL.nii.gz
│       ├── sub-01_acq-anatSAReff_TB1TFL.nii.gz
│       ├── sub-01_acq-anatpatient_TB1TFL.nii.gz
│       ├── sub-01_acq-anatphase_TB1TFL.nii.gz
│       ├── sub-01_acq-anattarget_TB1TFL.nii.gz
│       ├── sub-01_acq-anatvolume_TB1TFL.nii.gz
│       ├── sub-01_acq-fampCP_TB1TFL.nii.gz
│       ├── sub-01_acq-fampCoV_TB1TFL.nii.gz
│       ├── sub-01_acq-fampSAReff_TB1TFL.nii.gz
│       ├── sub-01_acq-famppatient_TB1TFL.nii.gz
│       ├── sub-01_acq-fampphase_TB1TFL.nii.gz
│       ├── sub-01_acq-famptarget_TB1TFL.nii.gz
│       └── sub-01_acq-fampvolume_TB1TFL.nii.gz
├── sub-02
├── sub-03
├── sub-04
└── sub-05

2     |     Overview of processing pipeline


During the data acquisition stage, RF shimming was performed using the Shimming Toolbox [D'Astous et al., 2023].


The post-processing pipeline uses the Spinal Cord Toolbox [De Leener et al., 2017].


For each subject:

  • Process anat/T2starw (GRE)
      • Segment the spinal cord (SC)
      • Label vertebral levels using existing manual disc labels
      • Create a mask of the cerebrospinal fluid (CSF)
      • Extract the SC/CSF magnitude signal to assess the stability of the flip angle across shim methods
  • Process fmap/TFL (flip angle maps)
      • Register each B1 map (CP, CoV, etc.) to the GRE scan
      • Apply the computed warping field to bring the segmentation and vertebral levels to the B1 map
      • Convert the B1 map to nT/V units
      • Extract the B1 map value within the SC

Slow processes are indicated with the emoji ⏳

3     |     Requirements

# Install SCT ⏳
import os
!git clone --depth 1 https://github.com/spinalcordtoolbox/spinalcordtoolbox.git
!yes | spinalcordtoolbox/install_sct
os.environ['PATH'] += f":/content/spinalcordtoolbox/bin"
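As an optional sanity check (not part of the original pipeline), SCT ships a self-test command that can confirm the binaries added to the PATH above are found:

# Optional: verify the SCT installation (prints the status of SCT dependencies)
!sct_check_dependencies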
  • Download the project repository

git clone https://github.com/shimming-toolbox/rf-shimming-7t.git
cd rf-shimming-7t
git checkout main
  • Install requirements

pip install -r binder/requirements.txt
  • Download data

cd content
repo2data -r ../binder/data_requirement.json
4     |     Environment setup


In a Python shell:


Import the necessary modules.

from datetime import datetime, timedelta
import os
import re
import json
import subprocess
import glob
import itertools
import matplotlib.pyplot as plt
import numpy as np
import nibabel as nib
import pandas as pd
from scipy.interpolate import interp1d
from scipy.ndimage import uniform_filter1d
from scipy.stats import f_oneway, ttest_rel
import shutil
from shutil import rmtree
from pathlib import Path
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.anova import AnovaRM
import statsmodels.api as sm

Define variables.

path_data = os.getcwd()
print(f"path_data: {path_data}")
path_labels = os.path.join(path_data, "derivatives", "labels")
path_qc = os.path.join(path_data, "qc")
shim_modes = ["CP", "patient", "volume", "phase", "CoV", "target", "SAReff"]
shim_modes_MPRAGE = ["CP", "CoV"]

print(f"shim_modes: {shim_modes}")
subjects = sorted(glob.glob("sub-*"))
print(f"subjects: {subjects}")

# Create output folder
path_results = os.path.join(path_data, 'derivatives', 'results')
os.makedirs(path_results, exist_ok=True)

5     |     Process anat/T2starw (GRE)


Run segmentation on GRE scan

# Note: `notebook` is assumed to be defined upstream (e.g., injected when the notebook is built);
# it controls whether the processing cells run or only the figure-generating cells.
if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        # The "CoV" RF shimming scenario was chosen as the segmentation baseline due to the more homogeneous signal intensity in the I-->S direction, which results in better segmentation performance in the C7-T2 region
        fname_manual_seg = os.path.join(path_labels, subject, "anat", f"{subject}_acq-CoV_T2starw_label-SC_seg.nii.gz")
        if os.path.exists(fname_manual_seg):
            # Manual segmentation already exists. Copy it to local folder
            print(f"{subject}: Manual segmentation found\n")
            shutil.copyfile(fname_manual_seg, f"{subject}_acq-CoV_T2starw_seg.nii.gz")
            # Generate QC report to make sure the manual segmentation is correct
            !sct_qc -i {subject}_acq-CoV_T2starw.nii.gz -s {subject}_acq-CoV_T2starw_seg.nii.gz -p sct_deepseg_sc -qc {path_qc} -qc-subject {subject}
        else:
            # Manual segmentation does not exist. Run automatic segmentation.
            print(f"{subject}: Manual segmentation not found")
            !sct_deepseg_sc -i {subject}_acq-CoV_T2starw.nii.gz -c t2s -qc {path_qc}

Copy CSF masks from the derivatives folder.


For more details about how these masks were created, see: https://github.com/shimming-toolbox/rf-shimming-7t/issues/67.

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        fname_manual_seg = os.path.join(path_labels, subject, "anat", f"{subject}_acq-CoV_T2starw_label-CSF_seg.nii.gz")
        if os.path.exists(fname_manual_seg):
            # Manual segmentation already exists. Copy it to local folder
            print(f"{subject}: Manual segmentation found\n")
            shutil.copyfile(fname_manual_seg, f"{subject}_acq-CoV_T2starw_label-CSF_seg.nii.gz")
            # Generate QC report to make sure the manual segmentation is correct
            !sct_qc -i {subject}_acq-CoV_T2starw.nii.gz -s {subject}_acq-CoV_T2starw_label-CSF_seg.nii.gz -p sct_deepseg_sc -qc {path_qc} -qc-subject {subject}
        else:
            # Manual segmentation does not exist. Panic!
            print(f"{subject}: Manual segmentation not found 😱")

Crop GRE scan for faster processing and better registration results

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        !sct_crop_image -i {subject}_acq-CoV_T2starw.nii.gz -m {subject}_acq-CoV_T2starw_seg.nii.gz -dilate 20x20x0 -o {subject}_acq-CoV_T2starw_crop.nii.gz
        !sct_crop_image -i {subject}_acq-CoV_T2starw_seg.nii.gz -m {subject}_acq-CoV_T2starw_seg.nii.gz -dilate 20x20x0 -o {subject}_acq-CoV_T2starw_crop_seg.nii.gz
        !sct_crop_image -i {subject}_acq-CoV_T2starw_label-CSF_seg.nii.gz -m {subject}_acq-CoV_T2starw_seg.nii.gz -dilate 20x20x0 -o {subject}_acq-CoV_T2starw_crop_label-CSF_seg.nii.gz

Label vertebrae on GRE scan

# Given the low resolution of the GRE scan, the automatic detection of C2-C3 disc is unreliable. Therefore we need to use the manual disc labels that are part of the dataset.
if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        fname_label_discs = os.path.join(path_data, "derivatives", "labels", subject, "anat", f"{subject}_acq-CoV_T2starw_label-discs_dseg.nii.gz")
        !sct_label_utils -i {subject}_acq-CoV_T2starw_crop_seg.nii.gz -disc {fname_label_discs} -o {subject}_acq-CoV_T2starw_crop_seg_labeled.nii.gz
        # Generate QC report to assess labeled segmentation
        !sct_qc -i {subject}_acq-CoV_T2starw_crop.nii.gz -s {subject}_acq-CoV_T2starw_crop_seg_labeled.nii.gz -p sct_label_vertebrae -qc {path_qc} -qc-subject {subject}

Register *_T2starw to CoV_T2starw ⏳

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        for shim_mode in shim_modes:
            # Don't do it for CoV_T2starw
            if shim_mode != 'CoV':
                !sct_register_multimodal -i {subject}_acq-{shim_mode}_T2starw.nii.gz -d {subject}_acq-CoV_T2starw_crop.nii.gz -dseg {subject}_acq-CoV_T2starw_crop_seg.nii.gz -param step=1,type=im,algo=dl -qc {path_qc}

Extract the signal intensity on the GRE scan within the spinal cord between levels C3 and T2 (inclusive), which corresponds to the region where RF shimming was prescribed.

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "anat"))
        for shim_mode in shim_modes:
            # Shim methods are registered to the CoV T2starw scan, so we need to use the added suffix to identify them
            if shim_mode == 'CoV':
                file_suffix = 'crop'
            else:
                file_suffix = 'reg'
            fname_result_sc = os.path.join(path_results, f"{subject}_acq-{shim_mode}_T2starw_label-SC.csv")
            !sct_extract_metric -i {subject}_acq-{shim_mode}_T2starw_{file_suffix}.nii.gz -f {subject}_acq-CoV_T2starw_crop_seg.nii.gz -method wa -vert 3:9 -vertfile {subject}_acq-CoV_T2starw_crop_seg_labeled.nii.gz -perslice 1 -o {fname_result_sc}
            fname_result_csf = os.path.join(path_results, f"{subject}_acq-{shim_mode}_T2starw_label-CSF.csv")
            !sct_extract_metric -i {subject}_acq-{shim_mode}_T2starw_{file_suffix}.nii.gz -f {subject}_acq-CoV_T2starw_crop_label-CSF_seg.nii.gz -method wa -vert 3:9 -vertfile {subject}_acq-CoV_T2starw_crop_seg_labeled.nii.gz -perslice 1 -o {fname_result_csf}

Prepare data for figure of CSF/SC signal ratio from T2starw scan

%matplotlib inline

# Go back to root data folder
os.chdir(path_data)

def smooth_data(data, window_size=50):
    """ Apply a simple moving average to smooth the data. """
    return uniform_filter1d(data, size=window_size, mode='nearest')

# Fixed grid for x-axis
x_grid = np.linspace(0, 1, 100)

# z-slices corresponding to levels C3 to T2 on the PAM50 template. These will be used to scale the x-label of each subject.
original_vector = np.array([907, 870, 833, 800, 769, 735, 692, 646])

# Normalize the PAM50 z-slice numbers to the 1-0 range (to show inferior-superior instead of superior-inferior)
min_val = original_vector.min()
max_val = original_vector.max()
normalized_vector = 1 - ((original_vector - min_val) / (max_val - min_val))

# Use this normalized vector as x-ticks
custom_xticks = normalized_vector

# Vertebral level labels
vertebral_levels = ["C3", "C4", "C5", "C6", "C7", "T1", "T2"]
label_positions = normalized_vector[:-1] + np.diff(normalized_vector) / 2

# Number of subjects determines the number of rows in the subplot
n_rows = len(subjects)

# Data storage for statistics
data_stats = []

# Data storage for Plotly
t2_data_plotly = {}

# Iterate over each subject and create a subplot
for i, subject in enumerate(subjects):

    t2_data_plotly[subject] = {}
    for shim_mode in shim_modes:
        # Initialize list to collect data for this shim method
        method_data = []

        # Get signal in SC
        file_csv = os.path.join(path_results, f"{subject}_acq-{shim_mode}_T2starw_label-SC.csv")
        df = pd.read_csv(file_csv)
        data_sc = df['WA()']
        data_sc_smoothed = smooth_data(data_sc)

        # Get signal in CSF
        file_csv = os.path.join(path_results, f"{subject}_acq-{shim_mode}_T2starw_label-CSF.csv")
        df = pd.read_csv(file_csv)
        data_csf = df['WA()']
        data_csf_smoothed = smooth_data(data_csf)

        # Compute ratio
        data_sc_csf_ratio = data_csf_smoothed / data_sc_smoothed

        # Normalize the x-axis to a 1-0 scale for each subject (to go from superior-inferior direction)
        x_subject = np.linspace(1, 0, len(data_sc_csf_ratio))

        # Interpolate to the fixed grid
        interp_func = interp1d(x_subject, data_sc_csf_ratio, kind='linear', bounds_error=False, fill_value='extrapolate')
        resampled_data = interp_func(x_grid)

        method_data.append(resampled_data)

        # If there's data for this shim method, plot it and compute stats
        if method_data:
            t2_data_plotly[subject][shim_mode] = method_data

            # Compute stats on the non-resampled data (to avoid interpolation errors)
            mean_data = np.mean(data_sc_csf_ratio)
            sd_data = np.std(data_sc_csf_ratio)
            data_stats.append([subject, shim_mode, mean_data, sd_data])
        else:
            t2_data_plotly[subject][shim_mode] = None

Perform statistics
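A brief note on the correction used in the cell below: the 21 post hoc paired t-tests (all pairs of the 7 shim modes) are adjusted with a Benjamini-Hochberg step. With the p-values sorted in ascending order, the `benjamini_hochberg` helper defined in the code computes

$$\tilde{p}_{(i)} = \min\!\left(\frac{m}{i}\,p_{(i)},\; 1\right), \qquad i = 1,\dots,m,\quad m = 21$$

Note that, unlike the textbook procedure, this simplified version does not enforce monotonicity of the adjusted values across ranks.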

# Perform statistics

# Convert to DataFrame and save to CSV
df_stats = pd.DataFrame(data_stats, columns=['Subject', 'Shim_Mode', 'Average', 'Standard_Deviation'])
df_stats.to_csv(os.path.join(path_results, 'stats_t2starw.csv'), index=False)

# Compute statistics across subjects
grouped_means = df_stats.groupby('Shim_Mode').agg({'Average': ['mean', 'std'], 'Standard_Deviation': ['mean', 'std']})

# Format mean ± standard deviation
grouped_means['Average_formatted'] = grouped_means['Average']['mean'].map("{:.2f}".format) + " ± " + grouped_means['Average']['std'].map("{:.2f}".format)
grouped_means['Standard_Deviation_formatted'] = grouped_means['Standard_Deviation']['mean'].map("{:.2f}".format) + " ± " + grouped_means['Standard_Deviation']['std'].map("{:.2f}".format)

# Drop multi-level index and only keep formatted columns
grouped_means = grouped_means.drop(columns=['Average', 'Standard_Deviation'])
grouped_means.columns = ['Average', 'Standard_Deviation']  # Rename columns for clarity
grouped_means.reset_index().to_csv(os.path.join(path_results, 'stats_t2starw_average_across_subjects.csv'), index=False)

# Perform Repeated Measures ANOVA
aovrm = AnovaRM(df_stats, 'Average', 'Subject', within=['Shim_Mode'])
res = aovrm.fit()
print(res.summary())

# Perform Post Hoc paired t-tests

# Filter out subjects that don't have all shim modes
shim_modes = df_stats['Shim_Mode'].unique()
valid_subjects = df_stats.groupby('Subject').filter(lambda x: all(mode in x['Shim_Mode'].values for mode in shim_modes))

# Pairwise comparisons
pairs = list(itertools.combinations(shim_modes, 2))
p_values = []

for pair in pairs:
    data1 = valid_subjects[valid_subjects['Shim_Mode'] == pair[0]]['Average']
    data2 = valid_subjects[valid_subjects['Shim_Mode'] == pair[1]]['Average']
    _, p_value = ttest_rel(data1, data2)
    p_values.append(p_value)

# Function for Benjamini-Hochberg FDR correction
def benjamini_hochberg(p_values):
    n = len(p_values)
    sorted_p_values = np.sort(p_values)
    sorted_index = np.argsort(p_values)
    adjusted_p_values = np.zeros(n)

    for i, p in enumerate(sorted_p_values):
        adjusted_p_values[sorted_index[i]] = min(p * n / (i + 1), 1)

    return adjusted_p_values

# Applying Benjamini-Hochberg FDR correction
adjusted_p_values = benjamini_hochberg(p_values)

# Creating a summary DataFrame
comparison_results = pd.DataFrame({'Pair': pairs, 'P-Value': p_values, 'Adjusted P-Value': adjusted_p_values})

# Save to CSV
comparison_results.to_csv(os.path.join(path_results, 'stats_t2starw_paired_ttest.csv'), index=False)

print(f"Paired t-tests:\n{comparison_results}")
                 Anova
=======================================
          F Value Num DF  Den DF Pr > F
---------------------------------------
Shim_Mode  9.0936 6.0000 24.0000 0.0000
=======================================

Paired t-tests:
                 Pair   P-Value  Adjusted P-Value
0       (CP, patient)  0.005972          0.025083
1        (CP, volume)  0.075373          0.131902
2         (CP, phase)  0.011296          0.033889
3           (CP, CoV)  0.000156          0.003279
4        (CP, target)  0.030643          0.080437
5        (CP, SAReff)  0.197466          0.276452
6   (patient, volume)  0.205065          0.269148
7    (patient, phase)  0.296440          0.366191
8      (patient, CoV)  0.001437          0.010061
9   (patient, target)  0.154750          0.232125
10  (patient, SAReff)  0.031202          0.072804
11    (volume, phase)  0.413730          0.457281
12      (volume, CoV)  0.404085          0.471432
13   (volume, target)  0.552388          0.552388
14   (volume, SAReff)  0.075590          0.122108
15       (phase, CoV)  0.009898          0.034641
16    (phase, target)  0.496343          0.521160
17    (phase, SAReff)  0.005612          0.029461
18      (CoV, target)  0.043267          0.082600
19      (CoV, SAReff)  0.001394          0.014638
20   (target, SAReff)  0.036743          0.077159

6     |     Process fmap/TFL (flip angle maps)


Register TFL flip angle maps to the GRE scan ⏳

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "fmap"))
        for shim_mode in shim_modes:
            !sct_register_multimodal -i {subject}_acq-anat{shim_mode}_TB1TFL.nii.gz -d ../anat/{subject}_acq-CoV_T2starw_crop.nii.gz -dseg ../anat/{subject}_acq-CoV_T2starw_crop_seg.nii.gz -param step=1,type=im,algo=slicereg,metric=CC -qc {path_qc}

Warp the spinal cord segmentation and vertebral levels to each flip angle map

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "fmap"))
        for shim_mode in shim_modes:
            # Use linear interpolation to preserve partial volume information (needed when extracting values along the cord)
            !sct_apply_transfo -i ../anat/{subject}_acq-CoV_T2starw_crop_seg.nii.gz -d {subject}_acq-anat{shim_mode}_TB1TFL.nii.gz -w warp_{subject}_acq-CoV_T2starw_crop2{subject}_acq-anat{shim_mode}_TB1TFL.nii.gz -x linear -o {subject}_acq-anat{shim_mode}_TB1TFL_seg.nii.gz
            # Use nearest neighbour (nn) interpolation because we are dealing with non-binary discrete labels
            !sct_apply_transfo -i ../anat/{subject}_acq-CoV_T2starw_crop_seg_labeled.nii.gz -d {subject}_acq-anat{shim_mode}_TB1TFL.nii.gz -w warp_{subject}_acq-CoV_T2starw_crop2{subject}_acq-anat{shim_mode}_TB1TFL.nii.gz -x nn -o {subject}_acq-anat{shim_mode}_TB1TFL_seg_labeled.nii.gz

Convert the flip angle maps to B1+ efficiency maps [nT/V] (inspired by code from Kyle Gilbert).


The approach consists of calculating the B1+ efficiency of a 1 ms pi-pulse at the acquisition voltage, then scaling that efficiency by the ratio of the measured flip angle to the flip angle requested in the pulse sequence.
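Written out, the conversion implemented in the cell below is (γ: gyromagnetic ratio, τ = 1 ms: pulse duration, V: reference voltage corrected for the coil-to-socket power loss, α_meas/α_req: measured and requested flip angles):

$$B_1^+\,[\mathrm{T/V}] \;=\; \frac{\alpha_\mathrm{meas}}{\alpha_\mathrm{req}}\cdot\frac{\pi}{\gamma\,\tau\,V}$$

The result is then multiplied by 10^9 to express it in nT/V.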

GAMMA = 2.675e8  # [rad / (s T)]
requested_fa = 90  # saturation flip angle -- hard-coded in sequence

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "fmap"))
        for shim_mode in shim_modes:
            # Fetch the reference voltage from the JSON sidecar to the TFL B1map sequence
            with open(f"{subject}_acq-famp{shim_mode}_TB1TFL.json", "r") as f:
                metadata = json.load(f)
                ref_voltage = metadata.get("TxRefAmp", "N/A")
                print(f"ref_voltage [V]: {ref_voltage} ({subject}_acq-famp{shim_mode}_TB1TFL)")

            # Open flip angle map with nibabel
            nii = nib.load(f"{subject}_acq-famp{shim_mode}_TB1TFL.nii.gz")
            acquired_fa = nii.get_fdata()

            # Siemens maps are in units of flip angle * 10 (in degrees)
            acquired_fa = acquired_fa / 10

            # Account for the power loss between the coil and the socket. That number was given by Siemens.
            voltage_at_socket = ref_voltage * 10 ** -0.095

            # Compute B1 map in [T/V]
            b1_map = (acquired_fa / requested_fa) * (np.pi / (GAMMA * 1e-3 * voltage_at_socket))

            # Convert to [nT/V]
            b1_map = b1_map * 1e9

            # Save as NIfTI file
            nii_b1 = nib.Nifti1Image(b1_map, nii.affine, nii.header)
            nib.save(nii_b1, f"{subject}_acq-{shim_mode}_TB1map.nii.gz")

Extract the B1+ values along the spinal cord between levels C3 and T2 (inclusive)

if notebook != 'neurolibre-figures':
    for subject in subjects:
        os.chdir(os.path.join(path_data, subject, "fmap"))
        for shim_mode in shim_modes:
            fname_result_b1plus = os.path.join(path_results, f"{subject}_TB1map_{shim_mode}.csv")
            !sct_extract_metric -i {subject}_acq-{shim_mode}_TB1map.nii.gz -f {subject}_acq-anat{shim_mode}_TB1TFL_seg.nii.gz -method wa -vert 3:9 -vertfile {subject}_acq-anat{shim_mode}_TB1TFL_seg_labeled.nii.gz -perslice 1 -o {fname_result_b1plus}

Prepare data for figure of B1+ values along the spinal cord across shim methods

# Go back to root data folder
os.chdir(os.path.join(path_data))

def smooth_data(data, window_size=20):
    """ Apply a simple moving average to smooth the data. """
    return uniform_filter1d(data, size=window_size, mode='nearest')

# Fixed grid for x-axis
x_grid = np.linspace(0, 1, 100)

# z-slices corresponding to levels C3 to T2 on the PAM50 template. These will be used to scale the x-label of each subject.
original_vector = np.array([907, 870, 833, 800, 769, 735, 692, 646])

# Normalize the PAM50 z-slice numbers to the 1-0 range (to show inferior-superior instead of superior-inferior)
min_val = original_vector.min()
max_val = original_vector.max()
normalized_vector = 1 - ((original_vector - min_val) / (max_val - min_val))

# Use this normalized vector as x-ticks
custom_xticks = normalized_vector

# Vertebral level labels
vertebral_levels = ["C3", "C4", "C5", "C6", "C7", "T1", "T2"]
# Calculate midpoints for label positions
label_positions = normalized_vector[:-1] + np.diff(normalized_vector) / 2

# Number of subjects determines the number of rows in the subplot
n_rows = len(subjects)

# Data storage for statistics
data_stats = []

# Data storage for Plotly
b1_data_plotly = {}

# Iterate over each subject and create a subplot
for i, subject in enumerate(subjects):

    os.chdir(os.path.join(path_data, subject, "fmap"))
    b1_data_plotly[subject] = {}

    for shim_mode in shim_modes:
        # Initialize list to collect data for this shim method
        method_data = []

        file_csv = os.path.join(path_results, f"{subject}_TB1map_{shim_mode}.csv")
        df = pd.read_csv(file_csv)
        wa_data = df['WA()']

        # Normalize the x-axis to a 1-0 scale for each subject (to go from superior-inferior direction)
        x_subject = np.linspace(1, 0, len(wa_data))

        # Interpolate to the fixed grid
        interp_func = interp1d(x_subject, wa_data, kind='linear', bounds_error=False, fill_value='extrapolate')
        resampled_data = interp_func(x_grid)

        # Apply smoothing
        smoothed_data = smooth_data(resampled_data)

        method_data.append(smoothed_data)

        # If there's data for this shim method, plot it
        if method_data:
            # Plotting each file's data separately
            for resampled_data in method_data:
                b1_data_plotly[subject][shim_mode] = resampled_data

            # Compute stats on the non-resampled data (to avoid interpolation errors)
            mean_data = np.mean(wa_data)
            sd_data = np.std(wa_data)
            data_stats.append([subject, shim_mode, mean_data, sd_data])
        else:
            b1_data_plotly[subject][shim_mode] = None

Perform statistics

# Convert to DataFrame and save to CSV
df_stats = pd.DataFrame(data_stats, columns=['Subject', 'Shim_Mode', 'Average', 'Standard_Deviation'])
df_stats.to_csv(os.path.join(path_results, 'stats_b1plus.csv'), index=False)

# Compute statistics across subjects
grouped_means = df_stats.groupby('Shim_Mode').agg({'Average': ['mean', 'std'], 'Standard_Deviation': ['mean', 'std']})

# Format mean ± standard deviation
grouped_means['Average_formatted'] = grouped_means['Average']['mean'].map("{:.2f}".format) + " ± " + grouped_means['Average']['std'].map("{:.2f}".format)
grouped_means['Standard_Deviation_formatted'] = grouped_means['Standard_Deviation']['mean'].map("{:.2f}".format) + " ± " + grouped_means['Standard_Deviation']['std'].map("{:.2f}".format)

# Drop multi-level index and only keep formatted columns
grouped_means = grouped_means.drop(columns=['Average', 'Standard_Deviation'])
grouped_means.columns = ['Average', 'Standard_Deviation']  # Rename columns for clarity
grouped_means.reset_index().to_csv(os.path.join(path_results, 'stats_b1plus_average_across_subjects.csv'), index=False)

# Perform Repeated Measures ANOVA
aovrm = AnovaRM(df_stats, 'Average', 'Subject', within=['Shim_Mode'])
res = aovrm.fit()
print(res.summary())

# Perform Post Hoc paired t-tests

# Filter out subjects that don't have all shim modes
shim_modes = df_stats['Shim_Mode'].unique()
valid_subjects = df_stats.groupby('Subject').filter(lambda x: all(mode in x['Shim_Mode'].values for mode in shim_modes))

# Pairwise comparisons
pairs = list(itertools.combinations(shim_modes, 2))
p_values = []

for pair in pairs:
    data1 = valid_subjects[valid_subjects['Shim_Mode'] == pair[0]]['Average']
    data2 = valid_subjects[valid_subjects['Shim_Mode'] == pair[1]]['Average']
    _, p_value = ttest_rel(data1, data2)
    p_values.append(p_value)

# Function for Benjamini-Hochberg FDR correction
def benjamini_hochberg(p_values):
    n = len(p_values)
    sorted_p_values = np.sort(p_values)
    sorted_index = np.argsort(p_values)
    adjusted_p_values = np.zeros(n)

    for i, p in enumerate(sorted_p_values):
        adjusted_p_values[sorted_index[i]] = min(p * n / (i + 1), 1)

    return adjusted_p_values

# Applying Benjamini-Hochberg FDR correction
adjusted_p_values = benjamini_hochberg(p_values)

# Creating a summary DataFrame
comparison_results = pd.DataFrame({'Pair': pairs, 'P-Value': p_values, 'Adjusted P-Value': adjusted_p_values})

# Save to CSV
comparison_results.to_csv(os.path.join(path_results, 'stats_b1plus_paired_ttest.csv'), index=False)

print(f"Paired t-tests:\n{comparison_results}")
                 Anova
=======================================
          F Value Num DF  Den DF Pr > F
---------------------------------------
Shim_Mode 14.0743 6.0000 24.0000 0.0000
=======================================

Paired t-tests:
                 Pair   P-Value  Adjusted P-Value
0       (CP, patient)  0.007728          0.040570
1        (CP, volume)  0.009006          0.037825
2         (CP, phase)  0.079896          0.104863
3           (CP, CoV)  0.318010          0.371012
4        (CP, target)  0.010284          0.030853
5        (CP, SAReff)  0.563249          0.622539
6   (patient, volume)  0.821572          0.862651
7    (patient, phase)  0.005550          0.116560
8      (patient, CoV)  0.036644          0.059195
9   (patient, target)  0.006302          0.066176
10  (patient, SAReff)  0.010083          0.035289
11    (volume, phase)  0.006565          0.045958
12      (volume, CoV)  0.032112          0.056197
13   (volume, target)  0.027717          0.052915
14   (volume, SAReff)  0.012891          0.030078
15       (phase, CoV)  0.989088          0.989088
16    (phase, target)  0.011515          0.030226
17    (phase, SAReff)  0.070778          0.099089
18      (CoV, target)  0.236842          0.292570
19      (CoV, SAReff)  0.061149          0.091724
20   (target, SAReff)  0.018986          0.039871

7     |     Paper figures


Prepare RF shimming mask for figure

# Select subject to show
subject = 'sub-05'
os.chdir(os.path.join(path_data, subject, "fmap"))

# Create RF shimming mask from the segmentation (re-create the live RF shimming procedure)
file_mask = f"{subject}_acq-anatCP_TB1TFL_mask-shimming.nii.gz"
if notebook != 'neurolibre-figures':

    !sct_maths -i {subject}_acq-anatCP_TB1TFL_seg_labeled.nii.gz -thr 3 -uthr 9 -o {file_mask}
    !sct_maths -i {file_mask} -bin 1 -o {file_mask}
    !sct_create_mask -i {subject}_acq-anatCP_TB1TFL.nii.gz -p centerline,{file_mask} -size 28mm -f cylinder -o {file_mask}

    # Dilate mask
    !sct_maths -i {file_mask} -dilate 2 -shape disk -dim 2 -o {subject}_acq-anatCP_TB1TFL_mask-shimming_dil.nii.gz
    # Subtract dilated mask with mask to get the edge
    !sct_maths -i {subject}_acq-anatCP_TB1TFL_mask-shimming_dil.nii.gz -sub {file_mask} -o {file_mask}

Create figure of B1+ maps

%matplotlib inline
# Select subject to show
subject = 'sub-05'
os.chdir(os.path.join(path_data, subject, "fmap"))
file_mask = f"{subject}_acq-anatCP_TB1TFL_mask-shimming.nii.gz"

# Load the statistics from the CSV file
stats_df = pd.read_csv(os.path.join(path_results, 'stats_b1plus.csv'))

# Filter for the specific subject
subject_stats = stats_df[stats_df['Subject'] == subject]

# Defining crop limits for resulting figure
xmin = 20
xmax = 110
ymin = 20
ymax = 150

# Defining dynamic range
dynmin = 5
dynmax = 30

# Create a figure with multiple subplots
fig, axes = plt.subplots(2, 4, figsize=(8, 6))
font_size = 14
axes = axes.flatten()

# First, plot the anatomical image with an overlay of the mask

# Load data
CP_anat = nib.load(f"{subject}_acq-anatCP_TB1TFL.nii.gz")
CP_SC = nib.load(file_mask)
CP_nTpV = nib.load(f"{subject}_acq-CP_TB1map.nii.gz")

# Defining mask based on the magnitude image intensity threshold
cslice = CP_anat.shape[2] // 2 - 2  # shows the SC seg best
threshold = 300
mask = CP_anat.get_fdata()[xmin:xmax, ymin:ymax, cslice]
mask = np.where(mask > threshold, 1, 0)

# Cropping anat, SC, B1+
CP_anat = CP_anat.get_fdata()[xmin:xmax, ymin:ymax, cslice]
CP_anat = CP_anat * mask
CP_SC = CP_SC.get_fdata()[xmin:xmax, ymin:ymax, cslice]
CP_nTpV = CP_nTpV.get_fdata()[xmin:xmax, ymin:ymax, cslice]
CP_nTpV = CP_nTpV * mask

# All opacity overlays look ugly: workaround, set the anat slice to a max value where the segmentation exists
CP_anat[CP_SC > 0.5] = 4000

# Plotting anat overlaid with SC
splot = axes[0]
splot.imshow((CP_anat.T), cmap='gray', origin='lower', vmin=0, vmax=2000)  # , interpolation='spline36')
splot.set_title('Anat', size=font_size)
splot.axis('off')

# Then, plot each B1+ map, with an overlay of the mean and CV inside the cord
for i, shim_mode in enumerate(shim_modes):
    # Load data
    B1map = nib.load(f"{subject}_acq-{shim_mode}_TB1map.nii.gz")
    B1map = B1map.get_fdata()[xmin:xmax, ymin:ymax, cslice]
    B1map = B1map * mask

    # Plot
    splot = axes[i + 1]
    im = splot.imshow((B1map.T), cmap='viridis', origin='lower', vmin=dynmin, vmax=dynmax)  # , interpolation='spline36')
    splot.set_title(shim_mode, size=font_size)
    splot.axis('off')

    # Find the statistics for the current shim mode
    shim_stats = subject_stats[subject_stats['Shim_Mode'] == shim_mode]
    if not shim_stats.empty:
        mean_val = shim_stats.iloc[0]['Average']
        std_val = shim_stats.iloc[0]['Standard_Deviation']
        cv = std_val / mean_val * 100  # Coefficient of variation in percentage
        annotation_text = f"{mean_val:.2f} nT/V\n{cv:.2f}%"
        splot.annotate(annotation_text, (0.05, 0.95), xycoords='axes fraction',
                       fontsize=10, color='white',
                       verticalalignment='top', horizontalalignment='left')

plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0, right=0.9)

# Colorbar
# Assume that the colorbar should start at the bottom of the lower row of subplots and
# extend to the top of the upper row of subplots
cbar_bottom = 0.06  # This might need adjustment
cbar_height = 0.83  # This represents the total height of both rows of subplots
cbar_ax = fig.add_axes([0.93, cbar_bottom, 0.03, cbar_height])
cbar = plt.colorbar(im, cax=cbar_ax)

# cbar_ax = fig.add_axes([0.95, 0.5, 0.04, 0.4])
# cbar = plt.colorbar(im, cax=cbar_ax)
cbar_ax.set_title('nT/V', size=12)
plt.show()
[Image: _images/index_51_0.png]

Figure 2. B1+ efficiency for one participant (sub-05) across all seven RF shimming conditions. The top left panel shows the tfl_b1map magnitude image with an overlay of the mask that was used to perform RF shimming. Text inserts show the mean (in nT/V) and CoV (in %) of B1+ efficiency along the spinal cord between C3 and T2.

%matplotlib inline
# PYTHON CODE
# Module imports

# Base python
import os
from os import path
from pathlib import Path

# Graphical

import plotly.graph_objs as go
from IPython.display import display, HTML
from plotly import __version__
from plotly.offline import init_notebook_mode, iplot, plot
config={
    'showLink': False,
    'displayModeBar': True,
    'toImageButtonOptions': {
                'format': 'png', # one of png, svg, jpeg, webp
                'filename': 'custom_image',
                'height': 2500,
                'width': 1250,
                'scale': 2 # Multiply title/legend/axis/canvas sizes by this factor
            }
    }

init_notebook_mode(connected=True)

from plotly.subplots import make_subplots
import plotly.graph_objects as go

import seaborn as sns

# Set the color palette
pal=sns.color_palette()

# Imports
import warnings
warnings.filterwarnings("ignore")

## Setup for plots
fig = make_subplots(rows=5, cols=2, vertical_spacing = 0.025,
                    subplot_titles=(
    '<b>sub-01</b>',
    '<b>sub-01</b>',
    '<b>sub-02</b>',
    '<b>sub-02</b>',
    '<b>sub-03</b>',
    '<b>sub-03</b>',
    '<b>sub-04</b>',
    '<b>sub-04</b>',
    '<b>sub-05</b>',
    '<b>sub-05</b>',))

t2_datasets={}
b1_datasets={}

t2_data = []
b1_data = []

legend_bool = True
for subject in subjects:
    index = 0
    t2_datasets[subject]={}
    b1_datasets[subject]={}

    for shim_mode in shim_modes:
        t2_datasets[subject][shim_mode]={}
        b1_datasets[subject][shim_mode]={}

        t2_data=go.Line(
            x=x_grid,
            y=t2_data_plotly[subject][shim_mode][0],
            name=shim_mode,
            legendgroup=shim_mode,
            line=dict(color='rgb'+str(pal[index]), width=3),
            showlegend=False
            )

        b1_data=go.Line(
            x=x_grid,
            y=b1_data_plotly[subject][shim_mode],
            name=shim_mode,
            legendgroup=shim_mode,
            line=dict(color='rgb'+str(pal[index]), width=3),
            showlegend=legend_bool
            )

        t2_datasets[subject][shim_mode]=t2_data
        b1_datasets[subject][shim_mode]=b1_data

        index += 1
    legend_bool=False


index = 1
# For z-ordering
for subject in subjects:
    for shim_mode in shim_modes:
        fig.add_trace(
            t2_datasets[subject][shim_mode],
            row=index, col=1
        )
        fig.add_trace(
            b1_datasets[subject][shim_mode],
            row=index, col=2
        )
    index+=1


index = 1
for subject in subjects:
    if index == 5:
        x_title = '<b>Vertebral Levels</b>'
        showticklabels=True
    else:
        x_title = None
        showticklabels=False

    fig.update_xaxes(
        type="linear",
        autorange=True,
        title=x_title,
        showgrid=True,
        gridcolor='rgb(169,169,169)',
        tickvals=label_positions,
        ticktext=vertebral_levels,
        showticklabels=showticklabels,
        title_font_family="Times New Roman",
        title_font_size = 20,
        linecolor='black',
        linewidth=2,
        tickfont=dict(
            family='Times New Roman',
            size=16,
        ),
        row=index, col=1
        )
    if index == 1:
        fig.update_yaxes(
            type="linear",
            title={
                'text':'<b>CSF/Cord T<sub>2</sub><sup>*</sup>w signal ratio</b>',
                'standoff':0
                },
            range=[1.05, 1.4],
            showgrid=True,
            gridcolor='rgb(169,169,169)',
            title_font_family="Times New Roman",
            title_font_size = 20,
            linecolor='black',
            linewidth=2,
            tickfont=dict(
                family='Times New Roman',
                size=16,
            ),
            row=index, col=1
            )
    else:
        fig.update_yaxes(
            type="linear",
            title={
                'text':'<b>CSF/Cord T<sub>2</sub><sup>*</sup>w signal ratio</b>',
                'standoff':0
                },
            showgrid=True,
            gridcolor='rgb(169,169,169)',
            title_font_family="Times New Roman",
            title_font_size = 20,
            linecolor='black',
            linewidth=2,
            tickfont=dict(
                family='Times New Roman',
                size=16,
            ),
            row=index, col=1
            )

    fig.update_xaxes(
        type="linear",
        autorange=True,
        title=x_title,
        showgrid=True,
        gridcolor='rgb(169,169,169)',
        tickvals=label_positions,
        ticktext=vertebral_levels,
        showticklabels=showticklabels,
        title_font_family="Times New Roman",
        title_font_size = 20,
        linecolor='black',
        linewidth=2,
        tickfont=dict(
            family='Times New Roman',
            size=16,
        ),
        row=index, col=2
        )
    fig.update_yaxes(
        type="linear",
        title={
            'text':'<b>B<sub>1</sub><sup>+</sup> efficiency [nT/V]</b>',
            'standoff':0
            },
        showgrid=True,
        gridcolor='rgb(169,169,169)',
        title_font_family="Times New Roman",
        title_font_size = 20,
        linecolor='black',
        linewidth=2,
        tickfont=dict(
            family='Times New Roman',
            size=16,
        ),
        row=index, col=2
        )

    index+=1

fig.update_layout(height=1800, width=900)

for i in fig['layout']['annotations']:
    i['font'] = dict(size=22, family='Times New Roman')

fig.update_layout(legend=dict(
    yanchor="top",
    y=0.999,
    xanchor="left",
    x=0.01,
    font=dict(
        family='Times New Roman',
        size=12
    ),
    bordercolor="Black",
    borderwidth=1.5
    ),
    legend_tracegroupgap=0,
    paper_bgcolor='rgb(255, 255, 255)',
    plot_bgcolor='rgb(255, 255, 255)',
)

fig.add_annotation(
        dict(
            x=-0.06,
            y=-0.03,
            showarrow=False,
            text='<b>A</b>',
            font=dict(
                family='Times New Roman',
                size=48
            ),
            xref='paper',
            yref='paper'
        ))
fig.add_annotation(
        dict(
            x=0.5,
            y=-0.03,
            showarrow=False,
            text='<b>B</b>',
            font=dict(
                family='Times New Roman',
                size=48
            ),
            xref='paper',
            yref='paper'
        ),
)
#iplot(fig, filename = 'figure3', config = config)
plot(fig, filename = 'figure3.html', config = config)

Figure 3. B1+ efficiency (A) and CSF/Cord signal ratio from the GRE scan (B) across subjects and across different RF shimming conditions. Data were measured in the spinal cord from C3 to T2 vertebral levels. To match the x-ticks across subjects, the C2-C3 and the T2-T3 intervertebral discs of each subject were aligned with those of the PAM50 template [De Leener et al., 2018], and the curves were linearly scaled.


References

1. Christopher M Collins, Wanzhan Liu, Weston Schreiber, Qing X Yang, and Michael B Smith. Central brightening due to constructive interference with, without, and despite dielectric resonance. J. Magn. Reson. Imaging, 21(2):192–196, February 2005. doi:https://doi.org/10.1002/jmri.20245.

2. Alexandre D'Astous, Gaspard Cereza, Daniel Papp, Kyle M. Gilbert, Jason P. Stockmann, Eva Alonso-Ortiz, and Julien Cohen-Adad. Shimming Toolbox: an open-source software toolbox for B0 and B1 shimming in MRI. Magnetic Resonance in Medicine, 89(4):1401–1417, 2023. doi:https://doi.org/10.1002/mrm.29528.

3. Gergely David, Siawoosh Mohammadi, Allan R Martin, Julien Cohen-Adad, Nikolaus Weiskopf, Alan Thompson, and Patrick Freund. Traumatic and nontraumatic spinal cord injury: pathological insights from neuroimaging. Nat. Rev. Neurol., 15(12):718–731, December 2019. doi:https://doi.org/10.1038/s41582-019-0270-5.

4. Benjamin De Leener, Simon Lévy, Sara M. Dupont, Vladimir S. Fonov, Nikola Stikov, D. Louis Collins, Virginie Callot, and Julien Cohen-Adad. SCT: Spinal Cord Toolbox, an open-source software for processing spinal cord MRI data. NeuroImage, 145(Part A):24–43, 2017. doi:https://doi.org/10.1016/j.neuroimage.2016.10.009.

5. Tamer S Ibrahim, Robert Lee, Amir M Abduljalil, Brian A Baertlein, and Pierre-Marie L Robitaille. Dielectric resonances and B1 field inhomogeneity in UHFMRI: computational analysis and experimental findings. Magnetic Resonance Imaging, 19(2):219–226, 2001. doi:https://doi.org/10.1016/S0730-725X(01)00300-9.

6. Hugh Kearney, David H Miller, and Olga Ciccarelli. Spinal cord MRI in multiple sclerosis–diagnostic, prognostic and clinical value. Nat. Rev. Neurol., 11(6):327–338, June 2015. doi:https://doi.org/10.1038/nrneurol.2015.80.

7. P Röschmann. Radiofrequency penetration and absorption in the human body: limitations to high-field whole-body nuclear magnetic resonance imaging. Med. Phys., 14(6):922–931, November 1987. doi:https://doi.org/10.1118/1.595995.

8. Qing X Yang, Jinghua Wang, Xiaoliang Zhang, Christopher M Collins, Michael B Smith, Haiying Liu, Xiao-Hong Zhu, J Thomas Vaughan, Kamil Ugurbil, and Wei Chen. Analysis of wave behavior in lossy dielectric samples at high field. Magn. Reson. Med., 47(5):982–989, May 2002. doi:https://doi.org/10.1002/mrm.10137.

9. Benjamin De Leener, Vladimir S. Fonov, D. Louis Collins, Virginie Callot, Nikola Stikov, and Julien Cohen-Adad. PAM50: unbiased multimodal template of the brainstem and spinal cord aligned with the ICBM152 space. NeuroImage, 165:170–179, 2018. doi:https://doi.org/10.1016/j.neuroimage.2017.10.041.