
tFUSFormer: Physics-guided super-resolution Transformer for simulation of transcranial focused ultrasound propagation in brain stimulation

Overview

This repository contains the implementation of neural networks designed to predict the simulation of transcranial focused ultrasound (tFUS) propagation in brain stimulation. tFUS brain stimulation is a non-invasive medical technique that uses focused sound waves to modulate neuronal activity in specific brain regions, offering potential therapeutic and research applications.

Features

  • Fast super-resolution convolutional neural network (FSRCNN), squeeze-and-excitation super-resolution residual network (SE-SRResNet), super-resolution generative adversarial network (SRGAN), and tFUSFormer architectures for accurate prediction of the tFUS focal region.
  • Custom loss functions that combine MSE loss, IoU loss, and a distance function to enhance prediction accuracy.
  • Data pre-processing and loading modules for efficient handling of the tFUS simulation data.
  • Evaluation metrics to assess model performance, including IoU score and distance D between maximum pressure points.

Requirements

  • Python 3.9
  • torch 1.11.0+cu113
  • cudatoolkit 11.3.1

Installation

To set up the project environment:

  • Clone the repository: git clone https://github.com/iangilan/tFUSFormer.git

Dataset

The dataset used in this project is organized as follows:

  • The training, validation, and test datasets can be downloaded from here.
  • Due to privacy concerns, CT images are excluded.

Pre-trained model

You can download the pre-trained models from here.

Usage

  1. Place your test dataset in your local storage.
  2. Edit config.py according to your needs.
  3. Edit the data loader tFUS_dataloader.py to load and preprocess the data (a minimal example of a dataset class is sketched after this list).
  4. Train the model with python train_model.py.
  5. Evaluate the model's performance on the test data with python test_model.py.
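
A minimal sketch, assuming the data are stored as HDF5 files, of what a dataset class in tFUS_dataloader.py could look like. The file name, the "lr"/"hr" dataset keys, and the single-channel volume shape are hypothetical and only illustrate the paired low-/high-resolution loading pattern; the repository's actual loader may differ.

```python
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class TFUSDataset(Dataset):
    """Hypothetical paired low-/high-resolution tFUS pressure-field dataset."""

    def __init__(self, h5_path):
        self.h5_path = h5_path
        with h5py.File(h5_path, "r") as f:
            self.length = f["lr"].shape[0]  # assumed keys: "lr" (input), "hr" (target)

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        with h5py.File(self.h5_path, "r") as f:
            lr = torch.from_numpy(f["lr"][idx]).float().unsqueeze(0)  # (1, D, H, W)
            hr = torch.from_numpy(f["hr"][idx]).float().unsqueeze(0)
        return lr, hr

# Usage: batch the simulated fields for training (file name is hypothetical).
# train_loader = DataLoader(TFUSDataset("train.h5"), batch_size=4, shuffle=True)
```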

Model Architecture

  • The FSRCNN_1ch model is a 3D FSRCNN that consists of encoder and decoder blocks, designed for extracting features and predicting a focal region.
  • The SESRResNet_1ch model is a 3D SESRResNet, designed for extracting features and predicting a focal region.
  • The SRGAN_1ch model is a 3D SRGAN, designed for extracting features and predicting a focal region.
  • The tFUSFormer_1ch model is a 3D tFUSFormer, designed for extracting features and predicting a focal region.
  • The tFUSFormer_5ch model is a 3D tFUSFormer, designed for extracting five physics features and predicting a focal region.
  • The architectures of all models are defined in models.py.
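
The single-channel models take the simulated pressure field as their only input channel, while tFUSFormer_5ch stacks five physics features along the channel dimension. Below is a brief sketch of the corresponding 3D input tensors; the batch size and the 32x32x32 patch size are assumptions, and models.py defines the actual constructor arguments and expected shapes.

```python
import torch

# Hypothetical low-resolution input patches of shape (batch, channels, D, H, W);
# the batch size of 2 and the 32^3 patch size are assumptions for illustration.
x_1ch = torch.randn(2, 1, 32, 32, 32)  # single channel: pressure field only
x_5ch = torch.randn(2, 5, 32, 32, 32)  # five channels: stacked physics features

# FSRCNN_1ch, SESRResNet_1ch, SRGAN_1ch, and tFUSFormer_1ch consume x_1ch-shaped
# inputs; tFUSFormer_5ch consumes x_5ch. See models.py for constructor arguments.
```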

Custom Loss Function

The model utilizes a combined loss function that incorporates MSE loss, IoU loss, and a distance function to mitigate specific challenges in tFUS focal volume prediction.
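
A minimal sketch of how such a combined loss could be assembled in PyTorch, assuming single (D, H, W) pressure volumes. The weights, the soft-IoU formulation, and the argmax-based distance term are illustrative assumptions, not the repository's implementation.

```python
import torch
import torch.nn as nn

class CombinedLoss(nn.Module):
    """Hypothetical combination of MSE, soft-IoU, and peak-distance terms."""

    def __init__(self, w_mse=1.0, w_iou=1.0, w_dist=0.1):
        super().__init__()
        self.mse = nn.MSELoss()
        self.w_mse, self.w_iou, self.w_dist = w_mse, w_iou, w_dist

    @staticmethod
    def _peak_coords(x):
        # Coordinates (z, y, x) of the maximum voxel in a (D, H, W) volume.
        d, h, w = x.shape[-3:]
        flat = x.reshape(-1).argmax()
        zi = torch.div(flat, h * w, rounding_mode="floor")
        yi = torch.div(flat % (h * w), w, rounding_mode="floor")
        xi = flat % w
        return torch.stack([zi, yi, xi]).float()

    def forward(self, pred, target):
        # Voxel-wise reconstruction error of the pressure field
        mse = self.mse(pred, target)

        # Soft IoU of the predicted and reference focal volumes (values in [0, 1])
        inter = (pred * target).sum()
        union = pred.sum() + target.sum() - inter
        iou_loss = 1.0 - inter / (union + 1e-8)

        # Distance between maximum-pressure voxels; argmax is non-differentiable,
        # so this term only illustrates the idea of penalizing focus displacement.
        dist = torch.norm(self._peak_coords(pred) - self._peak_coords(target))

        return self.w_mse * mse + self.w_iou * iou_loss + self.w_dist * dist
```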

Evaluation

The model is evaluated based on the intersection over union (IoU) score and the distance between the maximum pressure points of the ground truth full-width at half maximum (FWHM) and the predicted FWHM of the focal volume, offering a comprehensive assessment of its prediction accuracy.
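
A hedged sketch of these two metrics on NumPy volumes: the focal volume is thresholded at half the peak pressure following the usual FWHM definition, and the voxel_size parameter is an assumption for converting the distance D to physical units. test_model.py contains the evaluation actually used.

```python
import numpy as np

def fwhm_mask(p):
    """Binary focal volume: voxels at or above half of the peak pressure."""
    return p >= 0.5 * p.max()

def iou_score(pred, target):
    """Intersection over union of the predicted and ground-truth FWHM volumes."""
    a, b = fwhm_mask(pred), fwhm_mask(target)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def peak_distance(pred, target, voxel_size=1.0):
    """Distance D between maximum-pressure voxels (voxel units by default)."""
    p = np.array(np.unravel_index(pred.argmax(), pred.shape))
    t = np.array(np.unravel_index(target.argmax(), target.shape))
    return voxel_size * np.linalg.norm(p - t)
```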

Citation

If you use this tool in your research, please cite the following paper:

For the SE-SRResNet, please cite the following paper:

For the manifold discovery and analysis (MDA) method, please cite the following paper:

Contact

For any queries, please reach out to Minwoo Shin.
