PricoMS: Prior-coordinated Multiscale Synthesis Framework for Vessel Reconstruction in Intravascular Ultrasound Image Amidst Label Scarcity

Xingru Huang, Huawei Wang, Shuaibin Chen, Yihao Guo, Francesca Pugliese, Anantharaman Ramasamy, Ryo Torii, Jouke Dijkstra, Huiyu Zhou, Christos V. Bourantas, Qianni Zhang

Hangzhou Dianzi University IMOP-lab

**Figure 1: Detailed network structure of PricoMS.**

We propose PricoMS, a framework that leverages prior-coordinated multiscale synthesis for segmenting cerebral ischemia preventive vessels in IVUS images under label scarcity. PricoMS achieves state-of-the-art performance against 13 previous methods on the IVUS dataset.

We first introduce our method and principles, then describe the experimental environment and provide GitHub links to the previous methods we compare against. Finally, we present the experimental results.

Methods

PCP Module

Figure 2: Structure of the PCP module.

We propose the PCP module to address label scarcity; it consists of a prior encoder and a calibration module. This approach integrates spatial feature extraction with a temporal coherence strategy, so that rich spatial and temporally coordinated information is fed into the segmentation framework.
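
Below is a minimal, hypothetical sketch of how a prior encoder and a calibration module could be wired together in PyTorch. The class names, channel sizes, and the gated blending of adjacent-frame features are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of a prior encoder + calibration module in the spirit of
# the PCP description (spatial feature extraction combined with a
# temporal-coherence step). Names and sizes are assumptions for illustration.
import torch
import torch.nn as nn


class PriorEncoder(nn.Module):
    """Extracts spatial features from each IVUS frame independently."""

    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.BatchNorm2d(feat_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.BatchNorm2d(feat_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):            # x: (B, C, H, W)
        return self.encode(x)        # (B, feat_ch, H, W)


class CalibrationModule(nn.Module):
    """Calibrates the current frame's features against an adjacent (prior)
    frame's features to encourage temporal coherence."""

    def __init__(self, feat_ch=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, cur_feat, prior_feat):
        # Learn a spatial gate from both frames, then blend the prior in.
        g = self.gate(torch.cat([cur_feat, prior_feat], dim=1))
        return cur_feat + g * prior_feat


if __name__ == "__main__":
    enc, cal = PriorEncoder(), CalibrationModule()
    frame_t = torch.randn(2, 1, 128, 128)
    frame_prev = torch.randn(2, 1, 128, 128)
    fused = cal(enc(frame_t), enc(frame_prev))
    print(fused.shape)               # torch.Size([2, 64, 128, 128])
```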

HCS Module

Figure 3: Structure of the HCS module.

We propose the HCS module, which enhances feature information by refining latent subspace features and applying attention to attenuate less beneficial elements.
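
A hypothetical sketch of this idea is shown below: features are projected into a lower-dimensional subspace, refined residually, and then re-weighted by a channel-attention gate that suppresses less useful channels. The squeeze-and-excitation-style gating is an assumption for illustration, not the paper's exact design.

```python
# Hypothetical subspace refinement + attention gating sketch (HCS-style).
import torch
import torch.nn as nn


class SubspaceRefinement(nn.Module):
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        # Project into a lower-dimensional latent subspace and back.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # Channel attention that down-weights less beneficial channels.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        refined = x + self.refine(x)       # residual subspace refinement
        return refined * self.attn(refined)


if __name__ == "__main__":
    x = torch.randn(2, 64, 128, 128)
    print(SubspaceRefinement()(x).shape)   # torch.Size([2, 64, 128, 128])
```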

CSE-AMF Module

Figure 4: Structure of the AMF module.

We propose the CSE-AMF module to tackle the decoder's inability to accurately recover detailed information and the problem of using features in isolation. The module comprises two units: adaptive morphological fusion for global multiscale feature fusion, and contextual space encoding for organizing image context sequentially.
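
The sketch below illustrates one plausible realization of these two units: multiscale decoder features fused with learned softmax weights, followed by a self-attention pass over spatial positions treated as a sequence. The fusion weights and the single-layer attention are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of multiscale fusion + sequential context encoding
# (CSE-AMF-style). Channel counts and the fusion scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveMultiscaleFusion(nn.Module):
    """Fuses features from several decoder scales with learned weights."""

    def __init__(self, num_scales=3, channels=64):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_scales))
        self.project = nn.Conv2d(channels, channels, 1)

    def forward(self, feats):                      # list of (B, C, Hi, Wi)
        target = feats[0].shape[-2:]
        w = torch.softmax(self.weights, dim=0)
        fused = sum(
            w[i] * F.interpolate(f, size=target, mode="bilinear",
                                 align_corners=False)
            for i, f in enumerate(feats)
        )
        return self.project(fused)


class ContextualSpaceEncoding(nn.Module):
    """Treats spatial positions as a sequence and models their context."""

    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                          # (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)         # (B, H*W, C)
        ctx, _ = self.attn(seq, seq, seq)
        return (seq + ctx).transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feats = [torch.randn(2, 64, 64, 64),
             torch.randn(2, 64, 32, 32),
             torch.randn(2, 64, 16, 16)]
    out = ContextualSpaceEncoding()(AdaptiveMultiscaleFusion()(feats))
    print(out.shape)                               # torch.Size([2, 64, 64, 64])
```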

Installation

To ensure a fair comparison, all experiments were conducted in a consistent hardware and software environment: four servers, each equipped with dual NVIDIA GeForce RTX 3080 10GB graphics cards and 128GB of system memory. The project uses Python 3.9.0, PyTorch 1.13.1, and CUDA 11.7.64, with distributed training for both the training and evaluation phases. We used the AdamW optimizer with a learning rate of 0.0001. Model weights were randomly initialized to ensure a fair start, and training ran for 100 epochs.
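
A minimal sketch of this training configuration is shown below. Only the optimizer choice (AdamW, lr 1e-4), the 100-epoch schedule, and the use of distributed data parallelism come from the text above; the model, data loader, loss, and launch details are placeholders.

```python
# Minimal training-loop sketch reflecting the described setup: AdamW with
# lr 1e-4, 100 epochs, distributed data parallel across GPUs. The model,
# dataset, and loss function are placeholders, not the repository's code.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def train(model, train_loader, device):
    dist.init_process_group(backend="nccl")        # one process per GPU
    model = DDP(model.to(device), device_ids=[device.index])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()        # placeholder loss

    for epoch in range(100):                       # 100 training epochs
        model.train()
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
```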

Experiment

Baselines

We provide GitHub links to the PyTorch implementations of all networks compared in this experiment, so these baselines can be easily reproduced.

U-Net+Attention Gate; BiSeNet; DUNet; DeepLab; FCN-8s; GCN; ICNet; LEDNet; OCNet; PSPNet; R2U-Net+Attention Gate; R2U-Net; U-Net

Compare with others on the IVUS dataset

Figure 5: Comparison experiments between our method and 13 previous segmentation methods on the IVUS dataset.

Figure 6: The visual results of our method compared to the existing 13 segmentation methods on the IVUS dataset. (a) U-Net+Attention Gate; (b) BiSeNet; (c) DUNet; (d) DeepLab; (e) FCN-8s; (f) GCN; (g) ICNet; (h) LEDNet; (i) OCNet; (j) PSPNet; (k) R2U-Net+Attention Gate; (l) R2U-Net; (m) U-Net; (n) PricoMS.

Ablation study

Figure 7: Ablation experiments on key components of PricoMS on the IVUS dataset.
