EyeLiner
Automatic Longitudinal Image Registration using Fundus Landmarks

Advaith Veturi

ARVO 2024

Poster | Colab

[Figure: example registration result]
Change detection in longitudinal fundus imaging is key to monitoring disease progression in chronic ophthalmic diseases. Clinicians typically assess changes in disease status by either independently reviewing or manually juxtaposing longitudinally acquired images. However, this task is challenging because variations in image acquisition, such as camera orientation, zoom, and exposure, obscure true disease-related changes. This makes manual image evaluation variable and subjective, potentially impacting clinical decision-making.

EyeLiner is a deep learning pipeline for automatically aligning longitudinal fundus images, compensating for camera orientation variations. Evaluated on three datasets, EyeLiner outperforms state-of-the-art methods and will facilitate better disease progression monitoring for clinicians.

This repository hosts the code for the EyeLiner pipeline. The codebase is a modification of LightGlue, a lightweight feature matcher with high accuracy and blazing fast inference. Our pipeline takes the two candidate images as input and segments the blood vessels and optic disk using a vessel and disk segmentation algorithm. The segmentations are then passed to the SuperPoint and LightGlue frameworks for deep-learning-based keypoint detection and matching, which output a set of corresponding image landmarks (see the SuperPoint and LightGlue papers for more details).
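For intuition, the matching step corresponds to the upstream cvg/LightGlue API sketched below. EyeLiner wraps this internally, so treat the exact names and defaults here as illustrative rather than this pipeline's exact code; in our pipeline the inputs would be the vessel/disk segmentation maps rather than raw photographs.

# Sketch of SuperPoint + LightGlue matching via the upstream cvg/LightGlue API.
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

extractor = SuperPoint(max_num_keypoints=2048).eval().cuda()  # keypoint detector
matcher = LightGlue(features='superpoint').eval().cuda()      # keypoint matcher

# In EyeLiner these would be vessel/disk segmentation maps, not raw photos.
image0 = load_image('assets/image_0_vessel.jpg').cuda()
image1 = load_image('assets/image_1_vessel.jpg').cuda()

feats0 = extractor.extract(image0)
feats1 = extractor.extract(image1)
matches01 = matcher({'image0': feats0, 'image1': feats1})
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]  # remove batch dim

matches = matches01['matches']                  # (K, 2) index pairs
points0 = feats0['keypoints'][matches[..., 0]]  # matched landmarks in image 0
points1 = feats1['keypoints'][matches[..., 1]]  # matched landmarks in image 1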

Installation and demo

We use pyenv to set up environments. Install pyenv if you do not already have it.

Run the following commands in the terminal to set up the environment:

pyenv install 3.10.4
pyenv virtualenv 3.10.4 eyeliner
pyenv activate eyeliner

Next, we install the required packages into the virtual environment. We use Poetry to manage dependencies; install Poetry if you have not already, then clone the repository and run poetry install:

git clone [email protected]:QTIM-Lab/EyeLiner.git
cd EyeLiner
poetry install

We provide a demo notebook which shows how to register a retinal image pair. Note that our registrations rely on masks of the blood vessels and the optic disk. We obtain these using the AutoMorph repo, but you may use any tool that produces vessel and disk segmentations.

Here is a minimal script to register two images:

from src.utils import load_image
from src.eyeliner import EyeLinerP

# Load EyeLiner API
eyeliner = EyeLinerP(
  reg='tps', # registration technique to use (tps or affine)
  lambda_tps=1.0, # set lambda value for tps
  image_size=(3, 256, 256) # image dimensions
  )

# load each image as a torch.Tensor on GPU with shape (3,H,W), normalized in [0,1]
fixed_image_vessel = load_image('assets/image_0_vessel.jpg', size=(256, 256), mode='rgb').cuda()
moving_image_vessel = load_image('assets/image_1_vessel.jpg', size=(256, 256), mode='rgb').cuda()

# store inputs
data = {
  'fixed_image': fixed_image_vessel,
  'moving_image': moving_image_vessel
}

# register images
theta = eyeliner(data)

# visualize registered images
moving_image = load_image('assets/image_1.jpg', size=(256, 256), mode='rgb').cuda()
reg_image = eyeliner.apply_transform(theta, moving_image)
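
To actually view or save the result, something like the following works. This is a minimal sketch assuming each image is a (3, H, W) tensor in [0, 1], matching load_image's output; the path assets/image_0.jpg (the raw fixed image paired with its vessel mask above) is an assumption.

# Hedged sketch: save fixed / moving / registered side by side for inspection.
# If apply_transform returns a batched (1, 3, H, W) tensor, squeeze the
# batch dimension first.
from torchvision.utils import save_image

fixed_image = load_image('assets/image_0.jpg', size=(256, 256), mode='rgb').cuda()
save_image([fixed_image, moving_image, reg_image], 'registration_result.png', nrow=3)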

Output: the fixed image, the moving image, and the registered (warped) moving image, shown side by side.

Run pipeline on dataset

To run our pipeline on a full dataset of images, you will need to provide a csv pointing to the image pairs for registration. The following code snippet runs EyeLiner on a csv dataset.

python src/main.py \
-d /path/to/dataset \
-f image0 \
-m image1 \
-fv vessel0 \
-mv vessel1 \
-fd None \
-md None \
--input vessel \
--reg_method tps \
--lambda_tps 1 \
--save results/ \
--device cuda:0

The csv must contain at least two columns: one for the fixed image and one for the moving image; the names of these columns are passed to the -f and -m arguments. If you wish to register the images using the vessels as input instead, the csv must contain two additional columns with the vessel paths for the fixed and moving images; their names are passed to the -fv and -mv arguments. Finally, you can use the vessel mask as input while excluding the vessels within the optic disk region, which we define as a peripheral mask; for this, provide the fixed and moving image optic disk columns via the -fd and -md arguments. The input to the model is specified by the --input flag as one of {img, vessel, peripheral}. The --reg_method flag specifies the type of registration performed on the anatomical keypoints, either affine or tps (thin-plate spline), and the --lambda_tps value controls the amount of deformation in the registration. The remaining two arguments indicate the folder where results are saved and the device on which to perform registration (cuda:0 or cpu). An example csv layout is shown below.
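
For illustration, a dataset csv compatible with the command above might look like this (the column names and paths are hypothetical; they just need to match the -f, -m, -fv, and -mv arguments):

image0,image1,vessel0,vessel1
/data/patient01_2019.jpg,/data/patient01_2021.jpg,/data/patient01_2019_vessel.png,/data/patient01_2021_vessel.png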

Running this script will create a folder containing two subfolders:

  1. registration_params: This will contain the registration models (affine or deformation fields) as pth files.
  2. registration_keypoints: This will contain visualizations of the keypoint matches between image pairs in each row of the dataframe.

and a csv which is the same as the original dataset csv, containing extra columns pointing to the files in the sub-folders.
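
As a hedged sketch of how those saved parameters could be reused, assuming each .pth file stores the theta object returned by EyeLinerP (the file name below is hypothetical):

# Hedged sketch: reload a saved registration and apply it to a new image.
import torch
from src.utils import load_image
from src.eyeliner import EyeLinerP

eyeliner = EyeLinerP(reg='tps', lambda_tps=1.0, image_size=(3, 256, 256))
theta = torch.load('results/registration_params/pair_0.pth')  # hypothetical path
moving = load_image('path/to/moving.jpg', size=(256, 256), mode='rgb').cuda()
registered = eyeliner.apply_transform(theta, moving)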

Evaluate registrations

Evaluating the registrations requires you to run the following script:

python src/eval.py \
-d /path/to/csv \
-f image0 \
-m image1 \
-k None \
-r registration \
--save results/ \
--device cuda:0

The parameters -d, -f, -m, --save and --device are the same as in the previous section. In particular, the -d argument takes the results csv generated in the previous section. The -k argument takes the name of the column containing the path to keypoints in the fixed and moving images. This is typically a text file with four columns: the first two are the x and y coordinates in the fixed image, and the last two are the corresponding coordinates in the moving image. The -r argument takes the name of the csv column storing the path to the registration.
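
As a minimal sketch of that keypoint file convention (the error metric here is an illustrative choice, not necessarily the one eval.py reports):

# Parse a four-column keypoint file: (x, y) in the fixed image,
# then (x, y) in the moving image. Path is hypothetical.
import numpy as np

kps = np.loadtxt('path/to/keypoints.txt')    # shape (N, 4)
fixed_xy, moving_xy = kps[:, :2], kps[:, 2:]

# One common summary after warping the moving keypoints with the
# saved registration: mean Euclidean distance to the fixed keypoints.
def mean_registration_error(warped_xy, fixed_xy):
    return float(np.linalg.norm(warped_xy - fixed_xy, axis=1).mean())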

Running this script will create a folder containing four subfolders:

  1. registration_images: This contains the moving images registered to the fixed images.
  2. ckbd_images: This will contain the registration checkerboards of the fixed and registered images as pngs.
  3. diff_map_images: This will contain the difference/subtraction maps of the fixed and registered images as pngs.
  4. flicker_images: This contains gifs of the flicker between fixed and registered images.

and a csv which is the same as the original dataset csv, containing extra columns pointing to the files in the sub-folders.
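
For intuition, a registration checkerboard like those in ckbd_images/ can be built by alternating square tiles from the fixed and registered images, so residual misalignment shows up as broken vessels at tile borders. This is a hedged sketch; the tile count and function name are illustrative:

# Hedged sketch of a registration checkerboard composite.
import torch

def checkerboard(fixed, registered, tiles=8):
    # fixed, registered: (3, H, W) tensors of the same size and device
    _, H, W = fixed.shape
    ys = torch.arange(H, device=fixed.device).unsqueeze(1) // (H // tiles)  # tile row per pixel
    xs = torch.arange(W, device=fixed.device).unsqueeze(0) // (W // tiles)  # tile column per pixel
    mask = ((ys + xs) % 2 == 0).to(fixed.dtype)  # (H, W) alternating tile mask
    return fixed * mask + registered * (1 - mask)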

EyeLiner-S

To register an entire longitudinal sequence of images, we introduce EyeLiner-S. You only need to provide a csv together with the names of its image, patient ID, laterality, and image ordering columns, plus additional parameters similar to the EyeLiner-P script. Run the following script:

python src/sequential_registrator.py \
-d /path/to/csv \
-m patient_id_column_name \
-l laterality_column_name \
-sq image_ordering_column_name \
-i img_column_name \
-v vessel_column_name \
-o disk_column_name \
--inp vessel \
--reg2start \
--reg_method tps \
--lambda_tps 1 \
--save results/ \
--device cuda:0

The --inp flag selects the input to the registration algorithm (img, vessel, or structural). Set the --reg2start flag to register every image to the first image in the sequence; by default, each image is registered to the registered version of the previous image, as sketched below.
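
To make the two sequencing modes concrete, here is an illustrative pseudocode-style sketch (function and variable names are hypothetical, not the script's internals):

# Illustrative sketch of EyeLiner-S sequencing (names are hypothetical).
def register_sequence(images, eyeliner, reg2start=False):
    registered = [images[0]]  # the first image anchors the sequence
    for moving in images[1:]:
        # reg2start: always align to the first image;
        # default: chain, aligning to the previously registered image.
        fixed = registered[0] if reg2start else registered[-1]
        theta = eyeliner({'fixed_image': fixed, 'moving_image': moving})
        registered.append(eyeliner.apply_transform(theta, moving))
    return registered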

Running this script will create a folder containing four subfolders:

  1. registration_params: This will contain the registration models (affine or deformation fields) as pth files.
  2. registration_keypoint_matches: This will contain visualizations of the keypoint matches between the fixed and moving image, where the moving image is the t-th image of the sequence and the fixed image is either the first image or the (t-1)-th image, depending on whether the --reg2start flag is set.
  3. registration_videos: This contains videos in which the registered images of each sequence are stitched together into a movie.
  4. logs: For every patient study registered, a log file is generated indicating the progress of the registration.

and a csv which is the same as the input csv, containing extra columns pointing to the files in the sub-folders.

License

The pre-trained weights of LightGlue and the code provided in this repository are released under the Apache-2.0 license. DISK follows this license as well but SuperPoint follows a different, restrictive license (this includes its pre-trained weights and its inference file). ALIKED was published under a BSD-3-Clause license.
