Sensorless frame-to-volume multimodal image fusion via deep learning

Authors
Guo, Hengtao
Issue Date
2022-12
Type
Electronic thesis
Thesis
Language
en_US
Keywords
Biomedical engineering
Abstract
Prostate cancer is one of the leading causes of cancer death among men in the Western world. Fusing transrectal ultrasound (US) and magnetic resonance (MR) images to guide biopsy facilitates the clinical diagnosis of prostate cancer: the intra-operative US scan provides real-time 2D prostate images, while the pre-operative MR volume offers good sensitivity and specificity for lesion localization. During a biopsy procedure, clinicians semi-manually establish the MR-US correspondence to superimpose lesions pre-identified in MR onto the real-time US frames for navigation. Although several image fusion methods exist, technical challenges remain before this technology can be made more accessible to disadvantaged populations. For example, current tracking-based approaches require motion sensors attached to the US probe, which increases hardware costs, and they usually require clinical experts to manually align the US and MR images to establish the cross-modality correspondence, which significantly limits patient throughput.
In this work, we propose a real-time, multi-modality 2D-US-frame-to-3D-MR fusion solution that is fully automated by deep learning (DL). The project aims to remove the hardware constraint and perform automatic multi-modal, cross-dimensional image fusion with minimal human supervision. The proposed method largely reduces hardware complexity while increasing inference speed and accuracy. The innovation of this work is threefold.
(1) We automatically reconstruct a 3D US volume from 2D frames without any external probe-tracking device. The trained neural networks reveal the inter-frame relationship by extracting contextual information between neighboring frames in a US sweep video. Without tracking devices, our sensorless volume reconstruction allows clinicians to move the probe more freely, with no concern about blocked tracking signals, and it further reduces hardware costs. We develop a systematic pipeline for 3D US reconstruction, covering data acquisition and preprocessing, model design and training, evaluation of volume reconstruction performance, and analysis of learning capacity.
(2) We introduce a deep-learning-based method for registering 2D US frames to the 3D US volume, bridging the dimensional gap for US/MR fusion. The method combines video context from the real-time US scan with volumetric information from the reconstructed 3D US volume to estimate the location of the current 2D US frame in 3D space. Whereas existing methods require external tracking devices to map a US frame into the reconstructed US volume, our approach accomplishes this mapping fully automatically, without additional hardware.
(3) We further bridge the image-modality gap with an automatic registration method between the reconstructed 3D US volume and the pre-operative 3D MR volume. Unlike traditional image registration, our forward-pass method requires no iterative optimization and therefore greatly reduces computation time. Composing the previously established correspondences, 2D-US to 3D-US and 3D-US to 3D-MR, we propagate the transformations to achieve 2D-US to 3D-MR registration without hardware constraints.
We validate our method on a clinical dataset of 618 subjects and test its potential for real-time 2D-US to 3D-MR fusion tasks.
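To make the sensorless reconstruction idea in (1) concrete, the sketch below shows one way that network-predicted frame-to-frame rigid motions could be chained into absolute frame poses for volume reconstruction. This is a minimal, hypothetical illustration rather than the thesis implementation; the 6-DoF parameterization and the helper names (params_to_matrix, accumulate_poses) are assumptions made for this example.

```python
import numpy as np


def params_to_matrix(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 rigid transform from translations (mm) and Euler angles (rad)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T


def accumulate_poses(inter_frame_params):
    """Chain per-pair 6-DoF motion predictions into absolute frame poses."""
    poses = [np.eye(4)]                    # the first frame defines the volume origin
    for p in inter_frame_params:           # one predicted motion per neighboring frame pair
        poses.append(poses[-1] @ params_to_matrix(*p))
    return poses


# Example: three frames, two predicted inter-frame motions (placeholder values).
predicted = [(0.1, 0.0, 1.2, 0.0, 0.01, 0.0),
             (0.0, 0.2, 1.1, 0.01, 0.0, 0.0)]
frame_poses = accumulate_poses(predicted)  # one 4x4 pose per acquired frame
```

Once every frame carries a pose of this kind, its pixels can be placed into a common 3D grid to form the reconstructed volume, which is the role the tracking hardware would otherwise play.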
The proposed frame-to-volume multi-modal image fusion pipeline achieves an average target navigation error of 1.93 mm at a registration speed of 5 to 14 frames per second.
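The transformation propagation described in (3), which the navigation error above evaluates, amounts to composing the estimated registrations. The sketch below is a hypothetical illustration under a rigid, homogeneous-coordinate assumption; T_frame_to_usvol and T_usvol_to_mr stand in for the transforms estimated by the networks and are not the thesis's actual interfaces.

```python
import numpy as np


def mr_point_to_frame(p_mr, T_frame_to_usvol, T_usvol_to_mr):
    """Map a homogeneous MR-space point (shape (4,)) into the current US frame's coordinates."""
    # 2D-US -> 3D-MR is the composition T_usvol_to_mr @ T_frame_to_usvol;
    # inverting it carries MR coordinates back onto the live frame.
    T_frame_to_mr = T_usvol_to_mr @ T_frame_to_usvol
    return np.linalg.inv(T_frame_to_mr) @ p_mr


# Placeholder example with identity transforms standing in for network estimates.
lesion_mr = np.array([10.0, 5.0, 20.0, 1.0])
lesion_in_frame = mr_point_to_frame(lesion_mr, np.eye(4), np.eye(4))
```

In this framing, a lesion annotated in the pre-operative MR volume can be projected onto each incoming US frame purely from the network outputs, with no tracking hardware in the chain.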
Description
December 2022
School of Engineering
Publisher
Rensselaer Polytechnic Institute, Troy, NY