Multi-sequence MRI protocols are used in comprehensive examinations of various pathologies in both clinical diagnosis and medical research, since different MRI sequences provide complementary information about living tissue. However, a comprehensive examination covering all modalities is rarely achieved, owing to cost, patient comfort, and scanner time availability; records may also be incomplete because of image artifacts or corrupted or lost data. In this paper, we address the problem of synthesizing an image of one MRI modality from an image of another modality of the same subject, using a novel geometry-regularized joint dictionary learning framework with non-local patch reconstruction. First, we learn a cross-modality joint dictionary from a multi-modality image database in which training image pairs are co-registered. The cross-modality dictionary pair is learned jointly by minimizing the cross-modality divergence via a Maximum Mean Discrepancy (MMD) term in the objective function of the learning scheme, which guarantees that the distributions of both image modalities are jointly taken into account when building the resulting sparse representation. In addition, to preserve the intrinsic geometric structure of the synthesized image patches, we introduce a graph Laplacian regularization term into the objective function. Finally, we present a patch-based non-local reconstruction scheme that further improves the fidelity of the synthesized images. Experimental results demonstrate that our method achieves significant performance gains over previously published techniques.
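The objective described above combines reconstruction terms for both modalities with an MMD penalty and a graph Laplacian regularizer. The following is a minimal numerical sketch of such an objective, not the paper's exact formulation: it assumes a shared sparse-code matrix `A` for both modalities, a linear-kernel MMD between the two modalities' reconstructions, and a Gaussian-weighted similarity graph over source patches; all dimensions, variable names, and weights (`lam`, `beta`, `gamma`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d-dim patches, k dictionary atoms, n co-registered patch pairs.
d, k, n = 16, 8, 40
X1 = rng.standard_normal((d, n))   # patches from the source modality
X2 = rng.standard_normal((d, n))   # corresponding patches, target modality
D1 = rng.standard_normal((d, k))   # dictionary for the source modality
D2 = rng.standard_normal((d, k))   # dictionary for the target modality
A = rng.standard_normal((k, n))    # shared sparse codes (illustrative)

def mmd_sq(Y1, Y2):
    """Squared MMD with a linear kernel: squared distance between column means."""
    diff = Y1.mean(axis=1) - Y2.mean(axis=1)
    return float(diff @ diff)

def graph_laplacian(X, sigma=1.0):
    """L = Deg - W for a Gaussian-weighted similarity graph over patch columns."""
    sq = np.sum(X**2, axis=0)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X
    W = np.exp(-np.maximum(dist2, 0.0) / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def objective(X1, X2, D1, D2, A, lam=0.1, beta=0.01, gamma=0.01):
    """Joint cost: reconstruction in both modalities + l1 sparsity
    + MMD between modality reconstructions + Laplacian smoothness tr(A L A^T)."""
    L = graph_laplacian(X1)
    rec = np.linalg.norm(X1 - D1 @ A)**2 + np.linalg.norm(X2 - D2 @ A)**2
    return (rec
            + lam * np.abs(A).sum()
            + beta * mmd_sq(D1 @ A, D2 @ A)
            + gamma * np.trace(A @ L @ A.T))
```

Since the Laplacian is positive semi-definite, the smoothness term penalizes codes that differ between patches the graph deems similar; in practice this objective would be minimized by alternating sparse coding and dictionary updates rather than evaluated once as here.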
Host publication: Simulation and Synthesis in Medical Imaging
Editors: Sotirios Tsaftaris, Ali Gooya, Alejandro Frangi, Jerry Prince
Published: 2016
Series: Lecture Notes in Computer Science