Publication: FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN)
Driven by the strengths of deep learning, computer-aided diagnosis (CAD) has become an active research topic in medical image analysis. One of the main requirements for training a deep learning model is a sufficiently large dataset. In medical imaging, however, difficulties in data collection and data privacy constraints make it challenging to obtain an appropriate dataset (balanced, with enough samples, etc.). Image synthesis can help overcome this issue, but synthesizing 3D images is a difficult task. The main objective of this paper is to generate 3D T1-weighted MRI corresponding to FDG-PET. We propose a separable convolution-based Elicit generative adversarial network (E-GAN). The proposed architecture reconstructs 3D T1-weighted MRI from 2D high-level features and geometrical information retrieved with a Sobel filter. Experimental results on the ADNI dataset for healthy subjects show that the proposed model improves image quality compared with the state of the art. Compared with Pix2Pix GAN, E-GAN yields better structural fidelity (13.73% improvement in PSNR and 22.95% in SSIM) and better textural fidelity (6.9% improvement in the homogeneity error of the Haralick features).
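As a rough illustration of the two ingredients named in the abstract, the sketch below shows a 3D depthwise-separable convolution block and a 3D Sobel-based edge extractor in PyTorch. The class names, channel sizes, kernel construction, and the concatenation of the edge map with the PET volume are assumptions made for illustration only; they are not taken from the E-GAN implementation described in the paper.

```python
# Illustrative sketch only: a 3D depthwise-separable convolution block and a
# simple 3D Sobel edge extractor, assuming a PyTorch implementation. Names,
# channel sizes, and kernel shapes are hypothetical, not from the E-GAN paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SeparableConv3d(nn.Module):
    """Depthwise 3D convolution followed by a 1x1x1 pointwise convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv3d(
            in_channels, in_channels, kernel_size,
            padding=padding, groups=in_channels, bias=False
        )
        # Pointwise: mixes channels with a 1x1x1 convolution.
        self.pointwise = nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


def sobel_edges_3d(volume):
    """Gradient magnitude of a (N, 1, D, H, W) volume using 3D Sobel kernels."""
    # Separable construction of the 3D Sobel kernel: smoothing [1, 2, 1]
    # along two axes, derivative [-1, 0, 1] along the remaining axis.
    smooth = torch.tensor([1.0, 2.0, 1.0])
    deriv = torch.tensor([-1.0, 0.0, 1.0])
    kz = deriv.view(3, 1, 1) * smooth.view(1, 3, 1) * smooth.view(1, 1, 3)
    ky = smooth.view(3, 1, 1) * deriv.view(1, 3, 1) * smooth.view(1, 1, 3)
    kx = smooth.view(3, 1, 1) * smooth.view(1, 3, 1) * deriv.view(1, 1, 3)
    kernels = torch.stack([kz, ky, kx]).unsqueeze(1).to(volume.dtype)  # (3, 1, 3, 3, 3)
    grads = F.conv3d(volume, kernels, padding=1)                       # (N, 3, D, H, W)
    return grads.pow(2).sum(dim=1, keepdim=True).sqrt()                # gradient magnitude


if __name__ == "__main__":
    pet = torch.randn(1, 1, 32, 32, 32)   # toy FDG-PET-like volume
    edges = sobel_edges_3d(pet)           # geometric (edge) information
    block = SeparableConv3d(2, 16)
    features = block(torch.cat([pet, edges], dim=1))
    print(features.shape)                 # torch.Size([1, 16, 32, 32, 32])
```

The depthwise-plus-pointwise split is the standard way separable convolutions reduce the parameter count of 3D layers, and the Sobel edge map is one plausible way to feed geometric information to a generator alongside the PET intensities, in line with the idea summarized in the abstract.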