CiteScore: 3.5
Impact Factor: 2.3
  • ISSN 1674-8301
  • CN 32-1810/R
Volume 36, Issue 6, Nov. 2022
Hamed Amini Amirkolaee, Hamid Amini Amirkolaee. Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion[J]. The Journal of Biomedical Research, 2022, 36(6): 409-422. doi: 10.7555/JBR.36.20220037

Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion

doi: 10.7555/JBR.36.20220037
More Information
  • Corresponding author: Hamed Amini Amirkolaee, School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, N Kargar street, Tehran 1417935840, Iran. Tel/Fax: +98-930-9777140/+98-21-88008837, E-mail: hamedamini.a.k@gmail.com
  • Received: 2022-02-26
  • Revised: 2022-05-07
  • Accepted: 2022-05-13
  • Published: 2022-06-28
  • Issue Date: 2022-11-28
  • In this paper, we propose a deep learning-based framework for medical image translation using paired and unpaired training data. First, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation with paired training data. A multi-scale context aggregation approach then extracts features from different encoding levels, and these features are fused during the corresponding decoding stages of the network. Next, we propose an edge-guided generative adversarial network for image-to-image translation with unpaired training data, in which an edge constraint loss function improves network performance at tissue boundaries. To evaluate the framework, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed framework yields significant improvements over state-of-the-art methods.
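The idea behind the edge constraint loss can be illustrated with a minimal sketch: penalize the L1 distance between edge maps of the synthetic and reference images, so the generator is pushed to reproduce tissue boundaries. The Sobel edge extractor, function names, and unit loss weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Edge-magnitude map of a 2-D image via Sobel filtering (illustrative edge extractor)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                                          # vertical gradient kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(img.astype(float), 1, mode="edge")  # replicate borders so output keeps shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude

def edge_constraint_loss(fake: np.ndarray, real: np.ndarray) -> float:
    """Mean absolute difference between the edge maps of synthetic and reference images."""
    return float(np.mean(np.abs(sobel_edges(fake) - sobel_edges(real))))
```

In a training loop this term would be added, with some weight, to the adversarial and cycle-consistency losses; identical images give a loss of zero, and mismatched boundaries increase it.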

     

  • CLC number: R445; TP391.4. Document code: A
    The authors reported no conflict of interests.
  • [1]
    Han X. MR-based synthetic CT generation using a deep convolutional neural network method[J]. Med Phys, 2017, 44(4): 1408–1419. doi: 10.1002/mp.12155
    [2]
    Catana C, Van Der Kouwe A, Benner T, et al. Toward implementing an MRI-based PET attenuation-correction method for neurologic studies on the MR-PET brain prototype[J]. J Nucl Med, 2010, 51(9): 1431–1438. doi: 10.2967/jnumed.109.069112
    [3]
    Chen Y, Juttukonda M, Su Y, et al. Probabilistic air segmentation and sparse regression estimated pseudo CT for PET/MR attenuation correction[J]. Radiology, 2015, 275(2): 562–569. doi: 10.1148/radiol.14140810
    [4]
    Uh J, Merchant TE, Li Y, et al. MRI-based treatment planning with pseudo CT generated through atlas registration[J]. Med Phys, 2014, 41(5): 051711. doi: 10.1118/1.4873315
    [5]
    Keereman V, Fierens Y, Broux T, et al. MRI-based attenuation correction for PET/MRI using ultrashort echo time sequences[J]. J Nucl Med, 2010, 51(5): 812–818. doi: 10.2967/jnumed.109.065425
    [6]
    Zheng W, Kim JP, Kadbi M, et al. Magnetic resonance–based automatic air segmentation for generation of synthetic computed tomography scans in the head region[J]. Int J Radiat Oncol Biol Phys, 2015, 93(3): 497–506. doi: 10.1016/j.ijrobp.2015.07.001
    [7]
    Huynh T, Gao Y, Kang J, et al. Estimating CT image from MRI data using structured random forest and auto-context model[J]. IEEE Trans Med Imaging, 2016, 35(1): 174–183. doi: 10.1109/TMI.2015.2461533
    [8]
    Zhong L, Lin L, Lu Z, et al. Predict CT image from MRI data using KNN-regression with learned local descriptors[C]//2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). Prague: IEEE, 2016: 743–746.
    [9]
    Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe: ACM, 2012: 1097–1105.
    [10]
    He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770–778.
    [11]
    Nie D, Trullo R, Lian J, et al. Medical image synthesis with deep convolutional adversarial networks[J]. IEEE Trans Biomed Eng, 2018, 65(12): 2720–2730. doi: 10.1109/TBME.2018.2814538
    [12]
    Dar SU, Yurt M, Karacan L, et al. Image synthesis in multi-contrast MRI with conditional generative adversarial networks[J]. IEEE Trans Med Imaging, 2019, 38(10): 2375–2388. doi: 10.1109/TMI.2019.2901750
    [13]
    Kearney V, Ziemer BP, Perry A, et al. Attention-aware discrimination for MR-to-CT image translation using cycle-consistent generative adversarial networks[J]. Radiol Artif Intell, 2020, 2(2): e190027. doi: 10.1148/ryai.2020190027
    [14]
    Upadhyay U, Chen Y, Hepp T, et al. Uncertainty-guided progressive GANs for medical image translation[C]//24th International Conference on Medical Image Computing and Computer Assisted Intervention. Strasbourg: Springer, 2021: 614–624.
    [15]
    Dalmaz O, Yurt M, Çukur T. ResViT: residual vision transformers for multi-modal medical image synthesis[EB/OL]. [2022-04-22]. https://ieeexplore.ieee.org/document/9758823/.
    [16]
    Yang H, Sun J, Carass A, et al. Unpaired brain MR-to-CT synthesis using a structure-constrained CycleGAN[C]//4th International Workshop on Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Granada: Springer, 2018: 174–182.
    [17]
    Jin C, Kim H, Liu M, et al. Deep CT to MR synthesis using paired and unpaired data[J]. Sensors, 2019, 19(10): 2361. doi: 10.3390/s19102361
    [18]
    Wolterink JM, Dinkla AM, Savenije MHF, et al. Deep MR to CT synthesis using unpaired data[C]//Second International Workshop on Simulation and Synthesis in Medical Imaging. Québec City: Springer, 2017: 14–23.
    [19]
    Zhu J, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2242–2251.
    [20]
    Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 3431–3440.
    [21]
    Yu F, Koltun V. Multi-scale context aggregation by dilated convolutions[C]//4th International Conference on Learning Representations. San Juan: ICLR, 2016.
    [22]
    Isola P, Zhu J, Zhou T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 5967–5976.
    [23]
    Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation[C]//18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich: Springer, 2015: 234–241.
    [24]
    Rosasco L, De Vito E, Caponnetto A, et al. Are loss functions all the same?[J]. Neural Comput, 2004, 16(5): 1063–1076. doi: 10.1162/089976604773135104
    [25]
    Mao X, Li Q, Xie H, et al. Least squares generative adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2813–2821.
    [26]
    Borji A. Pros and cons of GAN evaluation measures[J]. Comput Vis Image Und, 2019, 179: 41–65. doi: 10.1016/j.cviu.2018.10.009
    [27]
    Sheikh HR, Bovik AC. Image information and visual quality[J]. IEEE Trans Image Process, 2006, 15(2): 430–444. doi: 10.1109/TIP.2005.859378
    [28]
    Wang Z, Bovik AC, Sheikh HR, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Trans Image Process, 2004, 13(4): 600–612. doi: 10.1109/TIP.2003.819861
    [29]
    Li W, Li Y, Qin W, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy[J]. Quant Imaging Med Surg, 2020, 10(6): 1223–1236. doi: 10.21037/qims-19-885
    [30]
    Kong L, Lian C, Huang D, et al. Breaking the dilemma of medical image-to-image translation[C]//Proceedings of the 35th conference on Neural Information Processing Systems. Online: NIPS, 2021: 1964–1978.
    [31]
    Tang H, Liu H, Xu D, et al. AttentionGAN: unpaired image-to-image translation using attention-guided generative adversarial networks[EB/OL]. [2021-09-02]. https://doi.org/10.1109/TNNLS.2021.3105725.
    [32]
    Armanious K, Jiang C, Fischer M, et al. MedGAN: medical image translation using GANs[J]. Comput Med Imaging Graph, 2020, 79: 101684. doi: 10.1016/j.compmedimag.2019.101684
    [33]
    Ben-Cohen A, Klang E, Raskin SP, et al. Virtual PET images from CT data using deep convolutional networks: initial results[C]//Second International Workshop on Simulation and Synthesis in Medical Imaging. Québec City: Springer, 2017: 49–57.
    [34]
    Cui Y, Han S, Liu M, et al. Diagnosis and grading of prostate cancer by relaxation maps from synthetic MRI[J]. J Magn Reson Imaging, 2020, 52(2): 552–564. doi: 10.1002/jmri.27075
    [35]
    Denck J, Guehring J, Maier A, et al. MR-contrast-aware image-to-image translations with generative adversarial networks[J]. Int J Comput Ass Radiol Surg, 2021, 16(12): 2069–2078. doi: 10.1007/s11548-021-02433-x
    [36]
    Dinh PH. Multi-modal medical image fusion based on equilibrium optimizer algorithm and local energy functions[J]. Appl Intell, 2021, 51(11): 8416–8431. doi: 10.1007/s10489-021-02282-w
    [37]
    Wolterink JM, Leiner T, Viergever MA, et al. Generative adversarial networks for noise reduction in low-dose CT[J]. IEEE Trans Med Imaging, 2017, 36(12): 2536–2545. doi: 10.1109/TMI.2017.2708987
    [38]
    Florkow MC, Zijlstra F, Willemsen K, et al. Deep learning–based MR-to-CT synthesis: the influence of varying gradient echo–based MR images as input channels[J]. Magn Reson Med, 2020, 83(4): 1429–1441. doi: 10.1002/mrm.28008
    [39]
    Koike Y, Akino Y, Sumida I, et al. Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy[J]. J Radiat Res, 2020, 61(1): 92–103. doi: 10.1093/jrr/rrz063
    [40]
    Liu Y, Lei Y, Wang T, et al. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy[J]. Med Phys, 2020, 47(6): 2472–2483. doi: 10.1002/mp.14121
    [41]
    Qi M, Li Y, Wu A, et al. Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy[J]. Med Phys, 2020, 47(4): 1880–1894. doi: 10.1002/mp.14075
    [42]
    Tie X, Lam SK, Zhang Y, et al. Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for nasopharyngeal carcinoma patients[J]. Med Phys, 2020, 47(4): 1750–1762. doi: 10.1002/mp.14062
    [43]
    Gozes O, Greenspan H. Bone structures extraction and enhancement in chest radiographs via CNN trained on synthetic data[C]//2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). Iowa City: IEEE, 2020: 858–861.
    [44]
    Yuan N, Dyer B, Rao S, et al. Convolutional neural network enhancement of fast-scan low-dose cone-beam CT images for head and neck radiotherapy[J]. Phys Med Biol, 2020, 65(3): 035003. doi: 10.1088/1361-6560/ab6240

    Figures(8)  / Tables(2)
