Journal of Shandong University (Health Sciences) ›› 2025, Vol. 63 ›› Issue (8): 17-40. doi: 10.6040/j.issn.1671-7554.0.2025.0512

• Clinical Research •

Multimodal medical data fusion technology and its application

YANG Fan1,2,3   

  1. Department of Medical Dataology, School of Public Health, Cheeloo College of Medicine, Shandong University, Jinan 250012, Shandong, China;
    2. National Institute of Health and Medical Big Data, Jinan 250003, Shandong, China;
    3. Qilu Hospital of Shandong University, Jinan 250012, Shandong, China
  • Published: 2025-08-25
  • Corresponding author: YANG Fan. E-mail: fanyang@sdu.edu.cn
  • Supported by: National Natural Science Foundation of China (82273736)

Abstract: With the explosive growth of multi-source medical data such as multi-omics profiles, medical imaging, and electronic health records, no single modality can fully characterize the biological heterogeneity of complex diseases. Multimodal medical data fusion integrates heterogeneous information at the feature, representation, and decision levels, opening new possibilities for disease prediction and treatment. This review systematically surveys recent fusion methodology based on deep learning and statistical modeling, including end-to-end frameworks driven by Transformers and graph neural networks, explicit probabilistic inference supported by Bayesian and latent factor models, and new theoretical perspectives, such as the information bottleneck and commonality-specificity decomposition, that strengthen representation quality. To address cross-modal heterogeneity and high-dimensional sparsity, it summarizes early, intermediate, and late fusion strategies together with training paradigms such as co-training and multi-view alignment, and discusses the role of attention mechanisms in capturing complementary information across modalities. Drawing on applications in cancer prognosis, biomarker discovery, drug response prediction, and clinical decision support, it examines the advantages and challenges of fusion models in improving predictive performance, enhancing interpretability, and fitting clinical workflows. Finally, it proposes research directions for clinical deployment: building secure and compliant federated data lakes, developing causally interpretable fusion frameworks, and deepening the coupling with clinical care processes, so as to close the loop from multimodal data to precision diagnosis and treatment.

Key words: Multimodal fusion, Deep learning, Information bottleneck, Explainability, Precision diagnosis and treatment
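
The early, intermediate, and late fusion strategies contrasted in the abstract differ mainly in where integration happens. The following minimal PyTorch sketch illustrates the three patterns for two modalities (say, an imaging feature vector and an omics vector); all module names, dimensions, and architectures are illustrative assumptions for exposition, not any specific model reviewed in the paper:

```python
# Illustrative sketch only: early, intermediate, and late fusion for two
# modalities. Names and dimensions are assumptions, not reviewed models.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate raw feature vectors, then learn a joint predictor."""
    def __init__(self, d_img=128, d_omics=512, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(d_img + d_omics, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )
    def forward(self, x_img, x_omics):
        return self.head(torch.cat([x_img, x_omics], dim=-1))

class IntermediateFusion(nn.Module):
    """Encode each modality separately, then fuse the learned
    representations; here a cross-modal attention step lets each
    modality attend to complementary information in the other."""
    def __init__(self, d_img=128, d_omics=512, d_model=64, n_classes=2):
        super().__init__()
        self.enc_img = nn.Linear(d_img, d_model)
        self.enc_omics = nn.Linear(d_omics, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)
    def forward(self, x_img, x_omics):
        # (B, 2, d): one token per modality
        z = torch.stack([self.enc_img(x_img), self.enc_omics(x_omics)], dim=1)
        fused, _ = self.attn(z, z, z)   # each token attends to both modalities
        return self.head(fused.mean(dim=1))

class LateFusion(nn.Module):
    """Keep one predictor per modality and combine their decisions."""
    def __init__(self, d_img=128, d_omics=512, n_classes=2):
        super().__init__()
        self.clf_img = nn.Linear(d_img, n_classes)
        self.clf_omics = nn.Linear(d_omics, n_classes)
    def forward(self, x_img, x_omics):
        # Simple average of per-modality logits; weighted or learned
        # combinations are common variants.
        return 0.5 * (self.clf_img(x_img) + self.clf_omics(x_omics))

if __name__ == "__main__":
    x_img, x_omics = torch.randn(8, 128), torch.randn(8, 512)
    for model in (EarlyFusion(), IntermediateFusion(), LateFusion()):
        print(type(model).__name__, model(x_img, x_omics).shape)  # (8, 2)
```

Intermediate (representation-level) fusion dominates the deep-learning literature surveyed here because each encoder can respect its modality's structure while the attention step models complementarity across modalities.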
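Several of the graph-based frameworks cited below (e.g., MOGONET [5]) propagate patient features over similarity graphs before fusing modality-specific predictions. The toy single-layer graph convolution below, written in plain PyTorch under the assumption of a precomputed row-normalized patient-similarity matrix, shows only the core message-passing step; it is a sketch, not MOGONET's implementation:

```python
# Toy graph-convolution step over a patient-similarity graph (sketch).
import torch
import torch.nn as nn

class ToyGraphConv(nn.Module):
    """Average neighbor features via a row-normalized (N, N) similarity
    matrix `adj`, then apply a learned linear transform. Real systems add
    self-loops, multiple layers, and one graph per omics type."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.lin(adj @ x))  # (N, d_out)

if __name__ == "__main__":
    n_patients, d_omics = 100, 300
    x = torch.randn(n_patients, d_omics)
    sim = torch.rand(n_patients, n_patients)
    adj = sim / sim.sum(dim=1, keepdim=True)        # row-normalize
    print(ToyGraphConv(d_omics, 64)(x, adj).shape)  # torch.Size([100, 64])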
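The information-bottleneck view mentioned in the abstract can be stated in one line. Writing X = (X_1, ..., X_M) for the concatenated modalities, Y for the clinical target, and Z for the fused representation, the objective in its standard Lagrangian form (conventional notation, not taken from this paper) is

\[ \max_{p(z \mid x)} \; I(Z; Y) \;-\; \beta\, I(Z; X), \qquad \beta > 0, \]

where I(·;·) denotes mutual information: Z should predict Y well while discarding modality-specific noise in X, with β controlling the trade-off between prediction and compression.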

CLC number: R181.2+3
[1] Teoh JR, Dong J, Zuo XW, et al. Advancing healthcare through multimodal data fusion: a comprehensive review of techniques and applications[J]. PeerJ Comput Sci, 2024, 10: e2298. doi:10.7717/peerj-cs.2298
[2] Krones F, Marikkar U, Parsons G, et al. Review of multimodal machine learning approaches in healthcare[J]. Inf Fusion, 2025, 114: 102690. doi:10.1016/j.inffus.2024.102690
[3] Kumar S, Rani S, Sharma S, et al. Multimodality fusion aspects of medical diagnosis: a comprehensive review[J]. Bioengineering, 2024, 11(12): 1233. doi:10.3390/bioengineering11121233
[4] Chaabene S, Boudaya A, Bouaziz B, et al. An overview of methods and techniques in multimodal data fusion with application to healthcare[J]. Int J Data Sci Anal, 2025. doi:10.1007/s41060-025-00715-0
[5] Wang T, Shao W, Huang Z, et al. MOGONET integrates multi-omics data using graph convolutional networks allowing patient classification and biomarker identification[J]. Nat Commun, 2021, 12(1): 3445. doi: 10.1038/s41467-021-23774-w
[6] Shaik T, Tao XH, Li L, et al. A survey of multimodal information fusion for smart healthcare: mapping the journey from data to wisdom[J]. Inf Fusion, 2024, 102: 102040. doi:10.1016/j.inffus.2023.102040
[7] Lyu W, Dong X, Wong R, et al. A multimodal transformer: fusing clinical notes with structured EHR data for interpretable in-hospital mortality prediction[J]. AMIA Annu Symp Proc, 2023, 2022: 719-728.
[8] Stahlschmidt SR, Ulfenborg B, Synnergren J. Multimodal deep learning for biomedical data fusion: a review[J]. Brief Bioinform, 2022, 23(2): bbab569. doi:10.1093/bib/bbab569
[9] Zheng Y, Conrad RD, Green EJ, et al. Graph attention-based fusion of pathology images and gene expression for prediction of cancer survival[J]. IEEE Trans Med Imaging, 2024, 43(9): 3085-3097.
[10] Lahat D, Adali T, Jutten C. Multimodal data fusion: an overview of methods, challenges, and prospects[J]. Proc IEEE, 2015, 103(9): 1449-1477.
[11] Krones F, Marikkar U, Parsons G, et al. Review of multimodal machine learning approaches in healthcare[J]. Inf Fusion, 2025, 114: 102690. doi:10.1016/j.inffus.2024.102690
[12] Han X, Chen S, Fu Z, et al. Multimodal fusion and vision-language models: a survey for robot vision[EB/OL].(2025-04-03)[2025-04-26]. https://arxiv.org/abs/2504.02477v2
[13] ZHANG Hucheng, LI Leixiao, LIU Dongjiang. Survey of multimodal data fusion research[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(10): 2501-2520.
[14] PAN Mengzhu, LI Qianmu, QIU Tian. Survey of research on deep multimodal representation learning[J]. Computer Engineering and Applications, 2023, 59(2): 48-64.
[15] REN Zeyu, WANG Zhenchao, KE Zunwang, et al. Survey of multimodal data fusion[J]. Computer Engineering and Applications, 2021, 57(18): 49-64.
[16] Rajendran S, Pan W, Sabuncu MR, et al. Learning across diverse biomedical data modalities and cohorts: challenges and opportunities for innovation[J]. Patterns (N Y), 2024, 5(2): 100913. doi: 10.1016/j.patter.2023.100913
[17] Wang T, Li F, Zhu L, et al. Cross-modal retrieval: a systematic review of methods and future directions[EB/OL].(2025-04-17)[2025-04-26]. https://ieeexplore.ieee.org/abstract/document/10843094/
[18] Sarraf A, Azhdari M, Sarraf S. A comprehensive review of deep learning architectures for computer vision applications[J]. ASRJETS, 2021, 77(1): 1-29.
[19] Garg M, Ghosh D, Pradhan PM. Multiscaled multi-head attention-based video transformer network for hand gesture recognition[J]. IEEE Signal Process Lett, 2023, 30: 80-84. doi:10.1109/LSP.2023.3241857
[20] Kumar S, Sharma S, Megra KT. Transformer enabled multi-modal medical diagnosis for tuberculosis classification[J]. J Big Data, 2025, 12(1): 5. doi:10.1186/s40537-024-01054-w
[21] Zhou HY, Yu Y, Wang C, et al. A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics[J]. Nat Biomed Eng, 2023, 7(6): 743-755.
[22] Nguyen HH, Blaschko MB, Saarakkala S. Clinically-inspired multi-agent transformers for disease trajectory forecasting from multimodal data[J]. IEEE Trans Med Imaging, 2024, 43(1): 529-541.
[23] Khader F, Kather JN, Müller-Franzes G, et al. Medical transformer for multimodal survival prediction in intensive care: integration of imaging and non-imaging data[J]. Sci Rep, 2023, 13(1): 10666. doi: 10.1038/s41598-023-37835-1
[24] Valous NA, Popp F, Zörnig I, et al. Graph machine learning for integrated multi-omics analysis[J]. Br J Cancer, 2024, 131(2): 205-211.
[25] Guo D, Shao Y, Cui Y, et al. Graph attention tracking[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 9543-9552. http://openaccess.thecvf.com/content/CVPR2021/html/Guo_Graph_Attention_Tracking_CVPR_2021_paper.html
[26] Zheng Y, Gindra RH, Green EJ, et al. A graph-transformer for whole slide image classification[J]. IEEE Trans Med Imaging, 2022, 41(11): 3003-3015.
[27] Zheng Y, Conrad RD, Green EJ, et al. Graph attention-based fusion of pathology images and gene expression for prediction of cancer survival[J]. IEEE Trans Med Imaging, 2024, 43(9): 3085-3097.
[28] Huang SC, Pareek A, Jensen M, et al. Self-supervised learning for medical image classification: a systematic review and implementation guidelines[J]. NPJ Digit Med, 2023, 6(1): 74. doi:10.1038/s41746-023-00811-0
[29] Zhang Y, Jiang H, Miura Y, et al. Contrastive learning of medical visual representations from paired images and text[C]. Proceedings of the Machine Learning for Healthcare Conference, PMLR, 2022, 182: 2-25.
[30] Wang Z, Wu Z, Agarwal D, et al. MedCLIP: contrastive learning from unpaired medical images and text[C]. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2022. https://pmc.ncbi.nlm.nih.gov/articles/PMC11323634/
[31] Taleb A, Lippert C, Klein T, et al. Multimodal self-supervised learning for medical image analysis[M]//Feragen A, Sommer S, Schnabel J. Information Processing in Medical Imaging. Cham: Springer, 2021: 661-673.
[32] Zong Y, Aodha OM, Hospedales TM. Self-supervised multimodal learning: a survey[J]. IEEE Trans Pattern Anal Mach Intell, 2025, 47(7): 5299-5318.
[33] Ghassemi M, Pimentel M, Naumann T, et al. A multivariate timeseries modeling approach to severity of illness assessment and forecasting in ICU with sparse, heterogeneous clinical data[J]. Proc AAAI Conf Artif Intell, 2015: 446-453.
[34] AlSaad R, Abd-Alrazaq A, Boughorbel S, et al. Multimodal large language models in health care: applications, challenges, and future outlook[J]. J Med Internet Res, 2024, 26: e59505. doi:10.2196/59505
[35] Gygi JP, Konstorum A, Pawar S, et al. A supervised Bayesian factor model for the identification of multi-omics signatures[J]. Bioinformatics, 2024, 40(5): btae202. doi:10.1093/bioinformatics/btae202
[36] Suter P, Dazert E, Kuipers J, et al. Multi-omics subtyping of hepatocellular carcinoma patients using a Bayesian network mixture model[J]. PLoS Comput Biol, 2022, 18(9): e1009767. doi:10.1371/journal.pcbi.1009767
[37] Samorodnitsky S, Wendt CH, Lock EF. Bayesian simultaneous factorization and prediction using multi-omic data[J]. Comput Stat Data Anal, 2024, 197: 107974. doi:10.1016/j.csda.2024.107974
[38] Ghosal S, Chen Q, Pergola G, et al. A generative-discriminative framework that integrates imaging, genetic, and diagnosis into coupled low dimensional space[J]. Neuroimage, 2021, 238: 118200. doi:10.1016/j.neuroimage.2021.118200
[39] Han Y, Lam JCK, Li VOK, et al. Interpretable AI-driven causal inference to uncover the time-varying effects of PM2.5 and public health interventions on COVID-19 infection rates[J]. Humanit Soc Sci Commun, 2024, 11(1): 1713. doi:10.1057/s41599-024-04202-y
[40] Daunhawer I, Sutter TM, Marcinkevičs R, et al. Self-supervised disentanglement of modality-specific and shared factors improves multimodal generative models[J]. Pattern Recognition, 2021, 12544: 459-473. doi: 10.1007/978-3-030-71278-5_33
[41] Argelaguet R, Arnol D, Bredikhin D, et al. MOFA+: a statistical framework for comprehensive integration of multi-modal single-cell data[J]. Genome Biol, 2020, 21(1): 111. doi:10.1186/s13059-020-02015-1
[42] Shen R, Mo Q, Schultz N, et al. Integrative subtype discovery in glioblastoma using iCluster[J]. PLoS One, 2012, 7(4): e35236. doi:10.1371/journal.pone.0035236
[43] Xie H, Li J, Xue H. A survey of dimensionality reduction techniques based on random projection[EB/OL].(2018-05-03)[2025-04-26]. https://doi.org/10.48550/arXiv.1706.04371
[44] Mirabnahrazam G, Ma D, Lee S, et al. Machine learning based multimodal neuroimaging genomics dementia score for predicting future conversion to Alzheimer's disease[J]. J Alzheimers Dis, 2022, 87(3): 1345-1365.
[45] Lock EF, Hoadley KA, Marron JS, et al. Joint and individual variation explained (JIVE) for integrated analysis of multiple data types[J]. Ann Appl Stat, 2013, 7(1): 523-542.
[46] Yang Y, Ma C. Estimating shared subspace with AJIVE: the power and limitation of multiple data matrices[EB/OL].(2025-02-15)[2025-04-26]. https://doi.org/10.48550/arXiv.2501.09336
[47] Gordon SL, Jahn E, Mazaheri B, et al. Identification of mixtures of discrete product distributions in near-optimal sample and time complexity[C]. Proceedings of the 37th Annual Conference on Learning Theory (COLT), PMLR, 2024: 2071-2091. https://proceedings.mlr.press/v247/gordon24a.html
[48] Yang J, Yu Y, Niu D, et al. ConFEDE: contrastive feature decomposition for multimodal sentiment analysis[C]. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023: 7617-7630.
[49] Freund MC, Etzel JA, Braver TS. Neural coding of cognitive control: the representational similarity analysis approach[J]. Trends Cogn Sci, 2021, 25(7): 622-638.
[50] Angelopoulos N, Chatzipli A, Nangalia J, et al. Bayesian networks elucidate complex genomic landscapes in cancer[J]. Commun Biol, 2022, 5(1): 306. doi:10.1038/s42003-022-03243-w
[51] Yan HX, Weng DW, Li DG, et al. Prior knowledge-guided multilevel graph neural network for tumor risk prediction and interpretation via multi-omics data integration[J]. Brief Bioinform, 2024, 25(3): bbae184. doi:10.1093/bib/bbae184
[52] Nelson Hayes C, Nakahara H, Ono A, et al. From omics to multi-omics: a review of advantages and tradeoffs[J]. Genes, 2024, 15(12): 1551. doi:10.3390/genes15121551
[53] Forés-Martos J, Forte A, García-Martínez J, et al. A trans-omics comparison reveals common gene expression strategies in four model organisms and exposes similarities and differences between them[J]. Cells, 2021, 10(2): 334. doi:10.3390/cells10020334
[54] Liu J, Cen X, Yi C, et al. Challenges in AI-driven biomedical multimodal data fusion and analysis[J]. Genomics Proteomics Bioinformatics, 2025, 23(1): qzaf011. doi: 10.1093/gpbjnl/qzaf011
[55] Jia Z, Giehl RFH, Meyer RC, et al. Natural variation of BSK3 tunes brassinosteroid signaling to regulate root foraging under low nitrogen[J]. Nat Commun, 2019, 10(1): 2378. doi: 10.1038/s41467-019-10331-9
[56] Yip SS, Aerts HJ. Applications and limitations of radiomics[J]. Phys Med Biol, 2016, 61(13): R150-R166.
[57] Krishna A, Kurian NC, Patil A, et al. PathoGen-X: a cross-modal genomic feature trans-align network for enhanced survival prediction from histopathology images[C]. 2025 IEEE 22nd International Symposium on Biomedical Imaging, IEEE. doi: 10.1109/ISBI60581.2025.10981028
[58] Yan Y, Yao XJ, Wang SH, et al. A survey of computer-aided tumor diagnosis based on convolutional neural network[J]. Biology (Basel), 2021, 10(11): 1084. doi:10.3390/biology10111084
[59] Butler L, Karabayir I, Samie Tootooni M, et al. Image and structured data analysis for prognostication of health outcomes in patients presenting to the ED during the COVID-19 pandemic[J]. Int J Med Inform, 2021, 158: 104662. doi:10.1016/j.ijmedinf.2021.104662
[60] Li Y, Hajj HA, Conze PH, et al. Multimodal information fusion for the diagnosis of diabetic retinopathy[EB/OL].(2023-03-20)[2025-04-26]. https://arxiv.org/abs/2304.00003
[61] Luo H, Huang JS, Ju HR, et al. Multimodal multi-instance evidence fusion neural networks for cancer survival prediction[J]. Sci Rep, 2025, 15(1): 10470. doi:10.1038/s41598-025-93770-3
[62] Li T, Zhou X, Xue J, et al. Cross-modal alignment and contrastive learning for enhanced cancer survival prediction[J]. Comput Methods Programs Biomed, 2025, 263: 108633. doi:10.1016/j.cmpb.2025.108633
[63] Schneider L, Laiouar-Pedari S, Kuntz S, et al. Integration of deep learning-based image analysis and genomic data in cancer pathology: a systematic review[J]. Eur J Cancer, 2022, 160: 80-91. doi:10.1016/j.ejca.2021.10.007
[64] Zheng T, Hu W, Wang H, et al. MRI-based texture analysis for preoperative prediction of BRAF V600E mutation in papillary thyroid carcinoma[J]. J Multidiscip Healthc, 2023, 16: 1-10. doi: 10.2147/JMDH.S393993
[65] Yu J, Ma T, Chen F, et al. Task-driven framework using large models for digital pathology[J]. Commun Biol, 2024, 7(1): 1619. doi:10.1038/s42003-024-07303-1
[66] Chen RJ, Lu MY, Williamson DFK, et al. Pan-cancer integrative histology-genomic analysis via multimodal deep learning[J]. Cancer Cell, 2022, 40(8): 865-878.
[67] Brussee S, Buzzanca G, Schrader AMR, et al. Graph neural networks in histopathology: emerging trends and future directions[J]. Med Image Anal, 2025, 101: 103444. doi:10.1016/j.media.2024.103444
[68] Ding KX, Zhou M, Metaxas DN, et al. Pathology-and-genomics multimodal transformer for survival outcome prediction[M]//Medical Image Computing and Computer Assisted Intervention - MICCAI 2023. Cham: Springer Nature Switzerland, 2023: 622-631. doi:10.1007/978-3-031-43987-2_60
[69] Qi YJ, Su GH, You C, et al. Radiomics in breast cancer: current advances and future directions[J]. Cell Rep Med, 2024, 5(9): 101719. doi: 10.1016/j.xcrm.2024.101719
[70] Ehrenstein V, Kharrazi H, Lehmann H, et al. Obtaining data from electronic health records[EB/OL].(2025-04-18)[2025-04-26]. https://www.ncbi.nlm.nih.gov/books/NBK551878/
[71] Patharkar A, Cai FL, Al-Hindawi F, et al. Predictive modeling of biomedical temporal data in healthcare applications: review and future directions[J]. Front Physiol, 2024, 15: 1386760. doi:10.3389/fphys.2024.1386760
[72] Zhan X, Humbert-Droz M, Mukherjee P, et al. Structuring clinical text with AI: old versus new natural language processing techniques evaluated on eight common cardiovascular diseases[EB/OL].(2025-04-18)[2025-04-26]. https://www.cell.com/patterns/fulltext/S2666-3899(21)00122-7
[73] Chen XL, Xie HR, Tao XH, et al. Artificial intelligence and multimodal data fusion for smart healthcare: topic modeling and bibliometrics[J]. Artif Intell Rev, 2024, 57(4): 91. doi:10.1007/s10462-024-10712-7
[74] Shajari S, Kuruvinashetti K, Komeili A, et al. The emergence of AI-based wearable sensors for digital health technology: a review[J]. Sensors, 2023, 23(23): 9498. doi:10.3390/s23239498
[75] Lih OS, Jahmunah V, Palmer EE, et al. EpilepsyNet: novel automated detection of epilepsy using transformer model with EEG signals from 121 patient population[J]. Comput Biol Med, 2023, 164: 107312. doi:10.1016/j.compbiomed.2023.107312
[76] Deniz-Garcia A, Fabelo H, Rodriguez-Almeida AJ, et al. Quality, usability, and effectiveness of mHealth apps and the role of artificial intelligence: current scenario and challenges[J]. J Med Internet Res, 2023, 25: e44030. doi:10.2196/44030
[77] Basak H, Yin ZZ. Semi-supervised domain adaptive medical image segmentation through consistency regularized disentangled contrastive learning[M]//Medical Image Computing and Computer Assisted Intervention - MICCAI 2023. Cham: Springer Nature Switzerland, 2023: 260-270. doi:10.1007/978-3-031-43901-8_25
[78] Zhao F, Zhang CC, Geng BC. Deep multimodal data fusion[J]. ACM Comput Surv, 2024, 56(9): 1-36.
[79] Li S, Tang H. Multimodal alignment and fusion: a survey[EB/OL].(2024-11-26)[2025-04-26]. https://arxiv.org/abs/2411.17040
[80] Hangaragi S, Neelima N, Jegdic K, et al. Integrated fusion approach for multi-class heart disease classification through ECG and PCG signals with deep hybrid neural networks[J]. Sci Rep, 2025, 15(1): 8129. doi:10.1038/s41598-025-92395-w
[81] Domingo J, Minaeva M, Morris JA, et al. Non-linear transcriptional responses to gradual modulation of transcription factor dosage[J]. bioRxiv, 2024. doi: 10.1101/2024.03.01.582837
[82] Han GR, Goncharov A, Eryilmaz M, et al. Machine learning in point-of-care testing: innovations, challenges, and opportunities[J]. Nat Commun, 2025, 16(1): 3165. doi:10.1038/s41467-025-58527-6
[83] Kawahara D, Nagata Y. T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks[J]. Rep Pract Oncol Radiother, 2021, 26(1): 35-42.
[84] Kang Z, He Y, Wang J, et al. Efficient multi-model fusion with adversarial complementary representation learning[EB/OL].(2025-04-18)[2025-05-26]. https://ieeexplore.ieee.org/abstract/document/10650588/
[85] Yoon S, Byun S, Jung K. Multimodal speech emotion recognition using audio and text[EB/OL].(2025-04-18)[2025-05-26]. https://ieeexplore.ieee.org/abstract/document/8639583/
[86] Höhn J, Krieghoff-Henning E, Jutzi TB, et al. Combining CNN-based histologic whole slide image analysis and patient data to improve skin cancer classification[J]. Eur J Cancer, 2021, 149: 94-101. doi:10.1016/j.ejca.2021.02.032
[87] Arnold C, Küpfer A. Alignment helps make the most of multimodal data[EB/OL].(2024-05-14)[2025-04-26]. https://arxiv.org/abs/2405.08454
[88] Yang H, Zhou HY, Li C, et al. Multimodal self-supervised learning for lesion localization[EB/OL].(2024-08-20)[2025-04-26]. https://ieeexplore.ieee.org/abstract/document/10635268/
[89] Lobato-Delgado B, Priego-Torres B, Sanchez-Morillo D. Combining molecular, imaging, and clinical data analysis for predicting cancer prognosis[J]. Cancers, 2022, 14(13): 3215. doi:10.3390/cancers14133215
[90] Chen RJ, Lu MY, Wang J, et al. Pathomic fusion: an integrated framework for fusing histopathology and genomic features for cancer diagnosis and prognosis[J]. IEEE Trans Med Imaging, 2022, 41(4): 757-770.
[91] Wijethilake N, Islam M, Ren HL. Radiogenomics model for overall survival prediction of glioblastoma[J]. Med Biol Eng Comput, 2020, 58(8): 1767-1777.
[92] Magbanua MJM, Li W, van't Veer LJ. Integrating imaging and circulating tumor DNA features for predicting patient outcomes[J]. Cancers (Basel), 2024, 16(10): 1879. doi:10.3390/cancers16101879
[93] Niu W, Yan J, Hao M, et al. MRI transformer deep learning and radiomics for predicting IDH wild type TERT promoter mutant gliomas[J]. NPJ Precis Oncol, 2025, 9(1): 89. doi:10.1038/s41698-025-00884-y
[94] Angelopoulos N, Chatzipli A, Nangalia J, et al. Bayesian networks elucidate complex genomic landscapes in cancer[J]. Commun Biol, 2022, 5(1): 306. doi:10.1038/s42003-022-03243-w
[95] Herawan M, Adriansjah R. Prostate specific antigen level and gleason score in Indonesian prostate cancer patients[EB/OL].(2025-04-18)[2025-04-26]. https://repository.unar.ac.id/jspui/handle/123456789/8814
[96] Kabir Anaraki A, Ayati M, Kazemi F. Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms[J]. Biocybern Biomed Eng, 2019, 39(1): 63-74.
Related articles in this journal:
[1] TANG Yuning, PAN Tianyue, DONG Zhihui, FU Weiguo. Research progress of deep learning in automatic segmentation of aortic images[J]. Journal of Shandong University (Health Sciences), 2024, 62(9): 66-73.
[2] FENG Shiqing. Computer vision and lumbar degenerative disease[J]. Journal of Shandong University (Health Sciences), 2023, 61(3): 1-6.
[3] LIU Yajun, YUAN Qiang, WU Jingye, HAN Xiaoguang, LANG Zhao, ZHANG Yong. Preliminary analysis of automatic planning of lumbar pedicle screws on cone-beam CT images in 130 cases[J]. Journal of Shandong University (Health Sciences), 2023, 61(3): 80-89.
[4] LIN Bingjie, WANG Meiyun. Research status and prospects of deep learning in medical imaging[J]. Journal of Shandong University (Health Sciences), 2023, 61(12): 21-29.
[5] ZHAO Guyue, SHANG Jin, HOU Yang. Advances in the application of artificial intelligence in coronary CT angiography[J]. Journal of Shandong University (Health Sciences), 2023, 61(12): 30-35.
[6] XU Ziliang, ZHENG Minwen. Innovations and challenges of imaging artificial intelligence in medicine[J]. Journal of Shandong University (Health Sciences), 2023, 61(12): 7-12, 20.
[7] WANG Linlin, SUN Yuping. A clinician's perspective on the application of artificial intelligence in precision cancer diagnosis and treatment[J]. Journal of Shandong University (Health Sciences), 2021, 59(9): 89-96.
[8] LIU Ju, WU Qiang, YU Luyue, LIN Fengming. Brain tumor image segmentation based on deep learning[J]. Journal of Shandong University (Health Sciences), 2020, 1(8): 42-49, 73.
[9] LIN Haotian, LI Longhui, CHEN Jingjing. Research progress of artificial intelligence in pediatric eye diseases[J]. Journal of Shandong University (Health Sciences), 2020, 58(11): 11-16.
[10] QU Yi, ZHANG Huankai, SONG Xian, CHU Baorui. Research progress of artificial intelligence diagnostic systems for retinal diseases[J]. Journal of Shandong University (Health Sciences), 2020, 58(11): 39-44.
[11] CHEUNG Carol Y., RAN Anran. Artificial intelligence deep learning for glaucoma imaging: current status and prospects[J]. Journal of Shandong University (Health Sciences), 2020, 58(11): 24-32, 38.