Journal of Multidisciplinary Healthcare » Volume 18
Role of Artificial Intelligence in Minimizing Missed and Undiagnosed Fractures Among Trainee Residents
Authors: Sadat-Ali M, Al Omar HK, Alneghaimshi MM, AlHossan AM, Baragabh AM
Received 26 February 2025
Accepted for publication 25 June 2025
Published 5 July 2025 Volume 2025:18 Pages 3851–3858
DOI https://doi.org/10.2147/JMDH.S525183
Checked for plagiarism Yes
Review by Single anonymous peer review
Peer reviewer comments 2
Editor who approved publication: Dr Charles V Pollack
Mir Sadat-Ali,1 Hussain Khalil Al Omar,2 Muath M Alneghaimshi,2 Abdallah M AlHossan,3 Abdullah M Baragabh2
1Department of Orthopaedic Surgery, Haifa Medical Complex, Alkhobar, Saudi Arabia; 2King Fahad Military Medical Complex, Ministry of Defense Health Services, Dhahran, Saudi Arabia; 3King Fahad Military Medical Complex, Ministry of Defense Health Services, Dhahran and Alfaisal University, Riyadh, Saudi Arabia
Correspondence: Mir Sadat-Ali, Haifa Medical Complex, 7200 King Khalid Road, AlKhozama, Alklhobar, 32424, Saudi Arabia, Tel +966505848281, Email [email protected]
Background and Objectives: Traumatic fractures and dislocations are missed in up to 10% of cases at the first line of defense, in the emergency room and by junior orthopedic residents in training. This review was conducted to evaluate the accuracy of AI-assisted fracture detection and to compare it with that of residents in training.
Methods: We searched the related electronic databases for English-language literature published between January 2015 and July 2023 (PubMed, Scopus, Web of Science, Cochrane Central, Ovid Medline, Ovid Embase, and the EBSCO Cumulative Index to Allied Health Literature) with the keywords artificial intelligence, fractures, dislocations, X-rays, radiographs, and missed diagnosis. The data extracted included the number of patients/images studied, the site of fractures analyzed, the algorithms used, the accuracy of the report based on the algorithm, sensitivity, specificity, area under the curve (AUC), and comparisons between the algorithm, junior orthopedic residents, emergency physicians, and board-certified radiologists.
Results: Twenty-seven publications fulfilled our objectives and were analyzed in detail. A total of 92,236 images were analyzed for fractures, showing an overall accuracy of correct diagnosis of 90.35 ± 6.88%, sensitivity of 90.08 ± 8.2%, specificity of 90.16 ± 7%, and AUC of 0.931 ± 0.06. The accuracy of the AI models was 94.24 ± 4.19% versus 85.18 ± 7.01% for orthopedic residents (P < 0.0001), with sensitivity of 92.15 ± 7.12% versus 86.38 ± 7.6% (P < 0.0001) and specificity of 93.77 ± 4.03% versus 87.05 ± 12.9% (P < 0.0001). A single study comparing 1703 hip fracture images between an AI model, orthopedic residents, and board-certified radiologists found accuracies of 98% versus 87% and 92%, respectively (P < 0.0001).
Conclusion: This review accentuates AI’s potential for accurate diagnosis of fractures. We believe AI algorithms should be incorporated into emergency rooms, where trainee and junior orthopedic residents could routinely use AI so that the incidence of missed fractures can be curtailed.
Keywords: artificial intelligence, diagnostic imaging, fractures, missed diagnosis, X-rays
Introduction
Fractures occur in people of all age groups; their incidence depends on the type of trauma, location, and associated injuries, and ranges between 733 and 4017 per 100,000 patient-years.1–3 Traumatic fractures are a major cause of morbidity and mortality; in one study, 23,917 individuals sustained 27,169 fractures, with 64.5% of the fractures occurring in women.1 Epidemiological data for fractures and dislocations in Saudi Arabia are not available.4,5 The number of fractures and dislocations is expected to increase with population growth.
Figure 1 PRISMA flowchart showing the final selection of analyzed studies.
Figure 2 Comparison between AI model and clinicians for accuracy, sensitivity, and specificity.
The reported incidence of missed diagnoses of fractures or dislocations on plain radiographs ranges between 3% and 10%,6–8 and this adversely affects the final outcome of recovery. The majority of errors take place in the emergency room, where radiographs are wrongly interpreted: some injuries may be subtle, but in most cases conspicuous injuries are missed because of inadequate training and sub-standard techniques of radiological evaluation.9 This may be more common among junior residents training in the emergency room and in orthopedics and traumatology. Unfortunately, such errors are not uncommon among trained radiologists either. In the USA, radiologists ranked sixth in malpractice claims,10–14 even though they make up only about 3.1% of the roughly 892,000 active physicians.15 It becomes mandatory to find ways to reduce this discrepancy on both fronts, at the trainee and the trained level, and one such avenue is the use of AI in the diagnosis of fractures and dislocations.
AI, a branch of computer science, enables machines to perform tasks that usually require human intelligence. Given a large input of images, AI can apply algorithms based on machine learning, deep learning, and convolutional neural networks to extract high-level information from them.16 Recent studies have suggested convincing accuracy in the diagnosis of fractures and dislocations using AI algorithms, and this review was carried out with the objective of assessing the accuracy, sensitivity, and specificity of AI algorithms in the diagnosis of fractures on plain radiographs.
Methods
We searched the related electronic databases for English-language literature published between January 2015 and July 2023 (PubMed, Scopus, Web of Science, Cochrane Central, Ovid Medline, Ovid Embase, and the EBSCO Cumulative Index to Allied Health Literature) with the keywords artificial intelligence, fractures, dislocations, X-rays, radiographs, and missed diagnosis. Articles were included if they reported primary research using validated AI algorithms for fracture detection; only studies with a direct comparison between AI algorithms and clinicians were included in the analysis. All other publications were excluded, including letters to the editor, conference presentations, and systematic reviews. EndNote™ 39 was used to tabulate the references and delete duplicates.
Data Extraction
We extracted the available information from the included studies fitting our inclusion criteria. The data extracted included the number of patients/images studied, the site of fractures analyzed, the algorithms used, the accuracy of the report based on the algorithm, sensitivity and specificity, area under the curve (AUC), and comparisons between the algorithm, junior orthopedic residents, emergency physicians, and board-certified radiologists.
Statistical Analysis
The diagnostic predictions of the different algorithms were validated using contingency tables. Regression analysis was performed between the different fracture sites and the influence of the algorithms. A p-value of <0.05 was accepted as statistically significant at a 95% confidence interval (CI). SPSS (Statistical Package for the Social Sciences, IBM Corp.) version 29 was used for data management and analysis.
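The contingency-table validation used for diagnostic prediction can be illustrated with a short sketch. This is not the authors' analysis code, and the counts below are hypothetical; it only shows how accuracy, sensitivity, and specificity, the metrics reported throughout this review, follow from a 2x2 table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return (accuracy, sensitivity, specificity) as percentages
    from the cells of a 2x2 contingency table: true positives (tp),
    false positives (fp), false negatives (fn), true negatives (tn)."""
    total = tp + fp + fn + tn
    accuracy = 100.0 * (tp + tn) / total
    sensitivity = 100.0 * tp / (tp + fn)   # fractures correctly detected
    specificity = 100.0 * tn / (tn + fp)   # normal films correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical reading of 1000 radiographs: 450 fractures detected,
# 50 fractures missed, 470 correctly ruled out, 30 false alarms.
acc, sens, spec = diagnostic_metrics(tp=450, fp=30, fn=50, tn=470)
print(f"accuracy={acc:.1f}% sensitivity={sens:.1f}% specificity={spec:.1f}%")
# prints: accuracy=92.0% sensitivity=90.0% specificity=94.0%
```

In this framing, a missed fracture is a false negative, so the sensitivity column is the one most directly tied to the review's concern with missed diagnoses.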
Results
We identified 2049 studies, of which 347 were duplicates and 1651 publications were excluded on the basis of the inclusion and exclusion criteria. Fifty-one studies were reviewed in depth as they nearly fulfilled the inclusion criteria, and 27 publications fulfilled our objectives and were analyzed in detail (Figure 1). A total of 88,996 images were analyzed for fractures (Table 1), showing an overall accuracy of correct diagnosis of 90.35 ± 6.88% (range 73.59–98%), sensitivity of 90.08 ± 8.2% (73.8–99%), specificity of 90.16 ± 7% (72–100%), and AUC of 0.931 ± 0.06 (0.72–0.994). The fractures analyzed were common fractures of the wrist, upper and lower limbs, and spine. All studies used internally and externally validated deep convolutional neural network (DCNN) algorithms. The majority of the studies limited their analysis to diagnoses based on a single radiographic view.
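The pooled summary format used here (mean ± SD with range across studies) can be sketched in a few lines. The per-study accuracies below are hypothetical placeholders, not the values extracted in this review; the point is only the form of the summary.

```python
# Illustrative only: summarizing hypothetical per-study accuracies
# as "mean ± SD (min–max)", the format used in this review.
from statistics import mean, stdev

study_accuracies = [95.0, 88.5, 92.3, 79.0, 97.2, 85.6]  # hypothetical

summary = (f"{mean(study_accuracies):.2f} \u00b1 {stdev(study_accuracies):.2f} "
           f"({min(study_accuracies)}\u2013{max(study_accuracies)})")
print(summary)  # prints: 89.60 ± 6.69 (79.0–97.2)
```

Note that this pools study-level means without weighting by the number of images per study; a weighted pooling would give larger studies proportionally more influence.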
Table 1 Characteristics of Studies, Number of Images Analyzed, Site of Fractures, Algorithms Used, Accuracy, Sensitivity, Specificity and Area Under Curve
Table 2 shows the analysis of 214,950 images in which the AI algorithms were compared with junior residents in training. The accuracy of the AI models was 94.24 ± 4.19% versus 85.18 ± 7.01% for orthopedic residents (P < 0.0001), with sensitivity of 92.15 ± 7.12% versus 86.38 ± 7.6% (P < 0.0001) and specificity of 93.77 ± 4.03% versus 87.05 ± 12.9% (P < 0.0001). Yamada et al (2020)40 compared an AI model with orthopedic residents and board-certified radiologists and found accuracies of 98% versus 87% and 92%, respectively (P < 0.0001). Figure 2 shows the comparison between the AI models and the clinicians for accuracy, sensitivity, and specificity.
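A difference in accuracy between two readers of the same image set, such as the 98% versus 87% in the hip fracture comparison, can be checked with a two-proportion z-test. The sketch below is illustrative only: the review does not state which test produced its P values, and the counts are reconstructed from the reported 1703 images and rounded percentages.

```python
# Illustrative two-proportion z-test (pooled estimate), pure stdlib.
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Counts reconstructed from ~98% vs ~87% correct on 1703 radiographs.
n = 1703
z, p = two_proportion_z(round(0.98 * n), n, round(0.87 * n), n)
print(f"z = {z:.2f}, p = {p:.2g}")  # z well above 10; p far below 0.0001
```

With samples this large, an 11-percentage-point gap yields a z-statistic far in the rejection region, consistent with the P < 0.0001 reported above.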
Table 2 Comparative Data Between the AI Models and Clinicians
Discussion
This review shows that the accuracy of AI algorithms in the diagnosis of fractures surpasses that of both trainee and trained clinicians. Secondly, the use of AI helped trainees and trained radiologists improve the accuracy, sensitivity, and specificity of fracture diagnosis. In this study, the different AI models showed an overall accuracy of correct diagnosis of 90.35 ± 6.88%, sensitivity of 90.08 ± 8.2%, specificity of 90.16 ± 7%, and AUC of 0.931 ± 0.06. These results were based on plain radiographs and included all limb and vertebral fractures.
In the recent past, there has been a considerable increase in the use of different AI models, particularly CNNs, in the arena of trauma and orthopedics. Individual models have conclusively shown that AI is accurate in the diagnosis of fractures: better than junior residents and, if not better, at least on par with senior radiologists. One caveat is that most of the reported data come from retrospective testing, and only a few studies are based on prospective clinical practice. The accuracy of fracture diagnosis varied across fracture sites. Murphy et al (2022)44 reported an analysis of hip fractures comparing an AI model with two trained, expert clinicians and found that the AI model was 19% more accurate than the physicians. Another report suggested that the sensitivity of correct diagnosis increases by over 10%: Lindsey et al (2018)33 reported that physicians’ average sensitivity in the diagnosis of fractures improved from 80.8% to 91.5% (95% CI, 89.3–92.9%) and their specificity from 87.5% to 93.9% (95% CI, 92.9–94.9%) when aided by a deep convolutional neural network; in addition, the physicians experienced a reduction in misinterpretation of around 47.0%. Duron et al (2021)42 concurred after their review, reporting that emergency room physicians improved their results with AI assistance from 61.3% to 74.3% (up 13.0%) and trained radiologists from 80.2% to 84.6% (up 4.3%). Distal radius fractures, which account for over 20% of all fractures, were studied using an ensemble AI model across three groups (AI, orthopedic surgeons, and radiologists), and the differences between the groups were statistically significant. The accuracy, sensitivity, and specificity of the attending orthopedic surgeons were 93.69%, 91.94%, and 95.44%, compared with 92.53%, 90.44%, and 94.62% for the radiologists. When the physician groups were compared with the AI ensemble tool, the tool achieved significantly higher scores of 97.75%, 97.13%, and 98.37%.43
Missed extremity fracture diagnoses have always been an issue in trauma practice, and extremity fractures are the second most commonly misdiagnosed injuries.45 The most common malpractice claims against radiologists involve inaccuracies in the reporting of extremity fractures.10,46,47 Orthopaedic residents are not immune to misinterpreting radiographs of extremity fractures: one study from the United Kingdom found that senior orthopaedic residents missed 4% of fractures on plain radiographs, made a wrong diagnosis in 7.8%, and in 12.6% diagnosed a fracture where there was none.48 Reports indicate that over the years the number of claims against orthopedic surgeons has increased, while complaints have remained comparatively stable.49 In the present litigation-oriented society, it is imperative that junior orthopedic residents have all the help available in making a correct fracture diagnosis and not miss even a minor injury. AI and its algorithms can never replace human doctors, but they can unquestionably complement them in improving the accuracy of fracture diagnosis.37 Moreover, adequate and timely training of trainee residents in radiographic interpretation is paramount; it has been reported that junior residents up to the third year of training are the most vulnerable to errors in radiographic interpretation.9
Our review has limitations. The number of studies included in the analysis is limited; publications are increasing by the day, and it is possible that we have not captured the most recent literature. Secondly, we could not add comparative data on the accuracy of clinicians unaided versus aided by AI tools in fracture diagnosis. Lastly, our conclusions are based on retrospective studies, as there were no prospective studies to compare with. The strength of the study is the large dataset compared, which suggests that the different AI models are more accurate than physicians.
In conclusion, this review, based on unbiased evaluations, suggests that the use of AI models can definitely help residents in training by increasing the accuracy of fracture diagnosis and reducing diagnostic errors. AI has produced cutting-edge tools that need further evaluation so that procurement authorities in hospitals can integrate AI into healthcare and help physicians at all levels improve the correctness of fracture diagnosis and prevent the complications of delayed diagnosis.
Disclosure
The authors report no conflicts of interest in this work.
References
1. Bergh C, Wennergren D, Möller M, Brisby H. Fracture incidence in adults in relation to age and gender: a study of 27,169 fractures in the Swedish fracture register in a well-defined catchment area. PLoS One. 2020;15(12):e0244291. doi:10.1371/journal.pone.0244291
2. Amin S, Achenbach SJ, Atkinson EJ, Khosla S, Melton LJ. Trends in fracture incidence: a population-based study over 20 years. J Bone Miner Res. 2014;29(3):581–589. doi:10.1002/jbmr.2072
3. Curtis EM, van der Velde R, Moon RJ, et al. Epidemiology of fractures in the United Kingdom 1988-2012: variation with age, sex, geography, ethnicity and socioeconomic status. Bone. 2016;87:19–26. doi:10.1016/j.bone.2016.03.006
4. Sadat-Ali M, Ahlberg A. Fractured neck of the femur in young adults. Injury. 1992;23:311–313. doi:10.1016/0020-1383(92)90176-S
5. Sadat-Ali M, AlOmran AS, Azam MQ, et al. Epidemiology of fractures and dislocations among urban communities of Eastern Saudi Arabia. Saudi J Med Med Sci. 2015;3:54–57. doi:10.4103/1658-631X.149682
6. Wei CJ, Tsai WC, Tiu CM, Wu HT, Chiou HJ, Chang CY. Systematic analysis of missed extremity fractures in emergency radiology. Acta Radiol. 2006;47(7):710–717. doi:10.1080/02841850600806340
7. Williams SM, Connelly DJ, Wadsworth S, Wilson DJ. Radiological review of accident and emergency radiographs: a 1-year audit. Clin Radiol. 2000;55(11):861–865. doi:10.1053/crad.2000.0548
8. Hallas P, Ellingsen T. Errors in fracture diagnoses in the emergency department characteristics of patients and diurnal variation. BMC Emerg Med. 2006;6(1):4. doi:10.1186/1471-227X-6-4
9. Pinto A, Berritto D, Russo A, et al. Traumatic fractures in adults: missed diagnosis on plain radiographs in the Emergency Department. Acta Biomed. 2018;89(1–S):111–123.
10. Whang JS, Baker SR, Patel R, Luk L, Castro A. The causes of medical malpractice suits against radiologists in the United States. Radiology. 2013;266:548–554. doi:10.1148/radiol.12111119
11. De Filippo M, Pesce A, Barile A, et al. Imaging of postoperative shoulder instability. Musculoskeletal Surg. 2017;101:15–22. doi:10.1007/s12306-017-0461-4
12. Splendiani A, Bruno F, Patriarca L, et al. Thoracic spine trauma: advanced imaging modality. Radiol Med. 2016;121:780–792. doi:10.1007/s11547-016-0657-y
13. de Filippo M, Azzali E, Pesce A, et al. CT arthrography for evaluation of autologous chondrocyte and chondral-inductor scaffold implantation in the osteochondral lesions of the talus. Acta Biomed. 2016;87:51–56.
14. Splendiani A, Perri M, Grattacaso G, et al. Magnetic resonance imaging (MRI) of the lumbar spine with dedicated G-scan machine in the upright position: a retrospective study and our experience in 10 years with 4305 patients. Radiol Med. 2016;121:38–44. doi:10.1007/s11547-015-0570-9
15. Association of American Medical Colleges. Number of people per active physician by specialty, 2021. Available from: https://www.aamc.org/data-reports/workforce/data/number-people-active-physician-specialty-2021.
16. Bizzo BC, Almeida RR, Michalski MH, Alkasab TK. Artificial intelligence and clinical decision support for radiologists and referring providers. J Am Coll Radiol. 2019;16(9 Pt B):1351–1356. doi:10.1016/j.jacr.2019.06.010
17. Kim DH, MacKinnon T. Artificial intelligence in fracture detection: transfer learning from deep convolutional neural networks. Clin Radiol. 2018;73(5):439–445. doi:10.1016/j.crad.2017.11.015
18. Adams M, Chen W, Holcdorf D, McCusker MW, Howe PD, Gaillard F. Computer vs human: deep learning versus perceptual training for the detection of neck of femur fractures. J Med Imaging Radiat Oncol. 2019;63(1):27–32. doi:10.1111/1754-9485.12828
19. Cheng CT, Ho TY, Lee TY, et al. Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. Eur Radiol. 2019;29(10):5469–5477. doi:10.1007/s00330-019-06167-y
20. Derkatch S, Kirby C, Kimelman D, Jozani MJ, Davidson JM, Leslie WD. Identification of vertebral fractures by convolutional neural networks to predict nonvertebral and hip fractures: a registry-based cohort study of dual X-ray absorptiometry. Radiology. 2019;293(2):405–411. doi:10.1148/radiol.2019190201
21. Rayan JC, Reddy N, Kan JH, Zhang W, Annapragada A. Binomial classification of pediatric elbow fractures using a deep learning multi-view approach emulating radiologist decision making. Radiol Artif Intell. 2019;1(1):e180015. doi:10.1148/ryai.2019180015
22. Starosolski ZA, Kan H, Annapragada AV. CNN-based radiographic acute tibial fracture detection in the setting of open growth plates. bioRxiv preprint bioRxiv:506154. Available from: https://www.biorxiv.org/content/10.1101/506154.
23. Choi JW, Cho YJ, Lee S, et al. Using a dual-input convolutional neural network for automated detection of pediatric supracondylar fracture on conventional radiography. Invest Radiol. 2020;55(2):101–110. doi:10.1097/RLI.0000000000000615
24. Jiménez-Sánchez A, Kazi A, Albarqouni S, et al. Precise proximal femur fracture classification for interactive training and surgical planning. Int J CARS. 2020;15(5):847–857. doi:10.1007/s11548-020-02150-x
25. Mawatari T, Hayashida Y, Katsuragawa S, et al. The effect of deep convolutional neural networks on radiologists’ performance in the detection of hip fractures on digital pelvic radiographs. Eur J Radiol. 2020;130:109188. doi:10.1016/j.ejrad.2020.109188
26. Chen HY, Hsu BW, Yin YK, et al. A human-algorithm integration system for hip fracture detection on plain radiography: system development and validation study. JMIR Med Inform. 2020;8(11):e19416. doi:10.2196/19416
27. Cheng CT, Wang Y, Chen HW, et al. A scalable physician-level deep learning algorithm detects universal trauma on pelvic radiographs. Nat Commun. 2021;12(1):1066. doi:10.1038/s41467-021-21311-3
28. Raisuddin AM, Vaattovaara E, Nevalainen M, et al. Critical evaluation of deep neural networks for wrist fracture detection. Sci Rep. 2021;11(1):6006. doi:10.1038/s41598-021-85570-2
29. Yoon AP, Lee YL, Kane RL, Kuo CF, Lin C, Chung KC. Development and validation of a deep learning model using convolutional neural networks to identify scaphoid fractures in radiographs. JAMA Network Open. 2021;4(5):e216096. doi:10.1001/jamanetworkopen.2021.6096
30. Grauhan NF, Niehues SM, Gaudin RA, et al. Deep learning for accurately recognizing common causes of shoulder pain on radiographs. Skeletal Radiol. 2022;51(2):355–362. doi:10.1007/s00256-021-03740-9
31. Ozkaya E, Topal FE, Bulut T, Gursoy M, Ozuysal M, Karakaya Z. Evaluation of an artificial intelligence system for diagnosing scaphoid fracture on direct radiography. Eur J Trauma Emerg Surg. 2022;48(1):585–592. doi:10.1007/s00068-020-01468-0
32. Chung SW, Han SS, Lee JW, et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop. 2018;89(4):468–473. doi:10.1080/17453674.2018.1453714
33. Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, et al. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A. 2018;115(45):11591–11596. doi:10.1073/pnas.1806905115
34. Wang Y, Lu L, Cheng CT, et al. Weakly supervised universal fracture detection in pelvic x-rays. In: Shen D, Liu T, Peters TM, et al. editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. MICCAI 2019. Lecture Notes in Computer Science. Vol. 11769, Cham, Switzerland: Springer; 2019:459–467.
35. Blüthgen C, Becker AS, Vittoria de Martini I, Meier A, Martini K, Frauenfelder T. Detection and localization of distal radius fractures: deep learning system versus radiologists. Eur J Radiol. 2020;126:108925. doi:10.1016/j.ejrad.2020.108925
36. Chen HY, Hsu BW, Yin YK, et al. Application of deep learning algorithm to detect and visualize vertebral fractures on plain frontal radiographs. PLoS One. 2021;16(1):e0245992. doi:10.1371/journal.pone.0245992
37. Cheng CT, Chen CC, Cheng FJ, et al. Human-algorithm integration system for hip fracture detection on plain radiography: system development and validation study. JMIR Med Inform. 2020;8(11):e19416.
38. Krogue JD, Cheng KV, Hwang KM, et al. Automatic Hip fracture identification and functional subclassification with deep learning. Radiol Artif Intell. 2020;2(2):e190023. doi:10.1148/ryai.2020190023
39. Murata K, Endo K, Aihara T, et al. Artificial intelligence for the detection of vertebral fractures on plain spinal radiography. Sci Rep. 2020;10(1):20031. doi:10.1038/s41598-020-76866-w
40. Yamada Y, Maki S, Kishida S, et al. Automated classification of hip fractures using deep convolutional neural networks with orthopedic surgeon-level accuracy: ensemble decision-making with antero-posterior and lateral radiographs. Acta Orthop. 2020;91(6):699–704. doi:10.1080/17453674.2020.1803664
41. Yu JS, Yu SM, Erdal BS, et al. Detection and localisation of hip fractures on anteroposterior radiographs with artificial intelligence: proof of concept. Clin Radiol. 2020;75(3):237.e1–237.e9. doi:10.1016/j.crad.2019.10.022
42. Duron L, Ducarouge A, Gillibert A, et al. Assessment of an AI aid in detection of adult appendicular skeletal fractures by emergency physicians and radiologists: a multicenter cross-sectional diagnostic study. Radiology. 2021;300(1):120–129. doi:10.1148/radiol.2021203886
43. Zhang J, Li Z, Li H, et al. Deep learning assisted diagnosis system: improving the diagnostic accuracy of distal radius fractures. Front Med. 2023;10:1224489. PMID: 37663656; PMCID: PMC10471443. doi:10.3389/fmed.2023.1224489
44. Murphy EA, Ehrhardt B, Gregson CL, et al. Machine learning outperforms clinical experts in classification of Hip fractures. Sci Rep. 2022;12(1):2058. PMID: 35136091; PMCID: PMC8825848. doi:10.1038/s41598-022-06018-9
45. Porrino JA, Maloney E, Scherer K, Mulcahy H, Ha AS, Allan C. Fracture of the distal radius: epidemiology and premanagement radiographic characterization. Am J Roentgenol. 2014;203:551–559. doi:10.2214/AJR.13.12140
46. Guly HR. Diagnostic errors in an accident and emergency department. Emerg Med J. 2001;18:263–269. doi:10.1136/emj.18.4.263
47. Festekjian A, Kwan KY, Chang TP, Lai H, Fahit M, Liberman DB. Radiologic discrepancies in children with special healthcare needs in a pediatric emergency department. Am J Emerg Med. 2018;36:1356–1362. doi:10.1016/j.ajem.2017.12.041
48. Sharma H, Bhagat S, Gaine WJ. Reducing diagnostic errors in musculoskeletal trauma by reviewing non-admission orthopaedic referrals in the next-day trauma meeting. Ann R Coll Surg Engl. 2007;89(7):692–695. doi:10.1308/003588407X205305
49. Khan IH, Jamil W, Lynn SM, Khan OH, Markland K, Giddins G. Analysis of NHSLA claims in orthopedic surgery. Orthopedics. 2012;35:726–731. doi:10.3928/01477447-20120426-28
© 2025 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution-NonCommercial (unported, 4.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.