A Convolutional Neural Network Using Anterior Segment Photos for Infectious Keratitis Identification
Authors Satitpitakul V, Puangsricharern A, Yuktiratna S, Jaisarn Y, Sangsao K, Puangsricharern V, Kasetsuwan N, Reinprayoon U, Kittipibul T
Received 16 September 2024
Accepted for publication 30 December 2024
Published 7 January 2025, Volume 2025:19, Pages 73–81
DOI https://doi.org/10.2147/OPTH.S496552
Checked for plagiarism Yes
Review by Single anonymous peer review
Peer reviewer comments 4
Editor who approved publication: Dr Scott Fraser
Vannarut Satitpitakul,1,2 Apiwit Puangsricharern,3 Surachet Yuktiratna,3 Yossapon Jaisarn,4 Keeratika Sangsao,4 Vilavun Puangsricharern,1,2 Ngamjit Kasetsuwan,1,2 Usanee Reinprayoon,1,2 Thanachaporn Kittipibul1,2
1Center of Excellence for Cornea and Stem Cell Transplantation, Department of Ophthalmology, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand; 2Excellence Center for Cornea and Stem Cell Transplantation, Department of Ophthalmology, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand; 3IM Impower Company Limited, Bangkok, Thailand; 4Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
Correspondence: Vannarut Satitpitakul, Department of Ophthalmology, King Chulalongkorn Memorial Hospital, 1873 Rama 4 Road, Pathumwan, Bangkok, 10330, Thailand, Tel +66-894959022, Email [email protected]
Purpose: To develop a comprehensive deep learning algorithm to differentiate between bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas.
Methods: This retrospective study collected slit-lamp photos of patients with bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas. Causative organisms of infectious keratitis were identified by either a positive culture or a clinical response to a single treatment. Convolutional neural networks (ResNet50, DenseNet121, VGG19) and an ensemble with probability weighting were used to develop the deep learning algorithm. Performance was reported as accuracy, precision, recall, F1 score, specificity, and AUC.
Results: A total of 6478 photos from 2171 eyes, comprising 2400 bacterial keratitis, 1616 fungal keratitis, 1545 non-infectious corneal lesion, and 917 normal cornea images, were collected from the hospital database. DenseNet121 demonstrated the best performance among the three convolutional neural networks, with an accuracy of 0.80 (95% CI 0.74–0.86). The ensemble technique showed higher performance than any single algorithm, with an accuracy of 0.83 (95% CI 0.78–0.88).
Conclusion: Convolutional neural networks with ensemble techniques provided the best performance in discriminating bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas. Our models can be used as a screening tool by non-ophthalmic health care providers and ophthalmologists for rapid provisional diagnosis of infectious keratitis.
Keywords: infectious keratitis, cornea ulcer, keratitis, convolutional neural network, deep learning algorithm
Introduction
Infectious keratitis is a potentially sight-threatening ocular condition and a leading cause of visual disability worldwide. A lack of eye care resources leads to delays in the diagnosis and treatment of infectious keratitis, which in turn lead to loss of ocular integrity, invasion of infectious organisms, and poor long-term visual prognosis,1,2 especially in low- and middle-income countries.
Differentiation between infectious keratitis, other corneal lesions, and normal corneas based on slit-lamp photography can be done by an ophthalmologist. However, it can be challenging for people outside health care and even for primary health care providers. After detecting infectious keratitis, it is also crucial to identify its etiology, as the treatment differs for each organism. Bacteria and fungi are responsible for the majority of infectious keratitis worldwide.3 The clinical presentations of bacterial and fungal keratitis can be similar, including pain, decreased vision, photophobia, redness, corneal infiltration, and corneal ulceration. Patient history and some clinical presentations may help differentiate the etiology of infection, but diagnosing the etiology of infectious keratitis remains challenging. Microbiological testing such as corneal scraping for culture, the gold standard test, can help identify the causative microorganism. Unfortunately, microbiological culture requires one to four weeks, and positive culture rates have been reported at only 31–71%.4–8 Furthermore, emerging techniques such as metagenomic DNA sequencing may be valuable for samples with negative microbiological cultures, but these methods are not widely available in most hospitals and come with high costs.9
Recently, artificial intelligence based on deep learning algorithms has been shown to automatically detect distinct features of disease when provided with a large dataset of labeled medical images.10,11 Several studies12–16 have used deep learning algorithms to identify the etiology of infectious keratitis. In this study, we aimed to corroborate the findings of these previous studies and to extend the algorithms' ability to differentiate between infectious keratitis and non-infectious eyes within the same models. We used a large dataset to train convolutional neural networks to differentiate bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas.
Materials and Methods
This study was approved by the Institutional Review Board of the Faculty of Medicine, Chulalongkorn University, and adhered to the tenets of the Declaration of Helsinki. The study was exempted from informed consent by the Institutional Review Board of our institution. Review of the deidentified retrospective data from 2007 to 2022 was approved by King Chulalongkorn Memorial Hospital. All data were accessed in April 2023.
Dataset and Labelling
Diffuse-illumination slit-lamp images taken with a Topcon D series slit lamp and a DC-4 digital camera attachment were collected. All photos were taken by corneal fellows using the same settings on the same slit-lamp biomicroscope and were stored in an in-patient clinic database. Image capture was driven by the EZ Capture software. Each image had a resolution of 2576 × 1934 pixels in JPG format. The dataset was divided into four categories: 1) bacterial keratitis, 2) fungal keratitis, 3) non-infectious corneal lesions, and 4) normal corneas. One hundred fifty-two out-of-focus images were screened and excluded by an ophthalmologist (V.S). Images from patients with infectious keratitis (bacterial and fungal keratitis) were taken during the active phase of infection, and the diagnosis was confirmed by positive microbiological results (culture, PCR, or pathology reports) or by complete response to a single treatment with either antibacterial or antifungal medications. Patients were excluded if mixed infections were reported by laboratory tests. The non-infectious corneal lesion category comprised images from patients with other corneal diseases, with or without inflammation, including peripheral ulcerative keratitis, corneal degeneration, corneal deposits, corneal tumors, bullous keratopathy, corneal scars, fibrous ingrowth, and limbal stem cell deficiency. All eligible photos were included in the study. Representative images of each category are shown in Figure 1.
Figure 1 Representative slit-lamp images of the four categories.
The final image dataset comprised 6478 diffuse-illumination slit-lamp images: 2400 images of bacterial keratitis from 183 patients, 1616 images of fungal keratitis from 89 patients, 1545 images of non-infectious corneal lesions from 1545 patients, and 917 images of normal corneas from 917 participants. There were 125 patients with bacterial keratitis (45.8%) and 74 patients with fungal keratitis (27.3%) diagnosed based on microbiological or pathological results. Fifty-eight patients (21.4%) and 15 patients (5.5%) were diagnosed with bacterial and fungal keratitis, respectively, based on complete response to a single treatment with either antibacterial or antifungal medications.
To ensure the models were not biased toward recognizing patients rather than categories, the data were randomly split into training, validation, and testing sets at the patient level, with a targeted ratio of 70:10:20, respectively. The training set consisted of 1609 images of bacterial keratitis, 1274 images of fungal keratitis, 1089 images of non-infectious corneal lesions, and 666 images of normal corneas. The validation set consisted of 196 images of bacterial keratitis, 120 images of fungal keratitis, 147 images of non-infectious corneal lesions, and 70 images of normal corneas, and the testing set consisted of 595 images of bacterial keratitis, 222 images of fungal keratitis, 309 images of non-infectious corneal lesions, and 181 images of normal corneas.
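The authors' split code is not published; as a minimal sketch of a patient-level 70:10:20 split, assuming a table with hypothetical `patient_id`, `filename`, and `label` columns, scikit-learn's `GroupShuffleSplit` can keep all images from the same patient in a single partition:

```python
# Minimal sketch of a patient-level 70:10:20 split (not the authors' code).
# Assumes a DataFrame `df` with hypothetical columns: patient_id, filename, label.
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(df, seed=42):
    # First carve off 20% of patients for the test set.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=seed)
    trainval_idx, test_idx = next(outer.split(df, groups=df["patient_id"]))
    trainval, test = df.iloc[trainval_idx], df.iloc[test_idx]

    # Then take 1/8 of the remaining patients (about 10% overall) for validation.
    inner = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=seed)
    train_idx, valid_idx = next(inner.split(trainval, groups=trainval["patient_id"]))
    return trainval.iloc[train_idx], trainval.iloc[valid_idx], test
```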
Model Selection and Development
In this study, we developed two deep learning models. First, we developed a model for automated classification of the slit-lamp images into three categories: bacterial keratitis, fungal keratitis, and others (non-infectious corneal lesions and normal corneas). Second, we developed a model for automated classification of the images into four categories: bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas.
We selected ResNet50, DenseNet121, and VGG19 as our neural network architectures for classifying the slit-lamp images. These models were chosen based on a literature review, which showed that they are among the most widely used and well-regarded architectures in the field. In addition, a probability weighting ensemble, designed to combine the predictions of multiple models, was included in the study. The ensemble calculated its prediction by weighting the individual models' predictions according to their confidence levels; the weights assigned to each model for each class were determined from their respective performance on the validation set.
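The exact weighting scheme is not specified beyond being derived from validation performance. The following sketch illustrates one plausible form, in which each model's softmax output is scaled by a hypothetical per-class weight (for example, its per-class validation F1) before summation:

```python
# Illustrative probability-weighting ensemble (one plausible reading of the paper,
# not the authors' exact scheme).
import numpy as np

def weighted_ensemble(probs_per_model, weights_per_model):
    """
    probs_per_model:   list of arrays, each (n_samples, n_classes) of softmax outputs.
    weights_per_model: list of arrays, each (n_classes,), e.g. per-class validation F1.
    Returns the predicted class index for each sample.
    """
    combined = np.zeros_like(probs_per_model[0])
    for probs, w in zip(probs_per_model, weights_per_model):
        combined += probs * w                              # broadcast per-class weights
    combined /= combined.sum(axis=1, keepdims=True)        # renormalize to probabilities
    return combined.argmax(axis=1)
```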
The study framework used the FastAI library, an extension of PyTorch, with three neural network architectures (ResNet50, DenseNet121, and VGG19) initialized with parameters pretrained on the ImageNet dataset. The processing pipeline resized the input images to 224 × 224 × 3 pixels and used a batch size of 64. Training used the cross-entropy loss function and the Adam optimizer, a gradient-based optimization algorithm. To assess the models' performance robustly, we adopted a 5-fold cross-validation strategy, training each network for a maximum of 20 epochs. Early stopping was employed based on the minimum validation loss with a patience of 5 epochs: if the validation loss did not decrease for 5 consecutive epochs, training was stopped and the model with the lowest validation loss was selected. Learning rates were determined using FastAI's learning rate finder. The experiments were conducted on an NVIDIA GeForce RTX 4090 GPU with 24 GB of VRAM.
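A minimal fastai training sketch consistent with this description (file paths, column names, and the DataFrame are hypothetical; the authors' code is not published) might look like this for one architecture and one fold:

```python
# Sketch of the described pipeline, not the authors' published code.
# `train_df` is assumed to be a DataFrame (e.g. from the split sketch above) with
# hypothetical 'filename', 'label', and boolean 'is_valid' columns.
from fastai.vision.all import *          # ImageDataLoaders, Resize, vision_learner, callbacks
from torchvision.models import densenet121

dls = ImageDataLoaders.from_df(
    train_df, path="images/",
    fn_col="filename", label_col="label", valid_col="is_valid",
    item_tfms=Resize(224), bs=64,        # 224x224 inputs, batch size 64
)

# fastai defaults to the Adam optimizer and cross-entropy loss for classification.
learn = vision_learner(dls, densenet121, metrics=accuracy)
lr = learn.lr_find().valley              # learning-rate finder suggestion

learn.fine_tune(
    20, base_lr=lr,                      # at most 20 epochs
    cbs=[EarlyStoppingCallback(monitor="valid_loss", patience=5),
         SaveModelCallback(monitor="valid_loss")],   # keep lowest-validation-loss model
)
```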
Accuracy, precision (positive predictive value), recall (sensitivity), F1 score, and specificity were reported as indices of the validity of the deep learning models for classifying bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas. The F1 score is the harmonic mean of precision and recall and reflects a model's overall performance. The macro F1 score was used as the main metric for measuring model performance, where 1 indicates the best value and 0 the worst. For model interpretation, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to produce visual explanations highlighting the areas of the image that had the greatest influence on the model's prediction.
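The paper does not publish its evaluation code; the reported metrics could be computed along these lines with scikit-learn, with specificity derived per class from the confusion matrix and then macro-averaged (an assumed implementation, not the authors'):

```python
# Sketch of the reported evaluation metrics (assumed implementation).
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

def evaluate(y_true, y_pred, n_classes=4):
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    # Per-class specificity: true negatives / (true negatives + false positives).
    specificities = []
    for k in range(n_classes):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp
        fn = cm[k, :].sum() - tp
        tn = cm.sum() - tp - fp - fn
        specificities.append(tn / (tn + fp))
    return {
        "accuracy":    accuracy_score(y_true, y_pred),
        "precision":   precision_score(y_true, y_pred, average="macro"),
        "recall":      recall_score(y_true, y_pred, average="macro"),
        "f1_macro":    f1_score(y_true, y_pred, average="macro"),
        "specificity": float(np.mean(specificities)),
    }
```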
Results
The Performance of Deep Learning Models to Differentiate Bacterial Keratitis, Fungal Keratitis, and Others
Among the three neural network architectures, the DenseNet121 algorithm provided the best performance in classifying bacterial keratitis, fungal keratitis, and others. It achieved an accuracy of 0.74 (95% CI, 0.69–0.79), a precision of 0.61 (95% CI, 0.51–0.72), a recall of 0.61 (95% CI, 0.48–0.73), an F1 score of 0.59 (95% CI, 0.49–0.69), a specificity of 0.81 (95% CI, 0.75–0.87), and an AUC of 0.71 (95% CI, 0.65–0.77). With the probability weighting ensemble technique, the models achieved higher performance: an accuracy of 0.77 (95% CI, 0.72–0.82), a precision of 0.65 (95% CI, 0.54–0.75), a recall of 0.63 (95% CI, 0.47–0.78), an F1 score of 0.60 (95% CI, 0.48–0.73), a specificity of 0.83 (95% CI, 0.75–0.90), and an AUC of 0.73 (95% CI, 0.66–0.79). The performance of each algorithm is detailed in Table 1.
Table 1 The Performance of Deep Learning Models to Differentiate Bacterial Keratitis, Fungal Keratitis, and Others
The Performance of Deep Learning Models to Differentiate Bacterial Keratitis, Fungal Keratitis, Non-Infectious Corneal Lesions, and Normal Corneas
The DenseNet121 algorithm again demonstrated the best performance in classifying bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas, with an accuracy of 0.80 (95% CI, 0.74–0.86), a precision of 0.64 (95% CI, 0.56–0.73), a recall of 0.66 (95% CI, 0.55–0.76), an F1 score of 0.63 (95% CI, 0.55–0.71), a specificity of 0.86 (95% CI, 0.81–0.91), and an AUC of 0.76 (95% CI, 0.69–0.82). The ensemble technique again outperformed each single algorithm, achieving an accuracy of 0.83 (95% CI, 0.78–0.88), a precision of 0.70 (95% CI, 0.61–0.78), a recall of 0.68 (95% CI, 0.57–0.79), an F1 score of 0.68 (95% CI, 0.59–0.76), a specificity of 0.88 (95% CI, 0.82–0.93), and an AUC of 0.78 (95% CI, 0.72–0.84). The performance of each algorithm is detailed in Table 2.
Table 2 The Performance of Deep Learning Models to Differentiate Bacterial Keratitis, Fungal Keratitis, Non-Infectious Corneal Lesions (NCL), and Normal Corneas
Heatmaps
Using Grad-CAM on the final convolutional layer of each network, we created heatmaps to visualize the regions of the corneal lesions that most influenced the model's prediction (Figure 2).
Figure 2 Heat maps.
The heatmaps highlight the areas of the input image that had the highest contribution to the model’s decision based on the activations of the final convolutional layer. Red regions indicate areas that strongly influenced the prediction, while blue regions signify areas with less influence. This visualization helps to better understand which parts of the image the model relied on for its classification.
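The Grad-CAM implementation is not published; a minimal PyTorch sketch of the technique for a DenseNet121 classifier (using placeholder ImageNet weights rather than the trained keratitis model, and assuming the last feature block as the target layer) is shown below. It pools the gradients of the predicted class over each channel of the final feature map, forms a weighted sum, and rescales the result to the input size.

```python
# Minimal Grad-CAM sketch (assumed setup, not the authors' exact code).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")   # placeholder weights
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# The output of the DenseNet121 feature extractor is a reasonable target layer.
model.features.register_forward_hook(fwd_hook)
model.features.register_full_backward_hook(bwd_hook)

def grad_cam(image_tensor, class_idx=None):
    """image_tensor: a normalized (1, 3, 224, 224) input."""
    logits = model(image_tensor)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Global-average-pool the gradients to get one weight per feature channel.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = (weights * activations["value"]).sum(dim=1)             # (1, H, W)
    cam = F.relu(cam)
    cam = F.interpolate(cam.unsqueeze(1), size=image_tensor.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # scale to [0, 1]
    return cam, class_idx
```

The normalized map can then be overlaid on the original slit-lamp image as a red-to-blue colormap, matching the description of the heatmaps above.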
Discussion
In this study, we evaluated the performance of deep learning systems with the addition of ensemble techniques for classifying bacterial keratitis, fungal keratitis, other corneal lesions, and normal corneas. Our findings showed that deep learning models could differentiate infectious from non-infectious keratitis and could also distinguish bacterial from fungal keratitis within the same models. Separating images of other corneal lesions from normal corneas before training helped the models achieve better performance. Among the three models, DenseNet121 showed the best performance, with an accuracy of 80%, followed by VGG19 (79%) and ResNet50 (76%). The performance of the models increased further with the probability weighting ensemble technique.
Rapid diagnosis of infectious keratitis is a crucial step in guiding confirmatory investigation and initiating treatment,17,18 and it leads to better visual outcomes. While differentiation between infectious and non-infectious corneas is challenging for non-ophthalmic health care workers, it can usually be done by ophthalmic health care workers. However, most ophthalmic health care workers are inexperienced in differentiating bacterial from fungal keratitis, which is important for treatment initiation. The development of a deep learning system for differentiating infectious keratitis from other corneal lesions and normal corneas would therefore benefit the health care system. In this study, we demonstrated the potential of such a system. However, its accuracy and precision, as well as the system itself, still need improvement for clinical deployment in future studies.
In this study, we trained the deep learning algorithms to replicate real-world conditions, in which patients with eye symptoms may have infectious keratitis, non-infectious corneal lesions, or normal corneas with other eye problems. We included the normal cornea group to ensure that the algorithms could recognize patients with eye symptoms but without any corneal lesions. The algorithms trained on combined images of other corneal lesions and normal corneas showed a recall of 63% and an accuracy of 77% with the ensemble technique, which was lower than the algorithm trained with these two categories separated, which showed a recall of 68% and an accuracy of 83%. We believe that separating the images of other corneal lesions and normal corneas better replicated the real-world situation and led to a more generalized learning process.
Given a large dataset, a deep learning algorithm can achieve better performance than conventional statistical models and human graders.19 Our study used a large dataset of 2400 images of bacterial keratitis, 1616 images of fungal keratitis, 1545 images of non-infectious corneal lesions, and 917 images of normal corneas. While most previous reports focused either on classifying infectious causes or on differentiating infectious keratitis from other conditions, the accuracy of our model in classifying bacterial and fungal keratitis and in differentiating between infectious corneas, non-infectious corneas, and normal corneas within the same model was 77–83% with the ensemble technique. We believe this accuracy is comparable to previous reports. Previous reports that aimed to classify causes of infectious keratitis demonstrated accuracies of 51–90.7%,12,14,20,21 while the study of Li Z et al, which aimed to differentiate keratitis, normal corneas, and others, showed an accuracy of 99.8%.22 Soleimani M et al demonstrated a model accuracy of 84% for classifying fungal and bacterial keratitis and also developed a model to differentiate keratitis caused by filamentous fungi from that caused by yeast. Previous reports12,20 have also shown that model accuracy was higher than that of ophthalmologists, whose accuracy ranged from 49% to 74%.
There are several limitations to our study. First, our models were trained using high-resolution images from single-center data, and model performance may change with images of different resolutions. However, our dataset has variable brightness, which should suit real-world applications. Second, the number of images in each class did not reflect the real-world prevalence of each condition, which would directly affect the precision (positive predictive value) of the deep learning algorithm. However, matching the class sizes to real-world prevalence, in which normal corneas are far more common, would lead to falsely high accuracy of the model. Third, some fungal keratitis cases have been reported to respond to antibiotics, so using a response to antibiotics to classify bacterial keratitis may be misleading in some cases.23 Lastly, because bacterial and fungal keratitis were the most common causes of infectious keratitis in our center, our deep learning algorithm was trained only on images of bacterial and fungal keratitis; therefore, it cannot be applied to other etiologies of infectious keratitis such as viral and parasitic keratitis. A future study incorporating patient history could be done to increase the performance of the model.
Conclusion
In conclusion, we developed a deep learning algorithm to differentiate between bacterial keratitis, fungal keratitis, non-infectious corneal lesions, and normal corneas. This is an initial step toward deep learning algorithms that can serve as a screening tool and a guide for the initial treatment of infectious keratitis. Future studies could improve accuracy and precision and develop the system for clinical deployment.
Funding
This research was funded by the Thailand Science Research and Innovation Fund, Chulalongkorn University (CUFRB65_hea(48)_055_30_36). The funder had no role in study design, data collection, analysis, manuscript writing, or publication.
Disclosure
The authors report no conflicts of interest in this work.
References
1. Cabrera-Aguas M, Khoo P, Watson S. Infectious keratitis: a review. Clin Exp Ophthalmol. 2022;50(5):543–562. doi:10.1111/ceo.14113
2. Bonzano C, Borroni D, Lancia A, et al. Doxycycline: from ocular rosacea to COVID-19 anosmia. New insight into the coronavirus outbreak. Front Med Lausanne. 2020;7:200. doi:10.3389/fmed.2020.00200
3. Flaxman SR, Bourne RRA, Resnikoff S, et al. Global causes of blindness and distance vision impairment 1990–2020: a systematic review and meta-analysis. Lancet Glob Health. 2017;5(12):e1221–e1234. doi:10.1016/S2214-109X(17)30393-5
4. Butler TKH, Spencer NA, Chan CCK, et al. Infective keratitis in older patients: a 4 year review, 1998-2002. Br J Ophthalmol. 2005;89(5):591–596. doi:10.1136/bjo.2004.049072
5. Keay L, Edwards K, Naduvilath T, et al. Microbial keratitis predisposing factors and morbidity. Ophthalmology. 2006;113(1):109–116. doi:10.1016/j.ophtha.2005.08.013
6. Gebauer A, McGhee CN, Crawford GJ. Severe microbial keratitis in temperate and tropical Western Australia. Eye. 1996;10(5):575–580. doi:10.1038/eye.1996.133
7. Leibovitch I, Lai TF, Senarath L, Selva D. Infectious keratitis in South Australia: emerging resistance to cephazolin. Eur J Ophthalmol. 2005;15(1):23–26. doi:10.1177/112067210501500104
8. Reinprayoon U, Sitthanon S, Kasetsuwan N, Chongthaleong A. Bacteriological findings and antimicrobial susceptibility pattern of isolated pathogens from visual threatening ocular infections. J Med Assoc Thai. 2015;98(Suppl 1):S70–S76.
9. Borroni D, Bonzano C, Sanchez-Gonxalens J, et al. Shotgun metagenomic sequencing in culture negative microbial keratitis. Eur J Ophthalmol. 2023;33(4):1589–1595. doi:10.1177/11206721221149077
10. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444. doi:10.1038/nature14539
11. Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88. doi:10.1016/j.media.2017.07.005
12. Kuo MT, Hsu BWY, Yin YK, et al. A deep learning approach in diagnosing fungal keratitis based on corneal photographs. Sci Rep. 2020;10(1):14424. doi:10.1038/s41598-020-71425-9
13. Hung N, Shih AKY, Lin C, et al. Using slit-lamp images for deep learning-based identification of bacterial and fungal keratitis: model development and validation with different convolutional neural networks. Diagnostics. 2021;11(7):1246. doi:10.3390/diagnostics11071246
14. Ghosh AK, Thammasudjarit R, Jongkhakornpong P, et al. Deep learning for discrimination between fungal keratitis and bacterial keratitis: deepKeratitis. Cornea. 2022;41(5):616–622. doi:10.1097/ICO.0000000000002830
15. Mayya V, Shevgoor SK, Kulkarni U, et al. Multi-scale convolutional neural network for accurate corneal segmentation in early detection of fungal keratitis. J Fungi. 2021;7(10):850. doi:10.3390/jof7100850
16. Redd TK, Prajna NV, Srinivasan M, et al. Image-based differentiation of bacterial and fungal keratitis using deep convolutional neural networks. Ophthalmol Sci. 2022;2(2):100119. doi:10.1016/j.xops.2022.100119
17. Gupta N, Tandon R. Investigative modalities in infectious keratitis. Indian J Ophthalmol. 2008;56(3):209–213. doi:10.4103/0301-4738.40359
18. Labbe A, Khammari C, Dupas B, et al. Contribution of in vivo confocal microscopy to the diagnosis and management of infectious keratitis. Ocul Surf. 2009;7(1):41–52. doi:10.1016/S1542-0124(12)70291-4
19. Rajula HSR, Verlato G, Manchia M, et al. Comparison of conventional statistical methods with machine learning in medicine: diagnosis, drug development, and treatment. Medicina. 2020;56(9):455. doi:10.3390/medicina56090455
20. Xu Y, Kong M, Xie W, et al. Deep sequential feature learning in clinical image classification of infectious keratitis. Engineering. 2021;7(7):1002–1010. doi:10.1016/j.eng.2020.04.012
21. Saini JS, Jain AK, Kumar S, et al. Neural network approach to classify infective keratitis. Curr Eye Res. 2003;27(2):111–116. doi:10.1076/ceyr.27.2.111.15949
22. Li W, Yang Y, Zhang K, et al. Dense anatomical annotation of slit lamp images improves the performance of deep learning for the diagnosis of ophthalmic disorders. Nat Biomed Eng. 2020;4(8):767–777. doi:10.1038/s41551-020-0577-y
23. Matoba AY. Fungal keratitis responsive to moxifloxacin monotherapy. Cornea. 2012;31(10):1206–1209. doi:10.1097/ICO.0b013e31823f766c
© 2025 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution-NonCommercial (unported, v3.0) License. Non-commercial uses of the work are permitted without further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of the Terms.