
Application of machine learning in ophthalmic imaging modalities

Abstract

In clinical ophthalmology, a variety of image-related diagnostic techniques have begun to offer unprecedented insights into eye diseases based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the human multilayered neuronal system, has shown astonishing success in visual and auditory recognition tasks. In these tasks, AI can analyze digital data in a comprehensive, rapid and non-invasive manner. Bioinformatics has become a focus, particularly in the field of medical imaging, where it is driven by enhanced computing power and cloud storage, as well as the utilization of novel algorithms and the generation of data in massive quantities. Machine learning (ML) is an important branch of AI. The overall potential of ML to automatically pinpoint, identify and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnoses and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly regarding its applications in ophthalmic imaging modalities.

Background

Medical imaging is important in clinical diagnosis and individualized treatment of eye diseases [1,2,3]. This technology can provide high-resolution information regarding anatomic and functional changes. In recent years, imaging techniques have developed rapidly, together with therapeutic advances [4]. However, with the increasing sophistication of imaging technology, comprehension and management of eye disease have become more complex due to the large numbers of images and findings that can be recorded for individual patients, as well as the hypotheses supported by these data. Thus, each patient has become a “big data” challenge [5].

Conventional diagnostic methods greatly depend on physicians’ professional experience and knowledge, which can lead to a high rate of misdiagnosis and wastage of medical data [6]. The new era of clinical diagnostics and therapeutics urgently requires intelligent tools to manage medical data safely and efficiently. Artificial intelligence (AI) has been widely applied across various contexts in medicine (Fig. 1). In particular, collaborations between medical imaging and AI disciplines have proven highly productive in the fields of radiology, dermatology and pathology [7].

Fig. 1 The applications of AI techniques in the eye clinic

AI has improved the performance of many challenging tasks in medical imaging, such as diagnosis of cutaneous malignancies using skin photographs [8], detection of lung cancer using chest images [9], prediction of cardiovascular disease risk using computed tomography (CT) [10], detection of pulmonary embolism using CT angiography [11], analysis of breast histopathology using tissue sections [12], detection of polyps using virtual colonoscopy [13], diagnosis of glioma using magnetic resonance imaging (MRI) [14], and diagnosis of neurological disease (e.g., Alzheimer’s disease) using functional MRI [15,16,17]. Furthermore, AI has a considerable impact in ophthalmology, mainly through accurate and efficient image interpretation [18].

The rapid rise of AI requires ophthalmologists to embrace intelligent algorithms and gain a greater understanding of the technology’s abilities, thus enabling them to evaluate and apply AI in a constructive manner. Here, we comprehensively review the general applications of ML technology in ophthalmic imaging modalities, including the three most commonly used methods: fundus photography (FP), optical coherence tomography (OCT) and slit-lamp imaging. Throughout the review, we introduce basic definitions of terms commonly used when discussing ML applications, as well as the workflow for building AI models and an overview of the balance between the challenges and opportunities for ML technology in ophthalmic imaging.

Main text

From machine learning (ML) to deep learning (DL)

AI refers to the field of computer science that mimics human cognitive function [19]. ML is a subfield of AI that allows computers to learn from a set of data and subsequently make predictions; these processes can be classified as supervised and unsupervised learning.

In supervised learning, a machine is trained with input data previously labeled by humans to predict the desired outcome, such that it can solve classification and regression problems. However, this approach is time-consuming because it requires a considerable amount of data to be labeled manually. Conversely, in unsupervised learning, a machine is provided input data that are not explicitly labeled; the machine is then permitted to identify structures and patterns from the set of objects, without human influence. Conventional ML algorithms include the decision tree [20], naive Bayes algorithm [21], random forest (RF) [22], support vector machine (SVM) [23, 24] and k-nearest neighbor (KNN) [25] (Table 1). Although these algorithms can perform well on small datasets, their reliance on manual feature selection makes them prone to convergence failure and to overfitting the training dataset, which limits their application.
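As a toy illustration of the supervised workflow described above, the following sketch trains two of the conventional classifiers mentioned (an SVM and an RF) on a small labeled feature table. It assumes scikit-learn is available; the features and labels are synthetic placeholders, not real ophthalmic data.

```python
# Minimal sketch of supervised learning with conventional ML classifiers.
# Assumes scikit-learn; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                     # hypothetical hand-crafted features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100)):
    clf.fit(X_train, y_train)                                  # learn from labeled examples
    print(type(clf).__name__, clf.score(X_test, y_test))       # accuracy on held-out data
```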

Table 1 Representative algorithms in ML and DL

Among the techniques comprising ML, one of the most promising is DL (Fig. 2) [26]. DL mimics the operation of the human brain using multiple layers of artificial neural networks that can generate automated predictions from input data. DL currently has central roles in various tasks, including image recognition (e.g., facial recognition in Facebook, image search in Google), virtual assistants (e.g., Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana), and diagnostic assistant systems (e.g., IBM Watson for Oncology). Representative DL algorithms are the deep belief network (DBN) [27, 28], convolutional neural network (CNN) [29] and recurrent neural network (RNN) [30, 31] (Table 1). Compared with conventional ML, the architecture of DL uses more hidden layers to decode raw image data without the need to handcraft specific features or apply feature selection algorithms; this improves efficiency and allows more complex non-linear patterns in the data to be explored (Fig. 2).
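The sketch below shows, assuming PyTorch is available, how a small CNN stacks convolutional layers so that raw pixels are transformed into increasingly abstract feature maps without hand-crafted features. The architecture and layer sizes are purely illustrative, not any model described in the cited studies.

```python
# Minimal CNN sketch in PyTorch: stacked convolutional layers learn features
# directly from raw pixels; the architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features (edges, lines)
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features (shapes)
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, n_classes)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN()
logits = model(torch.randn(1, 3, 224, 224))  # one dummy RGB image
print(logits.shape)                          # torch.Size([1, 2])
```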

Fig. 2 The relationship among the subsets of AI. Machine learning techniques emerged in the 1980s, while deep learning techniques have been widely applied since the 2010s. Abbreviations: ML, machine learning; DL, deep learning

Visual representations of some common algorithms in ML and DL are shown in Fig. 3. The most commonly applied algorithm in image recognition is the CNN. The most widely used existing CNN architectures include LeNet [32], AlexNet [33], ResNet [34] and GoogleNet [35] (Fig. 4), which have shown robust performance in the ImageNet Large Scale Visual Recognition Challenge [36] and have been successfully applied in facial detection [37], real-time language translation, robot navigation and pedestrian detection [38]. There are various open source tools for development and implementation of AI algorithms; these tools are compatible with many modern programming languages. We summarize some of the most commonly used libraries for DL in Fig. 5.

Fig. 3 Schematic diagram of common algorithms in AI. a SVMs are supervised learning models used for classification and regression of data. b RFs are an ensemble learning method that uses multiple decision trees to train on and predict samples. c CNNs are composed of layers of stacked neurons that can learn complex functions. d Reinforcement learning algorithms are used to train the actions of an agent on an environment. Abbreviations: SVM, support vector machine; RF, random forest; CNN, convolutional neural network

Fig. 4 Top-5 error of representative CNN algorithms. Top-5 error: the probability that none of the five most probable labels given by the image classification algorithm is correct. Abbreviations: VGG, visual geometry group; GoogleNet, google inception net; ResNet, residual network

Fig. 5 Open source DL research libraries with major programming languages, including Python, C++, R and Java. Python libraries tend to be the most popular and can be used to implement recently available algorithms. Abbreviations: DL, deep learning

AI model building process

DL neural networks use convolutional parameter layers to learn filters iteratively. These filters extract hierarchical feature maps from input images, learning the intricate structures of complicated features (such as shapes) from simpler features (such as lines), and the network gives the desired classification as output. The convolutional layers are stacked in sequence, so that each layer transforms the input it receives and propagates the output information to the next layer.

During the training process, the parameters (weights) of the neural network are initially set to random values. A loss function is used to estimate the degree of inconsistency between the value predicted by the model and the true value. Next, the output provided by the model is compared with the corresponding labels in the training set. Then, the parameters are slightly modified by the optimizer so that they approximate or reach the optimal values, thereby minimizing the loss function. In general, the smaller the loss, the better the model’s robustness. This process is repeated many times, and the model “learns” how to accurately calculate the features from the pixel intensities for all images in the training set. The most commonly used network is the CNN, which first merges nearby pixels into local features and then aggregates them into global features.
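A minimal sketch of this loop, assuming PyTorch: the loss quantifies the gap between prediction and label, and the optimizer nudges the randomly initialized parameters to reduce it. The stand-in model, data and hyperparameters are illustrative only.

```python
# Sketch of the iterative training loop: forward pass, loss, backward pass,
# parameter update. Model, data and hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()                 # measures prediction/label mismatch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 64, 64)              # dummy batch of labeled images
labels = torch.randint(0, 2, (8,))

for step in range(100):                         # repeated many times over the training set
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)       # how far predictions are from the truth
    loss.backward()                             # gradients of the loss w.r.t. parameters
    optimizer.step()                            # slightly adjust parameters to reduce loss
print(f"final loss: {loss.item():.4f}")
```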

Figure 6a represents an abstraction of the algorithmic pipeline. The model characterizes the diagnosis of a disease based on an expert-labelled ground truth. The steps for building an AI model include pre-processing the image data, training, validating and testing the model on a large-scale dataset, and finally evaluating the performance of the trained model.

Fig. 6 A diagram showing data processing. a The typical workflow of the AI experimental process. b Illustration of the k-fold cross-validation technique (k = 10). Abbreviation: AUC, area under the curve

Image data preprocessing

To unify images from different sources and rearrange them into a uniform format, multiple preprocessing steps can be performed [39]: (1) Data cleaning: the process of reviewing and verifying data, which removes duplicate information and corrects existing errors. (2) Data normalization: the original data are resized to a common scale that is suitable for comprehensive comparative evaluation. (3) Noise reduction: excessive noise in the image data will greatly slow convergence during training and can even reduce the accuracy of the trained model.
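A minimal sketch covering the three steps listed above, assuming OpenCV, NumPy and the Python standard library; the file paths, target size and blur settings are hypothetical.

```python
# Sketch of the preprocessing steps: de-duplication (cleaning), normalization
# to a common scale, and simple noise reduction. Parameters are hypothetical.
import hashlib
from pathlib import Path
import cv2
import numpy as np

def preprocess(path, size=(224, 224)):
    img = cv2.imread(path)                      # load a fundus/OCT image from disk
    img = cv2.resize(img, size)                 # (2) resize to a common scale
    img = cv2.GaussianBlur(img, (3, 3), 0)      # (3) mild noise reduction
    return img.astype(np.float32) / 255.0       # (2) normalize intensities to [0, 1]

def is_duplicate(path, seen_hashes):
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()  # (1) data cleaning
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False
```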

Training, validation and testing

To achieve better performance, the base dataset is randomly split into two subsets: one for model building and one for testing the model’s performance. The former subset is further partitioned into a training dataset and a validation dataset. The training dataset is used to develop the learning model, the validation dataset is used for parameter selection and tuning, and the test dataset is used to evaluate the model.

During the training process, one way to optimize the model and estimate the accuracy of the algorithm when training samples are insufficient is to use the cross-validation method [40]. All data for modeling are randomly partitioned into k equal-sized complementary subsamples. In each round, k − 1 folds are selected as the training set and the remaining fold is selected as the validation set. This process is then repeated across k iterations, using a different fold for validation each time (Fig. 6b).
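The sketch below, assuming scikit-learn, shows both the held-out split and the k-fold procedure (k = 10) described above on placeholder data; in practice X would hold images or extracted features and y the clinical labels.

```python
# Sketch of dataset splitting and k-fold cross-validation (k = 10).
# X and y are placeholders for image features and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, train_test_split

X = np.random.rand(500, 20)
y = np.random.randint(0, 2, 500)

# Hold out a test set; the rest is used for model building (training + validation).
X_build, X_test, y_build, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scores = []
for train_idx, val_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X_build):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_build[train_idx], y_build[train_idx])               # k-1 folds for training
    scores.append(clf.score(X_build[val_idx], y_build[val_idx]))  # 1 fold for validation
print(f"mean cross-validation accuracy: {np.mean(scores):.3f}")
```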

Evaluation metrics

After building the best learning model, evaluation indicators including accuracy, sensitivity and specificity are compared (Table 2). Furthermore, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are vital objective indicators in classification tasks. The AUC measures the accuracies of the positive and negative samples at the same time. The closer the ROC curve lies to the upper-left corner, the higher the AUC and the better the model’s performance.
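A minimal sketch of computing these metrics with scikit-learn; the ground-truth labels and model outputs below are placeholders.

```python
# Sketch of common evaluation metrics: accuracy, sensitivity, specificity, AUC.
# Ground-truth labels and predicted probabilities are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # model output probabilities
y_pred = (y_prob >= 0.5).astype(int)                          # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)            # true positive rate
specificity = tn / (tn + fp)            # true negative rate
auc = roc_auc_score(y_true, y_prob)     # area under the ROC curve
print(accuracy, sensitivity, specificity, auc)
```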

Table 2 Common metrics in AI model evaluation

Applications of AI in ophthalmic imaging

Recently, there has been a considerable increase in the use of AI techniques for medical imaging, from processing to interpretation. MRI and CT are collectively used in more than 50% of current articles involving applications of AI in radiology, electroencephalography, electrocardiography, X-ray imaging, ultrasound imaging and angiography (Fig. 7a). Among the applications of AI in ophthalmology, research efforts have focused on diseases with high incidences, such as diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and cataract (Fig. 7b).

Fig. 7 Publication statistics of AI applications. a Publication statistics of AI applications in different imaging modalities per year indexed in the PubMed database (Jan 1st, 2016 to Oct 1st, 2019). b Publication statistics of AI applications in diagnosing different ophthalmological diseases per year indexed in the PubMed database (Jan 1st, 2016 to Oct 1st, 2019)

AI may be useful for alleviating clinical workloads, as it allows physicians with minimal experience to screen for diseases and detect them in an efficient and objective manner. In the field of ophthalmology, AI has gained increasing interest because it can be used to detect clinically significant features for diagnostic and prognostic purposes. A number of studies have compared the performance of experts and algorithms across different ophthalmic imaging modalities.

Fundus photograph (FP)

FP is a common ophthalmic imaging technique, in which optical cameras are used to obtain enlarged images of retinal tissues; these retinal photographs are suitable for monitoring, diagnosis, and treatment planning with respect to eye diseases. Various studies have involved the application of AI technology with FP to the diagnosis, grading and monitoring of eye diseases [41, 42].

All diabetic patients need regular retinal screening for early detection and timely treatment of DR [43, 44], which is a leading cause of preventable blindness that affects millions of people worldwide [45]. Specific hallmarks in early DR including exudates [46,47,48], cotton-wool spots [49, 50], macular edema [51] and micro-aneurysms [52, 53] in the retina can be viewed by FP and identified by AI methods. Most model outputs belong to binary or multi-class classification tasks. Gulshan et al. were the first to use a deep CNN (DCNN) for automated detection of DR [54]. In another study, with a large-scale dataset (494,661 retinal images), a DL system was developed to automatically detect DR, glaucoma, and AMD with respective AUCs of 93.6, 94.2 and 93.1% [55]. Keel and colleagues developed a DL-based DR screening model for use in an endocrinology outpatient clinic, which resulted in 96% patient satisfaction [56].

Generally, conventional FP involves the acquisition of one-field 45° photographs of the posterior pole of the retina, although the entire retina can be observed at an angle of 230° [57]. Takahashi et al. combined fundus images from four different shooting directions and trained the GoogleNet DCNN on either single fundus images or the four-field synthetic fundus photographs [58]. The results showed that accuracy was higher for the synthetic fundus images, suggesting that wider ranges of fundus images should be used for DR diagnosis. Recently, ultra-wide-field scanning laser ophthalmoscopy was introduced; this technology enables scanning of 80% of the fundus area [59]. Diagnosis with wide-field FP is an emerging trend in AI diagnostic research, and more advanced algorithms are needed to support its continued growth.

AI can be used in clinical practice to analyze retinal images for disease screening. The Google Chips and Amazon DeepLens cameras allow embedding of advanced algorithms within devices, which is a useful approach in various medical fields [60]. Rajalakshmi et al. combined an AI-based grading algorithm with a smartphone-based retinal imaging device for potential use in mass retinal screening of people with type 2 diabetes [61]. In 2018, IDx-DR was approved by the United States Food and Drug Administration (FDA) as the first fully autonomous AI-based DR diagnostic system [62]; this study is a milestone as the first prospective assessment of AI in a real-world setting. We summarize the medical AI products approved by the FDA in Table 3.

Table 3 FDA cleared medical AI products

In addition, FP can be used to diagnose other retinal diseases, such as glaucoma, retinopathy of prematurity (ROP), and AMD [63,64,65,66,67]. Recent efforts have aimed to automate pupillary tracking by integrating a motor into the fundus camera. A Google Brain model has been shown to predict subjects’ cardiovascular risk factors, including age, systolic blood pressure, hemoglobin A1c, and sex, from a single fundus image, a task that is impossible for professional clinicians [68].

Important issues in the global implementation of ML/DL are big data sharing and open access to scientific data. We have summarized the most commonly used public datasets of fundus photographs for model training (Table 4). Among them, Kaggle, one of the largest data modeling and data analysis competition platforms in the world, provides over 50,000 retinal images taken under various shooting conditions, with severity levels of 0–4 annotated by clinicians. In addition, EyePACS and MESSIDOR are the most commonly used image datasets for DR classification. At present, public eye datasets are mainly applied to automated DR and glaucoma detection, and few are available for other ophthalmic diseases.

Table 4 Common publicly available databases

Optical coherence tomography (OCT)

OCT is a non-contact and non-invasive optical image-based diagnostic technology, which provides extensive information regarding retinal morphology and assists in the diagnosis of various macular diseases [76]. Thirty million ophthalmic OCT procedures are performed each year; this number is comparable in scale to other medical imaging modalities, such as MRI or CT [77,78,79,80]. OCT algorithms can be broadly divided into classification and segmentation tasks.

With appropriate segmentation, a DL algorithm can extract and delineate structures or lesions in OCT scans, and then provide the surface areas or volumes of abnormal regions. Lee et al. applied a CNN model for segmentation of intraretinal fluid in OCT scans, which showed robust interrater reliability between human observers and the algorithm [81]. In another study, patients were assessed regarding the need for urgent referral using segmentation and classification algorithms. The system could transform three-dimensional OCT scans into a tissue map, and the lesions could be viewed in a video, which sets a new benchmark for future efforts to solve the ‘black box’ problem of neural networks. Notably, the algorithm detected all urgent referral cases within the patient cohort [82]. With the development of DL, some researchers have extended their algorithms to perform segmentation of pigment epithelium detachment, fluid and vessels [83,84,85].
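To illustrate how a segmentation output can be turned into the surface areas mentioned above, the sketch below thresholds a hypothetical per-pixel probability map and converts the resulting pixel count to an area; the probability map and the assumed pixel spacing are placeholders, not values from the cited studies.

```python
# Sketch: convert a segmentation probability map into a lesion area estimate.
# The probability map and the pixel spacing are hypothetical placeholders.
import numpy as np

prob_map = np.random.rand(512, 512)    # per-pixel fluid probability from a model
mask = prob_map > 0.5                  # binarize the segmentation
pixel_area_mm2 = 0.011 * 0.011         # assumed spacing of ~11 µm per pixel
lesion_area_mm2 = mask.sum() * pixel_area_mm2
print(f"estimated lesion area: {lesion_area_mm2:.2f} mm^2")
```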

OCT has become increasingly important in disease detection, prognostication, and surveillance in AMD patients, especially those with wet AMD requiring anti-vascular endothelial growth factor (anti-VEGF) therapy. An ML method was proposed to predict the need for anti-VEGF treatment based on OCT scans taken during the intake examination; classification of the low- and high-treatment-requirement subgroups achieved AUCs of 0.7 and 0.77, respectively [86]. This pilot study was an important step toward automated image-guided prediction of treatment intervals in patients with neovascular AMD. Treder et al. also showed that a DL algorithm exhibited good performance for automated detection of AMD in spectral-domain OCT [87].

Additionally, OCT can quantitatively measure structural parameters such as the thickness of the retinal nerve fiber layer (RNFL), which is recognized as the earliest structure implicated in glaucoma [88], since these changes are often detectable before visual field loss [89]. ML classifiers have shown substantial diagnostic accuracy for glaucoma detection based on RNFL thickness measurements obtained by OCT [90, 91]. Moreover, algorithms have been developed that use OCT parameters to classify the optic disc in patients with open-angle glaucoma [92].

Because DL methods incorporate millions of parameters, their success largely depends on the availability of large datasets [93]. A DL-based computer-aided system was used to detect DR in a small sample of patients (52 OCT scans), achieving an AUC of 0.98 [94]. Transfer learning is a technique that enables the application of cumulative knowledge learned from other datasets to a new task [95]; it is highly effective in DL applications, particularly in the context of limited data [63]. An AI diagnostic tool based on a transfer learning algorithm could distinguish OCT images with choroidal neovascularization or diabetic macular edema from those of normal retina with an AUC of 98.9% [96].
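A minimal transfer-learning sketch, assuming a recent torchvision with pretrained ResNet-18 weights: the ImageNet-trained backbone is frozen and only a new classification head (three hypothetical OCT classes) is trained. This is an illustration of the general technique, not the algorithm used in the cited studies.

```python
# Transfer-learning sketch: reuse an ImageNet-pretrained backbone and fine-tune
# only a new head on a small OCT dataset. Classes and data are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                  # freeze the learned ImageNet features
model.fc = nn.Linear(model.fc.in_features, 3)    # new head: e.g., CNV / DME / normal

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)             # placeholder OCT batch
labels = torch.randint(0, 3, (4,))
loss = loss_fn(model(images), labels)            # one illustrative training step
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.4f}")
```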

Recent research involved analysis of a unique combination of retinal OCT and MRI images; the findings indicated that retinal OCT might provide insights for early diagnosis of neurodegeneration in the brain, including Alzheimer’s disease [97]. Taken together, the results of the above studies highlight the accuracy of diagnostic evaluation using AI.

Slit-lamp images

The slit lamp, a high-intensity light source instrument, is used to shine a thin beam of light into the eye, enabling examination of the anterior and posterior segments of the eye. It is applied mainly for wide illumination of much of the eye and its adnexa for general observation.

In recent years, several studies have investigated and made contributions to the grading and classification of senile cataracts by using slit-lamp images. Huang et al. [98] proposed a ranking method based on slit-lamp images and achieved acceptable grading for nuclear cataracts; this could potentially reduce the clinical burden of experienced ophthalmologists. Fan et al. [99] developed an automatic grading system for nuclear sclerosis based on slit-lamp photographs, using linear regression; the grades predicted by that algorithm were statistically reliable. Li et al. [100] extracted important feature landmarks from slit-lamp images and trained an SVM regression model to automatically predict grades of nuclear cataract.

Slit-lamp images are essential in the diagnosis of congenital cataracts, a major cause of childhood blindness [101,102,103]. Compared with senile cataract, the phenotype of congenital cataract is far more complicated. Slit-lamp images show heterogeneity among cataract patients as well as complexity in their ocular images [104, 105].

In addition, some DL methods for grading and classifying slit-lamp images have shown effective results [106, 107]. Lin and colleagues developed a prototype diagnostic and therapeutic system (CC-Cruiser) for pediatric cataract screening by using preprocessed ocular images and a DCNN [108]; they also compared the performance of multiple DL and conventional ML methods from various perspectives [109, 110]. CC-Cruiser has been used in the Ophthalmic Center of Sun Yat-sen University with an accuracy comparable to that of ophthalmologists. Lin and colleagues also built a collaborative cloud-based multihospital AI platform to integrate rare disease data and provide medical suggestions for non-specialized doctors and remote hospitals without advanced equipment. These efforts addressed significant needs in cataract research and may provide a basis for using AI to analyze other ophthalmic images.

With the continual increase in the amount of data available for AI analysis, as well as the potential for AI to identify diseases, ophthalmic medical imaging has moved from a strictly conceptual and perceptual approach to a more objective methodology. The enhanced efficiency provided by AI is likely to allow ophthalmologists to perform more value-added tasks. In this review, we summarize studies that applied DL techniques to FP and OCT for diseases with high incidences (Table 5).

Table 5 Summary of DL methods using FP and OCT to detect eye disease

Challenges and future considerations

Despite promising findings thus far, there remain challenges and limitations to using AI [138]. First, the quality of input images is inherently variable, primarily because there is a lack of uniform imaging annotation and there is variability in ocular characteristics among patients. In addition, inter-expert variability in clinical decision making is an important and well-documented issue [139]. High inconsistency among experts in the interpretation of ophthalmic images may introduce bias during model training. Second, owing to the heavy workload of manual annotation, images with clinical annotations are extremely scarce. Hence, advanced image annotation tools should be developed to gather clinical annotations (such as localization of exudates and retinal hemorrhages). Semi-supervised learning methods attempt to make full use of unlabeled samples to improve model generalization. Third, given the complexity of diseases, sufficient data are needed to build high-accuracy models; however, data for more severe stages of disease, as well as for rare diseases, are often insufficient. Fourth, the current application of AI in ophthalmology mainly focuses on single images of a single disease, whereas combined diagnosis using multiple imaging techniques is needed to evaluate diseases in a synergistic manner. Finally, ensuring the security and privacy of medical data is an important challenge that has not been entirely resolved.

In the future, healthcare systems with minimal staff may benefit from modern automated imaging. The inclusion of intelligence within ophthalmic devices may enable healthcare professionals to provide better patient care. Furthermore, AI systems may be embedded within ophthalmic imaging devices (e.g., portable fundus cameras and smartphones) for real-time image diagnosis with minimal operator expertise. Emerging multimodal imaging techniques, which coincide with improved intelligent algorithms, enable joint training from complementary modalities that have different strengths. This embedded AI will be enabled by improved hardware performance at decreasing cost. With the increasing use of AI in medical care, patients could be self-screened without supervision before an ophthalmologist appointment. In addition, patients in remote areas could receive routine eye examinations and undergo monitoring of disease progression without the intervention of highly skilled operators. Increasing the interpretability of networks will be another important research direction. The “black box” problem has been identified as an obstacle to the application of DL in healthcare. Existing studies have developed novel algorithms that enable clinicians to inspect and visualize the decision process (e.g., OCT tissue segmentation), rather than simply obtaining a diagnostic suggestion [82]. In terms of treatment, research on ophthalmic robots needs further exploration; there have been studies on robotic intraretinal vascular injection and anterior macular surgery.

Conclusions

With the unprecedented progress of computer and imaging technologies, medical imaging has developed from an auxiliary examination to the most important method for clinical and differential diagnosis in modern medicine. High-accuracy models suggest that ML can effectively learn from increasingly complicated images with a high degree of generalization, using a relatively small repository of data [68]. To some extent, AI may revolutionize disease diagnosis and management by performing classifications of images that are difficult for clinical experts, as well as by rapidly reviewing large numbers of images. Compared with evaluations by humans, AI has advantages in terms of information integration, data processing, and diagnostic speed. Most AI-based applications in medicine are still in early stages; AI in medical care may ultimately aid in expediting the diagnosis and referral of ophthalmic diseases through cross-disciplinary collaborations of clinicians, engineers, and designers.

Availability of data and materials

Not applicable.

References

  1. Bernardes R, Serranho P, Lobo C. Digital ocular fundus imaging: a review. Ophthalmologica. 2011;226(4):161–81.

  2. Panwar N, Huang P, Lee J, Keane PA, Chuan TS, Richhariya A, et al. Fundus photography in the 21st century--a review of recent technological advances and their implications for worldwide healthcare. Telemed J E Health. 2016;22(3):198–208.

  3. Zhang Z, Srivastava R, Liu H, Chen X, Duan L, Kee Wong DW, et al. A survey on computer aided diagnosis for ocular diseases. BMC Med Inform Decis Mak. 2014;14:80.

  4. Chaikitmongkol V, Khunsongkiet P, Patikulsila D, Ratanasukon M, Watanachai N, Jumroendararasame C, et al. Color fundus photography, optical coherence tomography, and fluorescein angiography in diagnosing polypoidal choroidal vasculopathy. Am J Ophthalmol. 2018;192:77–83.

  5. Obermeyer Z, Lee TH. Lost in thought — the limits of the human mind and the future of medicine. N Engl J Med. 2017;377(13):1209–11.

  6. Murdoch TB, Detsky AS. The inevitable application of big data to health care. JAMA. 2013;309(13):1351–2.

  7. Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, et al. The coming of age of artificial intelligence in medicine. Artif Intell Med. 2009;46(1):5–17.

  8. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.

  9. van Ginneken B. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning. Radiol Phys Technol. 2017;10(1):23–32.

  10. Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944.

  11. Schoepf UJ, Schneider AC, Das M, Wood SA, Cheema JI, Costello P. Pulmonary embolism: computer-aided detection at multidetector row spiral computed tomography. J Thorac Imaging. 2007;22(4):319–23.

  12. Bejnordi BE, Zuidhof G, Balkenhol M, Hermsen M, Bult P, van Ginneken B, et al. Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images. J Med Imaging (Bellingham). 2017;4(4):044504.

  13. Komeda Y, Handa H, Watanabe T, Nomura T, Kitahashi M, Sakurai T, et al. Computer-aided diagnosis based on convolutional neural network system for colorectal polyp classification: preliminary experience. Oncology. 2017;93(Suppl 1):30–4.

  14. Li Z, Wang Y, Yu J, Guo Y, Cao W. Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci Rep. 2017;7(1):5467.

  15. Ambastha AK, Leong TY. Alzheimer's disease neuroimaging I. A deep learning approach to neuroanatomical characterisation of Alzheimer's disease. Stud Health Technol Inform. 2017;245:1249.

  16. Mitchell TM, Shinkareva SV, Carlson A, Chang KM, Malave VL, Mason RA, et al. Predicting human brain activity associated with the meanings of nouns. Science. 2008;320(5880):1191–5.

  17. Kim D, Burge J, Lane T, Pearlson GD, Kiehl KA, Calhoun VD. Hybrid ICA–Bayesian network approach reveals distinct effective connectivity differences in schizophrenia. Neuroimage. 2008;42(4):1560–8.

  18. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–43.

  19. Russell S, Bohannon J. Artificial intelligence. Fears of an AI pioneer. Science. 2015;349(6245):252.

  20. Rokach L, Maimon O. Data mining with decision trees: theory and applications. World scientific: Singapore; 2008.

  21. Lowd D, Domingos P. Naive Bayes models for probability estimation. In: Proceedings of the 22nd International Conference On Machine Learning (ICML 2005). Bonn: ACM; 2005. p. 529–36.

  22. Cutler A, Cutler DR, Stevens JR. Random forests. In: Zhang C, Ma Y, editors. Ensemble machine learning. Berlin: Springer; 2012. p. 157–75.

  23. Ragab DA, Sharkas M, Marshall S, Ren J. Breast cancer detection using deep convolutional neural networks and support vector machines. Peer J. 2019;7:e6201.

  24. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20(3):273–97.

  25. Cover T, Hart P. Nearest neighbor pattern classification. IEEE Trans Inf Theory. 1967;13(1):21–7.

  26. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.

  27. Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput. 2006;18(7):1527–54.

  28. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504–7.

  29. Lawrence S, Giles CL, Tsoi AC, Back AD. Face recognition: a convolutional neural-network approach. IEEE Trans Neural Netw. 1997;8(1):98–113.

  30. Karpathy A, Fei-Fei L. Deep visual-semantic alignments for generating image descriptions. IEEE Trans Pattern Anal Mach Intell. 2017;39(4):664–76.

  31. Choi E, Schuetz A, Stewart WF, Sun J. Using recurrent neural network models for early detection of heart failure onset. J Am Med Inform Assoc. 2017;24(2):361–70.

  32. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324.

  33. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90.

  34. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas; 2016. p. 770–778.

  35. Szegedy C, Wei L, Yangqing J, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR). Boston, MA; 2015. p. 1–9.

  36. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52.

  37. Fok Hing Chi T, Bouzerdown A. An eye feature detector based on convolutional neural network. In: Proceedings of the Eighth International Symposium on Signal Processing and its Applications, 2005. Sydney: IEEE; 2005. p. 90–93.

  38. Szarvas M, Yoshizawa A, Yamamoto M, Ogata J. Pedestrian detection with convolutional neural networks. IEEE Proceedings. Intelligent Vehicles Symposium, 2005. Las Vegas; 2005. p. 224–229.

  39. Xiaosong J, Yijun H. Research on data pre-process and feature extraction based on wavelet packet analysis. In: 2006 6th World Congress on Intelligent Control and Automation. Dalian; 2006. p. 5850–5853.

  40. Cherkassky V. The nature of statistical learning theory. IEEE Trans Neural Netw. 1997;8(6):1564.

  41. Guo Y, Budak Ü, Vespa LJ, Khorasani E, Şengür A. A retinal vessel detection approach using convolution neural network with reinforcement sample learning strategy. Measurement. 2018;125:586–91.

  42. Guo Y, Budak Ü, Şengür A, Smarandache F. A retinal vessel detection approach based on shearlet transform and indeterminacy filtering on fundus images. Symmetry. 2017;9(10):235.

  43. Fong DS, Aiello LP, Ferris FL 3rd, Klein R. Diabetic retinopathy. Diabetes Care. 2004;27(10):2540–53.

  44. Namperumalsamy P, Nirmalan PK, Ramasamy K. Developing a screening program to detect sight-threatening diabetic retinopathy in South India. Diabetes Care. 2003;26(6):1831–5.

  45. Cheung N, Mitchell P, Wong TY. Diabetic retinopathy. Lancet. 2010;376(9735):124–36.

  46. Osareh A, Shadgar B, Markham R. A computational-intelligence-based approach for detection of exudates in diabetic retinopathy images. IEEE Trans Inf Technol Biomed. 2009;13(4):535–45.

  47. Shuang Y, Di X, Kanagasingam Y. Exudate detection for diabetic retinopathy with convolutional neural networks. Conf Proc IEEE Eng Med Biol Soc. 2017;2017:1744–7.

  48. Zheng R, Liu L, Zhang S, Zheng C, Bunyak F, Xu R, et al. Detection of exudates in fundus photographs with imbalanced learning using conditional generative adversarial network. Biomed Opt Express. 2018;9(10):4863–78.

  49. Naqvi SAG, Zafar HMF, Ul HI. Automated system for referral of cotton-wool spots. Curr Diabetes Rev. 2018;14(2):168–74.

  50. Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abramoff MD. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci. 2007;48(5):2260–7.

  51. Murugeswari S, Sukanesh R. Investigations of severity level measurements for diabetic macular oedema using machine learning algorithms. Ir J Med Sci. 2017;186(4):929–38.

  52. Jiayi W, Jingmin X, Lai H, You J, Nanning Z. New hierarchical approach for microaneurysms detection with matched filter and machine learning. Conf Proc IEEE Eng Med Biol Soc. 2015;2015:4322–5.

  53. Budak U, Şengür A, Guo Y, Akbulut Y. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm. Health Inf Sci Syst. 2017;5(1):14.

  54. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10.

  55. Ting DSW, Cheung CY, Lim G, Tan GSW, Quang ND, Gan A, et al. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA. 2017;318(22):2211–23.

  56. Keel S, Lee PY, Scheetz J, Li Z, Kotowicz MA, MacIsaac RJ, et al. Feasibility and patient acceptability of a novel artificial intelligence-based screening model for diabetic retinopathy at endocrinology outpatient services: a pilot study. Sci Rep. 2018;8(1):4330.

  57. Vujosevic S, Benetti E, Massignan F, Pilotto E, Varano M, Cavarzeran F, et al. Screening for diabetic retinopathy: 1 and 3 nonmydriatic 45-degree digital fundus photographs vs 7 standard early treatment diabetic retinopathy study fields. Am J Ophthalmol. 2009;148(1):111–8.

  58. Takahashi H, Tampo H, Arai Y, Inoue Y, Kawashima H. Applying artificial intelligence to disease staging: deep learning for improved staging of diabetic retinopathy. PLoS One. 2017;12(6):e0179790.

  59. Kaines A, Oliver S, Reddy S, Schwartz SD. Ultrawide angle angiography for the detection and management of diabetic retinopathy. Int Ophthalmol Clin. 2009;49(2):53–9.

  60. Göbl R, Navab N, Hennersperger C. SUPRA: open-source software-defined ultrasound processing for real-time applications. Int J Comput Assist Radiol Surg. 2018;13(6):759–67.

  61. Rajalakshmi R, Subashini R, Anjana RM, Mohan V. Automated diabetic retinopathy detection in smartphone-based fundus photography using artificial intelligence. Eye (Lond). 2018;32(6):1138–44.

  62. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med. 2018;1:39.

  63. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135(11):1170–6.

  64. Xiangyu C, Yanwu X, Damon Wing Kee W, Tien Yin W, Jiang L. Glaucoma detection based on deep convolutional neural network. Conf Proc IEEE Eng Med Biol Soc. 2015;2015:715–8.

  65. Li Z, He Y, Keel S, Meng W, Chang RT, He M. Efficacy of a deep learning system for detecting glaucomatous optic neuropathy based on color fundus photographs. Ophthalmology. 2018;125(8):1199–206.

  66. Annan L, Jun C, Damon Wing Kee W, Jiang L. Integrating holistic and local deep features for glaucoma classification. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:1328–31.

  67. Burlina P, Pacheco KD, Joshi N, Freund DE, Bressler NM. Comparing humans and deep learning performance for grading AMD: a study in using universal deep features and transfer learning for automated AMD analysis. Comput Biol Med. 2017;82:80–6.

  68. Poplin R, Varadarajan AV, Blumer K, Liu Y, McConnell MV, Corrado GS, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2(3):158–64.

  69. Decencière E, Cazuguel G, Zhang X, Thibault G, Klein JC, Meyer F, et al. TeleOphta: machine learning and image processing methods for teleophthalmology. IRBM. 2013;34(2):196–203.

  70. Budai A, Bock R, Maier A, Hornegger J, Michelson G. Robust Vessel Segmentation in Fundus Images. Int J Biomed Imaging. 2013;2013:154860.

  71. Almazroa A, Alodhayb S, Osman E, Lakshminarayanan V, Raahemifar K, Alkatee M, Dlaim M, et al. Retinal fundus images for glaucoma analysis: the RIGA dataset. Med Imag 2018. 2018;2018:105790.

  72. Zhuo Z, Feng Shou Y, Jiang L, Wing Kee W, Ngan Meng T, Beng Hai L, et al. ORIGA-light: An online retinal fundus image database for glaucoma analysis and research. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology. Buenos Aires; 2010. p. 3065–3068.

  73. Sivaswamy J, Krishnadas SR, Datt Joshi G, Jain M, Syed Tabish AU. Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation. In: 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). Beijing; 2014. p. 53–56.

  74. Niemeijer M, Xiayu X, Dumitrescu AV, Gupta P, van Ginneken B, Folk JC, et al. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Trans on Med Imaging. 2011;30(11):1941–50.

  75. Al-Diri B, Hunter A, Steel D, Habib M, Hudaib T, Berry S. Review - A reference data set for retinal vessel profiles. In: 2008 30th annual international conference of the IEEE engineering in medicine and biology society. Vancouver, BC; 2008. p. 2262–5.

  76. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, et al. Optical coherence tomography. Science. 1991;254(5035):1178–81.

  77. Akhtar Z, Rishi P, Srikanth R, Rishi E, Bhende M, Raman R. Choroidal thickness in normal Indian subjects using swept source optical coherence tomography. PLoS One. 2018;13(5):e0197457.

  78. Zysk AM, Nguyen FT, Oldenburg AL, Marks DL, Boppart SA. Optical coherence tomography: a review of clinical development from bench to bedside. J Biomed Opt. 2007;12(5):051403.

  79. Adhi M, Duker JS. Optical coherence tomography--current and future applications. Curr Opin Ophthalmol. 2013;24(3):213–21.

  80. Gabriele ML, Wollstein G, Ishikawa H, Kagemann L, Xu J, Folio LS, et al. Optical coherence tomography: history, current status, and laboratory work. Invest Ophthalmol Vis Sci. 2011;52(5):2425–36.

  81. Lee CS, Tyring AJ, Deruyter NP, Wu Y, Rokem A, Lee AY. Deep-learning based, automated segmentation of macular edema in optical coherence tomography. Biomed Opt Express. 2017;8(7):3440–8.

  82. De Fauw J, Ledsam JR, Romera-Paredes B, Nikolov S, Tomasev N, Blackwell S, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342–50.

  83. Xu Y, Yan K, Kim J, Wang X, Li C, Su L, et al. Dual-stage deep learning framework for pigment epithelium detachment segmentation in polypoidal choroidal vasculopathy. Biomed Opt Express. 2017;8(9):4061–76.

  84. Memari N, Ramli AR, Bin Saripan MI, Mashohor S, Moghbel M. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier. PLoS One. 2017;12(12):e0188939.

  85. Venhuizen FG, van Ginneken B, Liefers B, van Asten F, Schreur V, Fauser S, et al. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography. Biomed Opt Express. 2018;9(4):1545–69.

  86. Bogunovic H, Waldstein SM, Schlegl T, Langs G, Sadeghipour A, Liu X, et al. Prediction of anti-vegf treatment requirements in neovascular amd using a machine learning approach. Invest Ophthalmol Vis Sci. 2017;58(7):3240–8.

  87. Treder M, Lauermann JL, Eter N. Automated detection of exudative age-related macular degeneration in spectral domain optical coherence tomography using deep learning. Graefes Arch Clin Exp Ophthalmol. 2018;256(2):259–65.

  88. Jonas JB, Budde WM, Panda-Jonas S. Ophthalmoscopic evaluation of the optic nerve head. Surv Ophthalmol. 1999;43(4):293–320.

  89. Weinreb RN, Khaw PT. Primary open-angle glaucoma. Lancet. 2004;363(9422):1711–20.

  90. Bizios D, Heijl A, Hougaard JL, Bengtsson B. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by stratus OCT. Acta Ophthalmol. 2010;88(1):44–52.

  91. Barella KA, Costa VP, Goncalves Vidotti V, Silva FR, Dias M, Gomi ES. Glaucoma diagnostic accuracy of machine learning classifiers using retinal nerve fiber layer and optic nerve data from SD-OCT. J Ophthalmol. 2013;2013:789129.

  92. Omodaka K, An G, Tsuda S, Shiga Y, Takada N, Kikawa T, et al. Classification of optic disc shape in glaucoma using machine learning based on quantified ocular parameters. PLoS One. 2017;12(12):e0190012.

  93. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, et al. Mastering the game of go with deep neural networks and tree search. Nature. 2016;529(7587):484–9.

  94. ElTanboly A, Ismail M, Shalaby A, Switala A, El-Baz A, Schaal S, et al. A computer-aided diagnostic system for detecting diabetic retinopathy in optical coherence tomography images. Med Phys. 2017;44(3):914–23.

  95. Han T, Liu C, Yang W, Jiang D. Learning transferable features in deep convolutional neural networks for diagnosing unseen machine conditions. ISA Trans. 2019;93:341–53.

  96. Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122–31 e9.

  97. Ong YT, Hilal S, Cheung CY, Venketasubramanian N, Niessen WJ, Vrooman H, et al. Retinal neurodegeneration on optical coherence tomography and cerebral atrophy. Neurosci Lett. 2015;584:12–6.

  98. Huang W, Chan KL, Li H, Lim JH, Liu J, Wong TY. A computer assisted method for nuclear cataract grading from slit-lamp images using ranking. IEEE Trans Med Imaging. 2011;30(1):94–107.

  99. Fan S, Dyer CR, Hubbard L, Klein B. An Automatic System for Classification of Nuclear Sclerosis from Slit-Lamp Photographs. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI, vol. 2003; 2003. p. 592–601.

  100. Li H, Lim JH, Liu J, Mitchell P, Tan AG, Wang JJ, et al. A computer-aided diagnosis system of nuclear cataract. IEEE Trans Biomed Eng. 2010;57(7):1690–8.

  101. Lin D, Chen J, Lin Z, Li X, Wu X, Long E, et al. 10-year overview of the hospital-based prevalence and treatment of congenital cataracts: the CCPMOH experience. PLoS One. 2015;10(11):e0142298.

  102. Wu X, Long E, Lin H, Liu Y. Prevalence and epidemiological characteristics of congenital cataract: a systematic review and meta-analysis. Sci Rep. 2016;6:28564.

  103. West SK, Rosenthal F, Newland HS, Taylor HR. Use of photographic techniques to grade nuclear cataracts. Invest Ophthalmol Vis Sci. 1988;29(1):73–7.

  104. Amaya L, Taylor D, Russell-Eggitt I, Nischal KK, Lengyel D. The morphology and natural history of childhood cataracts. Surv Ophthalmol. 2003;48(2):125–44.

  105. Marc RE, Jones BW, Watt CB, Anderson JR, Sigulinsky C, Lauritzen S. Retinal connectomics: towards complete, accurate networks. Prog Retin Eye Res. 2013;37:141–62.

  106. Jiang J, Liu X, Liu L, Wang S, Long E, Yang H, et al. Predicting the progression of ophthalmic disease based on slit-lamp images using a deep temporal sequence network. PLoS One. 2018;13(7):e0201142.

  107. Liu X, Jiang J, Zhang K, Long E, Cui J, Zhu M, et al. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLoS One. 2017;12(3):e0168606.

  108. Long E, Lin H, Liu Z, Wu X, Wang L, Jiang J, et al. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nat Biomed Eng. 2017;1(2):0024.

  109. Lin H, Long E, Chen W, Liu Y. Documenting rare disease data in China. Science. 2015;349(6252):1064.

  110. Wang L, Zhang K, Liu X, Long E, Jiang J, An Y, et al. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images. Sci Rep. 2017;7:41545.

  111. Arcadu F, Benmansour F, Maunz A, Michon J, Haskova Z, McClintock D, et al. Deep learning predicts oct measures of diabetic macular thickening from color fundus photographs. Invest Ophthalmol Vis Sci. 2019;60(4):852–7.

  112. Nagasawa T, Tabuchi H, Masumoto H, Enno H, Niki M, Ohara Z, et al. Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naive proliferative diabetic retinopathy. Int Ophthalmol. 2019;39(10):2153–9.

  113. Phan S, Satoh S, Yoda Y, Kashiwagi K, Oshika T. Japan ocular imaging registry research group. Evaluation of deep convolutional neural networks for glaucoma detection. Jpn J Ophthalmol. 2019;63(3):276–83.

  114. Nagasato D, Tabuchi H, Ohsugi H, Masumoto H, Enno H, Ishitobi N, et al. Deep-learning classifier with ultrawide-field fundus ophthalmoscopy for detecting branch retinal vein occlusion. Int J Ophthalmol. 2019;12(1):94–9.

  115. Burlina PM, Joshi N, Pacheco KD, Liu TYA, Bressler NM. Assessment of deep generative models for high-resolution synthetic retinal image generation of age-related macular degeneration. JAMA Ophthalmol. 2019;137(3):258–64.

  116. Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med. 2019;94:96–109.

  117. Coyner AS, Swan R, Brown JM, Kalpathy-Cramer J, Kim SJ, Campbell JP, et al. Deep learning for image quality assessment of fundus images in retinopathy of prematurity. AMIA Annu Symp Proc. 2018;2018:1224–32.

  118. Keel S, Wu J, Lee PY, Scheetz J, He M. Visualizing deep learning models for the detection of referable diabetic retinopathy and glaucoma. JAMA Ophthalmol. 2019;137(3):288–92.

  119. Sayres R, Taly A, Rahimy E, Blumer K, Coz D, Hammel N, et al. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology. 2019;126(4):552–64.

  120. Peng Y, Dharssi S, Chen Q, Keenan TD, Agron E, Wong WT, et al. DeepSeeNet: a deep learning model for automated classification of patient-based age-related macular degeneration severity from color fundus photographs. Ophthalmology. 2019;126(4):565–75.

  121. Guo Y, Budak U, Sengur A. A novel retinal vessel detection approach based on multiple deep convolution neural networks. Comput Methods Prog Biomed. 2018;167:43–8.

  122. Khojasteh P, Aliahmad B, Kumar DK. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol. 2018;18(1):288.

  123. Gargeya R, Leng T. Automated identification of diabetic retinopathy using deep learning. Ophthalmology. 2017;124(7):962–9.

  124. Ordonez PF, Cepeda CM, Garrido J, Chakravarty S. Classification of images based on small local features: a case applied to microaneurysms in fundus retina images. J Med Imaging (Bellingham). 2017;4(4):041309.

  125. Abbas Q, Fondon I, Sarmiento A, Jimenez S, Alemany P. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features. Med Biol Eng Comput. 2017;55(11):1959–74.

  126. Pfister M, Schutzenberger K, Pfeiffenberger U, Messner A, Chen Z, Dos Santos VA, et al. Automated segmentation of dermal fillers in OCT images of mice using convolutional neural networks. Biomed Opt Express. 2019;10(3):1315–28.

  127. Fu H, Baskaran M, Xu Y, Lin S, Kee Wong DW, Liu J, et al. A deep learning system for automated angle-closure detection in anterior segment optical coherence tomography images. Am J Ophthalmol. 2019;203:37–45.

  128. Masood S, Fang R, Li P, Li H, Sheng B, Mathavan A, et al. Automatic choroid layer segmentation from optical coherence tomography images using deep learning. Sci Rep. 2019;9(1):3058.

  129. Dos Santos VA, Schmetterer L, Stegmann H, Pfister M, Messner A, Schmidinger G, et al. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomed Opt Express. 2019;10(2):622–41.

  130. Asaoka R, Murata H, Hirasawa K, Fujino Y, Matsuura M, Miki A, et al. Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am J Ophthalmol. 2019;198:136–45.

  131. Lu W, Tong Y, Yu Y, Xing Y, Chen C, Shen Y. Deep learning-based automated classification of multi-categorical abnormalities from optical coherence tomography images. Transl Vis Sci Technol. 2018;7(6):41.

  132. Schlegl T, Waldstein SM, Bogunovic H, Endstrasser F, Sadeghipour A, Philip AM, et al. Fully automated detection and quantification of macular fluid in oct using deep learning. Ophthalmology. 2018;125(4):549–58.

  133. Prahs P, Radeck V, Mayer C, Cvetkov Y, Cvetkova N, Helbig H, et al. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications. Graefes Arch Clin Exp Ophthalmol. 2018;256(1):91–8.

  134. Shah A, Zhou L, Abramoff MD, Wu X. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed Opt Express. 2018;9(9):4509–26.

  135. Chan GCY, Kamble R, Muller H, Shah SAA, Tang TB, Meriaudeau F. Fusing results of several deep learning architectures for automatic classification of normal and diabetic macular edema in optical coherence tomography. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:670–3.

  136. Muhammad H, Fuchs TJ, De Cuir N, De Moraes CG, Blumberg DM, Liebmann JM, et al. Hybrid deep learning on single wide-field optical coherence tomography scans accurately classifies glaucoma suspects. J Glaucoma. 2017;26(12):1086–94.

  137. Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration. Ophthalmol Retina. 2017;1(4):322–7.

  138. Bengio Y, Courville A, Vincent P. Representation Learning: A review and new perspectives. IEEE Trans Pattern Analysis Machine Int. 2013;35(8):1798–828.

  139. Chiang MF, Erdogmus D, Keck K, You S, Kalpathy-Cramer J, Ataer-Cansizoglu E. Analysis of underlying causes of inter-expert disagreement in retinopathy of prematurity diagnosis. Methods Inf Med. 2018;54(1):93–102.

Acknowledgements

Not applicable.

Funding

This work was supported by National Key R&D Program of China (2017YFE0103400); National Nature Science Foundation of China (Grant No. 81800872).

Author information

Contributions

Y.T. was involved in scientific literature research and writing the manuscript. Y.S. was involved in designing the protocol, and reviewing and editing the manuscript. W.L. and Y.Y. were involved in the conception and editing of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yin Shen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Tong, Y., Lu, W., Yu, Y. et al. Application of machine learning in ophthalmic imaging modalities. Eye and Vis 7, 22 (2020). https://doi.org/10.1186/s40662-020-00183-6

