ABSTRACT
OBJECTIVE: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit-lamp camera, remote and underserved areas often lack this professional equipment. The smartphone, a widely available device, has recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs. METHODS: We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. Accuracy and the macro-average area under the curve (macro-AUC) were used to evaluate model performance. RESULTS: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on the three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models. CONCLUSIONS: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
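The classification step of a cosine nearest-centroid approach of this kind can be summarized in a few lines: embed the few labeled smartphone images with a backbone pretrained on slit-lamp data, average the embeddings per class into centroids, and assign each query image to the class whose centroid has the highest cosine similarity. The sketch below is a generic illustration under that assumption; the function name, feature dimension, and scaling factor are illustrative choices, not details taken from the CNCML paper.

```python
# Minimal sketch of cosine nearest-centroid classification, the core idea behind
# a CNCML-style few-shot classifier. Names and the scale factor are assumptions.
import torch
import torch.nn.functional as F

def cosine_centroid_logits(support_feats, support_labels, query_feats, n_classes, scale=10.0):
    """Classify query embeddings by cosine similarity to per-class centroids.

    support_feats: (n_support, d) embeddings from a pretrained backbone
    support_labels: (n_support,) integer class labels
    query_feats:   (n_query, d) embeddings to classify
    """
    centroids = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(n_classes)
    ])                                              # (n_classes, d)
    centroids = F.normalize(centroids, dim=-1)      # unit-length centroids
    queries = F.normalize(query_feats, dim=-1)      # unit-length queries
    return scale * (queries @ centroids.t())        # (n_query, n_classes) cosine logits

# Example with random tensors standing in for backbone embeddings (3 classes, 10 shots each)
feats = torch.randn(30, 128)
labels = torch.arange(3).repeat_interleave(10)
logits = cosine_centroid_logits(feats, labels, torch.randn(5, 128), n_classes=3)
preds = logits.argmax(dim=1)
```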
Subject(s)
Deep Learning, Keratitis, Smartphone, Humans, Keratitis/diagnosis, Algorithms, Photography/methods, Mass Screening/methods, Mass Screening/instrumentation
ABSTRACT
Bifunctional molecules such as targeted protein degraders induce proximity to promote gain-of-function pharmacology. These powerful approaches have gained broad traction across academia and the pharmaceutical industry, leading to an intensive focus on strategies that can accelerate their identification and optimization. We and others have previously used chemical proteomics to map degradable target space, and these datasets have been used to develop and train multiparameter models to extend degradability predictions across the proteome. In this study, we now turn our attention to develop generalizable chemistry strategies to accelerate the development of new bifunctional degraders. We implement lysine-targeted reversible-covalent chemistry to rationally tune the binding kinetics at the protein-of-interest across a set of 25 targets. We define an unbiased workflow consisting of global proteomics analysis, IP/MS of ternary complexes and the E-STUB assay, to mechanistically characterize the effects of ligand residence time on targeted protein degradation and formulate hypotheses about the rate-limiting step of degradation for each target. Our key finding is that target residence time is a major determinant of degrader activity, and this can be rapidly and rationally tuned through the synthesis of a minimal number of analogues to accelerate early degrader discovery and optimization efforts.
ABSTRACT
Smartphone-based artificial intelligence (AI) diagnostic systems could enable high-risk patients to self-screen for corneal diseases (e.g., keratitis) rather than relying on traditional face-to-face medical visits, allowing them to proactively identify disease at an early stage. However, AI diagnostic systems perform significantly worse on low-quality images, which are unavoidable in real-world environments and especially common in patient-recorded images, hindering the implementation of these systems in clinical practice. Here, we construct a deep learning-based image quality monitoring system (DeepMonitoring) that not only discerns low-quality cornea images captured by smartphones but also identifies the underlying factors responsible for the low quality, which can guide operators to acquire high-quality images in a timely manner. The system performs well across the validation, internal, and external testing sets, with AUCs ranging from 0.984 to 0.999. DeepMonitoring holds the potential to filter out low-quality cornea images produced by smartphones, facilitating the application of smartphone-based AI diagnostic systems in real-world clinical settings, especially in the context of self-screening for corneal diseases.
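As an illustration of what such a quality-monitoring classifier can look like, the following sketch scores a smartphone cornea photograph as usable or as one of several assumed degradation factors so the operator can re-shoot. The backbone, class list, file path, and preprocessing are assumptions, not details from DeepMonitoring.

```python
# Illustrative sketch of an image-quality monitoring classifier in the spirit of
# DeepMonitoring. The class list and backbone choice are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

QUALITY_CLASSES = ["high_quality", "blur", "underexposure", "overexposure", "incomplete_cornea"]

model = models.resnet50(weights=None)  # load trained weights in practice
model.fc = nn.Linear(model.fc.in_features, len(QUALITY_CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def check_quality(image_path: str) -> str:
    """Return the predicted quality label so the operator can re-acquire the image if needed."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    return QUALITY_CLASSES[int(probs.argmax())]
```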
ABSTRACT
R-loops play diverse functional roles, but conflicting genomic localizations of R-loops have emerged from different experimental approaches, posing significant challenges for R-loop research. The development and application of an accurate computational tool for studying human R-loops remains an unmet need. Here, we introduce DeepER, a deep learning-enhanced R-loop prediction tool. DeepER shows outstanding performance compared with existing tools, facilitating accurate genome-wide annotation of R-loops and a deeper understanding of the position- and context-dependent effects of nucleotide composition on R-loop formation. DeepER also unveils a strong association between certain tandem repeats and R-loop formation, opening a new avenue for understanding the mechanisms underlying some repeat expansion diseases. To facilitate broader utilization, we have developed a user-friendly web server as an integral component of R-loopBase. We anticipate that DeepER will find extensive applications in the field of R-loop research.
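A sequence-based R-loop predictor of this kind typically one-hot encodes DNA windows and scores them with a neural network. The sketch below illustrates the input representation and overall model shape only; it is not the published DeepER architecture, and the window size and layer sizes are arbitrary choices.

```python
# Minimal sketch of a sequence-based R-loop scorer: one-hot encode DNA and score
# fixed-length windows with a small 1D CNN. Architecture details are illustrative.
import numpy as np
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (4, len) float tensor; unknown bases stay all-zero."""
    x = np.zeros((4, len(seq)), dtype=np.float32)
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[BASES[b], i] = 1.0
    return torch.from_numpy(x)

class RLoopCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, x):                               # x: (batch, 4, window)
        return torch.sigmoid(self.head(self.conv(x).squeeze(-1)))

model = RLoopCNN()
score = model(one_hot("ACGT" * 250).unsqueeze(0))       # per-window R-loop probability
```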
ABSTRACT
The main cause of corneal blindness worldwide is keratitis, especially the infectious form caused by bacteria, fungi, viruses, and Acanthamoeba. Effective management of infectious keratitis hinges on prompt and precise diagnosis. Nevertheless, the current gold standard, culture of corneal scrapings, remains time-consuming and frequently yields false-negative results. Here, using 23,055 slit-lamp images collected from 12 clinical centers nationwide, this study constructed a clinically feasible deep learning system, DeepIK, which emulates the diagnostic process of a human expert to identify and differentiate bacterial, fungal, viral, amebic, and noninfectious keratitis. DeepIK exhibited remarkable performance on internal, external, and prospective datasets (all areas under the receiver operating characteristic curve > 0.96) and outperformed three other state-of-the-art algorithms (DenseNet121, InceptionResNetV2, and Swin-Transformer). Our study indicates that DeepIK can assist ophthalmologists in accurately and swiftly identifying various types of infectious keratitis from slit-lamp images, thereby facilitating timely and targeted treatment.
ABSTRACT
To elucidate the genetic basis of complex diseases, it is crucial to discover the single-nucleotide polymorphisms (SNPs) contributing to disease susceptibility. This is particularly challenging for high-order SNP epistatic interactions (HEIs), which exhibit small individual effects but potentially large joint effects. These interactions are difficult to detect due to the vast search space, encompassing billions of possible combinations, and the computational complexity of evaluating them. This study proposes a novel explicit-encoding-based multitasking harmony search algorithm (MTHS-EE-DHEI) specifically designed to address this challenge. The algorithm operates in three stages. First, a harmony search algorithm is employed, utilizing four lightweight evaluation functions, such as Bayesian network and entropy, to efficiently explore potential SNP combinations related to disease status. Second, a G-test statistical method is applied to filter out insignificant SNP combinations. Finally, two machine learning-based methods, multifactor dimensionality reduction (MDR) and random forest (RF), are employed to validate the classification performance of the remaining significant SNP combinations. This research aims to demonstrate the effectiveness of MTHS-EE-DHEI in identifying HEIs compared to existing methods, potentially providing valuable insights into the genetic architecture of complex diseases. The performance of MTHS-EE-DHEI was evaluated on twenty simulated disease datasets and three real-world datasets encompassing age-related macular degeneration (AMD), rheumatoid arthritis (RA), and breast cancer (BC). The results demonstrate that MTHS-EE-DHEI outperforms four state-of-the-art algorithms in terms of both detection power and computational efficiency. The source code is available at https://github.com/shouhengtuo/MTHS-EE-DHEI.git.
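The G-test filtering stage can be illustrated concretely: for each candidate SNP combination, build a joint-genotype-by-phenotype contingency table and compute the log-likelihood ratio statistic. The sketch below assumes a 0/1/2 genotype coding and illustrative column names; it is not the MTHS-EE-DHEI source code (available at the repository linked above).

```python
# Hedged sketch of G-test filtering of candidate SNP combinations.
# scipy's chi2_contingency with lambda_="log-likelihood" computes the G-test.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def g_test_snp_combo(genotypes: pd.DataFrame, snp_combo: tuple, labels: pd.Series) -> float:
    """Return the G-test p-value for a joint-genotype x case/control table.

    genotypes: samples x SNPs matrix coded 0/1/2 (minor-allele count)
    snp_combo: tuple of column names forming one candidate interaction
    labels:    0 = control, 1 = case
    """
    # Collapse the k SNPs into a single joint-genotype key per sample
    joint = genotypes[list(snp_combo)].astype(str).apply("-".join, axis=1)
    table = pd.crosstab(joint, labels).to_numpy()
    g_stat, p_value, dof, _ = chi2_contingency(table, lambda_="log-likelihood")
    return p_value

# Toy example: 200 samples, 5 SNPs, random genotypes and labels
rng = np.random.default_rng(0)
geno = pd.DataFrame(rng.integers(0, 3, size=(200, 5)), columns=[f"rs{i}" for i in range(5)])
status = pd.Series(rng.integers(0, 2, size=200))
candidates = [("rs0", "rs1"), ("rs2", "rs3", "rs4")]
significant = [c for c in candidates
               if g_test_snp_combo(geno, c, status) < 0.05 / len(candidates)]  # Bonferroni
```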
Subject(s)
Algorithms, Bayes Theorem, Genetic Epistasis, Single Nucleotide Polymorphism, Single Nucleotide Polymorphism/genetics, Humans, Machine Learning, Multifactor Dimensionality Reduction, Computational Biology/methods, Genetic Predisposition to Disease
ABSTRACT
Over 95% of pancreatic ductal adenocarcinomas (PDAC) harbor oncogenic mutations in K-Ras. Upon treatment with K-Ras inhibitors, PDAC cancer cells undergo metabolic reprogramming towards an oxidative phosphorylation-dependent, drug-resistant state. However, direct inhibition of complex I is poorly tolerated in patients due to on-target induction of peripheral neuropathy. In this work, we develop molecular glue degraders against ZBTB11, a C2H2 zinc finger transcription factor that regulates the nuclear transcription of components of the mitoribosome and electron transport chain. Our ZBTB11 degraders leverage the differences in demand for biogenesis of mitochondrial components between human neurons and rapidly-dividing pancreatic cancer cells, to selectively target the K-Ras inhibitor resistant state in PDAC. Combination treatment of both K-Ras inhibitor-resistant cell lines and multidrug resistant patient-derived organoids resulted in superior anti-cancer activity compared to single agent treatment, while sparing hiPSC-derived neurons. Proteomic and stable isotope tracing studies revealed mitoribosome depletion and impairment of the TCA cycle as key events that mediate this response. Together, this work validates ZBTB11 as a vulnerability in K-Ras inhibitor-resistant PDAC and provides a suite of molecular glue degrader tool compounds to investigate its function.
ABSTRACT
BACKGROUND: The accurate detection of eyelid tumors is essential for effective treatment, but it can be challenging due to small and unevenly distributed lesions surrounded by irrelevant noise. Moreover, early symptoms of eyelid tumors are atypical, and some categories of eyelid tumors exhibit similar color and texture features, making it difficult to distinguish between benign and malignant eyelid tumors, particularly for ophthalmologists with limited clinical experience. METHODS: We propose a hybrid model, HM_ADET, for automatic detection of eyelid tumors, comprising YOLOv7_CNFG to locate eyelid tumors and a vision transformer (ViT) to classify benign and malignant eyelid tumors. First, the ConvNeXt module with an inverted bottleneck layer in the backbone of YOLOv7_CNFG is employed to prevent information loss of small eyelid tumors. Then, the flexible rectified linear unit (FReLU) is applied to capture multi-scale features such as texture, edge, and shape, thereby improving the localization accuracy of eyelid tumors. In addition, considering the geometric center and area difference between the predicted box (PB) and the ground truth box (GT), the GIoU_loss is utilized to handle cases of eyelid tumors with varying shapes and irregular boundaries. Finally, the multi-head attention (MHA) module is applied in ViT to extract discriminative features of eyelid tumors for benign and malignant classification. RESULTS: Experimental results demonstrate that the HM_ADET model achieves excellent performance in the detection of eyelid tumors. Specifically, YOLOv7_CNFG outperforms YOLOv7, with AP increasing from 0.763 to 0.893 on the internal test set and from 0.647 to 0.765 on the external test set. ViT achieves AUCs of 0.945 (95% CI 0.894-0.981) and 0.915 (95% CI 0.860-0.955) for the classification of benign and malignant tumors on the internal and external test sets, respectively. CONCLUSIONS: Our study provides a promising strategy for the automatic diagnosis of eyelid tumors, which could potentially improve patient outcomes and reduce healthcare costs.
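For reference, the standard GIoU formulation subtracts from the IoU the fraction of the smallest enclosing box not covered by the union of the predicted and ground-truth boxes. The sketch below computes this for axis-aligned (x1, y1, x2, y2) boxes; it illustrates the loss only and is not the HM_ADET training code.

```python
# Minimal sketch of the generalized IoU (GIoU) loss for axis-aligned boxes:
# GIoU = IoU - (|C| - |A u B|) / |C|, where C is the smallest enclosing box.
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); returns mean 1 - GIoU."""
    area_p = (pred[:, 2] - pred[:, 0]).clamp(min=0) * (pred[:, 3] - pred[:, 1]).clamp(min=0)
    area_t = (target[:, 2] - target[:, 0]).clamp(min=0) * (target[:, 3] - target[:, 1]).clamp(min=0)

    # Intersection and union
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)

    # Smallest enclosing box C
    lt_c = torch.min(pred[:, :2], target[:, :2])
    rb_c = torch.max(pred[:, 2:], target[:, 2:])
    wh_c = (rb_c - lt_c).clamp(min=0)
    area_c = (wh_c[:, 0] * wh_c[:, 1]).clamp(min=1e-7)

    giou = iou - (area_c - union) / area_c
    return (1.0 - giou).mean()

loss = giou_loss(torch.tensor([[10., 10., 50., 50.]]), torch.tensor([[20., 20., 60., 60.]]))
```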
Subject(s)
Eyelid Neoplasms, Humans, Eyelid Neoplasms/diagnosis, Area Under Curve, Health Care Costs
ABSTRACT
Understanding how small molecules bind to specific protein complexes in living cells is critical to understanding their mechanism-of-action. Unbiased chemical biology strategies for direct readout of protein interactome remodelling by small molecules would provide advantages over target-focused approaches, including the ability to detect previously unknown ligand targets and complexes. However, there are few current methods for unbiased profiling of small molecule interactomes. To address this, we envisioned a technology that would combine the sensitivity and live-cell compatibility of proximity labelling coupled to mass spectrometry, with the specificity and unbiased nature of chemoproteomics. In this manuscript, we describe the BioTAC system, a small-molecule guided proximity labelling platform that can rapidly identify both direct and complexed small molecule binding proteins. We benchmark the system against µMap, photoaffinity labelling, affinity purification coupled to mass spectrometry and proximity labelling coupled to mass spectrometry datasets. We also apply the BioTAC system to provide interactome maps of Trametinib and analogues. The BioTAC system overcomes a limitation of current approaches and supports identification of both inhibitor bound and molecular glue bound complexes.
Subject(s)
Biotin, Proteins, Proteins/metabolism, Affinity Chromatography, Mass Spectrometry/methods, Photoaffinity Labels/chemistry
ABSTRACT
Unbiased chemical biology strategies for direct readout of protein interactome remodelling by small molecules provide advantages over target-focused approaches, including the ability to detect previously unknown targets, and the inclusion of chemical off-compete controls leading to high-confidence identifications. We describe the BioTAC system, a small-molecule guided proximity labelling platform, to rapidly identify both direct and complexed small molecule binding proteins. The BioTAC system overcomes a limitation of current approaches, and supports identification of both inhibitor bound and molecular glue bound complexes.
ABSTRACT
Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters collected from three clinical centers were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best one for the system. The performance of the system was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUCs of the best algorithm (DenseNet121) on the internal test dataset and the two external test datasets were 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000), 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983), and 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989), respectively. In the comparison between the system and cataract specialists, the system showed better performance in detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
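A minimal sketch of the kind of transfer-learning pipeline implied here, fine-tuning DenseNet121 for the three-class task, is shown below. The directory layout, image size, and hyperparameters are assumptions rather than the study's actual settings.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained DenseNet121 to classify fundus
# images into non-cataract, mild cataract, and visually impaired cataract.
# The data path and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

N_CLASSES = 3  # non-cataract, mild cataract, visually impaired cataract

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("fundus/train", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, N_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:      # one pass shown; train for multiple epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```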
ABSTRACT
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls into question their true value. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of the AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Subject(s)
Artificial Intelligence, Ophthalmology, Humans, Ophthalmology/methods
ABSTRACT
Based on a previously reported 1,4-dihydropyridinebutyrolactone virtual screening hit, nine lactone ring-opened ester and seven amide analogs were prepared. The analogs were designed to provide interactions with residues at the entrance of the ZA channel of the testis-specific bromodomain to enhance the affinity and selectivity for the bromodomain and extra-terminal (BET) subfamily of bromodomains. Compound testing by AlphaScreen showed that neither the affinity nor the selectivity of the ester and lactam analogs was improved for BRD4-1 and the first bromodomain of the testis-specific bromodomain (BRDT-1). The esters retained affinity comparable to the parent compound, whereas the affinity for the amide analogs was reduced 10-fold. A representative benzyl ester analog was found to retain high selectivity for BET bromodomains as shown by a BROMOscan. X-ray analysis of the allyl ester analog in complex with BRD4-1 and BRDT-1 revealed that the ester side chain is located next to the ZA loop and solvent exposed.
Subject(s)
Nuclear Proteins, Transcription Factors, Humans, Male, Amides/pharmacology, Cell Cycle Proteins, Esters/pharmacology, Nuclear Proteins/chemistry, Nuclear Proteins/metabolism, Structure-Activity Relationship, Lactones/chemistry
ABSTRACT
Objective. Cardiac activity changes during sleep enable real-time sleep staging. We developed a deep neural network (DNN) to detect sleep stages using interbeat intervals (IBIs) extracted from electrocardiogram signals. Approach. Data from healthy and apnea subjects were used for training and validation; 2 additional datasets (healthy and sleep disorders subjects) were used for testing. R-peak detection was used to determine IBIs before resampling at 2 Hz; the resulting signal was segmented into 150 s windows (30 s shift). DNN output approximated the probabilities of a window belonging to light, deep, REM, or wake stages. Cohen's Kappa, accuracy, and sensitivity/specificity per stage were determined, and Kappa was optimized using thresholds on probability ratios for each stage versus light sleep. Main results. Mean (SD) Kappa and accuracy for 4 sleep stages were 0.44 (0.09) and 0.65 (0.07), respectively, in healthy subjects. For 3 sleep stages (light+deep, REM, and wake), Kappa and accuracy were 0.52 (0.12) and 0.76 (0.07), respectively. Algorithm performance on data from subjects with REM behavior disorder or periodic limb movement disorder was significantly worse, with Kappa of 0.24 (0.09) and 0.36 (0.12), respectively. Average processing time by an ARM microprocessor for a 300-sample window was 19.2 ms. Significance. IBIs can be obtained from a variety of cardiac signals, including electrocardiogram, photoplethysmography, and ballistocardiography. The DNN algorithm presented is 3 orders of magnitude smaller compared with state-of-the-art algorithms and was developed to perform real-time, IBI-based sleep staging. With high specificity and moderate sensitivity for deep and REM sleep, small footprint, and causal processing, this algorithm may be used across different platforms to perform real-time sleep staging and direct intervention strategies. Novelty & Significance. This article describes the development and testing of a deep neural network-based algorithm to detect sleep stages using interbeat intervals, which can be obtained from a variety of cardiac signals including photoplethysmography, electrocardiogram, and ballistocardiography. Based on the interbeat intervals identified in electrocardiogram signals, the algorithm architecture included a group of convolution layers and a group of long short-term memory layers. With its small footprint, fast processing time, high specificity and good sensitivity for deep and REM sleep, this algorithm may provide a good option for real-time sleep staging to direct interventions.
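The IBI preprocessing described above (R-peaks to IBIs, 2 Hz resampling, 150 s windows with a 30 s shift) can be sketched as follows; the interpolation choice and the toy data are assumptions, and the CNN+LSTM model itself is omitted.

```python
# Hedged sketch of the IBI preprocessing: convert R-peak times to an interbeat-
# interval series, resample it at 2 Hz, and cut 150 s windows with a 30 s shift
# (so each window is 300 samples). Linear interpolation is an assumed choice.
import numpy as np

def ibi_windows(r_peak_times_s: np.ndarray, fs_out: float = 2.0,
                win_s: float = 150.0, shift_s: float = 30.0) -> np.ndarray:
    """Return an array of shape (n_windows, win_s * fs_out) of resampled IBIs."""
    ibis = np.diff(r_peak_times_s)                      # IBI (s) assigned to the later beat
    beat_times = r_peak_times_s[1:]
    t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs_out)
    ibi_2hz = np.interp(t_uniform, beat_times, ibis)    # uniform 2 Hz IBI signal

    win = int(win_s * fs_out)                           # 300 samples
    shift = int(shift_s * fs_out)                       # 60 samples
    starts = range(0, len(ibi_2hz) - win + 1, shift)
    return np.stack([ibi_2hz[s:s + win] for s in starts])

# Toy example: ~10 minutes of beats at roughly 60 bpm with some variability
rng = np.random.default_rng(1)
peaks = np.cumsum(rng.normal(1.0, 0.05, size=600))
windows = ibi_windows(peaks)   # each row would feed the CNN+LSTM sleep-stage model
```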
Subject(s)
Photoplethysmography, Sleep Stages, Algorithms, Humans, Neural Networks (Computer), Sleep
ABSTRACT
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
ABSTRACT
Inhibitors of Bromodomain and Extra Terminal (BET) proteins are investigated for various therapeutic indications, but selectivity for BRD2, BRD3, BRD4, BRDT and their respective tandem bromodomains BD1 and BD2 remains suboptimal. Here we report selectivity-focused structural modifications of previously reported dihydropyridine lactam 6 by changing linker length and linker type of the lactam side chain in efforts to engage the unique arginine 54 (R54) residue in BRDT-BD1 to achieve BRDT-selective affinity. We found that the analogs were highly selective for BET bromodomains, and generally more selective for the first (BD1) and second (BD2) bromodomains of BRD4 rather than for those of BRDT. Based on AlphaScreen and BromoScan results and on crystallographic data for analog 10j, we concluded that the lack of selectivity for BRDT is most likely due to the high flexibility of the protein and the unfavorable trajectory of the lactam side chain that do not allow interaction with R54. A 15-fold preference for BD2 over BD1 in BRDT was observed for analogs 10h and 10m, which was supported by protein-based 19F NMR experiments with a BRDT tandem bromodomain protein construct.
Subject(s)
Dihydropyridines/pharmacology, Lactams/pharmacology, Nuclear Proteins/antagonists & inhibitors, Dihydropyridines/chemistry, Dose-Response Relationship (Drug), Humans, Lactams/chemistry, Molecular Structure, Nuclear Proteins/metabolism, Structure-Activity Relationship
ABSTRACT
The performance of deep learning in disease detection from high-quality clinical images is comparable to, and sometimes even greater than, that of human doctors. In low-quality images, however, deep learning performs poorly. Whether human doctors also perform poorly on low-quality images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit-lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS), which was trained on only high-quality images. The performance of a system trained on both high- and low-quality images was superior to that of the PEDLS but inferior to that of a senior cornea specialist. This study highlights that cornea specialists handle low-quality images better than a system trained on high-quality images alone. Adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap.
ABSTRACT
Keratitis is the main cause of corneal blindness worldwide. Most vision loss caused by keratitis can be avoided through early detection and treatment. The diagnosis of keratitis often requires skilled ophthalmologists; however, the world is short of ophthalmologists, especially in resource-limited settings, making the early diagnosis of keratitis challenging. Here, we develop a deep learning system for the automated classification of keratitis, other cornea abnormalities, and normal cornea based on 6,567 slit-lamp images. Our system exhibits remarkable performance on cornea images captured by different types of digital slit-lamp cameras and by a smartphone with the super macro mode (all AUCs > 0.96). Comparable sensitivity and specificity in keratitis detection were observed between the system and experienced cornea specialists. Our system has the potential to be applied to both digital slit-lamp cameras and smartphones to promote the early diagnosis and treatment of keratitis, preventing corneal blindness caused by keratitis.
Subject(s)
Blindness/prevention & control, Cornea/pathology, Deep Learning, Keratitis/diagnosis, Early Diagnosis, Humans, Keratitis/therapy, Medically Underserved Area, Sensitivity and Specificity
ABSTRACT
Infantile cataract is the main cause of infant blindness worldwide. Although previous studies developed artificial intelligence (AI) diagnostic systems for detecting infantile cataracts in a single center, their generalizability is not ideal because of the complicated noise and heterogeneity of multicenter slit-lamp images, which impedes the application of these AI systems in real-world clinics. In this study, we developed two lens partition strategies (LPSs) based on deep learning Faster R-CNN and the Hough transform to improve the generalizability of infantile cataract detection. A total of 1,643 multicenter slit-lamp images collected from five ophthalmic clinics were used to evaluate the performance of the LPSs. The generalizability of Faster R-CNN for screening and grading was explored by sequentially adding multicenter images to the training dataset. For the partition of normal and abnormal lenses, Faster R-CNN achieved average intersection over union values of 0.9419 and 0.9107, respectively, and its average precisions were both > 95%. Compared with the Hough transform, the accuracy, specificity, and sensitivity of Faster R-CNN for opacity area grading were improved by 5.31%, 8.09%, and 3.29%, respectively. Similar improvements were observed for the grading of opacity density and location. The minimal training sample size required by Faster R-CNN was determined on multicenter slit-lamp images. Furthermore, Faster R-CNN achieved real-time lens partition, taking only 0.25 s for a single image, whereas the Hough transform needed 34.46 s. Finally, using Grad-CAM and t-SNE techniques, the most relevant lesion regions were highlighted in heatmaps and the high-level features were shown to be discriminative. This study provides an effective LPS for improving the generalizability of infantile cataract detection. This strategy has the potential to be applied to multicenter slit-lamp images.
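For reference, the intersection-over-union metric reported above for lens partition can be computed as below for axis-aligned bounding boxes; the example boxes are illustrative.

```python
# Minimal sketch of the intersection-over-union (IoU) metric used to score the
# agreement between a predicted lens bounding box and the ground-truth box,
# with boxes given as (x1, y1, x2, y2) in pixels.
def iou(box_a, box_b) -> float:
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

# e.g. a predicted lens box vs. the annotated one
print(iou((120, 80, 520, 480), (130, 90, 530, 470)))   # ~0.90
```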
ABSTRACT
BACKGROUND: Lens opacity seriously affects the visual development of infants. Slit-illumination images play an irreplaceable role in lens opacity detection; however, these images exhibit varied phenotypes with severe heterogeneity and complexity, particularly among pediatric cataracts. Therefore, an effective computer-aided method is urgently needed to automatically diagnose heterogeneous lens opacity and to provide appropriate treatment recommendations in a timely manner. METHODS: We integrated three different deep learning networks and a cost-sensitive method into an ensemble learning architecture, and then proposed an effective model called CCNN-Ensemble [ensemble of cost-sensitive convolutional neural networks (CNNs)] for automatic lens opacity detection. A total of 470 slit-illumination images of pediatric cataracts were used for training and for comparison between the CCNN-Ensemble model and conventional methods. Finally, we used two external datasets (132 independent test images and 79 Internet-based images) to further evaluate the model's generalizability and effectiveness. RESULTS: Experimental results and comparative analyses demonstrated that the proposed method was superior to conventional approaches and provided clinically meaningful performance in terms of three grading indices of lens opacity: area (specificity and sensitivity: 92.00% and 92.31%), density (93.85% and 91.43%), and opacity location (95.25% and 89.29%). Furthermore, comparable performance on the independent test dataset and the Internet-based images verified the effectiveness and generalizability of the model. Finally, we developed and implemented a website-based automatic diagnosis software for pediatric cataract grading in ophthalmology clinics. CONCLUSIONS: The CCNN-Ensemble method demonstrates higher specificity and sensitivity than conventional methods on multi-source datasets. This study provides a practical strategy for heterogeneous lens opacity diagnosis and has the potential to be applied to the analysis of other medical images.
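The two ingredients named in the model's title, cost sensitivity and ensembling, can be sketched as a class-weighted loss plus soft voting over several CNNs. The backbones, class counts, and weighting scheme below are illustrative assumptions, not the CCNN-Ensemble implementation.

```python
# Hedged sketch of a cost-sensitive loss (class-weighted cross-entropy, weights
# inversely proportional to class frequency) and soft-voting over several CNNs.
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 2                                    # e.g. low vs. high opacity grade (assumption)
class_counts = torch.tensor([350.0, 120.0])      # hypothetical training counts per class
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)  # penalizes minority-class errors more

def build(backbone):
    """Attach a fresh classification head to a torchvision backbone."""
    net = backbone(weights=None)                 # load trained weights in practice
    in_f = net.fc.in_features if hasattr(net, "fc") else net.classifier.in_features
    head = nn.Linear(in_f, N_CLASSES)
    if hasattr(net, "fc"):
        net.fc = head
    else:
        net.classifier = head
    return net

ensemble = [build(models.resnet50), build(models.densenet121), build(models.resnet18)]

# Cost-sensitive loss in a single training step (toy batch)
loss = criterion(ensemble[0](torch.randn(2, 3, 224, 224)), torch.tensor([0, 1]))

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities across members (soft voting) and take argmax."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m.eval()(x), dim=1) for m in ensemble])
    return probs.mean(dim=0).argmax(dim=1)

preds = ensemble_predict(torch.randn(4, 3, 224, 224))
```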