1.
Retina; 44(3): 527-536, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-37972986

ABSTRACT

PURPOSE: To investigate fundus tessellation density (TD) and its association with axial length (AL) elongation and spherical equivalent (SE) progression in children. METHODS: This school-based prospective cohort study enrolled 1,997 children aged 7 to 9 years from 11 elementary schools in Mojiang, China. Cycloplegic refraction and biometry were performed at the baseline and 4-year visits. Baseline fundus photographs were taken, and TD, defined as the percentage of exposed choroidal vessel area in the photographs, was quantified using an artificial intelligence-assisted semiautomatic labeling approach. After exclusion of 330 participants because of loss to follow-up or ineligible fundus photographs, logistic models were used to assess the association of TD with rapid AL elongation (>0.36 mm/year) and rapid SE progression (>1.00 D/year). RESULTS: Tessellation was present in 477 of 1,667 participants (28.6%), and mean TD was 0.008 ± 0.019. Mean AL elongation and SE progression over 4 years were 0.90 ± 0.58 mm and -1.09 ± 1.25 D, respectively. Higher TD was associated with longer baseline AL (β, 0.030; 95% confidence interval: 0.015-0.046; P < 0.001) and more myopic baseline SE (β, -0.017; 95% confidence interval: -0.032 to -0.002; P = 0.029). Higher TD was also associated with rapid AL elongation (odds ratio, 1.128; 95% confidence interval: 1.055-1.207; P < 0.001) and rapid SE progression (odds ratio, 1.123; 95% confidence interval: 1.020-1.237; P = 0.018). CONCLUSION: Tessellation density is a potential indicator of rapid AL elongation and refractive progression in children, and TD measurement could become a routine means of monitoring AL elongation.
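The logistic-model analysis described here is straightforward to reproduce in outline. Below is a minimal scikit-learn sketch on synthetic data; the covariates, effect sizes, and data values are illustrative assumptions, not the Mojiang cohort.

```python
# Sketch: logistic regression associating tessellation density (TD) with
# rapid axial-length (AL) elongation. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1667
td_pct = rng.gamma(shape=0.5, scale=1.6, size=n)      # TD as a percentage
baseline_al = rng.normal(23.0, 0.8, size=n)           # baseline AL (mm)
# Synthetic outcome: rapid elongation (>0.36 mm/year) more likely at high TD.
logit = -2.0 + 0.4 * td_pct + 0.3 * (baseline_al - 23.0)
rapid = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([td_pct, baseline_al])
model = LogisticRegression().fit(X, rapid)
odds_ratios = np.exp(model.coef_[0])                  # OR per unit of each input
print(dict(zip(["TD(%)", "baseline_AL"], odds_ratios.round(3))))
```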


Subject(s)
Artificial Intelligence, Myopia, Child, Humans, Prospective Studies, Refraction, Ocular, Vision Tests, Myopia/diagnosis, Myopia/epidemiology, Axial Length, Eye
2.
Transl Vis Sci Technol; 13(2): 1, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38300623

ABSTRACT

Purpose: Artificial intelligence (AI)-assisted interpretation of ultra-widefield (UWF) fundus photographs can improve screening for fundus abnormalities. We therefore constructed an AI machine-learning approach and performed preliminary training and validation. Methods: We propose a two-stage deep learning-based framework to detect early retinal peripheral degeneration using UWF images from the Chinese Air Force cadets' medical selection between February 2016 and June 2022. We developed a detection model to localize the optic disc and macula, which are then used to find the peripheral areas, and six classification models for screening various retinal conditions. We also compared our proposed framework with two baseline models reported in the literature. The performance of the screening models was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals. Results: A total of 3911 UWF fundus images were used to develop the deep learning model, and the external validation included 760 UWF fundus images. The comparison study revealed that our proposed framework achieved performance competitive with existing baselines while demonstrating significantly faster inference. The classification models achieved an average AUC of 0.879 across six retinal conditions in the external validation dataset. Conclusions: Our two-stage deep learning-based framework improves machine learning efficiency for high-resolution fundus images with many interference factors by maximizing the retention of valid information while compressing the image file size. Translational Relevance: This machine learning model may become a new paradigm for developing AI-assisted diagnosis with UWF fundus photography.
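The screening models are evaluated by AUC with a 95% confidence interval; one common way to obtain such an interval is bootstrap resampling, sketched below with scikit-learn. The scores, labels, and resample count are illustrative assumptions.

```python
# Sketch: AUC with a bootstrap 95% confidence interval, the metric used to
# evaluate the screening models. Scores and labels here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=760)                 # external-set size in the paper
y_score = np.clip(y_true * 0.35 + rng.random(760) * 0.65, 0, 1)

aucs = []
for _ in range(2000):                                 # bootstrap resamples (assumed)
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:               # need both classes present
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC={roc_auc_score(y_true, y_score):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```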


Subject(s)
Deep Learning, Retinal Degeneration, Young Adult, Humans, Artificial Intelligence, Retina/diagnostic imaging, Fundus Oculi
3.
IEEE Trans Med Imaging; 43(1): 335-350, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549071

ABSTRACT

In the real world, medical datasets often exhibit a long-tailed distribution (i.e., a few classes occupy the majority of the data, while most classes have only a limited number of samples), which results in a challenging long-tailed learning scenario. Some recently published datasets in ophthalmology AI cover more than 40 kinds of retinal diseases with complex abnormalities and variable morbidity, yet more than 30 of these conditions are rarely seen in global patient cohorts. From a modeling perspective, most deep learning models trained on these datasets may lack the ability to generalize to rare diseases for which only a few samples are available for training. In addition, more than one disease may be present in a single retina, resulting in a challenging label co-occurrence scenario (also known as multi-label classification) that can cause problems when re-sampling strategies are applied during training. To address these two major challenges, this paper presents a novel method that enables a deep neural network to learn from a long-tailed fundus database for the recognition of various retinal diseases. Firstly, we exploit prior knowledge in ophthalmology to improve the feature representation using hierarchy-aware pre-training. Secondly, we adopt an instance-wise class-balanced sampling strategy to address the label co-occurrence issue in the long-tailed medical dataset scenario. Thirdly, we introduce a novel hybrid knowledge distillation to train a less biased representation and classifier. We conducted extensive experiments on four databases, including two public datasets and two in-house databases with more than one million fundus images. The experimental results demonstrate the superiority of our proposed methods, with recognition accuracy outperforming state-of-the-art competitors, especially for rare diseases.
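One plausible reading of the instance-wise class-balanced sampling step is sketched in PyTorch below: weight each image by the mean inverse frequency of its positive labels, so images carrying rare diseases are drawn more often. The exact weighting rule is an assumption, not the paper's formula.

```python
# Sketch: instance-wise class-balanced sampling for multi-label data.
# Each image's sampling weight is the mean inverse frequency of its positive
# labels; the precise rule is assumed and may differ from the paper's.
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

labels = torch.tensor([[1, 0, 0],       # common disease only
                       [1, 1, 0],       # common + mid-frequency disease
                       [0, 0, 1],       # rare disease
                       [1, 0, 0]], dtype=torch.float)
class_freq = labels.sum(dim=0) / len(labels)          # per-class frequency
inv_freq = 1.0 / class_freq.clamp(min=1e-6)
# Instance weight = mean inverse frequency over that instance's positive labels.
weights = (labels * inv_freq).sum(dim=1) / labels.sum(dim=1).clamp(min=1)

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(torch.arange(len(labels)), labels),
                    batch_size=2, sampler=sampler)
for idx, _ in loader:
    print(idx.tolist())                 # rare-disease image 2 appears more often
```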


Subject(s)
Rare Diseases, Retinal Diseases, Humans, Retinal Diseases/diagnostic imaging, Retina/diagnostic imaging, Databases, Factual, Fundus Oculi
4.
iScience; 27(1): 108516, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38269093

ABSTRACT

Retinopathy of prematurity (ROP) is currently one of the leading causes of infant blindness worldwide. Recently, significant progress has been made in deep learning-based computer-aided diagnostic methods. However, deep learning often requires a large amount of annotated data for model optimization, which in clinical scenarios demands many hours of effort from experienced doctors, whereas large numbers of unlabeled images are relatively easy to obtain. In this paper, we propose a new semi-supervised learning framework to reduce annotation costs for automatic ROP staging. We design two consistency regularization strategies, a prediction consistency loss and a semantic structure consistency loss, which help the model mine useful discriminative information from unlabeled data, thus improving the generalization performance of the classification model. Extensive experiments on a real clinical dataset show that the proposed method can greatly reduce labeling requirements in clinical scenarios while achieving good classification performance.
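A minimal PyTorch sketch of a prediction consistency loss of the kind described: the model is penalized when two augmented views of the same unlabeled image receive different class probabilities. The toy model, the augmentation, and the MSE form of the loss are illustrative assumptions.

```python
# Sketch: prediction consistency regularization on unlabeled images, in the
# spirit of the framework above. Model, augmentation, and loss form are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))  # 5 ROP stages
unlabeled = torch.rand(8, 3, 64, 64)                            # dummy batch

def augment(x):
    # Weak augmentation stand-in: horizontal flip plus light noise.
    return torch.flip(x, dims=[3]) + 0.01 * torch.randn_like(x)

p1 = F.softmax(model(unlabeled), dim=1)
p2 = F.softmax(model(augment(unlabeled)), dim=1)
consistency_loss = F.mse_loss(p1, p2)    # penalize prediction disagreement
print(float(consistency_loss))
```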

5.
J Cataract Refract Surg; 50(4): 319-327, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-37938020

ABSTRACT

PURPOSE: To investigate how the vault and other biometric variations affect the postoperative refractive error of implantable collamer lenses (ICLs) by integrating artificial intelligence with a modified vergence formula. SETTING: Eye and ENT Hospital of Fudan University, Shanghai, China. DESIGN: Artificial intelligence and big data-based prediction model. METHODS: 2845 eyes that underwent uneventful spherical ICL or toric ICL implantation and had manifest refraction results 1 month postoperatively were included; one eye of each patient was randomly selected. Random forest models were used to predict the postoperative sphere, cylinder, and spherical equivalent from variable ocular parameters. The influence of the predicted vault and the modified Holladay formula on the prediction of postoperative refractive error was analyzed. Subgroup analyses of ideal vault (0.25 to 0.75 mm) and extreme vault (<0.25 mm or >0.75 mm) were performed. RESULTS: In the test sets for both ICL types, all random forest-based models significantly improved the accuracy of predicting the postoperative sphere compared with the Online Calculation & Ordering System calculator (P < .001). For ideal vault, the combination with the modified Holladay formula exhibited the highest accuracy for spherical ICLs (R = 0.606). For extreme vault, the combination with the predicted vault improved performance for spherical ICLs (R = 0.864). For toric ICLs, combining the predicted vault and the modified Holladay formula was optimal across all vault ranges (ideal vault: R = 0.516, extreme vault: R = 0.334). CONCLUSIONS: The random forest-based calculator, which considers the vault and variable ocular parameters, outperformed the existing calculator on the study datasets. Choosing an appropriate lens size to keep the vault within the ideal range helped avoid refractive surprises.
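A minimal scikit-learn sketch of the random-forest setup described: predict the postoperative sphere from ocular parameters including the vault. The feature list and synthetic data are assumptions, not the study's variables.

```python
# Sketch: random-forest regression of postoperative sphere from ocular
# parameters including the vault. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2845
X = np.column_stack([
    rng.normal(0.55, 0.2, n),    # vault (mm)
    rng.normal(12.0, 0.4, n),    # white-to-white (mm)
    rng.normal(3.1, 0.3, n),     # anterior chamber depth (mm)
    rng.normal(-8.0, 2.5, n),    # preoperative sphere (D)
])
y = 0.05 * X[:, 3] + 0.3 * (X[:, 0] - 0.55) + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R on test set: {np.corrcoef(rf.predict(X_te), y_te)[0, 1]:.3f}")
```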


Subject(s)
Phakic Intraocular Lenses, Refractive Errors, Humans, Visual Acuity, Artificial Intelligence, China, Refractive Errors/diagnosis, Machine Learning, Retrospective Studies
6.
Ophthalmol Retina; 8(7): 666-677, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38280426

ABSTRACT

OBJECTIVE: We aimed to develop a deep learning system capable of identifying subjects with cognitive impairment quickly and easily based on multimodal ocular images. DESIGN: Cross-sectional study. SUBJECTS: Participants of the Beijing Eye Study 2011 and patients attending the Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. METHODS: We trained and validated a deep learning algorithm to assess cognitive impairment using retrospectively collected data from the Beijing Eye Study 2011. Cognitive impairment was defined as a Mini-Mental State Examination score < 24. Based on fundus photographs and OCT images, we developed 5 models using the following sets of images: macula-centered fundus photographs, optic disc-centered fundus photographs, fundus photographs of both fields, OCT images, and fundus photographs of both fields with OCT (multimodal). The performance of the models was evaluated and compared in an external validation data set collected from patients attending the Beijing Tongren Eye Center and Beijing Tongren Hospital Physical Examination Center. MAIN OUTCOME MEASURES: Area under the curve (AUC). RESULTS: A total of 9424 retinal photographs and 4712 OCT images were used to develop the model. The external validation sets from each center included 1180 fundus photographs and 590 OCT images. Model comparison revealed that the multimodal model performed best, achieving an AUC of 0.820 in the internal validation set, 0.786 in external validation set 1, and 0.784 in external validation set 2. We evaluated the performance of the multimodal model across sexes and age groups and found no significant differences. The heatmap analysis showed that the multimodal model identified participants with cognitive impairment using signals around the optic disc in fundus photographs and the retina and choroid around the macular and optic disc regions in OCT images. CONCLUSIONS: Fundus photographs and OCT can provide valuable information on cognitive function, and multimodal models provide richer information than single-mode models. Deep learning algorithms based on multimodal retinal images may be capable of screening for cognitive impairment, a technique with potential value for broader implementation in community-based screening or clinic settings. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
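A minimal PyTorch sketch of a multimodal design in the spirit of the one described: separate branches encode a fundus photograph and an OCT image, and their features are fused by concatenation before a shared classification head. The tiny backbones and the fusion-by-concatenation choice are assumptions, not the paper's architecture.

```python
# Sketch: late-fusion multimodal classifier over fundus + OCT inputs.
# Backbones are trivial stand-ins for the paper's networks.
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.fundus_branch = nn.Sequential(   # stand-in for a CNN backbone
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.oct_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_classes)  # fuse by concatenation

    def forward(self, fundus, oct_img):
        feats = torch.cat([self.fundus_branch(fundus),
                           self.oct_branch(oct_img)], dim=1)
        return self.head(feats)

net = MultimodalNet()
logits = net(torch.rand(4, 3, 224, 224), torch.rand(4, 1, 224, 224))
print(logits.shape)  # (4, 2): cognitive impairment vs. normal
```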


Subject(s)
Cognitive Dysfunction, Deep Learning, Fundus Oculi, Tomography, Optical Coherence, Humans, Cross-Sectional Studies, Female, Male, Tomography, Optical Coherence/methods, Retrospective Studies, Aged, Cognitive Dysfunction/diagnosis, Middle Aged, Multimodal Imaging, ROC Curve, Optic Disk/diagnostic imaging, Optic Disk/pathology, Mass Screening/methods
7.
Br J Ophthalmol; 107(2): 201-206, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34489338

ABSTRACT

AIMS: To predict the vault and the EVO implantable collamer lens (ICL) size using artificial intelligence (AI) and big data analytics. METHODS: 6297 eyes of 3536 patients implanted with an ICL were included. Vault values were measured with an anterior segment analyzer (Pentacam HR). Permutation importance and impurity-based feature importance were used to investigate the relationship between the vault and the input parameters. Regression models and classification models were applied to predict the vault. The ICL size was then set as the prediction target, with the vault and the other input features serving as the new inputs for ICL size prediction. Data were collected from 2015 to 2020. RESULTS: In predicting the vault, Random Forest achieved the best results among the regression models (R2=0.315), followed by Gradient Boosting (R2=0.291) and XGBoost (R2=0.285). The maximum classification accuracy was 0.828 with Random Forest, with a mean area under the curve (AUC) of 0.765. Random Forest predicted the ICL size with an accuracy of 82.2%, while Gradient Boosting and XGBoost achieved comparable accuracies of 81.5% and 81.8%, respectively. CONCLUSIONS: Random Forest, Gradient Boosting, and XGBoost models are applicable to vault prediction and ICL sizing. AI may assist ophthalmologists in improving ICL surgery safety, designing surgical strategies, and predicting clinical outcomes.
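A minimal scikit-learn sketch of the permutation-importance analysis described, on synthetic stand-ins for the Pentacam-derived parameters; the feature names and data are assumptions.

```python
# Sketch: permutation importance for vault prediction with a random forest.
# Feature names (ACD, WTW, ICL_size, CLR) and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
features = ["ACD", "WTW", "ICL_size", "CLR"]
X = rng.normal(size=(n, len(features)))
vault = 0.5 * X[:, 2] - 0.3 * X[:, 3] + rng.normal(0, 0.3, n)  # synthetic

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, vault)
result = permutation_importance(rf, X, vault, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")          # drop in R2 when the feature is shuffled
```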


Subject(s)
Myopia, Phakic Intraocular Lenses, Humans, Lens Implantation, Intraocular/methods, Artificial Intelligence, Myopia/diagnosis, Myopia/surgery, Intelligence, Retrospective Studies
8.
Comput Biol Med; 158: 106714, 2023 May.
Article in English | MEDLINE | ID: mdl-37003068

ABSTRACT

High-quality manual labeling of ambiguous and complex-shaped targets with binary masks can be challenging. The limited expressiveness of binary masks is particularly prominent in segmentation, especially in medical scenarios where blurring is prevalent, and it makes reaching a consensus among clinicians more difficult in multi-annotator labeling settings. These inconsistent or uncertain areas are related to the lesions' structure and may contain anatomical information conducive to an accurate diagnosis. However, recent research has focused on uncertainty in model training and data labeling; none of it has investigated the influence of the ambiguous nature of the lesion itself. Inspired by image matting, this paper introduces a soft mask, called an alpha matte, to medical scenes. It can describe lesions in more detail than a binary mask, and it can also be used as a new uncertainty quantification method to represent uncertain areas, filling the gap in research on the uncertainty of lesion structure. In this work, we introduce a multi-task framework to generate binary masks and alpha mattes, which outperforms all the state-of-the-art matting algorithms compared against. We propose an uncertainty map that imitates the trimap in matting methods, highlighting fuzzy areas and improving matting performance. We have created three medical datasets with alpha mattes to address the lack of available matting datasets in medical fields and comprehensively evaluated the effectiveness of our proposed method on them. Furthermore, experiments demonstrate that the alpha matte is a more effective labeling method than the binary mask from both qualitative and quantitative perspectives.
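A small sketch of one way an uncertainty map can imitate a matting trimap, as described: pixels whose predicted alpha is near 0 or 1 are treated as confident background or foreground, and intermediate alphas form the unknown band. The 0.1/0.9 thresholds are assumptions.

```python
# Sketch: deriving a trimap-like uncertainty map from a predicted alpha matte.
# Thresholds are assumed; the paper's construction may differ.
import numpy as np

alpha = np.random.rand(64, 64)           # stand-in for a predicted alpha matte

uncertainty_map = np.full(alpha.shape, 128, dtype=np.uint8)  # unknown band
uncertainty_map[alpha < 0.1] = 0         # confident background
uncertainty_map[alpha > 0.9] = 255       # confident foreground

fuzzy_fraction = np.mean(uncertainty_map == 128)
print(f"fuzzy area: {fuzzy_fraction:.1%} of pixels")
```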


Subject(s)
Algorithms, Image Processing, Computer-Assisted, Humans, Uncertainty, Image Processing, Computer-Assisted/methods
9.
Eye (Lond); 37(10): 2026-2032, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36302974

ABSTRACT

PURPOSE: Our aim was to establish an AI model for distinguishing color fundus photographs (CFP) of retinal vein occlusion (RVO) patients from those of normal individuals. METHODS: The training dataset included 2013 CFP from fellow eyes of RVO patients and 8536 age- and gender-matched normal CFP. Model performance was assessed in two independent testing datasets. We evaluated the performance of the AI model using the area under the receiver operating characteristic curve (AUC), accuracy, precision, specificity, sensitivity, and confusion matrices. We further explored the probable clinical relevance of the AI by extracting and comparing features of the retinal images. RESULTS: In the training dataset, our model achieved an average AUC of 0.9866 (95% CI: 0.9805-0.9918), accuracy of 0.9534 (95% CI: 0.9421-0.9639), precision of 0.9123 (95% CI: 0.8784-0.9453), specificity of 0.9810 (95% CI: 0.9729-0.9884), and sensitivity of 0.8367 (95% CI: 0.7953-0.8756) for identifying fundus images of RVO patients. In independent external dataset 1, the model achieved an AUC of 0.8102 (95% CI: 0.7979-0.8226), accuracy of 0.7752 (95% CI: 0.7633-0.7875), precision of 0.7041 (95% CI: 0.6873-0.7211), specificity of 0.6499 (95% CI: 0.6305-0.6679), and sensitivity of 0.9124 (95% CI: 0.9004-0.9241) for the RVO group. There were significant differences between the two groups in the training dataset in retinal arteriovenous ratio, cup-to-disc ratio, and optic disc tilt angle (p = 0.001, p = 0.0001, and p = 0.0001, respectively). CONCLUSION: We trained an AI model that classifies color fundus photographs of RVO patients with stable performance in both internal and external datasets. This may be of great importance for risk prediction in patients with retinal vein occlusion.
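All of the reported metrics derive from the binary confusion matrix. Below is a minimal scikit-learn sketch on synthetic labels and scores showing how accuracy, precision, specificity, and sensitivity are computed; the data are illustrative.

```python
# Sketch: the evaluation metrics reported above, computed from a binary
# confusion matrix. Labels and scores here are synthetic.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(0.4 * y_true + rng.random(500) * 0.6, 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"AUC         {roc_auc_score(y_true, y_score):.4f}")
print(f"accuracy    {(tp + tn) / (tp + tn + fp + fn):.4f}")
print(f"precision   {tp / (tp + fp):.4f}")
print(f"specificity {tn / (tn + fp):.4f}")
print(f"sensitivity {tp / (tp + fn):.4f}")
```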


Subject(s)
Optic Disk, Retinal Vein Occlusion, Humans, Retinal Vein Occlusion/diagnosis, Artificial Intelligence, Optic Disk/diagnostic imaging, Fundus Oculi, Diagnostic Techniques, Ophthalmological
10.
IEEE Trans Med Imaging; 41(6): 1533-1546, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34995185

ABSTRACT

Deep neural networks are known to be data-driven, and label noise can have a marked impact on model performance. Recent studies have shown that models for classic image recognition can remain robust even under high noise rates. Learning from datasets with label noise is more challenging in medical applications, since medical imaging datasets tend to have instance-dependent noise (IDN) and suffer from high observer variability. In this paper, we systematically discuss the two common types of label noise in medical images - disagreement label noise arising from inconsistent expert opinions and single-target label noise arising from biased aggregation of individual annotations. We then propose an uncertainty estimation-based framework to handle these two types of label noise in medical image classification. We design a dual uncertainty estimation approach that measures disagreement label noise and single-target label noise via improved Direct Uncertainty Prediction and Monte-Carlo-Dropout, respectively. A boosting-based curriculum training procedure is then introduced for robust learning. We demonstrate the effectiveness of our method with extensive experiments on three diseases with synthesized and real-world label noise: skin lesions, prostate cancer, and retinal diseases. We also release a large re-engineered database consisting of annotations from more than ten ophthalmologists, with an unbiased gold-standard dataset for evaluation and benchmarking. The dataset is available at https://mmai.group/peoples/julie/.
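A minimal PyTorch sketch of Monte-Carlo-Dropout, one of the two uncertainty estimators named above: dropout is kept active at inference, and repeated stochastic forward passes give a predictive mean plus a dispersion-based uncertainty. The architecture and the 30-pass count are assumptions.

```python
# Sketch: Monte-Carlo-Dropout uncertainty estimation. Keeping dropout ON at
# inference and averaging stochastic passes yields mean and variance.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 3))
model.train()                       # keep dropout active at inference (MC-Dropout)

x = torch.rand(8, 16)               # dummy batch of image features
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=1) for _ in range(30)])

mean_prob = samples.mean(dim=0)                 # predictive distribution
uncertainty = samples.var(dim=0).sum(dim=1)     # simple per-sample dispersion
print(mean_prob.shape, uncertainty.shape)
```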


Subject(s)
Diagnostic Imaging, Neural Networks, Computer, Noise, Radiography, Uncertainty
11.
Comput Biol Med; 150: 106153, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36228464

ABSTRACT

Usually, lesions are not isolated but are associated with the surrounding tissues. For example, the growth of a tumour can depend on or infiltrate into the surrounding tissues. Due to the pathological nature of lesions, it is challenging to distinguish their boundaries in medical imaging. However, these uncertain regions may contain diagnostic information, so the simple binarization of lesions by traditional binary segmentation can result in the loss of diagnostic information. In this work, we introduce image matting into 3D scenes and use the alpha matte, i.e., a soft mask, to describe lesions in 3D medical images. Traditionally, the soft mask has acted as a training trick to compensate for easily mislabelled or under-labelled ambiguous regions. In contrast, 3D matting uses soft segmentation to characterize the uncertain regions more finely, which means it retains more structural information for subsequent diagnosis and treatment. Research on 3D image matting is currently limited. To address this, we conduct a comprehensive study of 3D matting, covering both traditional and deep-learning-based methods. We adapt four state-of-the-art 2D image matting algorithms to 3D scenes and further customize the methods for CT images to calibrate the alpha matte with the radiodensity. Moreover, we propose the first end-to-end deep 3D matting network and implement a solid 3D medical image matting benchmark; efficient counterparts are also proposed to achieve a good performance-computation balance. Furthermore, no high-quality annotated dataset related to 3D matting exists, which slows the development of data-driven deep-learning-based methods. To address this, we construct the first 3D medical matting dataset, whose validity was verified through clinicians' assessments and downstream experiments. The dataset and code will be released to encourage further research.
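One plausible reading of calibrating the alpha matte with radiodensity is sketched below: rescale voxels inside the coarse lesion label by where their HU values fall within an assumed lesion HU window. The window and blending rule are assumptions, not the paper's procedure.

```python
# Sketch: calibrating a 3D alpha matte with CT radiodensity (HU values).
# The HU window and the blending rule are assumed, not the paper's method.
import numpy as np

ct = np.random.normal(0, 200, size=(32, 64, 64))      # synthetic HU volume
binary_mask = np.zeros_like(ct, dtype=bool)
binary_mask[10:22, 20:44, 20:44] = True               # coarse lesion label

hu_lo, hu_hi = -50.0, 150.0                           # assumed lesion HU window
# Map HU into [0, 1] within the window; values outside clip to 0 or 1.
alpha_from_hu = np.clip((ct - hu_lo) / (hu_hi - hu_lo), 0.0, 1.0)
# Soft mask: HU-calibrated alpha inside the labeled region, zero elsewhere.
alpha = np.where(binary_mask, alpha_from_hu, 0.0)
print(alpha.min(), alpha.max())
```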


Subject(s)
Benchmarking, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Imaging, Three-Dimensional/methods, Algorithms
12.
Front Med (Lausanne); 9: 920716, 2022.
Article in English | MEDLINE | ID: mdl-35755054

ABSTRACT

Background: Thyroid-associated ophthalmopathy (TAO) is one of the most common orbital diseases; it seriously threatens visual function, significantly affects patients' appearance, and can render them unable to work. This study established an intelligent diagnostic system for TAO based on facial images. Methods: Patient images and data were obtained from the medical records of patients with TAO who visited Shanghai Changzheng Hospital from 2013 to 2018. Eyelid retraction, ocular dyskinesia, conjunctival congestion, and other signs were annotated on the images. Patients were classified according to the type, stage, and grade of TAO based on the diagnostic criteria. The diagnostic system consisted of multiple task-specific models. Results: The intelligent diagnostic system accurately diagnosed TAO in three stages. The built-in models pre-processed the facial images and diagnosed multiple TAO signs, with average areas under the receiver operating characteristic curve exceeding 0.85 (F1 score > 0.80). Conclusion: The intelligent diagnostic system introduced in this study accurately identified several common signs of TAO.
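A minimal PyTorch sketch of a staged, multi-model pipeline of the general shape described: preprocessing, one task-specific model per sign, and a downstream head over the sign scores. Every module here is a trivial stand-in, and the stage boundaries are assumptions.

```python
# Sketch: a three-stage pipeline of task-specific models. All modules are
# trivial stand-ins; the real system uses trained image models per TAO sign.
import torch
import torch.nn as nn

preprocess = nn.AdaptiveAvgPool2d((64, 64))   # stage 1: normalize the face crop
sign_models = {                               # stage 2: one model per sign
    s: nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
    for s in ["eyelid_retraction", "conjunctival_congestion"]
}
grader = nn.Linear(len(sign_models), 3)       # stage 3: type/stage/grade head

face = torch.rand(1, 3, 256, 256)             # dummy facial image
x = preprocess(face)
sign_scores = torch.cat([torch.sigmoid(m(x)) for m in sign_models.values()], dim=1)
grade_logits = grader(sign_scores)
print({s: round(float(v), 3) for s, v in zip(sign_models, sign_scores[0])})
print(grade_logits.shape)                     # (1, 3)
```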

13.
IEEE J Biomed Health Inform; 25(10): 3709-3720, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33465032

ABSTRACT

The need for comprehensive and automated screening methods for retinal image classification has long been recognized. Images annotated by well-qualified doctors are very expensive, and only limited amounts of data are available for various retinal diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). Some studies show that retinal diseases such as DR and AMD share common features, such as haemorrhages and exudation, yet most classification algorithms train disease models independently when only a single label per image is available. Inspired by multi-task learning, in which additional supervisory signals from various sources help train a robust model, we propose a method called synergic adversarial label learning (SALL), which leverages relevant retinal disease labels in both semantic and feature space as additional signals and trains the model collaboratively using knowledge distillation. Our experiments on DR and AMD fundus image classification tasks demonstrate that the proposed method can significantly improve model accuracy for grading the diseases, by 5.91% and 3.69%, respectively. In addition, we conduct further experiments showing the effectiveness of SALL in terms of reliability and interpretability in the context of medical imaging applications.
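A minimal PyTorch sketch of the knowledge-distillation ingredient of such collaborative training: a KL term between temperature-softened teacher and student predictions combined with the usual cross-entropy. The temperature and weighting are assumptions, and this is not the full SALL objective.

```python
# Sketch: softened-logits knowledge distillation, the collaborative-training
# ingredient named above. T and alpha are assumed hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 5, requires_grad=True)   # e.g., 5 DR grades
teacher = torch.randn(8, 5)                        # e.g., a related-label model
labels = torch.randint(0, 5, (8,))
print(float(distillation_loss(student, teacher, labels)))
```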


Subject(s)
Diabetic Retinopathy, Retinal Diseases, Algorithms, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans, Reproducibility of Results
14.
IEEE Trans Med Imaging; 40(10): 2911-2925, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33531297

ABSTRACT

Recently, ultra-widefield (UWF) 200° fundus imaging with Optos cameras has gradually been introduced because it captures more of the fundus than regular 30°-60° fundus cameras. Compared with UWF fundus images, regular fundus images are available in large quantities with high-quality annotations. Due to the domain gap, models trained on regular fundus images perform poorly on UWF fundus images. Hence, given that annotating medical data is labor-intensive and time-consuming, in this paper we explore how to leverage regular fundus images to augment the limited UWF fundus data and annotations for more efficient training. We propose a modified cycle generative adversarial network (CycleGAN) model to bridge the gap between regular and UWF fundus images and to generate additional UWF fundus images for training. A consistency regularization term is added to the GAN loss to improve and regulate the quality of the generated data. Our method does not require images from the two domains to be paired, or even for the semantic labels to be the same, which greatly simplifies data collection. Furthermore, we show that our method is robust to noise and errors introduced by the generated unlabeled data when using the pseudo-labeling technique. We evaluated the effectiveness of our methods on several common fundus diseases and tasks, such as diabetic retinopathy (DR) classification, lesion detection, and tessellated fundus segmentation. The experimental results demonstrate that our proposed method simultaneously achieves superior generalizability of the learned representations and performance improvements in multiple tasks.
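A minimal PyTorch sketch of the cycle-consistency idea at the core of CycleGAN: an image translated regular-to-UWF and back should reconstruct itself under an L1 penalty. The toy generators stand in for the paper's networks, and the adversarial and proposed consistency-regularization terms are omitted.

```python
# Sketch: the cycle-consistency loss underlying the modified CycleGAN above.
# Single conv layers stand in for the real generators.
import torch
import torch.nn as nn

G_reg2uwf = nn.Conv2d(3, 3, 3, padding=1)   # regular -> UWF generator stand-in
G_uwf2reg = nn.Conv2d(3, 3, 3, padding=1)   # UWF -> regular generator stand-in

regular = torch.rand(4, 3, 128, 128)
fake_uwf = G_reg2uwf(regular)
reconstructed = G_uwf2reg(fake_uwf)
cycle_loss = nn.functional.l1_loss(reconstructed, regular)
# The paper adds a consistency regularization term to the GAN loss to
# regulate the quality of the generated UWF images used for training.
print(float(cycle_loss))
```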


Subject(s)
Diabetic Retinopathy, Diabetic Retinopathy/diagnostic imaging, Fluorescein Angiography, Fundus Oculi, Humans, Photography, Tomography, Optical Coherence
15.
Methods Mol Biol; 1932: 89-97, 2019.
Article in English | MEDLINE | ID: mdl-30701493

ABSTRACT

microRNAs (miRNAs) are short, noncoding regulatory RNAs derived from hairpin precursors (pre-miRNAs). In synergy with experimental approaches, computational approaches have become an invaluable tool for identifying miRNAs at the genome scale. We recently reported a method called miRLocator, which applies machine learning algorithms to accurately predict the localization of the most likely miRNAs within their pre-miRNAs. One major strength of miRLocator is that the machine learning-based miRNA prediction model can be automatically trained on a set of miRNAs of particular interest, with informative features extracted from miRNA-miRNA* duplexes and an optimized ratio between positive and negative samples. Here, we present a detailed protocol for miRLocator that performs the training and prediction processes using a Python implementation and a web interface. The source code, web interface, and manual documents are freely available to academic users at https://github.com/cma2015/miRLocator.
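A generic sketch of the training recipe the protocol describes: fit a classifier on duplex-derived features with a tuned negative-to-positive sample ratio. This is not miRLocator's actual interface; the features, ratio, and model choice are hypothetical.

```python
# Sketch: training a miRNA-localization classifier with a tuned negative:positive
# sample ratio. Features, ratio, and model are hypothetical, NOT miRLocator's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pos, ratio = 200, 3                     # assumed optimized negative:positive ratio
X_pos = rng.normal(0.5, 1.0, size=(n_pos, 6))          # duplex features (hypothetical)
X_neg = rng.normal(0.0, 1.0, size=(n_pos * ratio, 6))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * n_pos + [0] * n_pos * ratio)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])     # probability a candidate site is the miRNA
```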


Subject(s)
Computational Biology/methods, MicroRNAs/genetics, RNA Precursors/genetics, Computers, Genome/genetics, Machine Learning, Software