Results 1 - 20 of 105
1.
J Am Heart Assoc ; 13(2): e031257, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38226515

ABSTRACT

BACKGROUND: Identification of children with latent rheumatic heart disease (RHD) by echocardiography, before onset of symptoms, provides an opportunity to initiate secondary prophylaxis and prevent disease progression. Few published artificial intelligence studies have assessed the potential of machine learning to detect and analyze mitral regurgitation or to detect the presence of RHD on standard portable echocardiograms. METHODS AND RESULTS: We used 511 echocardiograms in children, focusing on color Doppler images of the mitral valve. Echocardiograms were independently reviewed by an expert adjudication panel. Among 511 cases, 229 were normal, and 282 had RHD. Our automated method included harmonization of echocardiograms to localize the left atrium during systole using convolutional neural networks and RHD detection using mitral regurgitation jet analysis and deep learning models with an attention mechanism. The pipeline identified the correct view with an average accuracy of 0.99 and the correct systolic frame with an average accuracy of 0.94 (apical) and 0.93 (parasternal long axis). It localized the left atrium with an average Dice coefficient of 0.88 (apical) and 0.9 (parasternal long axis). Maximum mitral regurgitation jet measurements were similar to expert manual measurements (P value=0.83), and a 9-feature mitral regurgitation analysis showed an area under the receiver operating characteristic curve of 0.93, precision of 0.83, recall of 0.92, and F1 score of 0.87. Our deep learning model showed an area under the receiver operating characteristic curve of 0.84, precision of 0.78, recall of 0.98, and F1 score of 0.87. CONCLUSIONS: Artificial intelligence has the potential to detect RHD as accurately as expert cardiologists and to improve with more data. These innovative approaches hold promise for scaling echocardiography screening for RHD.
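
For readers who want to compute the kinds of numbers reported above, the snippet below is a minimal sketch (not the authors' code) of the two families of metrics in this abstract: a Dice overlap for the left-atrium segmentation and AUC/precision/recall/F1 for RHD classification. All masks, labels, and scores are hypothetical placeholders.

```python
# Minimal sketch: segmentation overlap and classification metrics on toy data.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (e.g., left atrium)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + 1e-8)

# Hypothetical predicted and reference masks.
mask_pred = np.zeros((64, 64), dtype=bool); mask_pred[20:40, 20:40] = True
mask_true = np.zeros((64, 64), dtype=bool); mask_true[25:45, 22:42] = True
print("Dice:", dice_coefficient(mask_pred, mask_true))

# Hypothetical per-case RHD labels and model scores (e.g., from a 9-feature MR analysis).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.6, 0.95])
y_pred = (y_score >= 0.5).astype(int)

print("AUC:", roc_auc_score(y_true, y_score))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:", recall_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
```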


Subject(s)
Mitral Valve Insufficiency , Rheumatic Heart Disease , Child , Humans , Mitral Valve Insufficiency/diagnostic imaging , Rheumatic Heart Disease/diagnostic imaging , Artificial Intelligence , Sensitivity and Specificity , Echocardiography/methods
2.
medRxiv ; 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37961086

ABSTRACT

Background: Diffuse midline gliomas (DMG) are aggressive pediatric brain tumors that are diagnosed and monitored through MRI. We developed an automatic pipeline to segment subregions of DMG and select radiomic features that predict patient overall survival (OS). Methods: We acquired diagnostic and post-radiation therapy (RT) multisequence MRI (T1, T1ce, T2, T2 FLAIR) and manual segmentations from two centers of 53 (internal cohort) and 16 (external cohort) DMG patients. We pretrained a deep learning model on a public adult brain tumor dataset, and finetuned it to automatically segment tumor core (TC) and whole tumor (WT) volumes. PyRadiomics and sequential feature selection were used for feature extraction and selection based on the segmented volumes. Two machine learning models were trained on our internal cohort to predict patient 1-year survival from diagnosis. One model used only diagnostic tumor features and the other used both diagnostic and post-RT features. Results: For segmentation, Dice score (mean [median]±SD) was 0.91 (0.94)±0.12 and 0.74 (0.83)±0.32 for TC, and 0.88 (0.91)±0.07 and 0.86 (0.89)±0.06 for WT for internal and external cohorts, respectively. For OS prediction, accuracy was 77% and 81% at time of diagnosis, and 85% and 78% post-RT for internal and external cohorts, respectively. Homogeneous WT intensity in baseline T2 FLAIR and larger post-RT TC/WT volume ratio indicate shorter OS. Conclusions: Machine learning analysis of MRI radiomics has potential to accurately and non-invasively predict which pediatric patients with DMG will survive less than one year from the time of diagnosis to provide patient stratification and guide therapy.
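
As a rough illustration of the feature-selection step described above, here is a minimal sketch using scikit-learn's forward sequential feature selection followed by a classifier for 1-year survival. The random matrix standing in for PyRadiomics output, the labels, and the logistic-regression choice are assumptions, not the study's pipeline.

```python
# Minimal sketch: forward sequential feature selection + survival classifier on toy data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(53, 100))      # 53 patients x 100 radiomic features (placeholder)
y = rng.integers(0, 2, size=53)     # 1 = survived >= 1 year from diagnosis (placeholder)

clf = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward", cv=5)
model = make_pipeline(StandardScaler(), selector, clf)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("Cross-validated accuracy:", scores.mean())
```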

3.
Article in English | MEDLINE | ID: mdl-38082727

ABSTRACT

An accurate classification of upper limb movements using electroencephalogram (EEG) signals has gained significant importance in recent years due to the prevalence of brain-computer interfaces. The upper limbs are crucial because their skeletal segments combine to produce the range of motions that support our everyday tasks. Decoding EEG-based upper limb movements can be of great help to people with spinal cord injury (SCI) or other neuromuscular diseases such as amyotrophic lateral sclerosis (ALS), primary lateral sclerosis, and periodic paralysis, which can cause a loss of sensory and motor function and make a person reliant on others for care in day-to-day activities. Upper limb movement activities, whether executed or imagined, can be detected and classified using an EEG-based brain-computer interface (BCI). Toward this goal, we focus on decoding movement execution (ME) of the upper limb in this study. For this purpose, we utilize a publicly available EEG dataset that contains recordings from fifteen subjects acquired using a 61-channel EEG device. We propose a method to classify four ME classes for different subjects using spectrograms of the EEG data and pre-trained deep learning (DL) models. Our method shows significant results: the highest average classification accuracy (for the four ME classes) is 87.36%, with one subject achieving the best classification accuracy of 97.03%. Clinical relevance: This research shows that movement execution of the upper limbs can be classified with significant accuracy by employing spectrograms of the EEG signals and a pre-trained deep learning model fine-tuned for the downstream task.
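
A minimal sketch of the preprocessing idea described above: converting a single EEG channel into a log-scaled spectrogram image that a pre-trained CNN could be fine-tuned on. The sampling rate, trial length, and toy signal are assumptions, not values from the dataset used in the paper.

```python
# Minimal sketch: one EEG channel -> log-spectrogram suitable as CNN input.
import numpy as np
from scipy.signal import spectrogram

fs = 250                                   # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                # a 4-second trial
eeg_channel = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy signal

freqs, times, Sxx = spectrogram(eeg_channel, fs=fs, nperseg=128, noverlap=64)
log_spec = 10 * np.log10(Sxx + 1e-12)      # dB scale, a common choice for CNN inputs

print(log_spec.shape)  # (frequency bins, time bins) -> stack channels / resize for the CNN
```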


Subject(s)
Brain-Computer Interfaces , Humans , Upper Extremity , Electroencephalography/methods , Movement , Motion
4.
Article in English | MEDLINE | ID: mdl-38083430

ABSTRACT

Children with optic pathway gliomas (OPGs), a low-grade brain tumor associated with neurofibromatosis type 1 (NF1-OPG), are at risk for permanent vision loss. While OPG size has been associated with vision loss, it is unclear how changes in the size, shape, and imaging features of OPGs relate to the likelihood of vision loss. This paper presents a fully automatic framework for accurate prediction of visual acuity loss using multi-sequence magnetic resonance images (MRIs). Our proposed framework includes a transformer-based segmentation network using transfer learning, statistical analysis of radiomic features, and a machine learning method for predicting vision loss. Our segmentation network was evaluated on multi-sequence MRIs acquired from 75 pediatric subjects with NF1-OPG and obtained an average Dice similarity coefficient of 0.791. The ability to predict vision loss was evaluated on a subset of 25 subjects with ground truth using cross-validation and achieved an average accuracy of 0.8. Multiple MRI features appear to be good indicators of vision loss, potentially permitting early treatment decisions. Clinical relevance: Accurately determining which children with NF1-OPGs are at risk, and hence require preventive treatment before vision loss, remains challenging; to address this, we present a fully automatic deep learning-based framework for vision outcome prediction, potentially enabling early treatment decisions.
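
The abstract mentions a statistical analysis of radiomic features. As one hedged illustration of what such a screen could look like (not necessarily the test the authors used), the sketch below applies a Mann-Whitney U test to hypothetical radiomic features grouped by vision-loss status.

```python
# Minimal sketch: univariate screen of radiomic features against vision-loss status.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n_subjects, n_features = 25, 20
features = rng.normal(size=(n_subjects, n_features))   # placeholder radiomic features
vision_loss = rng.integers(0, 2, size=n_subjects)      # 1 = visual acuity loss (placeholder)

for j in range(n_features):
    group0 = features[vision_loss == 0, j]
    group1 = features[vision_loss == 1, j]
    stat, p = mannwhitneyu(group0, group1, alternative="two-sided")
    if p < 0.05:
        print(f"feature {j}: p = {p:.3f}")
```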


Subject(s)
Neurofibromatosis 1 , Optic Nerve Glioma , Humans , Child , Optic Nerve Glioma/complications , Optic Nerve Glioma/diagnostic imaging , Optic Nerve Glioma/pathology , Neurofibromatosis 1/complications , Neurofibromatosis 1/diagnostic imaging , Neurofibromatosis 1/pathology , Magnetic Resonance Imaging/methods , Vision Disorders , Visual Acuity
5.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. However, advancements in clinical decision support for pediatric neuro-oncology that utilize the wealth of radiology imaging data collected through standard care have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, as well as tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, our aim is to accelerate discovery and translational AI models with real-world data, to ultimately empower precision medicine for children.

6.
Sci Rep ; 13(1): 20557, 2023 11 23.
Article in English | MEDLINE | ID: mdl-37996454

ABSTRACT

We present the first data-driven model that explains cranial sutural growth in the pediatric population. We segmented the cranial bones of the neurocranium from cross-sectional CT images of 2068 normative subjects (age 0-10 years), and we used a 2D manifold-based cranial representation to establish local anatomical correspondences between subjects guided by the location of the cranial sutures. We designed a diffeomorphic spatiotemporal model of cranial bone development as a function of local sutural growth rates, and we inferred its parameters statistically from our cross-sectional dataset. We used the constructed model to predict growth for 51 independent normative patients who had longitudinal images. Moreover, we used our model to simulate the phenotypes of single-suture craniosynostosis, which we compared to observations from 212 patients. We also evaluated the accuracy of predicting personalized cranial growth for 10 patients with craniosynostosis who had pre-surgical longitudinal images. Unlike existing statistical and simulation methods, our model was inferred from real image observations, explains cranial bone expansion and displacement as a consequence of sutural growth, and can simulate craniosynostosis. This pediatric cranial suture growth model constitutes a necessary tool to study abnormal development in the presence of cranial suture pathology.


Subject(s)
Cranial Sutures , Craniosynostoses , Humans , Child , Infant, Newborn , Infant , Child, Preschool , Craniosynostoses/pathology , Skull/pathology , Palliative Care
8.
Comput Methods Programs Biomed ; 240: 107689, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37393741

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate and repeatable detection of craniofacial landmarks is crucial for automated quantitative evaluation of head development anomalies. Since traditional imaging modalities are discouraged in pediatric patients, 3D photogrammetry has emerged as a popular and safe imaging alternative to evaluate craniofacial anomalies. However, traditional image analysis methods are not designed to operate on unstructured image data representations such as 3D photogrammetry. METHODS: We present a fully automated pipeline to identify craniofacial landmarks in real time, and we use it to assess the head shape of patients with craniosynostosis using 3D photogrammetry. To detect craniofacial landmarks, we propose a novel geometric convolutional neural network based on Chebyshev polynomials to exploit the point connectivity information in 3D photogrammetry and quantify multi-resolution spatial features. We propose a landmark-specific trainable scheme that aggregates the multi-resolution geometric and texture features quantified at every vertex of a 3D photogram. Then, we embed a new probabilistic distance regressor module that leverages the integrated features at every point to predict landmark locations without assuming correspondences with specific vertices in the original 3D photogram. Finally, we use the detected landmarks to segment the calvaria from the 3D photograms of children with craniosynostosis, and we derive a new statistical index of head shape anomaly to quantify head shape improvements after surgical treatment. RESULTS: We achieved an average error of 2.74 ± 2.70 mm identifying Bookstein Type I craniofacial landmarks, which is a significant improvement compared to other state-of-the-art methods. Our experiments also demonstrated a high robustness to spatial resolution variability in the 3D photograms. Finally, our head shape anomaly index quantified a significant reduction of head shape anomalies as a consequence of surgical treatment. CONCLUSION: Our fully automated framework provides real-time craniofacial landmark detection from 3D photogrammetry with state-of-the-art accuracy. In addition, our new head shape anomaly index can quantify significant head phenotype changes and can be used to quantitatively evaluate surgical treatment in patients with craniosynostosis.
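
The geometric convolution the abstract refers to builds on Chebyshev polynomial filters of the graph Laplacian. Below is a minimal NumPy sketch of one such layer on a toy mesh graph; the adjacency matrix, features, and weights are made up for illustration, and this is not the authors' network.

```python
# Minimal sketch: a single Chebyshev graph-convolution layer on a tiny mesh graph.
import numpy as np

def chebyshev_conv(X, A, weights):
    """X: (n_vertices, in_feats); A: (n, n) adjacency;
    weights: list of (in_feats, out_feats) matrices, one per Chebyshev order."""
    n = A.shape[0]
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt        # normalized graph Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2.0 * L / lam_max - np.eye(n)            # rescale spectrum to [-1, 1]

    T_prev, T_curr = X, L_tilde @ X                    # T_0(L~)X and T_1(L~)X
    out = T_prev @ weights[0] + T_curr @ weights[1]
    for k in range(2, len(weights)):
        T_next = 2.0 * L_tilde @ T_curr - T_prev       # Chebyshev recurrence
        out += T_next @ weights[k]
        T_prev, T_curr = T_curr, T_next
    return out

# Toy 4-vertex mesh, 3 input features per vertex, polynomial order K = 3, 8 output filters.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = [rng.normal(size=(3, 8)) * 0.1 for _ in range(3)]
print(chebyshev_conv(X, A, W).shape)   # (4, 8)
```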


Subject(s)
Craniosynostoses , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Skull , Craniosynostoses/diagnostic imaging , Craniosynostoses/surgery , Photogrammetry/methods , Treatment Outcome
9.
IEEE Trans Med Imaging ; 42(10): 3117-3126, 2023 10.
Article in English | MEDLINE | ID: mdl-37216247

ABSTRACT

Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have been recently adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they may be hard to train and provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios such as multiple cranial bone labeling in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses it to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.
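
The context-encoding idea above regresses landmark displacement vector maps. As a toy, hedged illustration (a 2-D grid rather than a CT volume, and not the authors' implementation), the sketch below builds the displacement-field target for a single hypothetical landmark.

```python
# Minimal sketch: a landmark displacement vector map on a toy 2-D grid.
import numpy as np

def displacement_map(shape, landmark):
    """For every pixel, the 2-D vector pointing from that pixel to the landmark."""
    ys, xs = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.stack([ys, xs], axis=-1).astype(float)     # (H, W, 2) pixel coordinates
    return np.asarray(landmark, dtype=float) - coords       # (H, W, 2) displacement field

field = displacement_map((64, 64), landmark=(20, 45))
print(field.shape, field[20, 45])   # displacement at the landmark itself is (0, 0)
```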


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Child , Infant, Newborn , Infant , Child, Preschool , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Algorithms
10.
Med Image Anal ; 82: 102605, 2022 11.
Article in English | MEDLINE | ID: mdl-36156419

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.


Subject(s)
COVID-19 , Pandemics , Humans , COVID-19/diagnostic imaging , Artificial Intelligence , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging
11.
Plast Reconstr Surg Glob Open ; 10(8): e4457, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35983543

ABSTRACT

Available normative references of cranial bone development and suture fusion are incomplete or based on simplified assumptions due to the lack of large datasets. We present a fully data-driven normative model that represents the age- and sex-specific variability of bone shape, thickness, and density between birth and 10 years of age at every location of the calvaria. Methods: The model was built using a cross-sectional and multi-institutional pediatric computed tomography image dataset with 2068 subjects without cranial pathology (age 0-10 years). We combined principal component analysis and temporal regression to build a statistical model of cranial bone development at every location of the calvaria. We studied the influence of sex on cranial bone growth, and our bone density model allowed us to quantify, for the first time, suture fusion as a continuous temporal process. We evaluated the predictive accuracy of our model using an independent longitudinal image dataset of 51 subjects. Results: Our model achieved temporal predictive errors of 2.98 ± 0.69 mm, 0.27 ± 0.29 mm, and 76.72 ± 91.50 HU in cranial bone shape, thickness, and mineral density changes, respectively. Significant sex differences were found in intracranial volume and bone surface areas (P < 0.01). No significant differences were found in cephalic index, bone thickness, mineral density, or suture fusion. Conclusions: We presented the first pediatric age- and sex-specific statistical reference for local cranial bone shape, thickness, and mineral density changes. We showed its predictive accuracy using an independent longitudinal dataset, studied developmental differences associated with sex, and quantified suture fusion as a continuous process.
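
A minimal sketch of the modeling recipe named above, principal component analysis combined with temporal regression, on synthetic thickness maps; the data, component count, and linear age model are assumptions made for illustration, not the published model.

```python
# Minimal sketch: PCA of per-location measurements + regression of scores on age,
# then reconstruction of a normative map for a query age.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_points = 200, 500
ages = rng.uniform(0, 10, size=n_subjects)                    # years (synthetic)
thickness = 2.0 + 0.3 * ages[:, None] + rng.normal(scale=0.2, size=(n_subjects, n_points))

pca = PCA(n_components=5)
scores = pca.fit_transform(thickness)                         # subject x component scores
reg = LinearRegression().fit(ages.reshape(-1, 1), scores)     # temporal regression

query_age = np.array([[4.0]])
predicted = pca.inverse_transform(reg.predict(query_age))     # normative map at 4 years
print(predicted.shape)                                        # (1, n_points)
```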

12.
Comput Methods Programs Biomed ; 221: 106893, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35660764

ABSTRACT

BACKGROUND AND OBJECTIVE: The fetal face is an essential source of information in the assessment of congenital malformations and neurological anomalies. Disturbances in early stages of development can lead to a wide range of effects, from subtle changes in facial and neurological features to the characteristic facial shapes observed in craniofacial syndromes. Three-dimensional ultrasound (3D US) can provide more detailed information about the facial morphology of the fetus than conventional 2D US, but its use for prenatal diagnosis is challenging due to imaging noise, fetal movements, limited field-of-view, low soft-tissue contrast, and occlusions. METHODS: In this paper, we propose the use of a novel statistical morphable model of newborn faces, the BabyFM, for fetal face reconstruction from 3D US images. We test the feasibility of using newborn statistics to accurately reconstruct fetal faces by fitting the regularized morphable model to the noisy 3D US images. RESULTS: The results indicate that the reconstructions are quite accurate in the central face and less reliable in the lateral regions (mean point-to-surface error of 2.35 mm vs 4.86 mm). The algorithm is able to reconstruct the whole facial morphology of babies from US scans while handling adverse conditions (e.g., missing parts, noisy data). CONCLUSIONS: The proposed algorithm has the potential to aid in-utero diagnosis of conditions that involve facial dysmorphology.
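
Fitting a morphable model to noisy data reduces, in its simplest linear form, to a regularized least-squares problem. The sketch below illustrates that core operation with a random basis standing in for the BabyFM; it is an assumption-laden toy, not the paper's fitting algorithm.

```python
# Minimal sketch: regularized least-squares fit of a linear morphable model to a noisy target.
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_modes = 300, 10
mean_shape = rng.normal(size=3 * n_vertices)        # flattened (x, y, z) mean face (placeholder)
basis = rng.normal(size=(3 * n_vertices, n_modes))  # shape-variation modes (placeholder)

true_coeffs = rng.normal(size=n_modes)
target = mean_shape + basis @ true_coeffs + rng.normal(scale=0.5, size=3 * n_vertices)

lam = 1.0                                            # regularization against noisy observations
A = basis.T @ basis + lam * np.eye(n_modes)
b = basis.T @ (target - mean_shape)
coeffs = np.linalg.solve(A, b)                       # fitted model coefficients

reconstruction = mean_shape + basis @ coeffs
print(np.mean(np.abs(coeffs - true_coeffs)))         # how close the fit recovers the coefficients
```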


Subject(s)
Face , Ultrasonography, Prenatal , Face/diagnostic imaging , Female , Fetus/diagnostic imaging , Humans , Imaging, Three-Dimensional/methods , Infant, Newborn , Pregnancy , Ultrasonography , Ultrasonography, Prenatal/methods
14.
IEEE Trans Biomed Eng ; 69(2): 537-546, 2022 02.
Article in English | MEDLINE | ID: mdl-34324420

ABSTRACT

OBJECTIVE: We present a data-driven method to build a spatiotemporal statistical shape model predictive of normal cranial growth from birth to the age of 2 years. METHODS: The model was constructed using a normative cross-sectional computed tomography image dataset of 278 subjects. First, we propose a new standard representation of the calvaria using spherical maps to establish anatomical correspondences between subjects at the cranial sutures - the main areas of cranial bone expansion. Then, we model the cranial bone shape as a bilinear function of two factors: inter-subject anatomical variability and temporal growth. We estimate these factors using principal component analysis on the spatial and temporal dimensions, with a novel coarse-to-fine temporal multi-resolution approach to mitigate the lack of longitudinal images of the same patient. RESULTS: Our model predicted development on an independent longitudinal dataset with an accuracy of 1.54 ± 1.05 mm. We also used the model to calculate the cranial volume, cephalic index, and cranial bone surface changes during the first two years of life, which were in agreement with clinical observations. SIGNIFICANCE: To our knowledge, this is the first data-driven and personalized predictive model of cranial bone shape development during infancy, and it can serve as a baseline to study abnormal growth patterns in the population.


Subject(s)
Models, Statistical , Skull , Child, Preschool , Cross-Sectional Studies , Humans , Skull/diagnostic imaging , Tomography, X-Ray Computed/methods
15.
Radiol Artif Intell ; 3(6): e210248, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870225

ABSTRACT

In March 2021, the Radiological Society of North America hosted a virtual panel discussion with members of the Medical Image Computing and Computer Assisted Intervention Society. Both organizations share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel addressed how radiologists and data scientists can collaborate to advance the science of AI in radiology. Keywords: Adults and Pediatrics, Segmentation, Feature Detection, Quantification, Diagnosis/Classification, Prognosis/Classification © RSNA, 2021.

16.
Plast Reconstr Surg Glob Open ; 9(11): e3937, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34786322

ABSTRACT

BACKGROUND: The surgical correction of metopic craniosynostosis usually relies on the subjective judgment of surgeons to determine the configuration of the cranial bone fragments and the degree of overcorrection. This study evaluates the effectiveness of a new approach for automatic planning of fronto-orbital advancement that is based on statistical shape models and includes overcorrection. METHODS: This study presents planning software that automatically estimates osteotomies in the fronto-orbital region and calculates the optimal configuration of the bone fragments required to achieve an optimal postoperative shape. The optimal cranial shape is obtained using a statistical head shape model built from 201 healthy subjects (age 23 ± 20 months; 89 girls). Automatic virtual plans were computed for nine patients (age 10.68 ± 1.73 months; four girls) with different degrees of overcorrection and compared with manual plans designed by experienced surgeons. RESULTS: Postoperative cranial shapes generated by the automatic interventional plans match normative morphology accurately and reduce the malformations in the fronto-orbital region by 82.01 ± 6.07%. The system took on average 19.22 seconds to provide the automatic plan and allows for personalized levels of overcorrection. The automatic plans with an overcorrection of 7 mm in minimal frontal breadth provided the closest match (no significant difference) to the manual plans. CONCLUSIONS: The automatic software technology effectively achieves correct cranial morphometrics and volumetrics with respect to normative cranial shapes. The automatic approach has the potential to reduce the duration of preoperative planning, reduce inter-surgeon variability, and provide consistent surgical outcomes.

17.
Lancet Digit Health ; 3(10): e635-e643, 2021 10.
Article in English | MEDLINE | ID: mdl-34481768

ABSTRACT

BACKGROUND: Delays in the diagnosis of genetic syndromes are common, particularly in low- and middle-income countries with limited access to genetic screening services. We therefore aimed to develop and evaluate a machine learning-based screening technology using facial photographs to evaluate a child's risk of presenting with a genetic syndrome for use at the point of care. METHODS: In this retrospective study, we developed a facial deep phenotyping technology based on deep neural networks and facial statistical shape models to screen children for genetic syndromes. We trained the machine learning models on facial photographs from children (aged <21 years) with a clinical or molecular diagnosis of a genetic syndrome and controls without a genetic syndrome matched for age, sex, and race or ethnicity. Images were obtained from three publicly available databases (the Atlas of Human Malformations in Diverse Populations of the National Human Genome Research Institute, Face2Gene, and the dataset available from Ferry and colleagues) and the archives of the Children's National Hospital (Washington, DC, USA), in addition to photographs taken on a standard smartphone at the Children's National Hospital. We designed a deep learning architecture structured into three neural networks, which performed image standardisation (Network A), facial morphology detection (Network B), and genetic syndrome risk estimation, accounting for phenotypic variations due to age, sex, and race or ethnicity (Network C). Data were divided randomly into 40 groups for cross validation, and the performance of the model was evaluated in terms of accuracy, sensitivity, and specificity in both the total population and stratified by race or ethnicity, age, and sex. FINDINGS: Our dataset included 2800 facial photographs of children (1318 [47%] female and 1482 [53%] male; 1576 [56%] White, 432 [15%] African, 430 [15%] Hispanic, and 362 [13%] Asian). 1400 children with 128 genetic conditions were included (the most prevalent being Williams-Beuren syndrome [19%], Cornelia de Lange syndrome [17%], Down syndrome [16%], 22q11.2 deletion syndrome [13%], and Noonan syndrome [12%]), in addition to 1400 photographs of matched controls. In the total population, our deep learning-based model had an accuracy of 88% (95% CI 87-89) for the detection of a genetic syndrome, with 90% sensitivity (95% CI 88-92) and 86% specificity (95% CI 84-88). Accuracy was greater in White (90%, 89-91) and Hispanic populations (91%, 88-94) than in African (84%, 81-87) and Asian populations (82%, 78-86). Accuracy was also similar in male (89%, 87-91) and female children (87%, 85-89), and similar in children younger than 2 years (86%, 84-88) and children aged 2 years or older (eg, 89% [87-91] for those aged 2 years to <5 years). INTERPRETATION: This genetic screening technology could support early risk stratification at the point of care in global populations, which has the potential to accelerate diagnosis and reduce mortality and morbidity through preventive care. FUNDING: Children's National Hospital and Government of Abu Dhabi.
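
The findings above are reported as proportions with 95% CIs. As a small, hedged helper (hypothetical confusion-matrix counts, and a simple normal approximation rather than whatever interval method the authors used), the sketch below computes sensitivity and specificity with such intervals.

```python
# Minimal sketch: sensitivity/specificity with normal-approximation 95% confidence intervals.
import math

def proportion_ci(k, n, z=1.96):
    """Point estimate and normal-approximation CI for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

tp, fn, tn, fp = 1260, 140, 1204, 196          # placeholder confusion-matrix counts
sens, sens_ci = proportion_ci(tp, tp + fn)
spec, spec_ci = proportion_ci(tn, tn + fp)
print(f"sensitivity {sens:.2%} CI ({sens_ci[0]:.2%}, {sens_ci[1]:.2%})")
print(f"specificity {spec:.2%} CI ({spec_ci[0]:.2%}, {spec_ci[1]:.2%})")
```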


Subject(s)
Genetic Diseases, Inborn/diagnosis , Machine Learning , Phenotype , Photography , Point-of-Care Systems , Africa , Asia , Face , Facial Expression , Female , Hispanic or Latino , Humans , Infant , Internationality , Male , Reproducibility of Results , Retrospective Studies , Risk Assessment , Sensitivity and Specificity , White People
18.
Nat Med ; 27(10): 1735-1743, 2021 10.
Article in English | MEDLINE | ID: mdl-34526699

ABSTRACT

Federated learning (FL) is a method used for training artificial intelligence models with data from multiple sources while maintaining data anonymity, thus removing many barriers to data sharing. Here we used data from 20 institutes across the globe to train a FL model, called EXAM (electronic medical record (EMR) chest X-ray AI model), that predicts the future oxygen requirements of symptomatic patients with COVID-19 using inputs of vital signs, laboratory data and chest X-rays. EXAM achieved an average area under the curve (AUC) >0.92 for predicting outcomes at 24 and 72 h from the time of initial presentation to the emergency room, and it provided 16% improvement in average AUC measured across all participating sites and an average increase in generalizability of 38% when compared with models trained at a single site using that site's data. For prediction of mechanical ventilation treatment or death at 24 h at the largest independent test site, EXAM achieved a sensitivity of 0.950 and specificity of 0.882. In this study, FL facilitated rapid data science collaboration without data exchange and generated a model that generalized across heterogeneous, unharmonized datasets for prediction of clinical outcomes in patients with COVID-19, setting the stage for the broader use of FL in healthcare.
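
At the heart of federated learning is a server-side aggregation of locally trained weights, which is what lets the sites collaborate without exchanging data. The sketch below shows a plain federated-averaging step on toy parameter dictionaries; it is illustrative only and not the EXAM implementation.

```python
# Minimal sketch: federated averaging of locally trained model parameters.
import numpy as np

def federated_average(site_weights, site_sizes):
    """site_weights: list of {param_name: ndarray}; site_sizes: training samples per site."""
    total = float(sum(site_sizes))
    averaged = {}
    for name in site_weights[0]:
        averaged[name] = sum(
            (n / total) * w[name] for w, n in zip(site_weights, site_sizes)
        )
    return averaged

rng = np.random.default_rng(0)
sites = [{"layer1": rng.normal(size=(4, 4))} for _ in range(3)]   # 3 hypothetical hospitals
global_weights = federated_average(sites, site_sizes=[120, 300, 80])
print(global_weights["layer1"].shape)
```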


Subject(s)
COVID-19/physiopathology , Machine Learning , Outcome Assessment, Health Care , COVID-19/therapy , COVID-19/virology , Electronic Health Records , Humans , Prognosis , SARS-CoV-2/isolation & purification
19.
Eur J Med Genet ; 64(9): 104267, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34161860

ABSTRACT

Down syndrome is one of the most common chromosomal anomalies affecting the world's population, with an estimated frequency of 1 in 700 live births. Despite its relatively high prevalence, diagnostic rates based on clinical features have remained under 70% for most of the developed world and even lower in countries with limited resources. While genetic and cytogenetic confirmation greatly increases the diagnostic rate, such resources are often non-existent in many low- and middle-income countries, particularly in Sub-Saharan Africa. To address the needs of countries with limited resources, the implementation of mobile, user-friendly and affordable technologies that aid in diagnosis would greatly increase the odds of success for a child born with a genetic condition. Given that the Democratic Republic of the Congo is estimated to have one of the highest rates of birth defects in the world, our team sought to determine if smartphone-based facial analysis technology could accurately detect Down syndrome in individuals of Congolese descent. Prior to technology training, we confirmed the presence of trisomy 21 using low-cost genomic applications that do not need advanced expertise to utilize and are available in many low-resourced countries. Our software technology trained on 132 Congolese subjects had a significantly improved performance (91.67% accuracy, 95.45% sensitivity, 87.88% specificity) when compared to previous technology trained on individuals who are not of Congolese origin (p < 5%). In addition, we provide the list of most discriminative facial features of Down syndrome and their ranges in the Congolese population. Collectively, our technology provides low-cost and accurate diagnosis of Down syndrome in the local population.


Subject(s)
Automated Facial Recognition/methods , Down Syndrome/pathology , Facies , Image Processing, Computer-Assisted/methods , Automated Facial Recognition/economics , Automated Facial Recognition/standards , Democratic Republic of the Congo , Developing Countries , Down Syndrome/genetics , Genetic Testing , Humans , Image Processing, Computer-Assisted/economics , Image Processing, Computer-Assisted/standards , Machine Learning , Sensitivity and Specificity
20.
Res Sq ; 2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34100010

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.
