Results 1 - 20 of 37
1.
Nature ; 627(8003): 347-357, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38374256

ABSTRACT

Type 2 diabetes (T2D) is a heterogeneous disease that develops through diverse pathophysiological processes[1,2] and molecular mechanisms that are often specific to cell type[3,4]. Here, to characterize the genetic contribution to these processes across ancestry groups, we aggregate genome-wide association study data from 2,535,601 individuals (39.7% not of European ancestry), including 428,452 cases of T2D. We identify 1,289 independent association signals at genome-wide significance (P < 5 × 10^-8) that map to 611 loci, of which 145 loci are, to our knowledge, previously unreported. We define eight non-overlapping clusters of T2D signals that are characterized by distinct profiles of cardiometabolic trait associations. These clusters are differentially enriched for cell-type-specific regions of open chromatin, including pancreatic islets, adipocytes, endothelial cells and enteroendocrine cells. We build cluster-specific partitioned polygenic scores[5] in a further 279,552 individuals of diverse ancestry, including 30,288 cases of T2D, and test their association with T2D-related vascular outcomes. Cluster-specific partitioned polygenic scores are associated with coronary artery disease, peripheral artery disease and end-stage diabetic nephropathy across ancestry groups, highlighting the importance of obesity-related processes in the development of vascular outcomes. Our findings show the value of integrating multi-ancestry genome-wide association study data with single-cell epigenomics to disentangle the aetiological heterogeneity that drives the development and progression of T2D. This might offer a route to optimize global access to genetically informed diabetes care.


Subjects
Diabetes Mellitus, Type 2; Disease Progression; Genetic Predisposition to Disease; Genome-Wide Association Study; Humans; Adipocytes/metabolism; Chromatin/genetics; Chromatin/metabolism; Coronary Artery Disease/complications; Coronary Artery Disease/genetics; Diabetes Mellitus, Type 2/classification; Diabetes Mellitus, Type 2/complications; Diabetes Mellitus, Type 2/genetics; Diabetes Mellitus, Type 2/pathology; Diabetes Mellitus, Type 2/physiopathology; Diabetic Nephropathies/complications; Diabetic Nephropathies/genetics; Endothelial Cells/metabolism; Enteroendocrine Cells; Epigenomics; Genetic Predisposition to Disease/genetics; Islets of Langerhans/metabolism; Multifactorial Inheritance/genetics; Peripheral Arterial Disease/complications; Peripheral Arterial Disease/genetics; Single-Cell Analysis
2.
Skeletal Radiol ; 53(2): 377-383, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37530866

ABSTRACT

PURPOSE: To develop a deep learning model to distinguish rheumatoid arthritis (RA) from osteoarthritis (OA) using hand radiographs and to evaluate the effects of changing pretraining and training parameters on model performance. MATERIALS AND METHODS: A convolutional neural network was retrospectively trained on 9714 hand radiograph exams from 8387 patients obtained from 2017 to 2021 at seven hospitals within an integrated healthcare network. Performance was assessed using an independent test set of 250 exams from 146 patients. Binary discriminatory capacity (no arthritis versus arthritis; RA versus not RA) and three-way classification (no arthritis versus OA versus RA) were evaluated. The effects of additional pretraining using musculoskeletal radiographs, using all views as opposed to only the posteroanterior view, and varying image resolution on model performance were also investigated. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa coefficient were used to evaluate diagnostic performance. RESULTS: For no arthritis versus arthritis, the model achieved an AUC of 0.975 (95% CI: 0.957, 0.989). For RA versus not RA, the model achieved an AUC of 0.955 (95% CI: 0.919, 0.983). For three-way classification, the model achieved a kappa of 0.806 (95% CI: 0.742, 0.866) and accuracy of 87.2% (95% CI: 83.2%, 91.2%) on the test set. Increasing image resolution increased performance up to 1024 × 1024 pixels. Additional pretraining on musculoskeletal radiographs and using all views did not significantly affect performance. CONCLUSION: A deep learning model can be used to distinguish no arthritis, OA, and RA on hand radiographs with high performance.
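Cohen's kappa, reported above for the three-way classification, corrects raw agreement for the agreement expected by chance alone. A minimal sketch of the computation (not the authors' code; the labels below are illustrative, not study data):

```python
def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two label sequences."""
    labels = sorted(set(y_true) | set(y_pred))
    n = len(y_true)
    # Observed agreement: fraction of exact matches.
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement: sum over labels of the product of marginal frequencies.
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)

# Hypothetical reads for no arthritis / OA / RA:
truth = ["none", "OA", "RA", "RA"]
model = ["none", "OA", "RA", "OA"]
print(cohens_kappa(truth, model))  # ≈ 0.636
```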


Subjects
Arthritis, Rheumatoid; Deep Learning; Osteoarthritis; Humans; Retrospective Studies; Radiography; Osteoarthritis/diagnostic imaging; Arthritis, Rheumatoid/diagnostic imaging
3.
Curr Opin Neurol ; 36(6): 549-556, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37973024

ABSTRACT

PURPOSE OF REVIEW: To provide an updated overview of artificial intelligence (AI) applications in neuro-oncologic imaging and discuss current barriers to wider clinical adoption. RECENT FINDINGS: A wide variety of AI applications in neuro-oncologic imaging have been developed and researched, spanning tasks including pretreatment brain tumor classification and segmentation, preoperative planning, radiogenomics, prognostication and survival prediction, posttreatment surveillance, and differentiating between pseudoprogression and true disease progression. While earlier studies were largely based on data from a single institution, more recent studies have demonstrated that these algorithms also perform effectively on external data from other institutions. Nevertheless, most of these algorithms have yet to see widespread clinical adoption, given the lack of prospective studies demonstrating their efficacy and the logistical difficulties involved in clinical implementation. SUMMARY: While there has been significant progress in AI and neuro-oncologic imaging, clinical utility remains to be demonstrated. The next wave of progress in this area will be driven by prospective studies measuring outcomes relevant to clinical practice, and will go beyond retrospective studies, which primarily aim to demonstrate high performance.


Subjects
Artificial Intelligence; Brain Neoplasms; Humans; Prospective Studies; Retrospective Studies; Neuroimaging; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/therapy
4.
Radiology ; 303(1): 52-53, 2022 04.
Article in English | MEDLINE | ID: mdl-35014902

ABSTRACT

Online supplemental material is available for this article.


Subjects
Artificial Intelligence; Humans
5.
Eur Radiol ; 32(1): 205-212, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34223954

ABSTRACT

OBJECTIVES: Early recognition of coronavirus disease 2019 (COVID-19) severity can guide patient management. However, it is challenging to predict when COVID-19 patients will progress to critical illness. This study aimed to develop an artificial intelligence system to predict future deterioration to critical illness in COVID-19 patients. METHODS: An artificial intelligence (AI) system in a time-to-event analysis framework was developed to integrate chest CT and clinical data for risk prediction of future deterioration to critical illness in patients with COVID-19. RESULTS: A multi-institutional international cohort of 1,051 patients with RT-PCR-confirmed COVID-19 and chest CT was included in this study. Of them, 282 patients developed critical illness, defined as requiring ICU admission and/or mechanical ventilation and/or death during the hospital stay. The AI system achieved a C-index of 0.80 for predicting individual COVID-19 patients' time to critical illness. The AI system successfully stratified the patients into high-risk and low-risk groups with distinct progression risks (p < 0.0001). CONCLUSIONS: Using CT imaging and clinical data, the AI system successfully predicted time to critical illness for individual patients and identified those at high risk. AI has the potential to accurately triage patients and facilitate personalized treatment. KEY POINT: • The AI system can predict time to critical illness for patients with COVID-19 by using CT imaging and clinical data.
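The C-index reported above measures how often, among comparable patient pairs, the model assigns a higher risk to the patient who deteriorates earlier. A minimal illustration of Harrell's concordance index (not the authors' implementation; times, event flags, and risk scores below are made up):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored time-to-event data."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        if not events[i]:
            continue  # a censored patient cannot anchor a comparable pair
        for j in range(len(times)):
            if times[i] < times[j]:  # patient i deteriorated first
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

# Hypothetical cohort: days to critical illness, event flag, model risk score
print(c_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.3, 0.1]))  # 1.0
```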


Subjects
COVID-19; Artificial Intelligence; Humans; Retrospective Studies; SARS-CoV-2; Tomography, X-Ray Computed
6.
Retina ; 42(8): 1417-1424, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35877964

ABSTRACT

PURPOSE: To survey the current literature regarding applications of deep learning to optical coherence tomography in age-related macular degeneration (AMD). METHODS: A Preferred Reporting Items for Systematic Reviews and Meta-Analyses systematic review was conducted from January 1, 2000, to May 9, 2021, using PubMed and EMBASE databases. Original research investigations that applied deep learning to optical coherence tomography in patients with AMD or features of AMD (choroidal neovascularization, geographic atrophy, and drusen) were included. Summary statements, data set characteristics, and performance metrics were extracted from included articles for analysis. RESULTS: We identified 95 articles for this review. The majority of articles fell into one of six categories: 1) classification of AMD or AMD biomarkers (n = 40); 2) segmentation of AMD biomarkers (n = 20); 3) segmentation of retinal layers or the choroid in patients with AMD (n = 7); 4) assessing treatment response and disease progression (n = 13); 5) predicting visual function (n = 6); and 6) determining the need for referral to a retina specialist (n = 3). CONCLUSION: Deep learning models generally achieved high performance, at times comparable with that of specialists. However, external validation and experimental parameters enabling reproducibility were often limited. Prospective studies that demonstrate generalizability and clinical utility of these models are needed.


Subjects
Deep Learning; Macular Degeneration; Retinal Drusen; Humans; Macular Degeneration/diagnosis; Prospective Studies; Reproducibility of Results; Tomography, Optical Coherence/methods
7.
J Digit Imaging ; 35(2): 335-339, 2022 04.
Article in English | MEDLINE | ID: mdl-35018541

ABSTRACT

Preparing radiology examinations for interpretation requires prefetching relevant prior examinations and implementing hanging protocols to optimally display the examination along with comparisons. Body part is a critical piece of information to facilitate both prefetching and hanging protocols, but body part information encoded using the Digital Imaging and Communications in Medicine (DICOM) standard is widely variable, error-prone, not granular enough, or missing altogether. This results in inappropriate examinations being prefetched or relevant examinations left behind; hanging protocol optimization suffers as well. Modern artificial intelligence (AI) techniques, particularly when harnessing federated deep learning techniques, allow for highly accurate automatic detection of body part based on the image data within a radiological examination; this allows for much more reliable implementation of this categorization and workflow. Additionally, new avenues to further optimize examination viewing such as dynamic hanging protocol and image display can be implemented using these techniques.


Subjects
Artificial Intelligence; Deep Learning; Human Body; Humans; Radiography; Workflow
8.
J Stroke Cerebrovasc Dis ; 31(11): 106753, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36115105

ABSTRACT

OBJECTIVES: In this study, we developed a deep learning pipeline that detects large vessel occlusion (LVO) and predicts functional outcome based on computed tomography angiography (CTA) images to improve the management of LVO patients. METHODS: A series identifier picked out 8650 LVO-protocoled studies from 2015 to 2019 at Rhode Island Hospital with an identified thin axial series that served as the data pool. Data were annotated into 2 classes: 1021 LVOs and 7629 normal. The Inception-V1 I3D architecture was applied for LVO detection. For outcome prediction, 323 patients undergoing thrombectomy were selected. A 3D convolutional neural network (CNN) was used for outcome prediction (30-day mRS) with CTA volumes and embedded pre-treatment variables as inputs. RESULTS: For the LVO-detection model, CTAs from 8650 patients (median age 68 years, interquartile range (IQR): 58-81; 3934 females) were analyzed. The cross-validated AUC for LVO versus non-LVO was 0.74 (95% CI: 0.72-0.75). For the mRS classification model, CTAs from 323 patients (median age 75 years, IQR: 63-84; 164 females) were analyzed. The algorithm achieved a test AUC of 0.82 (95% CI: 0.79-0.84), sensitivity of 89%, and specificity of 66%. The two models were then integrated with hospital infrastructure, where CTA was collected in real time and processed by the model. If LVO was detected, interventionists were notified and provided with predicted clinical outcome information. CONCLUSION: 3D CNNs based on CTA were effective in detecting LVO and predicting short-term prognosis after mechanical thrombectomy. The end-to-end AI platform allows users to receive immediate prognosis predictions and facilitates clinical workflow.


Subjects
Brain Ischemia; Stroke; Female; Humans; Aged; Artificial Intelligence; Thrombectomy/adverse effects; Computed Tomography Angiography/methods; Middle Cerebral Artery; Retrospective Studies
9.
Eur Radiol ; 31(7): 4960-4971, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33052463

ABSTRACT

OBJECTIVES: There is currently no noninvasive and accurate method to distinguish benign from malignant ovarian lesions prior to treatment. This study developed a deep learning algorithm that distinguishes benign from malignant ovarian lesions by applying a convolutional neural network to routine MR imaging. METHODS: Five hundred forty-five lesions (379 benign and 166 malignant) from 451 patients from a single institution were divided into training, validation, and testing sets in a 7:2:1 ratio. Model performance was compared with that of four junior and three senior radiologists on the test set. RESULTS: Compared with the junior radiologists averaged, the final ensemble model combining MR imaging and clinical variables had a higher test accuracy (0.87 vs 0.64, p < 0.001) and specificity (0.92 vs 0.64, p < 0.001) with comparable sensitivity (0.75 vs 0.63, p = 0.407). Against the senior radiologists averaged, the final ensemble model also had a higher test accuracy (0.87 vs 0.74, p = 0.033) and specificity (0.92 vs 0.70, p < 0.001) with comparable sensitivity (0.75 vs 0.83, p = 0.557). Assisted by the model's probabilities, the junior radiologists achieved a higher average test accuracy (0.77 vs 0.64, Δ = 0.13, p < 0.001) and specificity (0.81 vs 0.64, Δ = 0.17, p < 0.001) with unchanged sensitivity (0.69 vs 0.63, Δ = 0.06, p = 0.302). With the AI probabilities, the junior radiologists had higher specificity (0.81 vs 0.70, Δ = 0.11, p = 0.005) but similar accuracy (0.77 vs 0.74, Δ = 0.03, p = 0.409) and sensitivity (0.69 vs 0.83, Δ = -0.146, p = 0.097) when compared with the senior radiologists. CONCLUSIONS: These results demonstrate that artificial intelligence based on deep learning can assist radiologists in assessing the nature of ovarian lesions and improve their performance. KEY POINTS: • Artificial intelligence based on deep learning can assess the nature of ovarian lesions on routine MRI with higher accuracy and specificity than radiologists.
• Assisted by the deep learning model's probabilities, junior radiologists achieved better performance that matched that of senior radiologists.


Subjects
Deep Learning; Ovarian Cysts; Ovarian Neoplasms; Artificial Intelligence; Female; Humans; Magnetic Resonance Imaging; Neural Networks, Computer; Ovarian Neoplasms/diagnostic imaging; Sensitivity and Specificity
10.
Radiographics ; 41(5): 1427-1445, 2021.
Article in English | MEDLINE | ID: mdl-34469211

ABSTRACT

Deep learning is a class of machine learning methods that has been successful in computer vision. Unlike traditional machine learning methods that require hand-engineered feature extraction from input images, deep learning methods learn the image features by which to classify data. Convolutional neural networks (CNNs), the core of deep learning methods for imaging, are multilayered artificial neural networks with weighted connections between neurons that are iteratively adjusted through repeated exposure to training data. These networks have numerous applications in radiology, particularly in image classification, object detection, semantic segmentation, and instance segmentation. The authors provide an update on a recent primer on deep learning for radiologists, and they review terminology, data requirements, and recent trends in the design of CNNs; illustrate building blocks and architectures adapted to computer vision tasks, including generative architectures; and discuss training and validation, performance metrics, visualization, and future directions. Familiarity with the key concepts described will help radiologists understand advances of deep learning in medical imaging and facilitate clinical adoption of these techniques. Online supplemental material is available for this article. ©RSNA, 2021.
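The weighted connections described above reduce, in the imaging setting, to the convolution operation at the core of a CNN. A minimal pure-Python sketch of a single valid-mode 2D convolution (illustrative only, not tied to any article in this list; as in most deep learning libraries, this is technically cross-correlation):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [
        [
            sum(image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

# A 2x2 diagonal kernel sliding over a 3x3 image patch
print(conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]]))
# [[6, 8], [12, 14]]
```

In a trained CNN, the kernel weights are exactly the connections that get iteratively adjusted through repeated exposure to training data.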


Subjects
Deep Learning; Diagnostic Imaging; Humans; Image Processing, Computer-Assisted; Machine Learning; Neural Networks, Computer; Radiologists
11.
J Digit Imaging ; 34(6): 1405-1413, 2021 12.
Article in English | MEDLINE | ID: mdl-34727303

ABSTRACT

In the era of data-driven medicine, rapid access and accurate interpretation of medical images are becoming increasingly important. The DICOM Image ANalysis and Archive (DIANA) system is an open-source, lightweight, and scalable Python interface that enables users to interact with hospital Picture Archiving and Communications Systems (PACS) to access such data. In this work, DIANA functionality was detailed and evaluated in the context of retrospective PACS data retrieval and two prospective clinical artificial intelligence (AI) pipelines: bone age (BA) estimation and intracranial hemorrhage (ICH) detection. DIANA orchestrates activity beginning with post-acquisition study discovery and ending with online notifications of findings. For AI applications, system latency (exam completion to system report time) was quantified and compared to that of clinicians (exam completion to initial report creation time). Mean DIANA latency was 9.04 ± 3.83 and 20.17 ± 10.16 min compared to clinician latency of 51.52 ± 58.9 and 65.62 ± 110.39 min for BA and ICH, respectively, with DIANA latencies being significantly lower (p < 0.001). DIANA's capabilities were also explored and found effective in retrieving and anonymizing protected health information for "big-data" medical imaging research and analysis. Mean per-image retrieval times were 1.12 ± 0.50 and 0.08 ± 0.01 s across x-ray and computed tomography studies, respectively. The data herein demonstrate that DIANA can flexibly integrate into existing hospital infrastructure and improve the process by which researchers/clinicians access imaging repository data. This results in a simplified workflow for large data retrieval and clinical integration of AI models.


Subjects
Artificial Intelligence; Radiology Information Systems; Humans; Image Processing, Computer-Assisted; Prospective Studies; Retrospective Studies
12.
Radiology ; 296(3): E156-E165, 2020 09.
Article in English | MEDLINE | ID: mdl-32339081

ABSTRACT

Background Coronavirus disease 2019 (COVID-19) and pneumonia of other diseases share similar CT characteristics, which contributes to the challenges in differentiating them with high accuracy. Purpose To establish and evaluate an artificial intelligence (AI) system for differentiating COVID-19 and other pneumonia at chest CT and assessing radiologist performance without and with AI assistance. Materials and Methods A total of 521 patients with positive reverse transcription polymerase chain reaction results for COVID-19 and abnormal chest CT findings were retrospectively identified from 10 hospitals from January 2020 to April 2020. A total of 665 patients with non-COVID-19 pneumonia and definite evidence of pneumonia at chest CT were retrospectively selected from three hospitals between 2017 and 2019. To classify COVID-19 versus other pneumonia for each patient, abnormal CT slices were input into the EfficientNet B4 deep neural network architecture after lung segmentation, followed by a two-layer fully connected neural network to pool slices together. The final cohort of 1186 patients (132 583 CT slices) was divided into training, validation, and test sets in a 7:2:1 and equal ratio. Independent testing was performed by evaluating model performance in separate hospitals. Studies were blindly reviewed by six radiologists without and then with AI assistance. Results The final model achieved a test accuracy of 96% (95% confidence interval [CI]: 90%, 98%), a sensitivity of 95% (95% CI: 83%, 100%), and a specificity of 96% (95% CI: 88%, 99%) with area under the receiver operating characteristic curve of 0.95 and area under the precision-recall curve of 0.90. On independent testing, this model achieved an accuracy of 87% (95% CI: 82%, 90%), a sensitivity of 89% (95% CI: 81%, 94%), and a specificity of 86% (95% CI: 80%, 90%) with area under the receiver operating characteristic curve of 0.90 and area under the precision-recall curve of 0.87. 
Assisted by the probabilities of the model, the radiologists achieved a higher average test accuracy (90% vs 85%, Δ = 5, P < .001), sensitivity (88% vs 79%, Δ = 9, P < .001), and specificity (91% vs 88%, Δ = 3, P = .001). Conclusion Artificial intelligence assistance improved radiologists' performance in distinguishing coronavirus disease 2019 pneumonia from non-coronavirus disease 2019 pneumonia at chest CT. © RSNA, 2020 Online supplemental material is available for this article.


Subjects
Artificial Intelligence; Coronavirus Infections/diagnostic imaging; Pneumonia, Viral/diagnostic imaging; Radiologists; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Betacoronavirus; COVID-19; Child; Child, Preschool; China; Diagnosis, Differential; Female; Humans; Infant; Infant, Newborn; Lung/diagnostic imaging; Male; Middle Aged; Pandemics; Philadelphia; Pneumonia/diagnostic imaging; Radiography, Thoracic; Radiologists/standards; Radiologists/statistics & numerical data; Retrospective Studies; Rhode Island; SARS-CoV-2; Sensitivity and Specificity; Young Adult
13.
Radiology ; 296(2): E46-E54, 2020 08.
Article in English | MEDLINE | ID: mdl-32155105

ABSTRACT

Background Despite its high sensitivity in diagnosing coronavirus disease 2019 (COVID-19) in a screening population, the chest CT appearance of COVID-19 pneumonia is thought to be nonspecific. Purpose To assess the performance of radiologists in the United States and China in differentiating COVID-19 from viral pneumonia at chest CT. Materials and Methods In this study, 219 patients with positive COVID-19, as determined with reverse-transcription polymerase chain reaction (RT-PCR) and abnormal chest CT findings, were retrospectively identified from seven Chinese hospitals in Hunan Province, China, from January 6 to February 20, 2020. Two hundred five patients with positive respiratory pathogen panel results for viral pneumonia and CT findings consistent with or highly suspicious for pneumonia, according to original radiologic interpretation within 7 days of each other, were identified from Rhode Island Hospital in Providence, RI. Three radiologists from China reviewed all chest CT scans (n = 424) blinded to RT-PCR findings to differentiate COVID-19 from viral pneumonia. A sample of 58 age-matched patients was randomly selected and evaluated by four radiologists from the United States in a similar fashion. Different CT features were recorded and compared between the two groups. Results For all chest CT scans (n = 424), the accuracy of the three radiologists from China in differentiating COVID-19 from non-COVID-19 viral pneumonia was 83% (350 of 424), 80% (338 of 424), and 60% (255 of 424). In the randomly selected sample (n = 58), the sensitivities of three radiologists from China and four radiologists from the United States were 80%, 67%, 97%, 93%, 83%, 73%, and 70%, respectively. The corresponding specificities of the same readers were 100%, 93%, 7%, 100%, 93%, 93%, and 100%, respectively. 
Compared with non-COVID-19 pneumonia, COVID-19 pneumonia was more likely to have a peripheral distribution (80% vs 57%, P < .001), ground-glass opacity (91% vs 68%, P < .001), fine reticular opacity (56% vs 22%, P < .001), and vascular thickening (59% vs 22%, P < .001), but it was less likely to have a central and peripheral distribution (14% vs 35%, P < .001), pleural effusion (4% vs 39%, P < .001), or lymphadenopathy (3% vs 10%, P = .002). Conclusion Radiologists in China and in the United States distinguished coronavirus disease 2019 from viral pneumonia at chest CT with moderate to high accuracy. © RSNA, 2020 Online supplemental material is available for this article. A translation of this abstract in Farsi is available in the supplement.


Subjects
Betacoronavirus; Clinical Competence; Coronavirus Infections/diagnostic imaging; Pneumonia, Viral/diagnostic imaging; Radiologists/standards; Adult; Aged; COVID-19; COVID-19 Testing; Clinical Laboratory Techniques/methods; Coronavirus Infections/diagnosis; Coronavirus Infections/pathology; Diagnosis, Differential; Female; Humans; Male; Middle Aged; Pandemics; Pneumonia, Viral/pathology; Pneumonia, Viral/virology; Predictive Value of Tests; Retrospective Studies; Reverse Transcriptase Polymerase Chain Reaction; SARS-CoV-2; Sensitivity and Specificity; Tomography, X-Ray Computed/methods
14.
Alzheimer Dis Assoc Disord ; 34(4): 333-338, 2020.
Article in English | MEDLINE | ID: mdl-32701514

ABSTRACT

BACKGROUND: The Hong Kong version of the Montreal Cognitive Assessment (HK-MoCA) has been used to screen for dementia, but it has not been validated to delineate the stages of Alzheimer disease (AD). This study aimed to determine the cut-off score ranges for mild, moderate, and severe AD. METHODS: The HK-MoCA score was matched against the Clinical Dementia Rating in 155 patients with AD. Investigators performing the HK-MoCA and Clinical Dementia Rating were blinded to each other. Receiver-operating characteristic analysis was used to determine the cut-off scores between different stages of AD (mild, moderate, and severe). A secondary analysis with adjustments for age and education received was also performed. RESULTS: The cut-off score in the HK-MoCA was ≤4 for those with severe AD (sensitivity 84.4%, specificity 91.9%, area under curve=0.92, P<0.001) and 5 to 9 for those with moderate AD (sensitivity 86.3%, specificity 93.3%, area under curve=0.953, P<0.001). With adjustments for age and education, the cut-off score for moderate AD was adjusted to 5 to 8, whereas the cut-off score for severe AD remained unchanged. CONCLUSIONS: The severity of AD could be delineated using the HK-MoCA for the Cantonese-speaking population in Hong Kong, and the effect of education on the cut-off score needs further investigation.


Subjects
Dementia/classification; Mental Status and Dementia Tests/statistics & numerical data; Aged, 80 and over; Cross-Sectional Studies; Female; Hong Kong; Humans; Male; Reproducibility of Results; Translations
15.
Radiology ; 290(2): 498-503, 2019 02.
Article in English | MEDLINE | ID: mdl-30480490

ABSTRACT

Purpose The Radiological Society of North America (RSNA) Pediatric Bone Age Machine Learning Challenge was created to show an application of machine learning (ML) and artificial intelligence (AI) in medical imaging, promote collaboration to catalyze AI model creation, and identify innovators in medical imaging. Materials and Methods The goal of this challenge was to solicit individuals and teams to create an algorithm or model using ML techniques that would accurately determine skeletal age in a curated data set of pediatric hand radiographs. The primary evaluation measure was the mean absolute distance (MAD) in months, which was calculated as the mean of the absolute values of the difference between the model estimates and those of the reference standard, bone age. Results A data set consisting of 14 236 hand radiographs (12 611 training set, 1425 validation set, 200 test set) was made available to registered challenge participants. A total of 260 individuals or teams registered on the Challenge website. A total of 105 submissions were uploaded from 48 unique users during the training, validation, and test phases. Almost all methods used deep neural network techniques based on one or more convolutional neural networks (CNNs). The five best results had MADs of 4.2, 4.4, 4.4, 4.5, and 4.5 months. Conclusion The RSNA Pediatric Bone Age Machine Learning Challenge showed how a coordinated approach to solving a medical imaging problem can be successfully conducted. Future ML challenges will catalyze collaboration and development of ML tools and methods that can potentially improve diagnostic accuracy and patient care. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Siegel in this issue.
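The challenge's primary metric, mean absolute distance (MAD) in months, is simply the mean absolute difference between model estimates and the reference-standard bone ages. A minimal sketch (the values below are hypothetical, not challenge data):

```python
def mean_absolute_distance(estimates, reference):
    """Mean of absolute differences, in months, between model estimates
    and the reference standard."""
    return sum(abs(e - r) for e, r in zip(estimates, reference)) / len(reference)

# Hypothetical bone ages in months
print(mean_absolute_distance([120, 60, 96], [124, 58, 96]))  # 2.0
```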


Subjects
Age Determination by Skeleton/methods; Image Interpretation, Computer-Assisted/methods; Machine Learning; Radiography/methods; Algorithms; Child; Databases, Factual; Female; Hand Bones/diagnostic imaging; Humans; Male
16.
AJR Am J Roentgenol ; 213(3): 568-574, 2019 09.
Article in English | MEDLINE | ID: mdl-31120793

ABSTRACT

OBJECTIVE. We provide overviews of deep learning approaches used by two top-placing teams for the 2018 Radiological Society of North America (RSNA) Pneumonia Detection Challenge. CONCLUSION. Practical applications of deep learning techniques, as well as insights into the annotation of the data, were keys to success in accurately detecting pneumonia on chest radiographs for the competition.


Subjects
Awards and Prizes; Deep Learning; Pneumonia/diagnostic imaging; Societies, Medical; Algorithms; Humans; North America
17.
J Digit Imaging ; 32(5): 888-896, 2019 10.
Article in English | MEDLINE | ID: mdl-30838482

ABSTRACT

Our objective is to evaluate the effectiveness of efficient convolutional neural networks (CNNs) for abnormality detection in chest radiographs and investigate the generalizability of our models on data from independent sources. We used the National Institutes of Health ChestX-ray14 (NIH-CXR) and the Rhode Island Hospital chest radiograph (RIH-CXR) datasets in this study. Both datasets were split into training, validation, and test sets. The DenseNet and MobileNetV2 CNN architectures were used to train models on each dataset to classify chest radiographs into normal or abnormal categories; models trained on NIH-CXR were designed to also predict the presence of 14 different pathological findings. Models were evaluated on both NIH-CXR and RIH-CXR test sets based on the area under the receiver operating characteristic curve (AUROC). DenseNet and MobileNetV2 models achieved AUROCs of 0.900 and 0.893 for normal versus abnormal classification on NIH-CXR and AUROCs of 0.960 and 0.951 on RIH-CXR. For the 14 pathological findings in NIH-CXR, MobileNetV2 achieved an AUROC within 0.03 of DenseNet for each finding, with an average difference of 0.01. When externally validated on independently collected data (e.g., RIH-CXR-trained models on NIH-CXR), model AUROCs decreased by 3.6-5.2% relative to their locally trained counterparts. MobileNetV2 achieved comparable performance to DenseNet in our analysis, demonstrating the efficacy of efficient CNNs for chest radiograph abnormality detection. In addition, models were able to generalize to external data albeit with performance decreases that should be taken into consideration when applying models on data from different institutions.
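The AUROC used throughout this study equals the probability that a randomly chosen abnormal radiograph receives a higher score than a randomly chosen normal one. A minimal rank-based sketch (illustrative labels and scores, not study data):

```python
def auroc(labels, scores):
    """Probability that a random positive outranks a random negative;
    ties count half (equivalent to the Mann-Whitney U statistic)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical abnormal (1) vs normal (0) model scores
print(auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
```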


Subjects
Lung Diseases/diagnostic imaging; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/methods; Datasets as Topic; Deep Learning; Female; Humans; Male; Middle Aged
18.
Am J Public Health ; 107(6): 938-944, 2017 06.
Article in English | MEDLINE | ID: mdl-28426306

ABSTRACT

OBJECTIVES: To evaluate the positive predictive value of machine learning algorithms for early assessment of adverse birth risk among pregnant women as a means of improving the allocation of social services. METHODS: We used administrative data for 6457 women collected by the Illinois Department of Human Services from July 2014 to May 2015 to develop a machine learning model for adverse birth prediction and improve upon the existing paper-based risk assessment. We compared different models and determined the strongest predictors of adverse birth outcomes using positive predictive value as the metric for selection. RESULTS: Machine learning algorithms performed similarly, outperforming the current paper-based risk assessment by up to 36%; a refined paper-based assessment outperformed the current assessment by up to 22%. We estimate that these improvements will allow 100 to 170 additional high-risk pregnant women screened for program eligibility each year to receive services that would have otherwise been unobtainable. CONCLUSIONS: Our analysis exhibits the potential for machine learning to move government agencies toward a more data-informed approach to evaluating risk and providing social services. Overall, such efforts will improve the efficiency of allocating resource-intensive interventions.


Subjects
Case Management; Machine Learning/statistics & numerical data; Prenatal Care/methods; Social Work/methods; Adult; Algorithms; Female; Humans; Illinois; Models, Theoretical; Pregnancy; Pregnancy Complications/prevention & control; Risk Assessment