Results 1 - 20 of 9,282
1.
Biomed Res Int ; 2021: 8840835, 2021.
Article in English | MEDLINE | ID: mdl-33708997

ABSTRACT

This study established an interpretable machine learning model to predict the severity of coronavirus disease 2019 (COVID-19) and output the most crucial deterioration factors. Clinical information, laboratory tests, and chest computed tomography (CT) scans at admission were collected. Two experienced radiologists reviewed the scans for the patterns, distribution, and CT scores of lung abnormalities. Six machine learning models were established to predict the severity of COVID-19. After parameter tuning and performance comparison, the optimal model was explained using Shapley Additive exPlanations (SHAP) to output the crucial factors. This study enrolled and classified 198 patients into mild (n = 162; 46.93 ± 14.49 years old) and severe (n = 36; 60.97 ± 15.91 years old) groups. The severe group had a higher temperature (37.42 ± 0.99°C vs. 36.75 ± 0.66°C), CT score at admission, neutrophil count, and neutrophil-to-lymphocyte ratio than the mild group. The XGBoost model ranked first among all models, with an AUC, sensitivity, and specificity of 0.924, 90.91%, and 97.96%, respectively. Early-stage chest CT findings, the total CT score for percentage of lung involvement, and age were the top three contributors to the XGBoost model's deterioration prediction. A higher total chest CT score had a greater impact on the prediction. In conclusion, the XGBoost model for predicting the severity of COVID-19 achieved excellent performance and output the essential factors in the deterioration process, which may help with early clinical intervention, improve prognosis, and reduce mortality.
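As a rough illustration of the headline metrics reported above (AUC, sensitivity, specificity), the sketch below computes them from scratch on tiny synthetic severity scores. The data, the 0.5 threshold, and the function names are hypothetical and are not taken from the study.

```python
def auc(labels, scores):
    """Rank-based AUC (Mann-Whitney U formulation): the probability that a
    randomly chosen positive case scores higher than a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity of the rule 'predict severe if score >= threshold'."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

Note that, unlike sensitivity and specificity, the rank-based AUC needs no threshold, which is why it is the usual single-number summary when comparing the six candidate models.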


Assuntos
/diagnóstico por imagem , Diagnóstico por Computador/métodos , Adulto , Idoso , Contagem de Células Sanguíneas , Dispneia/virologia , Feminino , Febre/virologia , Humanos , Aprendizado de Máquina , Masculino , Modelos Biológicos , Neutrófilos , Índice de Gravidade de Doença , Tomografia Computadorizada por Raios X
2.
Undersea Hyperb Med ; 48(1): 73-80, 2021.
Article in English | MEDLINE | ID: mdl-33648036

ABSTRACT

Venous gas emboli (VGE) are often quantified on echocardiograms as a marker of decompression stress. Bubble counting has been proposed as an easy-to-learn method, but it remains time-consuming, rendering large-dataset analysis impractical. Computer automation of VGE counting following this method has therefore been suggested as a means to eliminate rater bias and save time. A necessary step for this automation relies on the selection of a frame during late ventricular diastole (LVD) for each cardiac cycle of the recording. Since electrocardiograms (ECG) are not always recorded in field experiments, here we propose a fully automated method for LVD frame selection based on regional intensity minimization. The algorithm is tested on 20 previously acquired echocardiography recordings (from the original bubble-counting publication), half of which were acquired at rest (Rest) and the other half after leg flexions (Flex). From the 7,140 frames analyzed, sensitivity was found to be 0.913 [95% CI: 0.875-0.940] and specificity 0.997 [95% CI: 0.996-0.998]. The method's performance was also compared to that of random chance selection and found to be significantly better (p < 0.0001). No trend in algorithm performance was found with respect to VGE counts, and no significant difference was found between Flex and Rest (p > 0.05). In conclusion, full automation of LVD frame selection for the purpose of bubble counting in post-dive echocardiography has been established with excellent accuracy, although we caution that high-quality acquisitions remain paramount to retaining high reliability.
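Without an ECG, the proposed method picks the LVD frame as the intensity minimum of a region of interest within each cardiac cycle. A toy sketch of that selection step follows; the frame intensities and cycle boundaries are synthetic, and the real method's regional-intensity details are not reproduced here.

```python
def select_lvd_frames(region_intensity, cycle_bounds):
    """For each cardiac cycle given as a (start, end) frame window, pick the frame
    whose regional mean intensity is lowest -- a proxy for late ventricular
    diastole, when the blood-filled ventricle appears darkest in the region."""
    picks = []
    for start, end in cycle_bounds:
        window = region_intensity[start:end]
        # index of the minimum within the window, offset back to absolute frames
        picks.append(start + min(range(len(window)), key=window.__getitem__))
    return picks
```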


Assuntos
Algoritmos , Diagnóstico por Computador/métodos , Mergulho/fisiologia , Ecocardiografia/métodos , Embolia Aérea/diagnóstico por imagem , Função Ventricular/fisiologia , Doença da Descompressão/diagnóstico por imagem , Diagnóstico por Computador/estatística & dados numéricos , Diástole/fisiologia , Ecocardiografia/estatística & dados numéricos , Ventrículos do Coração/diagnóstico por imagem , Humanos , Contração Miocárdica/fisiologia , Sensibilidade e Especificidade
3.
JAMA Netw Open ; 4(1): e2030913, 2021 01 04.
Article in English | MEDLINE | ID: mdl-33416883

ABSTRACT

Importance: Accurate clinical decision support tools are needed to identify patients at risk for iatrogenic hypoglycemia, a potentially serious adverse event, throughout hospitalization. Objective: To predict the risk of iatrogenic hypoglycemia within 24 hours after each blood glucose (BG) measurement during hospitalization using a machine learning model. Design, Setting, and Participants: This retrospective cohort study, conducted at 5 hospitals within the Johns Hopkins Health System, included 54 978 admissions of 35 147 inpatients who had at least 4 BG measurements and received at least 1 U of insulin during hospitalization between December 1, 2014, and July 31, 2018. Data from the largest hospital were split into a 70% training set and 30% test set. A stochastic gradient boosting machine learning model was developed using the training set and validated on internal and external validation sets. Exposures: A total of 43 clinical predictors of iatrogenic hypoglycemia were extracted from the electronic medical record, including demographic characteristics, diagnoses, procedures, laboratory data, medications, orders, anthropometric data, and vital signs. Main Outcomes and Measures: Iatrogenic hypoglycemia was defined as a BG measurement less than or equal to 70 mg/dL occurring within the pharmacologic duration of action of administered insulin, sulfonylurea, or meglitinide. Results: This cohort study included 54 978 admissions (35 147 inpatients; median [interquartile range] age, 66.0 [56.0-75.0] years; 27 781 [50.5%] male; 30 429 [55.3%] White) from 5 hospitals. Of 1 612 425 index BG measurements, 50 354 (3.1%) were followed by iatrogenic hypoglycemia in the subsequent 24 hours.
On internal validation, the model achieved a C statistic of 0.90 (95% CI, 0.89-0.90), a positive predictive value of 0.09 (95% CI, 0.08-0.09), a positive likelihood ratio of 4.67 (95% CI, 4.59-4.74), a negative predictive value of 1.00 (95% CI, 1.00-1.00), and a negative likelihood ratio of 0.22 (95% CI, 0.21-0.23). On external validation, the model achieved C statistics ranging from 0.86 to 0.88, positive predictive values ranging from 0.12 to 0.13, negative predictive values of 0.99, positive likelihood ratios ranging from 3.09 to 3.89, and negative likelihood ratios ranging from 0.23 to 0.25. Basal insulin dose, coefficient of variation of BG, and previous hypoglycemic episodes were the strongest predictors. Conclusions and Relevance: These findings suggest that iatrogenic hypoglycemia can be predicted in a short-term prediction horizon after each BG measurement during hospitalization. Further studies are needed to translate this model into a real-time informatics alert and evaluate its effectiveness in reducing the incidence of inpatient iatrogenic hypoglycemia.
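The contrast between the high C statistic (0.90) and the low positive predictive value (0.09) follows directly from the low 3.1% event rate: most alerts will be false positives even for a well-discriminating model. A minimal sketch of how PPV, NPV, and the likelihood ratios fall out of a 2x2 confusion matrix; the counts and function name below are made up for illustration.

```python
def screening_metrics(tp, fp, fn, tn):
    """Derive screening metrics from a 2x2 confusion matrix. With a rare outcome,
    PPV can be low even when sensitivity and specificity are both high."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),        # precision among positive alerts
        "npv": tn / (tn + fn),
        "lr_pos": sens / (1 - spec),  # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,  # negative likelihood ratio
    }
```

Unlike PPV and NPV, the likelihood ratios do not depend on prevalence, which is why the abstract reports both families of metrics.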


Assuntos
Diagnóstico por Computador/métodos , Hipoglicemia/diagnóstico , Aprendizado de Máquina , Idoso , Glicemia/análise , Glicemia/fisiologia , Feminino , Hospitalização , Humanos , Hipoglicemia/epidemiologia , Hipoglicemia/prevenção & controle , Doença Iatrogênica , Masculino , Pessoa de Meia-Idade , Valor Preditivo dos Testes , Estudos Retrospectivos , Medição de Risco
4.
JAMA Netw Open ; 4(1): e2030939, 2021 01 04.
Article in English | MEDLINE | ID: mdl-33471115

ABSTRACT

Importance: A chronic shortage of donor kidneys is compounded by a high discard rate, and this rate is directly associated with biopsy specimen evaluation, which shows poor reproducibility among pathologists. A deep learning algorithm for measuring percent global glomerulosclerosis (an important predictor of outcome) on images of kidney biopsy specimens could enable pathologists to more reproducibly and accurately quantify percent global glomerulosclerosis, potentially saving organs that would have been discarded. Objective: To compare the performances of pathologists with a deep learning model on quantification of percent global glomerulosclerosis in whole-slide images of donor kidney biopsy specimens, and to determine the potential benefit of a deep learning model on organ discard rates. Design, Setting, and Participants: This prognostic study used whole-slide images acquired from 98 hematoxylin-eosin-stained frozen and 51 permanent donor biopsy specimen sections retrieved from 83 kidneys. Serial annotation by 3 board-certified pathologists served as ground truth for model training and for evaluation. Images of kidney biopsy specimens were obtained from the Washington University database (retrieved between June 2015 and June 2017). Cases were selected randomly from a database of more than 1000 cases to include biopsy specimens representing an equitable distribution within 0% to 5%, 6% to 10%, 11% to 15%, 16% to 20%, and more than 20% global glomerulosclerosis. Main Outcomes and Measures: Correlation coefficient (r) and root-mean-square error (RMSE) with respect to annotations were computed for cross-validated model predictions and on-call pathologists' estimates of percent global glomerulosclerosis when using individual and pooled slide results. Data were analyzed from March 2018 to August 2020. 
Results: The cross-validated model results of section images retrieved from 83 donor kidneys showed higher correlation with annotations (r = 0.916; 95% CI, 0.886-0.939) than on-call pathologists (r = 0.884; 95% CI, 0.825-0.923), which was further enhanced when pooling glomeruli counts from multiple levels (r = 0.933; 95% CI, 0.898-0.956). Model prediction error for single levels (RMSE, 5.631; 95% CI, 4.735-6.517) was 14% lower than that of on-call pathologists (RMSE, 6.523; 95% CI, 5.191-7.783), improving to 22% with multiple levels (RMSE, 5.094; 95% CI, 3.972-6.301). The model decreased the likelihood of unnecessary organ discard by 37% compared with pathologists. Conclusions and Relevance: The findings of this prognostic study suggest that this deep learning model provided a scalable and robust method to quantify percent global glomerulosclerosis in whole-slide images of donor kidneys. The model performance improved by analyzing multiple levels of a section, surpassing the capacity of pathologists in the time-sensitive setting of examining donor biopsy specimens. The results indicate the potential of a deep learning model to prevent erroneous donor organ discard.
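The pooling step that improved agreement can be sketched simply: sum sclerosed and total glomeruli counts across section levels before taking the percentage, rather than averaging per-level percentages (which over-weights levels with few glomeruli). The counts and function names below are illustrative, not from the study.

```python
def percent_gs(sclerosed, total):
    """Percent global glomerulosclerosis for a single section level."""
    return 100.0 * sclerosed / total

def pooled_percent_gs(levels):
    """Pool (sclerosed, total) glomeruli counts across multiple section levels
    before computing the percentage, as the study found pooling improves
    agreement with ground-truth annotations."""
    sclerosed = sum(s for s, t in levels)
    total = sum(t for s, t in levels)
    return 100.0 * sclerosed / total
```

For levels of (1 of 10) and (6 of 20) glomeruli, the naive average of per-level percentages is 20%, while the pooled estimate is 23.3%; the difference grows as glomeruli counts per level become more unequal.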


Assuntos
Biópsia/métodos , Aprendizado Profundo , Diagnóstico por Computador/métodos , Glomerulonefrite , Rim/patologia , Algoritmos , Glomerulonefrite/diagnóstico , Glomerulonefrite/patologia , Humanos , Patologistas , Reprodutibilidade dos Testes
5.
Sensors (Basel) ; 20(24)2020 Dec 19.
Article in English | MEDLINE | ID: mdl-33352690

ABSTRACT

Assessing health condition has a wide range of applications in the healthcare, military, aerospace, and industrial fields. Nevertheless, traditional feature-engineered techniques involve manual feature extraction, which is too cumbersome to adapt to the changes caused by the development of sensor network technology. Recently, deep-learning-based methods have achieved initial success in health-condition assessment research, but insufficient consideration of problems such as class skewness, noisy segments, and result interpretability makes it difficult to apply them to real-world applications. In this paper, we propose a K-margin-based interpretable learning approach for health-condition assessment. In detail, a skewness-aware RCR-Net model is employed to handle the problem of class skewness. Furthermore, we present a diagnosis model based on K-margin to automatically handle noisy segments by naturally exploiting the expected consistency among the segments associated with each record. Additionally, a knowledge-directed interpretation method is presented to learn domain-knowledge-level features automatically, without the help of human experts, which can be used as an interpretable basis for decision-making. Finally, through experimental validation in both the medical and aerospace fields, the proposed method shows better generality and high efficiency, with F1 scores of 0.7974 and 0.8005, outperforming all state-of-the-art deep learning methods for the health-condition assessment task by 3.30% and 2.99%, respectively.


Assuntos
Diagnóstico por Computador/métodos , Aprendizado de Máquina , Humanos , Ruído
6.
Lancet Digit Health ; 2(9): e486-e488, 2020 09.
Article in English | MEDLINE | ID: mdl-33328116

ABSTRACT

Artificial intelligence (AI) is a disruptive technology that involves the use of computerised algorithms to dissect complicated data. Among the most promising clinical applications of AI is diagnostic imaging, and mounting attention is being directed at establishing and fine-tuning its performance to facilitate detection and quantification of a wide array of clinical conditions. Investigations leveraging computer-aided diagnostics have shown excellent accuracy, sensitivity, and specificity for the detection of small radiographic abnormalities, with the potential to improve public health. However, outcome assessment in AI imaging studies is commonly defined by lesion detection while ignoring the type and biological aggressiveness of a lesion, which might create a skewed representation of AI's performance. Moreover, the use of non-patient-focused radiographic and pathological endpoints might enhance the estimated sensitivity at the expense of increasing false positives and possible overdiagnosis as a result of identifying minor changes that might reflect subclinical or indolent disease. We argue for refinement of AI imaging studies via consistent selection of clinically meaningful endpoints such as survival, symptoms, and need for treatment.


Assuntos
Inteligência Artificial , Diagnóstico por Computador/métodos , Diagnóstico por Imagem/métodos , Doenças Cardiovasculares/diagnóstico , Doenças Cardiovasculares/patologia , Humanos , Interpretação de Imagem Assistida por Computador/métodos , Neoplasias/diagnóstico , Neoplasias/patologia
8.
PLoS One ; 15(10): e0240048, 2020.
Article in English | MEDLINE | ID: mdl-33031408

ABSTRACT

BACKGROUND: The detection of wheezes as an exacerbation sign is important in certain respiratory diseases. However, few highly accurate clinical methods are available for automatic detection of wheezes in children. This study aimed to develop a wheeze detection algorithm for practical implementation in children. METHODS: A wheeze recognition algorithm was developed based on wheeze features following the Computerized Respiratory Sound Analysis (CORSA) guidelines. Wheezes can be detected by auscultation with a stethoscope and by automatic computerized lung sound analysis. Lung sounds were recorded for 30 s in 214 children aged 2 months to 12 years and 11 months in a pediatric consultation room. Files containing recorded lung sounds were assessed by two specialist physicians and divided into two groups: 65 were designated as "wheeze" files and 149 as "no-wheeze" files. The two specialist physicians agreed on all lung sound judgments. We compared wheeze recognition by the specialist physicians with that of the wheeze recognition algorithm and calculated the sensitivity, specificity, positive predictive value, and negative predictive value for all recorded sound files to evaluate the influence of age on wheeze detection sensitivity. RESULTS: The detection of wheezes was not influenced by age. In all files, wheezes were differentiated from noise using the wheeze recognition algorithm. The sensitivity, specificity, positive predictive value, and negative predictive value of the wheeze recognition algorithm were 100%, 95.7%, 90.3%, and 100%, respectively. CONCLUSIONS: The wheeze recognition algorithm could identify wheezes in sound files and therefore may be useful in the practical implementation of respiratory illness management at home using properly developed devices.
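The CORSA guidelines characterize a wheeze as a continuous adventitious sound, typically required to persist for on the order of 100 ms. The paper does not publish its algorithm; the sketch below shows one plausible duration-based rule applied to a per-frame dominant-frequency track, where every threshold (frame length, minimum duration, allowed frequency drift) is an illustrative assumption.

```python
def detect_wheeze(peak_freqs, frame_ms=25, min_duration_ms=100, max_drift_hz=50):
    """Flag a wheeze when a dominant tonal peak persists across consecutive
    analysis frames. peak_freqs holds the dominant spectral peak per frame in Hz,
    or None when no tonal peak was found (noise-like frame)."""
    run, prev = 0, None
    for f in peak_freqs:
        if f is not None and (prev is None or abs(f - prev) <= max_drift_hz):
            run += 1            # tonal peak continues (or starts) a run
        else:
            run = 1 if f is not None else 0   # drift too large or no peak: reset
        prev = f
        if run * frame_ms >= min_duration_ms:
            return True
    return False
```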


Assuntos
Algoritmos , Pneumopatias/diagnóstico , Sons Respiratórios/fisiologia , Auscultação , Criança , Pré-Escolar , Diagnóstico por Computador/métodos , Feminino , Humanos , Lactente , Masculino , Sensibilidade e Especificidade
9.
PLoS Comput Biol ; 16(9): e1008186, 2020 09.
Article in English | MEDLINE | ID: mdl-32941425

ABSTRACT

Identifying heterogeneous cognitive impairment markers at an early stage is vital for Alzheimer's disease diagnosis. However, due to complex and uncertain brain connectivity features in the cognitive domains, it remains challenging to quantify functional brain connectomic changes during non-pharmacological interventions for amnestic mild cognitive impairment (aMCI) patients. We present a quantitative method for functional brain network analysis of fMRI data based on the multi-graph unsupervised Gaussian embedding method (MG2G). This neural network-based model can effectively learn low-dimensional Gaussian distributions from the original high-dimensional sparse functional brain networks, quantify uncertainties in link prediction, and discover the intrinsic dimensionality of brain networks. Using the Wasserstein distance to measure probabilistic changes, we discovered that brain regions in the default mode network and somatosensory/somatomotor hand, fronto-parietal task control, memory retrieval, and visual and dorsal attention systems had relatively large variations during non-pharmacological training, which might provide distinct biomarkers for fine-grained monitoring of aMCI cognitive alteration. An important finding of our study is the ability of the new method to capture subtle changes for individual patients before and after short-term intervention. More broadly, the MG2G method can be used in studying multiple brain disorders and injuries, e.g., in Parkinson's disease or traumatic brain injury (TBI), and hence it will be useful to the wider neuroscience community.
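The MG2G model embeds each brain region as a Gaussian distribution, so "probabilistic change" between scans reduces to a distance between Gaussians. For diagonal covariances the 2-Wasserstein distance has the closed form W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2; a minimal sketch of that formula, assuming diagonal-covariance embeddings (the paper's exact parameterization may differ):

```python
import math

def wasserstein2_diag_gaussians(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between two Gaussians with diagonal
    covariance, given as mean vectors and per-dimension standard deviations:
    W2^2 = sum_i (mu1_i - mu2_i)^2 + sum_i (sigma1_i - sigma2_i)^2."""
    sq = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    sq += sum((a - b) ** 2 for a, b in zip(sigma1, sigma2))
    return math.sqrt(sq)
```

Regions whose pre- vs. post-intervention embedding distance is large are the candidates the study flags (default mode, somatomotor, fronto-parietal, memory retrieval, and attention systems).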


Assuntos
Encéfalo , Disfunção Cognitiva , Diagnóstico por Computador/métodos , Distribuição Normal , Encéfalo/diagnóstico por imagem , Encéfalo/fisiopatologia , Disfunção Cognitiva/diagnóstico por imagem , Disfunção Cognitiva/fisiopatologia , Disfunção Cognitiva/terapia , Conectoma , Humanos , Imagem por Ressonância Magnética , Memória/fisiologia , Testes de Estado Mental e Demência , Pessoa de Meia-Idade , Aprendizado de Máquina não Supervisionado
10.
Article in English | MEDLINE | ID: mdl-32971756

ABSTRACT

PURPOSE: To compare different commercial software packages for the quantification of pneumonia lesions in COVID-19 infection and to stratify patients based on disease severity using chest computed tomography (CT) images. MATERIALS AND METHODS: We retrospectively examined 162 patients with COVID-19 infection confirmed by reverse transcriptase-polymerase chain reaction (RT-PCR) testing. All cases were evaluated separately by radiologists (visually) and by using three computer software programs: (1) Thoracic VCAR software, GE Healthcare, United States; (2) Myrian, Intrasense, France; (3) InferRead, InferVision Europe, Wiesbaden, Germany. The degree of lesions was visually scored by the radiologist on a 5-level scale (none, mild, moderate, severe, and critical). The parameters obtained using the computer tools included healthy residual lung parenchyma, ground-glass opacity (GGO) area, and consolidation volume. Intraclass correlation coefficient (ICC), Spearman correlation analysis, and non-parametric tests were performed. RESULTS: Thoracic VCAR software was unable to perform volume segmentation in 26/162 (16.0%) cases, Myrian software in 12/162 (7.4%) patients, and InferRead software in 61/162 (37.7%) patients. Great variability (ICC ranged from 0.17 to 0.51) was detected among the quantitative measurements of residual healthy lung parenchyma volume, GGO, and consolidation volumes calculated by the different computer tools. The overall radiological severity score was moderately correlated with the residual healthy lung parenchyma volume obtained by the Thoracic VCAR or Myrian software, with the GGO area obtained by the Thoracic VCAR tool, and with the consolidation volume obtained by the Myrian software. Volumes quantified by the InferRead software had a low correlation with the overall radiological severity score.
CONCLUSIONS: Computer-aided pneumonia quantification could be an easy and feasible way to stratify COVID-19 cases according to severity; however, a great variability among quantitative measurements provided by computer tools should be considered.
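All three tools reduce a chest CT to healthy-parenchyma, GGO, and consolidation volumes, from which a severity stratification can be a simple function of percent lung involvement. The sketch below is a generic illustration of that idea only; the cutoffs and labels are assumptions and do not reproduce the study's visual scoring definition.

```python
def lung_involvement(healthy_ml, ggo_ml, consolidation_ml):
    """Percent of total segmented lung volume affected (GGO + consolidation)."""
    total = healthy_ml + ggo_ml + consolidation_ml
    return 100.0 * (ggo_ml + consolidation_ml) / total

def severity_grade(pct):
    """Map percent involvement onto a 5-level scale; cutoffs are illustrative."""
    for cutoff, label in ((5, "none/minimal"), (25, "mild"),
                          (50, "moderate"), (75, "severe")):
        if pct < cutoff:
            return label
    return "critical"
```

The ICC range of 0.17-0.51 reported above implies that the same patient could land in different grades depending on which tool produced the volumes, which is the paper's central caution.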


Assuntos
Infecções por Coronavirus/diagnóstico por imagem , Diagnóstico por Computador/métodos , Pneumonia Viral/diagnóstico por imagem , Tomografia Computadorizada por Raios X/métodos , Estudos de Viabilidade , Humanos , Pandemias , Estudos Retrospectivos , Índice de Gravidade de Doença , Software
11.
Sci Rep ; 10(1): 13590, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788602

ABSTRACT

Chest radiographs are among the most frequently acquired images in radiology and are often the subject of computer vision research. However, most of the models used to classify chest radiographs are derived from openly available deep neural networks trained on large image datasets. These datasets differ from chest radiographs in that they are mostly color images and have substantially more labels. Therefore, very deep convolutional neural networks (CNN) designed for ImageNet, often representing more complex relationships, might not be required for the comparably simpler task of classifying medical image data. Sixteen different CNN architectures were compared regarding classification performance on two openly available datasets, CheXpert and the COVID-19 Image Data Collection. Areas under the receiver operating characteristic curves (AUROC) between 0.83 and 0.89 could be achieved on the CheXpert dataset. On the COVID-19 Image Data Collection, all models showed an excellent ability to detect COVID-19 and non-COVID pneumonia, with AUROC values between 0.983 and 0.998. It could be observed that shallower networks may achieve results comparable to their deeper and more complex counterparts with shorter training times, enabling classification performance on medical image data close to state-of-the-art methods even when using limited hardware.


Assuntos
Betacoronavirus , Infecções por Coronavirus/diagnóstico por imagem , Aprendizado Profundo , Diagnóstico por Computador/métodos , Redes Neurais de Computação , Pneumonia Viral/diagnóstico por imagem , Radiografia Torácica/classificação , Tórax/diagnóstico por imagem , Infecções por Coronavirus/virologia , Humanos , Pandemias , Pneumonia Viral/virologia , Curva ROC , Sensibilidade e Especificidade
12.
PLoS One ; 15(8): e0237213, 2020.
Article in English | MEDLINE | ID: mdl-32797099

ABSTRACT

Bone metastasis is one of the most frequent complications of prostate cancer; scintigraphy imaging is particularly important for the clinical diagnosis of bone metastasis. To date, minimal research has been conducted on the application of machine learning, with emphasis on modern, efficient convolutional neural network (CNN) algorithms, to the diagnosis of prostate cancer metastasis from bone scintigraphy images. The advantageous and outstanding capabilities of deep learning, machine learning's groundbreaking technological advancement, have not yet been fully investigated for application in computer-aided diagnosis systems in the field of medical image analysis, such as the problem of bone metastasis classification in whole-body scans. In particular, CNNs are gaining great attention due to their ability to recognize complex visual patterns, in the same way human perception operates. Considering all these new developments in the field of deep learning, a set of simpler, faster, and more accurate CNN architectures, designed for classification of metastatic prostate cancer in bones, is explored. This research study has a twofold goal: to create and demonstrate a set of simple but robust CNN models for automatic classification of whole-body scans into two categories, malignant (bone metastasis) or healthy, using solely the scans at the input level. Through a meticulous exploration of CNN hyper-parameter selection and fine-tuning, the best architecture is selected with respect to classification accuracy. Thus a CNN model with improved classification capabilities for bone metastasis diagnosis is produced, using bone scans from prostate cancer patients. The achieved classification testing accuracy is 97.38%, whereas the average sensitivity is approximately 95.8%. Finally, the best-performing CNN method is compared to other popular and well-known CNN architectures used for medical imaging, such as VGG16, ResNet50, GoogLeNet, and MobileNet.
The classification results show that the proposed CNN-based approach outperforms the popular CNN methods in nuclear medicine for metastatic prostate cancer diagnosis in bones.


Assuntos
Neoplasias Ósseas/secundário , Redes Neurais de Computação , Neoplasias da Próstata/patologia , Imagem Corporal Total/métodos , Neoplasias Ósseas/classificação , Neoplasias Ósseas/diagnóstico por imagem , Diagnóstico por Computador/métodos , Humanos , Interpretação de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Masculino , Cintilografia/métodos , Software
13.
Hautarzt ; 71(9): 669-676, 2020 Sep.
Article in German | MEDLINE | ID: mdl-32747996

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is increasingly being used in medical practice. Especially in the image-based diagnosis of skin cancer, AI shows great potential. However, there is a significant discrepancy between expectations and true relevance of AI in current dermatological practice. OBJECTIVES: This article summarizes promising study results of skin cancer diagnosis by computer-based diagnostic systems and discusses their significance for daily practice. We hereby focus on the analysis of dermoscopic images of pigmented and unpigmented skin lesions. MATERIALS AND METHODS: A selective literature search for recent relevant trials was conducted. The included studies used machine learning, and in particular "convolutional neural networks", which have been shown to be particularly effective for the classification of image data. RESULTS AND CONCLUSIONS: In numerous studies, computer algorithms were able to detect pigmented and nonpigmented neoplasms of the skin with high precision, comparable to that of dermatologists. The combination of the physician's assessment and AI showed the best results. Computer-based diagnostic systems are widely accepted among patients and physicians. However, they are still not applicable in daily practice, since computer-based diagnostic systems have only been tested in an experimental environment. In addition, many digital diagnostic criteria that help AI to classify skin lesions remain unclear. This lack of transparency still needs to be addressed. Moreover, clinical studies on the use of AI-based assistance systems are needed in order to prove its applicability in daily dermatologic practice.


Assuntos
Inteligência Artificial , Diagnóstico por Computador/métodos , Programas de Rastreamento/métodos , Melanoma/diagnóstico , Redes Neurais de Computação , Neoplasias Cutâneas/diagnóstico , Algoritmos , Dermoscopia , Humanos , Processamento de Imagem Assistida por Computador/métodos
14.
Nat Commun ; 11(1): 4294, 2020 08 27.
Article in English | MEDLINE | ID: mdl-32855423

ABSTRACT

The early detection and accurate histopathological diagnosis of gastric cancer increase the chances of successful treatment. The worldwide shortage of pathologists offers a unique opportunity for the use of artificial intelligence assistance systems to alleviate the workload and increase diagnostic accuracy. Here, we report a clinically applicable system developed at the Chinese PLA General Hospital, China, using a deep convolutional neural network trained with 2,123 pixel-level annotated H&E-stained whole slide images. The model achieves a sensitivity near 100% and an average specificity of 80.6% on a real-world test dataset with 3,212 whole slide images digitalized by three scanners. We show that the system could aid pathologists in improving diagnostic accuracy and preventing misdiagnoses. Moreover, we demonstrate that our system performs robustly with 1,582 whole slide images from two other medical centres. Our study suggests the feasibility and benefits of using histopathological artificial intelligence assistance systems in routine practice scenarios.


Assuntos
Aprendizado Profundo , Diagnóstico por Computador/métodos , Neoplasias Gástricas/patologia , Bases de Dados Factuais , Reações Falso-Positivas , Humanos , Processamento de Imagem Assistida por Computador/métodos , Redes Neurais de Computação
15.
Medicine (Baltimore) ; 99(26): e20787, 2020 Jun 26.
Article in English | MEDLINE | ID: mdl-32590758

ABSTRACT

Convolutional neural networks (CNNs), a particular type of deep learning architecture, are positioned to become one of the most transformative technologies for medical applications. The aim of the current study was to evaluate the efficacy of a deep CNN algorithm for the identification and classification of dental implant systems. A total of 5,390 panoramic and 5,380 periapical radiographic images from 3 types of dental implant systems, with similar shape and internal conical connection, were randomly divided into a training and validation dataset (80%) and a test dataset (20%). We performed image preprocessing and transfer learning techniques, based on a fine-tuned, pre-trained deep CNN architecture (GoogLeNet Inception-v3). The test dataset was used to assess the accuracy, sensitivity, specificity, receiver operating characteristic curve, area under the receiver operating characteristic curve (AUC), and confusion matrix, compared between the deep CNN and a board-certified periodontist. We found that the deep CNN architecture (AUC = 0.971, 95% confidence interval 0.963-0.978) and the board-certified periodontist (AUC = 0.925, 95% confidence interval 0.913-0.935) showed reliable classification accuracies. This study demonstrated that a deep CNN architecture is useful for the identification and classification of dental implant systems using panoramic and periapical radiographic images.


Assuntos
Algoritmos , Implantes Dentários , Diagnóstico por Computador/métodos , Redes Neurais de Computação , Radiografia Dentária/métodos , Aprendizado Profundo , Implantes Dentários/classificação , Implantes Dentários/normas , Humanos , Projetos Piloto , Radiografia Panorâmica/métodos , Reprodutibilidade dos Testes , Resultado do Tratamento
16.
J Pain Symptom Manage ; 60(2): e1-e6, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32437944

ABSTRACT

Effective prognostication for a novel disease presents significant challenges, especially given the stress induced during a pandemic. We developed a point-of-care tool to summarize outcome data for critically ill patients with COVID-19 and help guide clinicians through a thoughtful prognostication process. Two authors reviewed studies of outcomes of patients with critical illness due to COVID-19 and created a visual infographic tool based on available data. Survival data were supplemented by descriptions of best- and worst-case clinical scenarios. The tool also included prompts for clinician reflection designed to enhance awareness of cognitive biases that may affect prognostic accuracy. This online, open-source COVID-19 Prognostication Tool has been made available to all clinicians at our institution and is updated weekly to reflect evolving data. Our COVID-19 Prognostication Tool may provide a useful approach to promoting consistent and high-quality prognostic communication across a health care system.


Subjects
Coronavirus Infections/diagnosis , Diagnosis, Computer-Assisted , Health Communication , Pneumonia, Viral/diagnosis , Aged , Coronavirus Infections/therapy , Critical Care , Critical Illness , Data Visualization , Diagnosis, Computer-Assisted/methods , Health Communication/methods , Health Personnel/psychology , Humans , Internet , Middle Aged , Palliative Care/methods , Pandemics , Pneumonia, Viral/therapy , Point-of-Care Systems , Prejudice , Prognosis
17.
PLoS One ; 15(5): e0233079, 2020.
Article in English | MEDLINE | ID: mdl-32407355

ABSTRACT

PURPOSE: To evaluate ways to improve the generalizability of a deep learning algorithm for identifying glaucomatous optic neuropathy (GON) using a limited number of fundus photographs, as well as the key features used for classification. METHODS: A total of 944 fundus images from Taipei Veterans General Hospital (TVGH) were retrospectively collected. Clinical and demographic characteristics, including structural and functional measurements of the images with GON, were recorded. Transfer learning based on VGGNet was used to construct a convolutional neural network (CNN) to identify GON. To avoid missing cases with advanced GON, an ensemble model was adopted in which a support vector machine classifier would make the final classification, based on the cup-to-disc ratio, whenever the CNN classifier produced a low confidence score. The CNN classifier was first established using the TVGH dataset and then fine-tuned by combining the training images of the TVGH and Drishti-GS datasets. Class activation maps (CAMs) were used to identify the key features used for CNN classification. The performance of each classifier was determined by the area under the receiver operating characteristic curve (AUC) and compared with the ensemble model by diagnostic accuracy. RESULTS: In 187 TVGH test images, the accuracy, sensitivity, and specificity of the CNN classifier were 95.0%, 95.7%, and 94.2%, respectively, and its AUC was 0.992, compared to the 92.8% accuracy of the ensemble model. For the Drishti-GS test images, the accuracies of the CNN, the fine-tuned CNN, and the ensemble model were 33.3%, 80.3%, and 80.3%, respectively. The CNN classifier did not misclassify images with moderate to severe disease. Class-discriminative regions revealed by CAM co-localized with known characteristics of GON. CONCLUSIONS: The ensemble model or a fine-tuned CNN classifier may be a potential design for building a generalizable deep learning model for glaucoma detection when large image databases are not available.
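The ensemble logic described above — fall back to a cup-to-disc-ratio classifier when the CNN is unsure — reduces to a simple decision rule. The sketch below is illustrative only: the confidence threshold, the cup-to-disc cutoff, and the rule standing in for the study's SVM classifier are assumptions, not the paper's actual parameters:

```python
def ensemble_predict(cnn_prob_gon, cup_to_disc_ratio,
                     confidence_threshold=0.8, cdr_cutoff=0.6):
    """Return (label, source): use the CNN prediction when it is
    confident; otherwise defer to a rule on the cup-to-disc ratio
    (a stand-in for the study's SVM classifier)."""
    confidence = max(cnn_prob_gon, 1.0 - cnn_prob_gon)
    if confidence >= confidence_threshold:
        label = "GON" if cnn_prob_gon >= 0.5 else "normal"
        return label, "cnn"
    # Low-confidence case: a large cup-to-disc ratio suggests GON,
    # so advanced cases are not missed by an uncertain CNN.
    label = "GON" if cup_to_disc_ratio >= cdr_cutoff else "normal"
    return label, "cdr_fallback"

print(ensemble_predict(0.95, 0.30))  # ('GON', 'cnn')
print(ensemble_predict(0.55, 0.75))  # ('GON', 'cdr_fallback')
```

The point of the design is the second call: the CNN alone (probability 0.55) would be uncertain, but the fallback still flags the large cup-to-disc ratio.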


Subjects
Diagnosis, Computer-Assisted/methods , Glaucoma/complications , Glaucoma/diagnosis , Optic Nerve Diseases/complications , Optic Nerve Diseases/diagnosis , Adult , Aged , Aged, 80 and over , Algorithms , Area Under Curve , Databases, Factual , Deep Learning , Diagnosis, Computer-Assisted/statistics & numerical data , Female , Fundus Oculi , Glaucoma/classification , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged , Neural Networks, Computer , Optic Nerve Diseases/classification , Retrospective Studies , Support Vector Machine , Taiwan
18.
Neural Netw ; 128: 47-60, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32416467

ABSTRACT

Analysis of tumor tissue from the oral cavity is essential for the pathologist to ascertain its grade. Recent studies using biopsy images have demonstrated computer-aided diagnosis of oral submucous fibrosis (OSF) with machine learning algorithms, but no research has yet addressed multi-class grading of oral squamous cell carcinoma (OSCC). With the advent of deep learning in digital imaging and computational aids to diagnosis, multi-class classification of OSCC biopsy images can support timely and effective prognosis and multi-modal treatment protocols for oral cancer patients, reducing the operational workload of pathologists while enhancing management of the disease. With this motivation, this study classifies OSCC into its four classes according to Broder's system of histological grading. The study is conducted on oral biopsy images using two methods: (i) transfer learning with pre-trained deep convolutional neural networks (CNNs), in which four candidate pre-trained models, namely AlexNet, VGG-16, VGG-19, and ResNet-50, were compared to find the most suitable model for our classification problem, and (ii) a proposed CNN model. Although the highest classification accuracy among the pre-trained models, 92.15%, was achieved by ResNet-50, the experimental findings show that the proposed CNN model outperformed the transfer learning approaches with an accuracy of 97.5%. We conclude that the proposed CNN-based multi-class grading method could be used for the diagnosis of patients with OSCC.
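The model-comparison step above — training several candidate networks and keeping the best — amounts to selection by validation accuracy. In the sketch below, the 92.15% (ResNet-50) and 97.5% (proposed CNN) figures come from the abstract; the AlexNet, VGG-16, and VGG-19 values are placeholders, not the paper's results:

```python
def select_best_model(results):
    """Given {model_name: validation_accuracy}, return the winner
    as a (name, accuracy) pair."""
    return max(results.items(), key=lambda kv: kv[1])

candidates = {
    "AlexNet": 0.88,        # placeholder
    "VGG-16": 0.90,         # placeholder
    "VGG-19": 0.89,         # placeholder
    "ResNet-50": 0.9215,    # from the abstract
    "proposed CNN": 0.975,  # from the abstract
}
name, acc = select_best_model(candidates)
print(name, acc)  # proposed CNN 0.975
```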


Subjects
Carcinoma, Squamous Cell/pathology , Deep Learning , Diagnosis, Computer-Assisted/methods , Epithelial Cells/classification , Mouth Neoplasms/pathology , Epithelial Cells/pathology , Humans
19.
Radiology ; 295(3): 517-526, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32228293

ABSTRACT

Background Higher peak enhancement and washout component values measured on preoperative breast MRI scans with computer-aided diagnosis (CAD) are presumed to be associated with worse recurrence-free survival. Purpose To investigate whether CAD-extracted kinetic features of breast cancer and the heterogeneity of these features at preoperative MRI are associated with distant metastasis-free survival in women with invasive breast cancer. Materials and Methods Consecutive women with newly diagnosed invasive breast cancer who underwent preoperative MRI were retrospectively evaluated between 2011 and 2012. A commercially available CAD system was used to extract the peak enhancement and delayed enhancement profiles of each breast cancer case from preoperative MRI data. The kinetic heterogeneity of these features (a measure of heterogeneity in the proportions of tumor pixels with delayed washout, plateau, and persistent components within a tumor) was calculated to evaluate intratumoral heterogeneity. Cox proportional hazards models were used to investigate the associations between CAD-extracted kinetic features and distant metastasis-free survival after adjusting for clinical-pathologic factors. Results A total of 276 consecutive women (mean age, 53 years) were evaluated. In 28 of 276 (10.1%) women, distant metastasis developed at a median follow-up of 79 months. A higher degree of kinetic heterogeneity was observed in women with distant metastases than in those without distant metastases (mean, 0.70 ± 0.2 vs 0.43 ± 0.3; P < .001). 
Multivariable Cox proportional hazards analysis revealed that a higher degree of kinetic heterogeneity (hazard ratio [HR], 19.2; 95% confidence interval [CI]: 4.2, 87.1; P < .001), higher peak enhancement (HR, 1.001; 95% CI: 1.000, 1.002; P = .045), the presence of lymphovascular invasion (HR, 3.3; 95% CI: 1.5, 7.5; P = .004), and a higher histologic grade (ie, grade 3) (HR, 2.2; 95% CI: 1.0, 4.9; P = .044) were associated with worse distant metastasis-free survival. Conclusion Higher values of kinetic heterogeneity and peak enhancement as determined with computer-aided diagnosis of preoperative MRI were associated with worse distant metastasis-free survival in women with invasive breast cancer. © RSNA, 2020 See also the editorial by El Khouli and Jacobs in this issue.
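The kinetic heterogeneity measure described above quantifies how evenly tumor pixels are spread across the washout, plateau, and persistent components. One common way to express such a measure is the normalized Shannon entropy of the three proportions; the function below is a plausible sketch under that assumption, not necessarily the CAD system's exact formula:

```python
import math

def kinetic_heterogeneity(washout, plateau, persistent):
    """Normalized Shannon entropy of the three kinetic-component
    proportions: 0 when one component dominates completely,
    1 when all three are equally represented."""
    props = [washout, plateau, persistent]
    total = sum(props)
    entropy = 0.0
    for p in (x / total for x in props):
        if p > 0:
            entropy -= p * math.log(p)
    return entropy / math.log(3)  # normalize to [0, 1]

print(round(kinetic_heterogeneity(1/3, 1/3, 1/3), 3))   # 1.0
print(round(kinetic_heterogeneity(0.98, 0.01, 0.01), 3))  # 0.102
```

Under this reading, a tumor dominated by a single kinetic pattern scores near 0, while a mixed tumor scores near 1, matching the higher mean value (0.70 vs 0.43) reported for women who developed distant metastases.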


Subjects
Breast Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Mammography/methods , Breast Neoplasms/mortality , Breast Neoplasms/surgery , Female , Follow-Up Studies , Humans , Lymphatic Metastasis , Middle Aged , Neoplasm Invasiveness , Preoperative Care , Progression-Free Survival , Proportional Hazards Models , Retrospective Studies
20.
Technol Cancer Res Treat ; 19: 1533033820916191, 2020.
Article in English | MEDLINE | ID: mdl-32347167

ABSTRACT

Breast cancer is a worldwide burden on women's health. Although early diagnosis and timely treatment have drawn considerable attention, further efforts are needed toward precision medicine and individualized treatment. Radiomics is a new technology with immense potential to generate mineable data that provide rich information about the diagnosis and prognosis of breast cancer. In this study, we introduce the radiomics workflow and its applications, as well as its outlook and challenges, based on published studies. Radiomics has the potential to differentiate between malignant and benign breast lesions and to predict axillary lymph node status, molecular subtypes of breast cancer, tumor response to chemotherapy, and survival outcomes. Our study aims to familiarize clinicians and radiologists with the basics of radiomics and to encourage cooperation with scientists in mining data for better application in clinical practice.


Subjects
Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Radiology/methods , Breast Neoplasms/diagnosis , Diagnosis, Computer-Assisted/methods , Female , Humans , Magnetic Resonance Imaging/methods , Mammography/methods , Positron-Emission Tomography/methods , Precision Medicine , Prognosis , Ultrasonography/methods