Results 1 - 20 of 2,587
1.
Comput Math Methods Med ; 2020: 9756518, 2020.
Article in English | MEDLINE | ID: mdl-33014121

ABSTRACT

The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest radiography approach. The last few months have witnessed a rapid increase in the number of studies using artificial intelligence (AI) techniques to diagnose COVID-19 with chest computed tomography (CT). In this study, we review AI-based approaches to diagnosing COVID-19 from chest CT. We searched ArXiv, MedRxiv, and Google Scholar using the terms "deep learning", "neural networks", "COVID-19", and "chest CT". At the time of writing (August 24, 2020), there were nearly 100 studies, of which 30 were selected for this review. We categorized the studies by classification task: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The reported sensitivity, specificity, precision, accuracy, area under the curve, and F1 score reached as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared carefully because the different classification tasks differ in difficulty.


Subjects
Betacoronavirus , Clinical Laboratory Techniques , Coronavirus Infections/diagnostic imaging , Pandemics , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/statistics & numerical data , Artificial Intelligence , Coronavirus Infections/diagnosis , Coronavirus Infections/epidemiology , Deep Learning , Humans , Neural Networks, Computer , Pneumonia/classification , Pneumonia/diagnostic imaging , Pneumonia, Viral/epidemiology , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Radiography, Thoracic/statistics & numerical data , Sensitivity and Specificity
2.
Environ Monit Assess ; 192(11): 698, 2020 Oct 12.
Article in English | MEDLINE | ID: mdl-33044609

ABSTRACT

Environmental monitoring guides conservation and is particularly important for aquatic habitats, which are heavily impacted by human activities. Underwater cameras and uncrewed devices monitor aquatic wildlife, but manual processing of footage is a significant bottleneck to rapid data processing and dissemination of results. Deep learning has emerged as a solution, but its ability to accurately detect animals across habitat types and locations is largely untested for coastal environments. Here, we produce five deep learning models using an object detection framework to detect an ecologically important fish, luderick (Girella tricuspidata). We trained two models on footage from single habitats (seagrass or reef) and three on footage from both habitats. All models were subjected to tests from both habitat types. Models performed well on test data from the same habitat type (object detection measure mAP50: 91.7% and 86.9% for seagrass and reef, respectively) but poorly on test sets from the other habitat type (73.3% and 58.4%, respectively). The model trained on a combination of both habitats produced the highest object detection results on both tests (an average of 92.4% and 87.8%, respectively). The ability of the combination-trained models to correctly estimate the ecological abundance metric MaxN showed similar patterns. The findings demonstrate that deep learning models can extract ecologically useful information from video footage accurately and consistently, and can perform across habitat types when trained on footage from a variety of habitat types.
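The mAP50 figures above score a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching rule, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples (the tuple convention and function names are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, truth, threshold=0.5):
    """At mAP50, a detection counts as correct if IoU with a ground-truth box >= 0.5."""
    return iou(pred, truth) >= threshold
```

Full mAP additionally ranks detections by confidence and averages precision over recall levels; the IoU test above is the matching criterion at its core.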


Subjects
Deep Learning , Environmental Monitoring , Animals , Ecosystem , Environment , Fishes , Humans
3.
Zhonghua Yan Ke Za Zhi ; 56(10): 774-779, 2020 Oct 11.
Article in Chinese | MEDLINE | ID: mdl-33059421

ABSTRACT

Objective: To evaluate the application value of a deep-learning-based imaging method for rapid measurement and evaluation of meibomian glands. Methods: Diagnostic evaluation study. From January 2017 to December 2018, 2,304 meibomian gland images of 576 dry eye patients treated at the Eye Center of Wuhan University People's Hospital, with an average age of 40.03±11.46 years, were collected to build a meibomian gland image database. These images were labeled by 2 clinicians, and a deep learning algorithm was used to build a model and assess its accuracy in identifying and labeling the meibomian glands and calculating the rate of meibomian gland loss. Mean average precision (mAP) and validation loss were used to assess the accuracy of the model in identifying feature areas. Sixty-four meibomian gland images outside the database were randomly selected and evaluated independently by 7 clinicians. The results were analyzed with a paired t-test. Results: The model marked the meibomian conjunctiva (mAP>0.976, validation loss<0.35) and the meibomian glands (mAP>0.922, validation loss<1.0), respectively, achieving high accuracy in calculating the area and ratio of meibomian gland loss. The proportion of meibomian glands marked by the model was 53.24%±11.09%, versus 52.13%±13.38% for manual marking; the difference was not statistically significant (t=1.935, P>0.05). In addition, the model took only 0.499 seconds to evaluate each image, while the average time for clinicians was more than 10 seconds. Conclusion: The deep-learning-based imaging model can improve examination accuracy, save time, and be used for auxiliary clinical diagnosis and screening of diseases related to meibomian gland dysfunction. (Chin J Ophthalmol, 2020, 56: 774-779).


Subjects
Dry Eye Syndromes , Eyelid Diseases , Adult , Deep Learning , Humans , Meibomian Glands/diagnostic imaging , Middle Aged , Tears
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1516-1519, 2020 07.
Article in English | MEDLINE | ID: mdl-33018279

ABSTRACT

Brain insults such as cerebral ischemia and intracranial hemorrhage are critical stroke conditions with high mortality rates. Currently, medical image analysis for critical stroke conditions is still largely done manually, which is time-consuming and labor-intensive. While deep learning algorithms are increasingly being applied in medical image analysis, the performance of these methods still needs substantial improvement before they can be widely used in the clinical setting. Among other challenges, the lack of sufficient labelled data is one of the key problems that has limited the progress of deep learning methods in this domain. To mitigate this bottleneck, we propose an integrated method that includes a data augmentation framework using a conditional Generative Adversarial Network (cGAN), followed by supervised segmentation with a Convolutional Neural Network (CNN). The adopted cGAN generates meaningful brain images from specially altered lesion masks as a form of data augmentation to supplement the training dataset, while the CNN incorporates depth-wise-convolution based X-blocks as well as a Feature Similarity Module (FSM) to ease and aid the training process, resulting in better lesion segmentation. We evaluate the proposed deep learning strategy on the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset and show that this approach outperforms the current state-of-the-art methods in the task of stroke lesion segmentation.


Subjects
Deep Learning , Neuroimaging , Algorithms , Brain , Neural Networks, Computer
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1532-1535, 2020 07.
Article in English | MEDLINE | ID: mdl-33018283

ABSTRACT

18FDG PET/CT imaging is commonly used in the diagnosis and follow-up of metastatic breast cancer, but its quantitative analysis is complicated by the number and location heterogeneity of metastatic lesions. Considering that bones are the most common metastatic site, this work aims to compare different approaches to segmenting the bones and bone metastatic lesions in breast cancer. Two deep learning methods based on U-Net were developed and trained to segment either both bones and bone lesions or bone lesions alone on PET/CT images. These methods were cross-validated on 24 patients from the prospective EPICUREseinmeta metastatic breast cancer study and were evaluated using recall and precision to measure lesion detection, as well as the Dice score to assess bone and bone lesion segmentation accuracy. Results show that taking bone information into account in the training process improves the precision of lesion detection as well as the Dice score of the segmented lesions. Moreover, using the obtained bone and bone lesion masks, we were able to compute a PET bone index (PBI) inspired by the recognized Bone Scan Index (BSI). This automatically computed PBI globally agrees with the one calculated from ground-truth delineations. Clinical relevance: We propose a completely automatic deep learning based method to detect and segment bones and bone lesions on 18FDG PET/CT in the context of metastatic breast cancer. We also introduce an automatic PET bone index which could be incorporated in the monitoring and decision process.
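The Dice score, recall, and precision used above are simple overlap statistics on binary masks. A minimal sketch, assuming flattened 0/1 masks passed as plain Python sequences (not the authors' implementation):

```python
def dice(pred, truth):
    """Dice score between two binary masks (flattened sequences of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty -> perfect agreement

def precision_recall(pred, truth):
    """Voxel-level precision and recall for binary masks."""
    tp = sum(p * t for p, t in zip(pred, truth))
    fp = sum(p * (1 - t) for p, t in zip(pred, truth))
    fn = sum((1 - p) * t for p, t in zip(pred, truth))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec
```

For lesion *detection* (as opposed to voxel overlap), the same precision/recall formulas are applied per connected lesion rather than per voxel.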


Subjects
Breast Neoplasms , Deep Learning , Fluorodeoxyglucose F18 , Breast Neoplasms/diagnostic imaging , Humans , Positron Emission Tomography Computed Tomography , Prospective Studies , Tomography, X-Ray Computed
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1536-1539, 2020 07.
Article in English | MEDLINE | ID: mdl-33018284

ABSTRACT

Semi-automatic measurements are performed on 18FDG PET-CT images to monitor the evolution of metastatic sites in the clinical follow-up of metastatic breast cancer patients. Apart from being time-consuming and prone to subjective approximation, semi-automatic tools cannot distinguish cancerous regions from active organs that present a high 18FDG uptake. In this work, we combine a deep learning-based approach with a superpixel segmentation method to segment the main active organs (brain, heart, bladder) from full-body PET images. In particular, we integrate the SLIC superpixel algorithm at different levels of a convolutional network. Results are compared with a deep learning segmentation network alone. The methods are cross-validated on full-body PET images of 36 patients and tested on the acquisitions of 24 patients from a different study center, in the context of the ongoing EPICUREseinmeta study. The similarity between the manually defined organ masks and the results is evaluated with the Dice score. Moreover, the amount of false positives is evaluated through the positive predictive value (PPV). According to the computed Dice scores, all approaches accurately segment the target organs. However, the networks integrating superpixels are better suited to transferring knowledge across datasets acquired at multiple sites (domain adaptation) and, according to the PPV, are less likely to segment structures outside of the target organs. Hence, combining deep learning with superpixels allows organs presenting a high 18FDG uptake to be segmented on PET images without selecting cancerous lesions, and thus improves the precision of the semi-automatic tools that monitor the evolution of breast cancer metastasis. Clinical relevance: We demonstrate the utility of combining deep learning and superpixel segmentation methods to accurately find the contours of active organs in metastatic breast cancer images, across different dataset distributions.


Subjects
Breast Neoplasms , Deep Learning , Algorithms , Brain , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Humans , Neoplasm Metastasis , Positron Emission Tomography Computed Tomography
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1548-1551, 2020 07.
Article in English | MEDLINE | ID: mdl-33018287

ABSTRACT

This paper proposes an automatic method for classifying aortic valvular stenosis (AS) from ECG (electrocardiogram) images using deep learning, with training ECG images annotated with the diagnoses made by physicians who examined the echocardiograms. It also explores the relationship between the trained deep learning network and its decisions using Grad-CAM. In this study, one-beat ECG images for 12 leads and 4 leads are generated from ECGs and used to train CNNs (convolutional neural networks). Applying Grad-CAM to the trained CNNs reveals feature areas in the early time range of the one-beat ECG image. Furthermore, by limiting the time range of the ECG image to that of the feature area, the CNN for the 4-lead configuration achieves the best classification performance, which is close to expert physicians' diagnoses. Clinical relevance: This paper achieves AS classification performance as high as physicians' echocardiogram-based diagnoses by proposing an automatic method for detecting AS using only the ECG.


Subjects
Aortic Valve Stenosis , Deep Learning , Electrocardiography , Aortic Valve Stenosis/diagnosis , Echocardiography , Humans , Neural Networks, Computer
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1560-1563, 2020 07.
Article in English | MEDLINE | ID: mdl-33018290

ABSTRACT

Diabetic retinopathy (DR) fundus images generally contain multiple types of lesions, which provide strong evidence for ophthalmologists to make a diagnosis. It is therefore important to find an efficient method that can not only accurately classify DR fundus images but also recognize all kinds of lesions in them. In this paper, a deep learning-based multi-label classification model with Gradient-weighted Class Activation Mapping (Grad-CAM) is proposed, which can both classify DR and automatically locate the regions of different lesions. To reduce laborious annotation work and improve labeling efficiency, this paper treats different types of lesions as different labels for a fundus image, thereby turning the task of lesion detection into one of image classification. A total of five labels were pre-defined and 3228 fundus images were collected for developing the model. The deep learning architecture was designed by the authors based on ResNet. In experiments on the test images, this method achieved a sensitivity of 93.9% and a specificity of 94.4% on DR classification. Moreover, the corresponding lesion regions were reasonably outlined on the DR fundus images.
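Grad-CAM, used above to localize lesions, weights each convolutional feature map by the spatial average of the class score's gradient, sums the weighted maps, and applies a ReLU. A NumPy sketch of that core step; the activation/gradient arrays are assumed to come from any CNN backbone (the shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation.
    activations, gradients: arrays of shape (channels, H, W) holding the last
    conv layer's feature maps and the class score's gradients w.r.t. them."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for display
    return cam
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid as a heatmap on the fundus photograph.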


Subjects
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Humans , Sensitivity and Specificity
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1564-1567, 2020 07.
Article in English | MEDLINE | ID: mdl-33018291

ABSTRACT

Magnetic resonance imaging (MRI) is one of the most powerful and valuable imaging methods for medical diagnosis and disease staging. Because of the long scan time of MRI acquisition, k-space under-sampling is required during acquisition. Thus, MRI reconstruction, which turns under-sampled k-space data into high-quality magnetic resonance images, is an important and meaningful task. There have been many explorations of k-space interpolation for MRI reconstruction. However, most of these methods ignore the strong correlation between the target slice and its adjacent slices. Inspired by this, we propose a fully data-driven deep learning algorithm for k-space interpolation that exploits the correlation between the target slice and its neighboring slices. A novel network is proposed that models the inter-dependencies between different slices. In addition, the network is easily implemented and extended. Experiments show that our method consistently surpasses existing image-domain and k-space-domain MRI reconstruction methods.
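To make "under-sampled k-space" concrete: the naive baseline is a zero-filled reconstruction, in which missing phase-encode lines are left at zero before the inverse FFT; learned k-space interpolation aims to fill those lines instead. A toy NumPy sketch on a synthetic 2D slice (the 64×64 size and the skip-every-other-row sampling pattern are illustrative assumptions):

```python
import numpy as np

# Synthetic 2D "slice" standing in for one MRI slice.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))

# Full k-space: the 2D Fourier transform of the image.
kspace = np.fft.fft2(image)

# Under-sample: keep only every other phase-encode line (rows), zero the rest.
mask = np.zeros((64, 64))
mask[::2, :] = 1.0
undersampled = kspace * mask

# Naive zero-filled reconstruction: inverse FFT of the incomplete k-space.
# A learned k-space interpolation method would fill the missing lines instead.
recon = np.fft.ifft2(undersampled).real

error = np.abs(recon - image).mean()  # aliasing error of the naive baseline
```

With the fully sampled k-space, the inverse FFT recovers the image exactly; the nonzero `error` of the zero-filled reconstruction is the gap that interpolation methods try to close.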


Subjects
Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Radionuclide Imaging
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1568-1571, 2020 07.
Article in English | MEDLINE | ID: mdl-33018292

ABSTRACT

There is growing evidence that the use of stringent and dichotomous diagnostic categories in many medical disciplines (particularly 'brain sciences' such as neurology and psychiatry) is an oversimplification. Although clear diagnostic boundaries remain useful for patients, families, and their access to dedicated NHS and health care services, the traditional dichotomous categories do not capture the complexity and large heterogeneity of symptoms across many overlapping clinical phenotypes. With the advent of 'big' multimodal neuroimaging databases, data-driven stratification of the wide spectrum of healthy human physiology or disease based on neuroimages has become theoretically possible. However, this conceptual framework is hampered by severe computational constraints. In this paper we present a novel, deep learning based encode-decode architecture which leverages several parameter-efficiency techniques to generate latent deep embeddings that compress the information contained in a full 3D neuroimaging volume by a factor of 1000 while still retaining anatomical detail, hence rendering the subsequent stratification problem tractable. We train our architecture on 1003 brain scans derived from the Human Connectome Project and demonstrate the faithfulness of the obtained reconstructions. Further, we employ a data-driven clustering technique, driven by a grid search in hyperparameter space, to identify six different strata within the 1003 healthy community-dwelling individuals, which turn out to correspond to highly significant group differences in both physiological and cognitive data. This indicates that the well-known relationships between such variables and brain structure can be probed in an unsupervised manner through our novel architecture and pipeline.
This opens the door to a variety of previously inaccessible applications in the realm of data-driven stratification of large cohorts based on neuroimaging data. Clinical relevance: With our approach, each person can be described and classified within a multi-dimensional space of data, where they are uniquely classified according to their individual anatomy, physiology, and disease-related anatomical and physiological alterations.


Subjects
Connectome , Deep Learning , Neuroimaging , Brain , Cluster Analysis , Databases, Factual , Humans
11.
J Med Internet Res ; 22(9): e19907, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32877350

ABSTRACT

BACKGROUND: The COVID-19 pandemic has caused major disruptions worldwide since March 2020. The experience of the 1918 influenza pandemic demonstrated that decreases in the infection rate of COVID-19 do not guarantee continuity of the trend. OBJECTIVE: The aim of this study was to develop a precise spread model of COVID-19 with time-dependent parameters via deep learning, to respond promptly to the dynamic situation of the outbreak and proactively minimize damage. METHODS: In this study, we investigated a mathematical model with time-dependent parameters via deep learning based on forward-inverse problems. We used data from the Korea Centers for Disease Control and Prevention (KCDC) for Korea and from the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University for the other countries. Because the data consist of confirmed, recovered, and deceased cases, we selected the susceptible-infected-recovered (SIR) model and found approximated solutions as well as model parameters. Specifically, we applied fully connected neural networks to the solutions and parameters and designed suitable loss functions. RESULTS: We developed an entirely new SIR model with time-dependent parameters via deep learning methods. Furthermore, we validated the model against the conventional fourth-order Runge-Kutta method to confirm its convergent nature. In addition, we evaluated our model based on the real-world situation reported by the KCDC, the Korean government, and news media. We also cross-validated our model using data from the CSSE for Italy, Sweden, and the United States. CONCLUSIONS: The methodology and new model of this study could be employed for short-term prediction of COVID-19, which could help the government prepare for a new outbreak. In addition, from the perspective of measuring medical resources, a key strength of our model is that it treats all parameters as time-dependent, which reflects the exact status of viral spread.
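The SIR model mentioned above, integrated with the classical fourth-order Runge-Kutta scheme used for validation, can be sketched as follows. Constant beta and gamma are an illustrative simplification: the paper's contribution is precisely to learn time-dependent parameters with neural networks.

```python
def sir_deriv(s, i, r, beta, gamma):
    """Standard SIR right-hand side with normalized population (s + i + r = 1)."""
    return -beta * s * i, beta * s * i - gamma * i, gamma * i

def rk4_step(state, beta, gamma, dt):
    """One classical fourth-order Runge-Kutta step for the SIR system."""
    def f(st):
        return sir_deriv(*st, beta, gamma)
    k1 = f(state)
    k2 = f(tuple(x + dt / 2 * k for x, k in zip(state, k1)))
    k3 = f(tuple(x + dt / 2 * k for x, k in zip(state, k2)))
    k4 = f(tuple(x + dt * k for x, k in zip(state, k3)))
    return tuple(
        x + dt / 6 * (a + 2 * b + 2 * c + d)
        for x, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

# Integrate 100 days with constant parameters (beta=0.3, gamma=0.1 are
# hypothetical values for illustration only).
state = (0.99, 0.01, 0.0)  # initial S, I, R fractions
for _ in range(100):
    state = rk4_step(state, beta=0.3, gamma=0.1, dt=1.0)
```

Because the SIR derivatives sum to zero, the total population fraction is conserved by the integration, which is a quick sanity check on any implementation.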


Subjects
Betacoronavirus , Coronavirus Infections/epidemiology , Deep Learning , Models, Theoretical , Neural Networks, Computer , Pandemics , Pneumonia, Viral/epidemiology , Humans , Mass Media , Republic of Korea/epidemiology , Time Factors
12.
Nat Commun ; 11(1): 4391, 2020 09 01.
Article in English | MEDLINE | ID: mdl-32873806

ABSTRACT

Deep learning with convolutional neural networks (CNNs) has shown great promise in image-based classification and enhancement but is often unsuitable for predictive modeling using features without spatial correlations. We present a feature representation approach termed REFINED (REpresentation of Features as Images with NEighborhood Dependencies) to arrange high-dimensional vectors in a compact image form conducive to CNN-based deep learning. We consider the similarities between features to generate a concise feature map in the form of a two-dimensional image by minimizing the pairwise distance values following a Bayesian metric multidimensional scaling approach. We hypothesize that this approach enables embedded feature extraction and, integrated with CNN-based deep learning, can boost predictive accuracy. We illustrate the superior predictive capabilities of the proposed framework, as compared to state-of-the-art methodologies, in drug sensitivity prediction scenarios using synthetic datasets, drug chemical descriptors as predictors from NCI60, and both transcriptomic information and drug descriptors as predictors from GDSC.


Subjects
Antineoplastic Agents/pharmacology , Deep Learning , Image Processing, Computer-Assisted/methods , Neoplasms/drug therapy , Antineoplastic Agents/therapeutic use , Bayes Theorem , Biomarkers, Tumor/genetics , Cell Line, Tumor , Cell Proliferation/drug effects , Datasets as Topic , Drug Resistance, Neoplasm , Drug Screening Assays, Antitumor/methods , Gene Expression Profiling , High-Throughput Nucleotide Sequencing , Humans , Neoplasms/pathology , Oligonucleotide Array Sequence Analysis
14.
Nat Commun ; 11(1): 4703, 2020 09 17.
Article in English | MEDLINE | ID: mdl-32943643

ABSTRACT

Deep learning models have shown great promise in predicting regulatory effects from DNA sequence, but their informativeness for human complex diseases is not fully understood. Here, we evaluate genome-wide SNP annotations from two previous deep learning models, DeepSEA and Basenji, by applying stratified LD score regression to 41 diseases and traits (average N = 320K), conditioning on a broad set of coding, conserved and regulatory annotations. We aggregated annotations across all (respectively blood or brain) tissues/cell-types in meta-analyses across all (respectively 11 blood or 8 brain) traits. The annotations were highly enriched for disease heritability, but produced only limited conditionally significant results: non-tissue-specific and brain-specific Basenji-H3K4me3 for all traits and brain traits respectively. We conclude that deep learning models have yet to achieve their full potential to provide considerable unique information for complex disease, and that their conditional informativeness for disease cannot be inferred from their accuracy in predicting regulatory annotations.


Subjects
Deep Learning , Disease/genetics , Molecular Sequence Annotation , Alleles , Genetic Predisposition to Disease , Genome, Human , Genome-Wide Association Study , Histones/genetics , Humans , Linkage Disequilibrium , Models, Genetic , Phenotype , Polymorphism, Single Nucleotide
15.
Sci Rep ; 10(1): 15364, 2020 09 21.
Article in English | MEDLINE | ID: mdl-32958781

ABSTRACT

We are currently witnessing the severe spread of the pandemic of the new coronavirus, COVID-19, which causes dangerous symptoms in humans and animals and whose complications may lead to death. Although convolutional neural networks (CNNs) are considered the current state-of-the-art image classification technique, they require a massive computational cost for training and deployment. In this paper, we propose an improved hybrid classification approach for COVID-19 images that combines the strengths of CNNs (using a powerful architecture called Inception) to extract features with a swarm-based feature selection algorithm (the Marine Predators Algorithm) to select the most relevant features. The resulting feature selector, FO-MPA, integrates the Marine Predators Algorithm with fractional-order (FO) calculus, a robust mathematical tool. The proposed approach was evaluated on two public COVID-19 X-ray datasets and achieves both high performance and reduced computational complexity. The two datasets consist of COVID-19 X-ray images published on Kaggle by an international cardiothoracic radiologist, researchers, and others. The proposed approach successfully selected 130 and 86 of the 51K features extracted by Inception from dataset 1 and dataset 2, respectively, while improving classification accuracy at the same time. The results are the best achieved on these datasets compared to a set of recent feature selection algorithms. By achieving classification accuracies of 98.7% and 99.6% and F-scores of 98.2% and 99% on dataset 1 and dataset 2, respectively, the proposed approach outperforms several CNNs and all recent work on COVID-19 images.


Subjects
Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Algorithms , Betacoronavirus , Deep Learning , Humans , Neural Networks, Computer , Pandemics , X-Rays
16.
Zhongguo Yi Xue Ke Xue Yuan Xue Bao ; 42(4): 477-484, 2020 Aug 30.
Article in Chinese | MEDLINE | ID: mdl-32895099

ABSTRACT

Objective: To make a preliminary pathological classification of lung adenocarcinoma presenting as pure ground-glass nodules (pGGN) on CT using a deep learning model. Methods: Diagnostic evaluation study. CT images and pathological data of 219 patients (240 lesions in total) with pGGN on CT and pathologically confirmed adenocarcinoma were collected. According to pathological subtype, the lesions were divided into a non-invasive lung adenocarcinoma group (which included atypical adenomatous hyperplasia, adenocarcinoma in situ, and micro-invasive adenocarcinoma) and an invasive lung adenocarcinoma group. First, the lesions were outlined and labeled by two young radiologists, and then the labeled data were randomly divided into two datasets: the training set (80%) and the test set (20%). On the test dataset, the predictions of the deep learning model were compared with those of two experienced radiologists. Results: The deep learning model achieved high performance in predicting the pathological type (non-invasive or invasive) of pGGN lung adenocarcinoma. The accuracy in pGGN diagnosis was 0.8330 (95% CI = 0.7016-0.9157) for the deep learning model, 0.5000 (95% CI = 0.3639-0.6361) for expert 1, 0.5625 (95% CI = 0.4227-0.6931) for expert 2, and 0.5417 (95% CI = 0.4029-0.6743) for the two experts combined. Thus, the accuracy of the deep learning model was significantly higher than that of the experienced radiologists (P = 0.002). The intra-observer agreements were good (Kappa values: 0.939 and 0.799, respectively). The inter-observer agreement was moderate (Kappa value: 0.667) (P = 0.000). Conclusion: The deep learning model performed better than experienced radiologists in predicting the pathological type of pGGN lung adenocarcinoma.


Subjects
Adenocarcinoma of Lung , Lung Neoplasms , Deep Learning , Humans , Retrospective Studies , Tomography, X-Ray Computed
18.
IEEE J Biomed Health Inform ; 24(10): 2806-2813, 2020 10.
Article in English | MEDLINE | ID: mdl-32915751

ABSTRACT

The pandemic of coronavirus disease 2019 (COVID-19) has led to a global public health crisis spreading across hundreds of countries. With the continuous growth of new infections, developing automated tools for COVID-19 identification from CT images is highly desired to assist clinical diagnosis and reduce the tedious workload of image interpretation. To enlarge the datasets for developing machine learning methods, it is helpful to aggregate cases from different medical systems so as to learn robust and generalizable models. This paper proposes a novel joint learning framework that performs accurate COVID-19 identification by effectively learning from heterogeneous datasets with distribution discrepancy. We build a powerful backbone by redesigning the recently proposed COVID-Net in terms of network architecture and learning strategy to improve prediction accuracy and learning efficiency. On top of our improved backbone, we further explicitly tackle the cross-site domain shift by conducting separate feature normalization in latent space. Moreover, we propose a contrastive training objective to enhance the domain invariance of semantic embeddings and thereby boost the classification performance on each dataset. We develop and evaluate our method on two public large-scale COVID-19 diagnosis datasets made up of CT images. Extensive experiments show that our approach consistently improves the performance on both datasets, outperforming the original COVID-Net trained on each dataset by 12.16% and 14.23% in AUC, respectively, and also exceeding existing state-of-the-art multi-site learning methods.
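Separate feature normalization per site, as described above, can be illustrated at its simplest: standardize each feature within each dataset so that site-specific shifts and scales are removed. This NumPy sketch operates on a plain feature matrix rather than the paper's latent CNN features, and the function name and interface are assumptions:

```python
import numpy as np

def normalize_per_site(features, site_ids):
    """Standardize each feature column separately within each site/dataset.
    A simplified stand-in for per-site normalization of latent features:
    features: (n_samples, n_features); site_ids: length-n_samples labels."""
    features = np.asarray(features, dtype=float)
    out = np.empty_like(features)
    for site in np.unique(site_ids):
        idx = np.asarray(site_ids) == site
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-8  # guard against zero variance
        out[idx] = (features[idx] - mu) / sigma
    return out
```

After this transform, two sites whose raw features differ only by an offset and scale become directly comparable, which is the intuition behind normalizing away cross-site domain shift.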


Subjects
Betacoronavirus , Clinical Laboratory Techniques/statistics & numerical data , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Deep Learning , Pandemics , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Tomography, X-Ray Computed/statistics & numerical data , Computational Biology , Computer Systems , Coronavirus Infections/classification , Databases, Factual/statistics & numerical data , Humans , Machine Learning , Pandemics/classification , Pneumonia, Viral/classification , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data
19.
Nat Commun ; 11(1): 4560, 2020 09 11.
Article in English | MEDLINE | ID: mdl-32917899

ABSTRACT

The rhesus macaque is an important model species in several branches of science, including neuroscience, psychology, ethology, and medicine. The utility of the macaque model would be greatly enhanced by the ability to precisely measure behavior in freely moving conditions. Existing approaches do not provide sufficient tracking. Here, we describe OpenMonkeyStudio, a deep learning-based markerless motion capture system for estimating 3D pose in freely moving macaques in large unconstrained environments. Our system makes use of 62 machine vision cameras that encircle an open 2.45 m × 2.45 m × 2.75 m enclosure. The resulting multiview image streams allow for data augmentation via 3D-reconstruction of annotated images to train a robust view-invariant deep neural network. This view invariance represents an important advance over previous markerless 2D tracking approaches, and allows fully automatic pose inference on unconstrained natural motion. We show that OpenMonkeyStudio can be used to accurately recognize actions and track social interactions.


Subjects
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Macaca mulatta/physiology , Motion , Algorithms , Animals , Biomechanical Phenomena , Deep Learning , Male , Models, Animal , Movement , Nerve Net/diagnostic imaging , Nerve Net/physiology , Neural Networks, Computer
20.
Comput Biol Med ; 124: 103960, 2020 09.
Article in English | MEDLINE | ID: mdl-32919186

ABSTRACT

Artificial intelligence (AI) has penetrated the field of medicine, particularly the field of radiology. Since its emergence, the highly virulent coronavirus disease 2019 (COVID-19) has infected over 10 million people, leading to over 500,000 deaths as of July 1st, 2020. Since the outbreak began, almost 28,000 articles about COVID-19 have been published (https://pubmed.ncbi.nlm.nih.gov); however, few have explored the role of imaging and artificial intelligence in COVID-19 patients, specifically those with comorbidities. This paper begins by presenting the four pathways that can lead to heart and brain injuries following a COVID-19 infection. Our survey also offers insights into the role that imaging can play in the treatment of comorbid patients, based on probabilities derived from COVID-19 symptom statistics. Such symptoms include myocardial injury, hypoxia, plaque rupture, arrhythmias, venous thromboembolism, coronary thrombosis, encephalitis, ischemia, inflammation, and lung injury. At its core, this study considers the role of image-based AI, which can be used to characterize the tissues of a COVID-19 patient and classify the severity of their infection. Image-based AI is more important than ever as the pandemic surges and countries worldwide grapple with limited medical resources for detection and diagnosis.


Subjects
Betacoronavirus , Brain Injuries/epidemiology , Coronavirus Infections/epidemiology , Heart Injuries/epidemiology , Pneumonia, Viral/epidemiology , Artificial Intelligence , Betacoronavirus/pathogenicity , Betacoronavirus/physiology , Brain Injuries/classification , Brain Injuries/diagnostic imaging , Clinical Laboratory Techniques/methods , Comorbidity , Computational Biology , Coronavirus Infections/classification , Coronavirus Infections/diagnosis , Coronavirus Infections/diagnostic imaging , Deep Learning , Heart Injuries/classification , Heart Injuries/diagnostic imaging , Humans , Machine Learning , Pandemics/classification , Pneumonia, Viral/classification , Pneumonia, Viral/diagnostic imaging , Risk Factors , Severity of Illness Index