Results 1 - 2 of 2
1.
J Digit Imaging; 34(2): 263-272, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33674979

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a pandemic that caused sudden, unexplained pneumonia cases and has had a devastating effect on global public health. Computed tomography (CT) is one of the most effective tools for COVID-19 screening, since specific patterns such as bilateral, peripheral, and basal predominant ground-glass opacity, multifocal patchy consolidation, and a crazy-paving pattern with peripheral distribution can be observed in CT images and have been declared findings of COVID-19 infection. For patient monitoring and diagnosis, expeditious and accurate segmentation of COVID-19 involvement, which spreads into the lung, from CT provides vital information about the stage of the disease. In this work, we propose a SegNet-based network using the attention gate (AG) mechanism for the automatic segmentation of COVID-19 regions in CT images. AGs can be easily integrated into standard convolutional neural network (CNN) architectures with minimal computational overhead while increasing model precision and predictive accuracy. In addition, the proposed network was evaluated with dice, Tversky, and focal Tversky loss functions to deal with the low sensitivity arising from small lesions. The experiments were carried out using fivefold cross-validation on a COVID-19 CT segmentation database containing 473 CT images. The obtained sensitivity, specificity, and dice scores were 92.73%, 99.51%, and 89.61%, respectively. The superiority of the proposed method is highlighted by comparison with results reported in previous studies, and we believe it will serve as an auxiliary tool for accurate automatic detection of COVID-19 regions in CT images.
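The abstract names dice, Tversky, and focal Tversky losses as the training objectives but gives no implementation details. The sketch below is a minimal, commonly used PyTorch formulation of the focal Tversky loss, with alpha, beta, and gamma set to typical values from the focal Tversky literature rather than hyperparameters reported by this study.

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation (a generic sketch,
    not the authors' exact configuration).

    pred:   (N, 1, H, W) sigmoid probabilities
    target: (N, 1, H, W) binary ground-truth masks
    alpha weights false negatives, beta weights false positives;
    gamma > 0 focuses training on hard examples such as small lesions.
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)                 # soft true positives
    fn = ((1 - pred) * target).sum(dim=1)           # soft false negatives
    fp = (pred * (1 - target)).sum(dim=1)           # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()
```

With alpha = beta = 0.5 and gamma = 1 this reduces to the dice loss, which is why the three losses named in the abstract are often treated as one family.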


Subjects
COVID-19, Humans, Neural Networks (Computer), SARS-CoV-2, Semantics, X-Ray Computed Tomography
2.
Med Hypotheses; 134: 109426, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31622926

ABSTRACT

Recent studies have shown that convolutional neural networks (CNNs) can be more accurate, more efficient, and even deeper to train if they include direct connections from layers close to the input to those close to the output in order to transfer activation maps. Motivated by this observation, this study introduces a new CNN model, the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network, for retinal vessel extraction from fundus images. DCCMED consists of concatenated multi encoder-decoder CNNs and connects certain layers to the corresponding input of the subsequent encoder-decoder block in a feed-forward fashion. The model has beneficial properties such as reducing pixel vanishing and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), dice, and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method, and the results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores. The results demonstrate that the proposed DCCMED model yields the best performance among the compared methods according to accuracy and AUC scores.
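The abstract describes dense feed-forward connections between successive encoder-decoder blocks but does not specify the architecture. The PyTorch sketch below illustrates the connection pattern only, under the assumption that each block receives the input image concatenated with every previous block's output; the block internals, layer sizes, and the names `EncoderDecoder` and `DenselyConcatenatedEDs` are illustrative, not the authors' design.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """A minimal encoder-decoder block: downsample once, then upsample back."""
    def __init__(self, in_ch, mid_ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(mid_ch, mid_ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, 3, padding=1),  # one-channel vessel map
        )

    def forward(self, x):
        return self.decode(self.encode(x))

class DenselyConcatenatedEDs(nn.Module):
    """Chain of encoder-decoder blocks in which each block sees the original
    input concatenated with all previous blocks' outputs, mimicking the dense
    feed-forward connections described in the abstract."""
    def __init__(self, num_blocks=3):
        super().__init__()
        # block k receives the 1-channel input plus k previous 1-channel outputs
        self.blocks = nn.ModuleList(
            EncoderDecoder(in_ch=1 + k) for k in range(num_blocks)
        )

    def forward(self, x):
        outputs = []
        for block in self.blocks:
            inp = torch.cat([x] + outputs, dim=1)  # dense concatenation
            outputs.append(block(inp))
        return torch.sigmoid(outputs[-1])  # final vessel probability map

# Usage sketch: a batch of two 64x64 grayscale fundus patches.
# model = DenselyConcatenatedEDs()
# probs = model(torch.rand(2, 1, 64, 64))  # -> (2, 1, 64, 64)
```

Feeding each block the raw input alongside earlier predictions is one plausible reading of "connects certain layers to the corresponding input of the subsequent encoder-decoder block"; it is also what gives the stated feature-reuse benefit, since later blocks can refine rather than recompute earlier vessel maps.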


Subjects
Deep Learning, Diagnosis (Computer-Assisted)/methods, Fundus Oculi, Image Processing (Computer-Assisted), Retinal Vessels/diagnostic imaging, Algorithms, Area Under Curve, Fluorescein Angiography, Humans, ROC Curve, Sensitivity and Specificity