Results 1 - 5 of 5
1.
Front Med (Lausanne) ; 9: 861680, 2022.
Article in English | MEDLINE | ID: mdl-35755067

ABSTRACT

As the COVID-19 pandemic continues to devastate globally, the use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow, given its routine clinical use for respiratory complaints. As part of the COVID-Net open source initiative, we introduce COVID-Net CXR-2, an enhanced deep convolutional neural network design for COVID-19 detection from CXR images, built using a greater quantity and diversity of patients than the original COVID-Net. We also introduce a new benchmark dataset composed of 19,203 CXR images from a multinational cohort of 16,656 patients from at least 51 countries, making it the largest, most diverse COVID-19 CXR dataset in open access form. The COVID-Net CXR-2 network achieves sensitivity and positive predictive value of 95.5% and 97.0%, respectively, and was audited in a transparent and responsible manner. Explainability-driven performance validation was used during auditing to gain deeper insight into its decision-making behavior and to ensure that clinically relevant factors are leveraged, thereby improving trust in its usage. Radiologist validation was also conducted: select cases were reviewed and reported on by two board-certified radiologists with over 10 and 19 years of experience, respectively, and the critical factors leveraged by COVID-Net CXR-2 were found to be consistent with radiologist interpretations.
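The reported figures are standard confusion-matrix quantities for the COVID-19-positive class; the following minimal Python sketch (a hypothetical helper, not the authors' code) illustrates how sensitivity and positive predictive value would be computed from binary predictions.

import numpy as np

def sensitivity_and_ppv(y_true, y_pred):
    # y_true, y_pred: arrays of 0/1 labels, where 1 = COVID-19 positive
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # missed positives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms
    sensitivity = tp / (tp + fn)   # fraction of positive cases detected
    ppv = tp / (tp + fp)           # fraction of positive calls that are correct
    return sensitivity, ppv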

2.
Sci Rep ; 12(1): 83, 2022 01 07.
Article in English | MEDLINE | ID: mdl-34997022

ABSTRACT

Malnutrition is a multidomain problem affecting 54% of older adults in long-term care (LTC). Monitoring nutritional intake in LTC is laborious and subjective, limiting clinical inference capabilities. Recent advances in automatic image-based food estimation have not yet been evaluated in LTC settings. Here, we describe a fully automatic imaging system for quantifying food intake. We propose a novel deep convolutional encoder-decoder food network with depth refinement (EDFN-D) using an RGB-D camera for quantifying a plate's remaining food volume relative to reference portions in whole and modified-texture foods. We trained and validated the network on the pre-labelled UNIMIB2016 food dataset and tested on our two novel LTC-inspired plate datasets (689 plate images, 36 unique foods). EDFN-D performed comparably to depth-refined graph cut on IOU (0.879 vs. 0.887), with intake errors well below the typical 50% (mean percent intake error: [Formula: see text]%). We identify how standard segmentation metrics are insufficient due to visual-volume discordance, and include a volume disparity analysis to facilitate trust in the system. This system provides improved transparency and approximates human assessors with enhanced objectivity, accuracy, and precision, while avoiding the heavy time requirements of semi-automatic methods. This may help address shortcomings currently limiting the utility of automated early malnutrition detection in resource-constrained LTC and hospital settings.
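The two evaluation quantities named above can be made concrete with a short, illustrative Python sketch (not the published EDFN-D code; the intake-error definition below is one plausible reading of "remaining food volume relative to reference portions").

import numpy as np

def iou(pred_mask, ref_mask):
    # Intersection-over-union of two boolean food-segmentation masks.
    inter = np.logical_and(pred_mask, ref_mask).sum()
    union = np.logical_or(pred_mask, ref_mask).sum()
    return inter / union if union else 1.0

def percent_intake_error(vol_reference, vol_remaining, vol_true_intake):
    # Estimated intake = reference portion volume minus estimated leftover volume,
    # expressed as a percentage of the reference portion.
    est_intake = vol_reference - vol_remaining
    return 100.0 * (est_intake - vol_true_intake) / vol_reference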


Subjects
Deep Learning , Eating , Image Processing, Computer-Assisted , Long-Term Care , Malnutrition/diagnosis , Meals , Nursing Homes , Photography , Automation , Diet , Early Diagnosis , Humans , Malnutrition/physiopathology , Nutritional Status , Nutritive Value , Predictive Value of Tests , Reproducibility of Results
3.
BMC Med Imaging ; 18(1): 16, 2018 05 16.
Article in English | MEDLINE | ID: mdl-29769042

ABSTRACT

BACKGROUND: Quantitative radiomic features provide a plethora of mineable data extracted from multi-parametric magnetic resonance imaging (MP-MRI) which can be used for accurate detection and localization of prostate cancer. While most cancer detection algorithms utilize either voxel-based or region-based feature models, the complexity of the prostate tumour phenotype in MP-MRI requires a more sophisticated framework to better leverage available data and exploit a priori knowledge in the field. METHODS: In this paper, we present MPCaD, a novel Multi-scale radiomics-driven framework for Prostate Cancer Detection and localization which leverages radiomic feature models at different scales and incorporates a priori knowledge of the field. Tumour candidate localization is first performed using a statistical texture distinctiveness strategy that leverages a voxel-resolution feature model to localize tumour candidate regions. Tumour region classification via a region-resolution feature model is then performed to identify tumour regions. Both voxel-resolution and region-resolution feature models are built upon and extracted from six different MP-MRI modalities. Finally, a conditional random field framework driven by voxel-resolution relative ADC features is used to further refine the localization of the tumour regions in the peripheral zone and improve the accuracy of the results. RESULTS: The proposed framework is evaluated using clinical prostate MP-MRI data from 30 patients; results demonstrate that the proposed framework exhibits enhanced separability of cancerous and healthy tissue and outperforms individual quantitative radiomics models for prostate cancer detection. CONCLUSION: Quantitative radiomic features extracted from MP-MRI of the prostate can be utilized to detect and localize prostate cancer.
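As a structural aid, the three-stage flow described in METHODS can be caricatured in a few lines of runnable Python. Every function below is a deliberately simplified stand-in (simple thresholds in place of statistical texture distinctiveness, a learned region classifier, and a conditional random field); none of it reproduces the authors' models.

import numpy as np

def localize_candidates(voxel_feature_map, z=2.0):
    # Stage 1 stand-in: flag voxels whose feature value is unusually distinct.
    mu, sigma = voxel_feature_map.mean(), voxel_feature_map.std()
    return (voxel_feature_map - mu) / (sigma + 1e-8) > z

def classify_regions(candidate_mask, region_score_map, thresh=0.5):
    # Stage 2 stand-in: keep candidates whose region-level score is high enough.
    return candidate_mask & (region_score_map > thresh)

def refine_with_adc(tumour_mask, relative_adc_map, adc_cutoff=0.7):
    # Stage 3 stand-in: tumours tend to show restricted diffusion, so keep
    # voxels with low relative ADC (the real framework uses a CRF here).
    return tumour_mask & (relative_adc_map < adc_cutoff)

rng = np.random.default_rng(0)
voxel_features = rng.normal(size=(32, 32, 16))   # stand-in for one of six MP-MRI feature maps
region_scores = rng.random((32, 32, 16))
relative_adc = rng.random((32, 32, 16))
mask = refine_with_adc(classify_regions(localize_candidates(voxel_features),
                                        region_scores), relative_adc)
print("suspicious voxels:", int(mask.sum()))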


Subjects
Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Algorithms , Data Mining , Early Detection of Cancer/methods , Humans , Male , Sensitivity and Specificity
4.
J Med Imaging (Bellingham) ; 4(4): 041305, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29021990

ABSTRACT

Lung cancer is the second most diagnosed form of cancer in men and women, and a sufficiently early diagnosis can be pivotal to patient survival. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but they largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the approach organically evolves increasingly efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
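The generational compression idea can be illustrated with a tiny, runnable Python caricature: each generation, synapses survive stochastically with a probability tied to their importance, so the surviving network becomes progressively sparser. This is an assumption-laden toy, not the authors' evolutionary synthesis procedure.

import numpy as np

rng = np.random.default_rng(42)
weights = rng.normal(size=(256, 256))      # stand-in for one sequencer layer
alive = np.ones_like(weights, dtype=bool)  # which synapses still exist

for generation in range(5):
    importance = np.abs(weights) / (np.abs(weights).max() + 1e-12)
    survive_prob = 0.1 + 0.8 * importance  # weaker synapses are more likely to die
    alive &= rng.random(weights.shape) < survive_prob
    print(f"generation {generation + 1}: {alive.mean():.1%} of synapses remain")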

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 4309-4312, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060850

ABSTRACT

A novel platform, DeepPredict, for predicting hospital bed exit events from video camera systems is proposed. DeepPredict processes video data with a deep convolutional neural network consisting of five main layers: a 1 × 1 3D convolutional layer used for generating feature maps from raw video data, a context-aware pooling layer used for rectifying data from different camera angles, two fully connected layers used for applying pre-trained deep features, and an output layer used to provide a likelihood of a bed exit event. Results for a model trained on 180 hours of data demonstrate accuracy, sensitivity, and specificity of 86.47%, 78.87%, and 94.07%, respectively, when predicting a bed exit event up to seven seconds in advance.
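The five-layer layout described above can be sketched roughly in PyTorch. The "context-aware pooling" layer is specific to the paper, so adaptive average pooling is used here purely as a placeholder, and all channel and feature sizes are assumptions rather than the published configuration.

import torch
import torch.nn as nn

class BedExitNet(nn.Module):
    def __init__(self, in_channels=1, feat_dim=512):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, 32, kernel_size=1)  # 1x1 3D conv over raw video
        self.pool = nn.AdaptiveAvgPool3d((4, 4, 4))            # placeholder for context-aware pooling
        self.fc1 = nn.Linear(32 * 4 * 4 * 4, feat_dim)         # two fully connected layers
        self.fc2 = nn.Linear(feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, 1)                      # output layer: bed-exit likelihood

    def forward(self, clip):                  # clip: (batch, channels, frames, height, width)
        x = torch.relu(self.conv(clip))
        x = self.pool(x).flatten(1)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return torch.sigmoid(self.out(x))

model = BedExitNet()
score = model(torch.randn(1, 1, 16, 64, 64))  # dummy 16-frame grayscale clip
print(float(score))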


Subjects
Monitoring, Physiologic , Humans , Intelligence , Neural Networks, Computer