Results 1 - 8 of 8
1.
Comput Biol Med ; 133: 104392, 2021 06.
Article in English | MEDLINE | ID: mdl-33895458

ABSTRACT

Body Mass Index (BMI) conveys important information about a person's life, such as health and socio-economic conditions. Large-scale automatic estimation of BMI can help predict several societal behaviors such as health, job opportunities, friendships, and popularity. Recent works have employed either hand-crafted geometrical face features or face-level deep convolutional neural network features for face-to-BMI prediction. Hand-crafted geometrical features lack generalizability, while face-level deep features miss the detailed local information that is essential for accurate BMI prediction. In this paper, we propose to use deep features pooled from different face regions (eye, nose, eyebrow, lips, etc.) and demonstrate that this explicit pooling from face regions can significantly boost the performance of BMI prediction. To address the problem of accurate, pixel-level localization of face regions, we use face semantic segmentation in our framework. Extensive experiments are performed using different Convolutional Neural Network (CNN) backbones, including FaceNet and VGG-Face, on three publicly available datasets: VisualBMI, Bollywood, and VIP attributes. Experimental results demonstrate that, compared to recent works, the proposed Reg-GAP gives a percentage improvement of 22.4% on VIP-attribute, 3.3% on VisualBMI, and 63.09% on the Bollywood dataset.
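The region-wise pooling idea described above can be sketched in a few lines. The function below is an illustrative stand-in, not the paper's code: all names are hypothetical, and it pools a single feature channel over a per-pixel semantic segmentation mask.

```python
# Sketch of region-wise global average pooling (Reg-GAP-style),
# assuming a per-pixel face-region segmentation mask is available.

def region_average_pool(feature_map, region_mask, regions):
    """Average one CNN feature channel inside each labelled region.

    feature_map : 2D list of floats (one channel of a feature map)
    region_mask : 2D list of region labels, same shape as feature_map
    regions     : labels to pool over (e.g. "eye", "nose")
    Returns one pooled value per region, concatenated into a list.
    """
    pooled = []
    for label in regions:
        vals = [feature_map[i][j]
                for i in range(len(feature_map))
                for j in range(len(feature_map[0]))
                if region_mask[i][j] == label]
        pooled.append(sum(vals) / len(vals) if vals else 0.0)
    return pooled
```

In practice this per-region pooling would be applied to every channel of the backbone's feature map and the pooled vectors concatenated before the regression head.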


Subjects
Image Processing, Computer-Assisted ; Semantics ; Body Mass Index ; Face/diagnostic imaging ; Neural Networks, Computer
2.
IEEE Trans Med Imaging ; 38(8): 1777-1787, 2019 08.
Article in English | MEDLINE | ID: mdl-30676950

ABSTRACT

Risk stratification (characterization) of tumors from radiology images can be more accurate and faster with computer-aided diagnosis (CAD) tools. Tumor characterization through such tools can also enable non-invasive cancer staging and prognosis, and foster personalized treatment planning as a part of precision medicine. In this paper, we propose both supervised and unsupervised machine learning strategies to improve tumor characterization. Our first approach is based on supervised learning, for which we demonstrate significant gains with deep learning algorithms, particularly by utilizing a 3D convolutional neural network and transfer learning. Motivated by radiologists' interpretations of the scans, we then show how to incorporate task-dependent feature representations into a CAD system via a graph-regularized sparse multi-task learning framework. In the second approach, we explore an unsupervised learning algorithm to address the limited availability of labeled training data, a common problem in medical imaging applications. Inspired by learning-from-label-proportion approaches in computer vision, we propose to use a proportion-support vector machine for characterizing tumors. We also seek an answer to the fundamental question of how good "deep features" are for unsupervised tumor classification. We evaluate our proposed supervised and unsupervised learning algorithms on two different tumor diagnosis challenges, lung and pancreas, with 1018 CT and 171 MRI scans, respectively, and obtain state-of-the-art sensitivity and specificity results in both problems.
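The learning-from-label-proportions setting mentioned above can be illustrated with a toy objective: only the fraction of positives per bag of instances is known, so a model is scored by how well its predicted bag proportions match the given ones. The squared-error bag loss below is an illustrative stand-in for the actual proportion-SVM objective, not the paper's formulation.

```python
# Toy bag-proportion loss for learning from label proportions:
# each bag supplies only its fraction of positive instances.

def bag_proportion_loss(bags_pred, bag_proportions):
    """bags_pred: list of bags, each a list of 0/1 instance predictions.
    bag_proportions: known fraction of positives in each bag."""
    loss = 0.0
    for preds, p in zip(bags_pred, bag_proportions):
        loss += (sum(preds) / len(preds) - p) ** 2
    return loss / len(bags_pred)
```

A proportion-SVM additionally keeps a max-margin instance classifier and optimizes instance labels and the separating hyperplane jointly under this kind of proportion constraint.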


Subjects
Deep Learning ; Image Interpretation, Computer-Assisted/methods ; Lung Neoplasms/diagnostic imaging ; Pancreatic Neoplasms/diagnostic imaging ; Unsupervised Machine Learning ; Algorithms ; Humans ; Imaging, Three-Dimensional
3.
Pancreas ; 48(6): 805-810, 2019 07.
Article in English | MEDLINE | ID: mdl-31210661

ABSTRACT

OBJECTIVE: This study aimed to evaluate a deep learning protocol to identify neoplasia in intraductal papillary mucinous neoplasia (IPMN) in comparison to current radiographic criteria. METHODS: A computer-aided framework was designed using convolutional neural networks to classify IPMN. The protocol was applied to magnetic resonance images of the pancreas. Features of IPMN were classified according to American Gastroenterology Association guidelines, Fukuoka guidelines, and the new deep learning protocol. Sensitivity and specificity were calculated using surgically resected cystic lesions or healthy controls. RESULTS: Of 139 cases, 58 (42%) were male; mean (standard deviation) age was 65.3 (11.9) years. Twenty-two percent had normal pancreas; 34%, low-grade dysplasia; 14%, high-grade dysplasia; and 29%, adenocarcinoma. The deep learning protocol sensitivity and specificity to detect dysplasia were 92% and 52%, respectively. Sensitivity and specificity to identify high-grade dysplasia or cancer were 75% and 78%, respectively. Diagnostic performance was similar to radiologic criteria. Areas under the receiver operating characteristic curves (95% confidence interval) were 0.76 (0.70-0.84) for American Gastroenterology Association, 0.77 (0.70-0.85) for Fukuoka, and 0.78 (0.71-0.85) for the deep learning protocol (P = 0.90). CONCLUSIONS: The deep learning protocol showed accuracy comparable to current radiographic criteria. Computer-aided frameworks could be implemented as aids for radiologists to identify high-risk IPMN.
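The sensitivity/specificity pairs reported above come from standard confusion-matrix counts; a minimal sketch follows. The example labels in the test are hypothetical, not the study's data.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# computed from paired binary ground truth and predictions.

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: lists of 0/1, where 1 = condition present."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

Sweeping the classifier's decision threshold and plotting sensitivity against (1 - specificity) yields the ROC curves whose areas the study compares.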


Subjects
Deep Learning ; Magnetic Resonance Imaging/methods ; Neural Networks, Computer ; Pancreatic Intraductal Neoplasms/classification ; Pancreatic Intraductal Neoplasms/diagnostic imaging ; Aged ; Aged, 80 and over ; Female ; Gastroenterology/methods ; Humans ; Male ; Middle Aged ; Practice Guidelines as Topic ; ROC Curve ; Reproducibility of Results
4.
IEEE Trans Biomed Eng ; 66(4): 1069-1081, 2019 04.
Article in English | MEDLINE | ID: mdl-30176577

ABSTRACT

Magnetic resonance imaging (MRI) is the non-invasive modality of choice for body tissue composition analysis due to its excellent soft-tissue contrast and lack of ionizing radiation. However, quantification of body composition requires an accurate segmentation of fat, muscle, and other tissues from MR images, which remains a challenging goal due to the intensity overlap between them. In this study, we propose a fully automated, data-driven image segmentation platform that addresses multiple difficulties in segmenting MR images, such as varying inhomogeneity, non-standardness, and noise, while producing a high-quality delineation of the different tissues. In contrast to most approaches in the literature, we perform segmentation by combining three different MRI contrasts with a novel segmentation tool that takes into account variability in the data. The proposed system, based on a novel affinity definition within the fuzzy connectivity image segmentation family, eliminates the need for user intervention and reparametrization of the segmentation algorithms. To make the whole system fully automated, we adapt an affinity propagation clustering algorithm to roughly identify tissue regions and image background. We perform a thorough evaluation of the proposed algorithm's individual steps, as well as a comparison with several approaches from the literature, for the main application of muscle/fat separation. Furthermore, whole-body tissue composition and brain tissue delineation were conducted to show the generalization ability of the proposed system. This new automated platform outperforms other state-of-the-art segmentation approaches in both accuracy and efficiency.
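To make the notion of affinity concrete: fuzzy connectivity methods grow regions by chaining pairwise affinities between neighboring voxels, where similar intensities yield affinity near 1 and dissimilar ones near 0. The Gaussian form and sigma below are illustrative assumptions, not the paper's actual affinity definition, which combines three MRI contrasts.

```python
import math

# Illustrative single-contrast intensity affinity in the spirit of
# fuzzy connectivity: a Gaussian of the intensity difference between
# two adjacent voxels.

def intensity_affinity(a, b, sigma=10.0):
    """Affinity in [0, 1] between adjacent voxel intensities a and b."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))
```

The fuzzy connectedness of two voxels is then the strength of the best path between them, where a path's strength is its weakest link's affinity.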


Subjects
Body Composition/physiology ; Fuzzy Logic ; Image Processing, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Whole Body Imaging/methods ; Adult ; Aged ; Aged, 80 and over ; Algorithms ; Brain/diagnostic imaging ; Cluster Analysis ; Humans ; Middle Aged ; Thigh/diagnostic imaging
5.
Br J Radiol ; 91(1089): 20170545, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29565644

ABSTRACT

Deep learning has driven revolutionary changes in the computing industry, and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching the level of human experts (radiologists, clinicians, etc.), shifting the CAD paradigm from a "second-opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their advantages over previously established systems, describes the methodologies behind the improvements, including algorithmic developments, and outlines remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models, which continue to change as artificial intelligence algorithms evolve.


Subjects
Breast Neoplasms/diagnostic imaging ; Diagnosis, Computer-Assisted ; Machine Learning ; Neural Networks, Computer ; Algorithms ; Female ; Humans ; Magnetic Resonance Imaging ; Mammography ; Ultrasonography, Mammary
6.
Nucl Med Commun ; 38(7): 629-635, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28509766

ABSTRACT

PURPOSE: This retrospective review was performed to determine whether patients with brown adipose tissue (BAT) detected by fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (CT) imaging have less central obesity than BMI-matched control patients without detectable BAT. PATIENTS AND METHODS: Thirty-seven adult oncology patients with 18F-FDG BAT uptake were retrospectively identified from PET/CT studies from 2011 to 2013. The control cohort consisted of 74 adult oncology patients without detectable 18F-FDG BAT uptake, matched for BMI, sex, and season. Tissue fat content was estimated by CT density (Hounsfield units) with a subsequent noise removal step. Total fat and abdominal fat were calculated. An automated separation algorithm was utilized to determine the visceral fat and subcutaneous fat at the L4/L5 level. In addition, liver density was obtained from CT images. CT imaging was interpreted blinded to clinical information. RESULTS: There was no difference in total fat for the BAT cohort (34±15 l) compared with the controls (34±16 l) (P=0.96). The BAT cohort had a lower abdominal-to-total fat ratio compared with the controls (0.28±0.05 vs. 0.31±0.08, respectively; P=0.01). The BAT cohort had a lower visceral fat/(visceral fat+subcutaneous fat) ratio compared with the controls (0.30±0.10 vs. 0.34±0.12, respectively; P=0.03). Patients with BAT had higher liver density, suggesting less liver fat, compared with the controls (51.3±7.5 vs. 47.1±7.0 HU, P=0.003). CONCLUSION: The findings suggest that active BAT detected by 18F-FDG PET/CT is associated with less central obesity and liver fat. The presence of foci of BAT may be protective against features of the metabolic syndrome.
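Estimating fat content from CT density works because adipose tissue occupies a characteristic negative Hounsfield-unit range. The sketch below is a hypothetical illustration of the idea; the -190 to -30 HU window is a commonly used range, not necessarily the one this study applied.

```python
# Illustrative fat quantification from CT: count the fraction of
# voxels whose Hounsfield-unit value falls in an adipose window.

def fat_voxel_fraction(hu_values, lo=-190, hi=-30):
    """Fraction of voxels whose HU value lies in the fat window."""
    fat = sum(1 for v in hu_values if lo <= v <= hi)
    return fat / len(hu_values) if hu_values else 0.0
```

Multiplying this fraction by the scanned volume and restricting it to an abdominal or visceral mask gives compartment-wise fat volumes like those compared in the study.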


Subjects
Adipose Tissue, Brown/diagnostic imaging ; Obesity, Abdominal/diagnostic imaging ; Obesity, Abdominal/pathology ; Positron Emission Tomography Computed Tomography ; Adult ; Aged ; Aged, 80 and over ; Female ; Humans ; Liver/diagnostic imaging ; Liver/pathology ; Male ; Middle Aged ; Retrospective Studies ; Young Adult
7.
IEEE Trans Med Imaging ; 36(3): 734-744, 2017 03.
Article in English | MEDLINE | ID: mdl-28114010

ABSTRACT

In this paper, we investigate the automatic detection of white and brown adipose tissues using Positron Emission Tomography/Computed Tomography (PET/CT) scans, and develop methods for the quantification of these tissues at the whole-body and body-region levels. We propose a patient-specific automatic adiposity analysis system with two modules. In the first module, we detect white adipose tissue (WAT) and its two sub-types, Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT), from CT scans. This process has conventionally relied on manual or semi-automated segmentation, leading to inefficient solutions. Our novel framework addresses this challenge with an unsupervised learning method to separate VAT from SAT in the abdominal region for the clinical quantification of central obesity. This step is followed by a context-driven label fusion algorithm through sparse 3D Conditional Random Fields (CRF) for volumetric adiposity analysis. In the second module, we automatically detect, segment, and quantify brown adipose tissue (BAT) using PET scans because, unlike WAT, BAT is metabolically active. After identifying BAT regions using PET, we perform a co-segmentation procedure utilizing asymmetric complementary information from PET and CT. Finally, we present a new probabilistic distance metric for differentiating BAT from non-BAT regions. Both modules are integrated via an automatic body-region detection unit based on one-shot learning. Experimental evaluations conducted on 151 PET/CT scans achieve state-of-the-art performance in both central obesity and brown adiposity quantification.
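The PET/CT complementarity that the second module exploits can be stated very simply: BAT looks like fat on CT (negative HU in the adipose window) but, unlike WAT, shows high FDG uptake on PET. The voxel rule below is only an illustrative sketch; the SUV and HU cut-offs are common example values, not the paper's learned parameters or its probabilistic distance metric.

```python
# Toy BAT voxel test combining the two modalities:
# fat-like density on CT AND metabolic activity on PET.

def is_bat_voxel(hu, suv, hu_window=(-190, -30), suv_min=2.0):
    """True when the voxel is fat on CT and FDG-avid on PET."""
    lo, hi = hu_window
    return lo <= hu <= hi and suv >= suv_min
```

The paper replaces hard cut-offs like these with a co-segmentation of the PET and CT volumes and a probabilistic distance metric, which is more robust than any fixed threshold.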


Subjects
Adipose Tissue, Brown/diagnostic imaging ; Adipose Tissue, White/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; Positron Emission Tomography Computed Tomography/methods ; Adult ; Algorithms ; Humans ; Male ; Whole Body Imaging/methods ; Young Adult
8.
Int J Comput Assist Radiol Surg ; 11(6): 977-85, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27017502

ABSTRACT

PURPOSE: Image-based tracking for motion compensation is an important topic in image-guided interventions, as it enables physicians to operate in a less complex space. In this paper, we propose an automatic motion compensation scheme to boost image guidance power in transcatheter aortic valve implantation (TAVI). METHODS: The proposed tracking algorithm automatically discovers reliable regions that correlate strongly with the target. These discovered regions can help estimate target motion under severe occlusion, even if the target tracker fails. RESULTS: We evaluate the proposed method for pigtail catheter tracking during TAVI. We obtain a significant improvement (12%) over the baseline on a clinical dataset. Calcification regions are automatically discovered during tracking, which can further aid TAVI procedures. CONCLUSION: This work opens a new paradigm for providing dynamic real-time guidance for TAVI without user intervention, especially in cases of severe occlusion where conventional tracking methods are challenged.
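The core of the method, scoring candidate support regions by how strongly their motion correlates with the target's, can be sketched as follows. Pearson correlation over 1-D motion traces is an illustrative simplification (the paper's regions move in 2-D), and all function names are hypothetical.

```python
# Rank candidate regions by the correlation of their motion trace
# with the target's, so the best one can stand in during occlusion.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def best_support_region(target_motion, candidate_motions):
    """Index of the candidate whose motion best tracks the target's."""
    scores = [pearson(target_motion, c) for c in candidate_motions]
    return max(range(len(scores)), key=lambda i: scores[i])
```

When the target tracker loses the pigtail catheter, the displacement of the best-correlated region can be used as a proxy for the target's motion until the tracker recovers.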


Subjects
Algorithms ; Fluoroscopy/methods ; Surgery, Computer-Assisted/methods ; Humans ; Motion