Results 1 - 2 of 2
1.
NPJ Digit Med; 5(1): 79, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35768575

ABSTRACT

Body composition is a key component of health in both individuals and populations, and excess adiposity is associated with an increased risk of developing chronic diseases. Body mass index (BMI) and other clinical or commercially available tools for quantifying body fat (BF) such as DXA, MRI, CT, and photonic scanners (3DPS) are often inaccurate, cost prohibitive, or cumbersome to use. The aim of the current study was to evaluate the performance of a novel automated computer vision method, visual body composition (VBC), that uses two-dimensional photographs captured via a conventional smartphone camera to estimate percentage total body fat (%BF). The VBC algorithm is based on a state-of-the-art convolutional neural network (CNN). The hypothesis is that VBC yields better accuracy than other consumer-grade fat measurement devices. 134 healthy adults ranging in age (21-76 years), sex (61.2% women), race (60.4% White; 23.9% Black), and body mass index (BMI, 18.5-51.6 kg/m2) were evaluated at two clinical sites (N = 64 at MGH, N = 70 at PBRC). Each participant had %BF measured with VBC and with three consumer and two professional bioimpedance analysis (BIA) systems. The PBRC participants also underwent air displacement plethysmography (ADP). %BF measured by dual-energy x-ray absorptiometry (DXA) was set as the reference against which all other %BF measurements were compared. To test the hypothesis, we ran multiple pairwise Wilcoxon signed-rank tests, comparing each competing measurement tool (VBC, BIA, …) against the same ground truth (DXA). Relative to DXA, VBC had the lowest mean absolute error and standard deviation (2.16 ± 1.54%) of all the evaluated methods (p < 0.05 for all comparisons). %BF measured by VBC also had good concordance with DXA (Lin's concordance correlation coefficient, CCC: all 0.96; women 0.93; men 0.94), whereas BMI had very poor concordance (CCC: all 0.45; women 0.40; men 0.74). Bland-Altman analysis of VBC revealed the tightest limits of agreement (LOA) and no significant bias relative to DXA (bias -0.42%, R2 = 0.03; p = 0.062; LOA -5.5% to +4.7%), whereas all other evaluated methods had significant (p < 0.01) bias and wider limits of agreement. Bias in Bland-Altman analysis is defined here as the discordance between the y = 0 axis and the line regressed from the data in the plot. In this first validation study of a novel, accessible, and easy-to-use system, VBC body fat estimates were accurate and without significant bias compared to the DXA reference; VBC performance exceeded that of all the other BIA and ADP methods evaluated. The wide availability of smartphones suggests that the VBC method for evaluating %BF could play an important role in quantifying adiposity levels in a wide range of settings. Trial registration: ClinicalTrials.gov Identifier: NCT04854421.
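As a rough illustration of the statistical comparison described above, the following Python sketch runs a paired Wilcoxon signed-rank test on two methods' absolute errors against a reference and computes the Bland-Altman quantities. All data and method names here are synthetic placeholders, not the study's data or code; this is a minimal sketch of the analysis pattern, not the authors' implementation.

import numpy as np
from scipy.stats import wilcoxon

# Synthetic stand-in data (NOT study data): per-participant %BF from the
# reference method (DXA) and from two hypothetical competing methods.
rng = np.random.default_rng(0)
dxa = rng.uniform(15.0, 45.0, size=134)
vbc = dxa + rng.normal(-0.4, 2.6, size=134)   # small bias, tight spread
bia = dxa + rng.normal(2.0, 4.0, size=134)    # larger bias, wider spread

for name, est in [("VBC", vbc), ("BIA", bia)]:
    abs_err = np.abs(est - dxa)
    print(f"{name}: MAE = {abs_err.mean():.2f} +/- {abs_err.std():.2f} %BF")

# Paired Wilcoxon signed-rank test on the two methods' absolute errors,
# mirroring the pairwise comparisons against the shared DXA ground truth.
stat, p = wilcoxon(np.abs(vbc - dxa), np.abs(bia - dxa))
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.4g}")

# Bland-Altman analysis for VBC vs DXA: the abstract defines bias as the
# departure of the line regressed through the (mean, difference) points
# from the y = 0 axis; the limits of agreement span bias +/- 1.96 SD.
diff = vbc - dxa
mean = (vbc + dxa) / 2.0
slope, intercept = np.polyfit(mean, diff, 1)
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))
print(f"bias line: diff = {slope:.3f}*mean + {intercept:.2f}; "
      f"LOA = {loa[0]:.1f} to {loa[1]:.1f} %BF")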

2.
Front Comput Neurosci; 14: 17, 2020.
Article in English | MEDLINE | ID: mdl-32265680

ABSTRACT

Image registration and segmentation are two of the most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses image registration and brain tumor segmentation jointly. Our method exploits the dependencies between the two tasks by coupling them naturally during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated our formulation both quantitatively and qualitatively on the registration and segmentation problems using two publicly available datasets (BraTS 2018 and OASIS 3), reporting results competitive with other recent state-of-the-art methods. Moreover, our proposed framework yields a significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not need any predefined conditions (e.g., absence of abnormalities) on the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
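To make the key idea concrete (relaxing the similarity constraint inside tumor regions, where voxel-wise correspondence cannot be assumed), the sketch below weights a simple MSE similarity term by the complement of a tumor mask. The function name, MSE choice, and toy volumes are illustrative assumptions, not the authors' published implementation; see their GitHub repository for the actual code.

import numpy as np

def masked_similarity_loss(warped, fixed, tumor_mask, eps=1e-8):
    """Mean squared error between the warped moving image and the fixed
    image, with voxels inside the (binary or soft) tumor mask given zero
    weight, i.e. the similarity constraint is relaxed where the tumor
    breaks the correspondence assumption."""
    weights = 1.0 - tumor_mask            # 0 inside tumor, 1 outside
    sq_err = (warped - fixed) ** 2
    return float((weights * sq_err).sum() / (weights.sum() + eps))

# Toy volumes (shapes only; not real MRI data).
rng = np.random.default_rng(0)
fixed = rng.random((32, 32, 32))
warped = fixed + rng.normal(0.0, 0.05, size=fixed.shape)
mask = np.zeros_like(fixed)
mask[10:20, 10:20, 10:20] = 1.0           # stand-in tumor region
print(masked_similarity_loss(warped, fixed, mask))

In a joint framework of this kind, the segmentation branch's predicted tumor mask would supply tumor_mask, so the registration loss automatically ignores regions the segmentation flags as abnormal.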
