Results 1 - 3 of 3
1.
Comput Methods Programs Biomed ; 203: 106043, 2021 May.
Article in English | MEDLINE | ID: mdl-33744750

ABSTRACT

BACKGROUND AND OBJECTIVE: [18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis for computer-aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state of the art in tumor segmentation. Few FCN-based methods support multi-modality images, and current methods have primarily focused on fusing multi-modality image features at various stages, i.e., early fusion, where the multi-modality image features are fused before the FCN; late fusion, where the resulting features are fused; and hyper-fusion, where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherently limited freedom to fuse complementary multi-modality image features. Hyper-fusion methods learn different image features across different image feature scales, which can result in inaccurate segmentations, particularly when the tumors have heterogeneous textures.

METHODS: We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases that progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at the individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these results, which minimizes the risk of inconsistent feature learning.

RESULTS: We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early fusion, late fusion and hyper-fusion) and to state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation than the existing methods and generalizes to different datasets.

CONCLUSIONS: We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features, which refines the tumor segmentation results. We also find that our RFN produces consistent segmentation results across different network architectures.
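
To make the fusion scheme concrete, below is a minimal, hypothetical PyTorch sketch of a recurrent fusion phase in the spirit of the RFN described above: two modality-specific branches extract PET and CT features, the features are fused together with the previous intermediate segmentation, and stacking several phases progressively refines the mask. Class names, channel sizes and the number of phases are illustrative assumptions, not the published architecture.

# Hypothetical sketch of recurrent fusion phases; not the authors' released code.
import torch
import torch.nn as nn

class FusionPhase(nn.Module):
    """One recurrent fusion phase: fuse PET and CT features, conditioned on the
    intermediate segmentation map produced by the previous phase."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # modality-specific feature extractors (stand-ins for a ResNet/DenseNet/3D-UNet backbone)
        self.pet_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        self.ct_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU())
        # fusion over [PET features, CT features, previous segmentation]
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),  # intermediate segmentation logits
        )

    def forward(self, pet, ct, prev_seg):
        feats = torch.cat([self.pet_enc(pet), self.ct_enc(ct), prev_seg], dim=1)
        return self.fuse(feats)

class RecurrentFusionNet(nn.Module):
    """Stack several fusion phases; each phase refines the previous segmentation."""
    def __init__(self, phases: int = 3):
        super().__init__()
        self.phases = nn.ModuleList([FusionPhase() for _ in range(phases)])

    def forward(self, pet, ct):
        seg = torch.zeros_like(pet)   # start from an empty segmentation map
        outputs = []
        for phase in self.phases:
            seg = torch.sigmoid(phase(pet, ct, seg))
            outputs.append(seg)       # intermediate results can be deeply supervised
        return outputs

# usage on a dummy 2D PET-CT slice pair
model = RecurrentFusionNet(phases=3)
pet = torch.randn(1, 1, 128, 128)
ct = torch.randn(1, 1, 128, 128)
segmentations = model(pet, ct)        # list of progressively refined masks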


Subject(s)
Carcinoma, Non-Small-Cell Lung, Lung Neoplasms, Fluorodeoxyglucose F18, Humans, Image Processing, Computer-Assisted, Lung Neoplasms/diagnostic imaging, Positron Emission Tomography Computed Tomography
2.
Phys Med Biol ; 65(5): 055005, 2020 02 28.
Article in English | MEDLINE | ID: mdl-31722327

ABSTRACT

Mammography is one of the most commonly applied tools for early breast cancer screening. Automatic segmentation of breast masses in mammograms is essential but challenging due to the low signal-to-noise ratio and the wide variety of mass shapes and sizes. Existing methods deal with these challenges mainly by extracting mass-centered image patches manually or automatically. However, manual patch extraction is time-consuming, and automatic patch extraction introduces errors that cannot be compensated for in the subsequent segmentation step. In this study, we propose a novel attention-guided dense-upsampling network (AUNet) for accurate breast mass segmentation directly in whole mammograms. In AUNet, we employ an asymmetrical encoder-decoder structure and propose an effective upsampling block, the attention-guided dense-upsampling block (AU block). The AU block is designed with three merits. First, it compensates for the information loss of bilinear upsampling through dense upsampling. Second, it provides a more effective way to fuse high- and low-level features. Third, it includes a channel-attention function to highlight information-rich channels. We evaluated the proposed method on two publicly available datasets, CBIS-DDSM and INbreast. Compared to three state-of-the-art fully convolutional networks, AUNet achieved the best performance, with an average Dice similarity coefficient of 81.8% for CBIS-DDSM and 79.1% for INbreast.
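
The AU block description above maps naturally onto a small decoder module. The following is a hedged PyTorch sketch assuming sub-pixel (PixelShuffle) convolution for the dense upsampling and an SE-style squeeze-and-excitation layer for the channel attention; the exact layer layout of the published AUNet may differ, and all names and channel sizes here are illustrative.

# Hypothetical sketch of an attention-guided dense-upsampling (AU) block.
import torch
import torch.nn as nn

class AUBlock(nn.Module):
    def __init__(self, high_ch: int, low_ch: int, out_ch: int):
        super().__init__()
        # dense upsampling: learn 4x the channels, then rearrange them into 2x spatial
        # resolution, avoiding the information loss of plain bilinear interpolation
        self.dense_up = nn.Sequential(
            nn.Conv2d(high_ch, out_ch * 4, 3, padding=1),
            nn.PixelShuffle(2),
        )
        # fuse the upsampled high-level features with the low-level skip features
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch + low_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # channel attention: re-weight channels so information-rich ones are highlighted
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, high, low):
        up = self.dense_up(high)                 # (B, out_ch, 2H, 2W)
        fused = self.fuse(torch.cat([up, low], dim=1))
        return fused * self.attn(fused)          # channel-wise re-weighting

# usage: decode a 32x32 high-level map against a 64x64 low-level skip connection
block = AUBlock(high_ch=256, low_ch=64, out_ch=128)
out = block(torch.randn(1, 256, 32, 32), torch.randn(1, 64, 64, 64))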


Subject(s)
Image Processing, Computer-Assisted/methods, Mammography/methods, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Image Processing, Computer-Assisted/standards
3.
IEEE Trans Biomed Eng ; 62(1): 196-207, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25099393

ABSTRACT

Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities for segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make it difficult for edge-based level set algorithms to identify the boundaries between foreground and background. To address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries, and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
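
As a rough illustration of the statistical region term, here is a Python sketch that fits one Gaussian mixture per region from user scribbles (a simplification of the mixture-of-mixtures model) and evolves a level set by gradient descent on a log-likelihood-ratio region force plus curvature regularization. The graph-based weighted probability map of the paper is omitted, and all function names and parameters are illustrative assumptions rather than the published algorithm.

# Hypothetical sketch of a statistical region-based level set step.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_region_models(image, fg_scribble, bg_scribble, components=3):
    """Fit one Gaussian mixture per region from user scribbles (a simplified
    stand-in for the mixture-of-mixtures density model)."""
    fg = GaussianMixture(components).fit(image[fg_scribble].reshape(-1, 1))
    bg = GaussianMixture(components).fit(image[bg_scribble].reshape(-1, 1))
    return fg, bg

def curvature(phi):
    """Mean-curvature term that regularizes the zero level set."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    nyy, _ = np.gradient(gy / norm)
    _, nxx = np.gradient(gx / norm)
    return nxx + nyy

def evolve(image, phi, fg, bg, steps=200, dt=0.5, mu=0.2):
    """Gradient-descent evolution: expand where the foreground model explains the
    intensity better than the background model, shrink otherwise."""
    scores = (fg.score_samples(image.reshape(-1, 1))
              - bg.score_samples(image.reshape(-1, 1))).reshape(image.shape)
    for _ in range(steps):
        phi = phi + dt * (scores + mu * curvature(phi))
    return phi > 0  # final foreground mask

# usage with a toy image and boolean scribble masks of the same shape
image = np.random.rand(64, 64)
fg_scribble = np.zeros_like(image, dtype=bool); fg_scribble[28:36, 28:36] = True
bg_scribble = np.zeros_like(image, dtype=bool); bg_scribble[:8, :8] = True
fg, bg = fit_region_models(image, fg_scribble, bg_scribble)
mask = evolve(image, phi=np.where(fg_scribble, 1.0, -1.0), fg=fg, bg=bg)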


Subject(s)
Algorithms, Brain/anatomy & histology, Data Interpretation, Statistical, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Computer Simulation, Humans, Image Enhancement/methods, Magnetic Resonance Imaging/methods, Models, Biological, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity, Tomography, X-Ray Computed/methods