1.
Article in English | MEDLINE | ID: mdl-39236130

ABSTRACT

Extracting geometric features from 3D point clouds is widely applied in many tasks, including registration and recognition. We propose a simple yet effective method, termed the height-azimuth image based transformation-invariant net (HA-TiNet), to learn a distinctive, general, and rotation-invariant 3D local descriptor. HA-TiNet is composed of a height-azimuth image generator and a feature extraction net. Based on a local reference axis (LRA), the height-azimuth image generator first partitions the local region along the plane-radial direction, and then computes statistics of height and azimuth information in each divided space to generate a set of height-azimuth images. The generated height-azimuth images are invariant to rotations around the x- and y-axes and have high accuracy due to the high repeatability of an LRA. Moreover, they can be easily fed to 2D convolutional neural networks (CNNs). Our feature extraction net learns from the height-azimuth images using a ResNet-based backbone and a rotation-invariant layer. The ResNet-based backbone is lightweight yet very effective, and the rotation-invariant layer removes the remaining rotation variance around the z-axis, giving our descriptor full rotation invariance. Extensive experiments on indoor and outdoor datasets show that our method delivers superior overall performance and exhibits stronger descriptiveness and generalization ability than state-of-the-art descriptors. The source code will be made publicly available at https://github.com/ahulq/HA-TiNet.
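As a rough illustration of the generator described above, the sketch below partitions a local neighborhood along the plane-radial direction and histograms height and azimuth within each ring, producing one 2D image per ring. All bin counts, parameter names, and the in-plane reference frame are illustrative assumptions, not the authors' actual design; rotating the cloud about the LRA shifts the azimuth origin, which the paper's rotation-invariant layer is said to remove.

```python
import numpy as np

def height_azimuth_images(points, keypoint, lra, radius,
                          n_radial=4, n_height=16, n_azimuth=16):
    """Sketch of a height-azimuth image generator (illustrative only)."""
    local = points - keypoint                     # neighbours in the local frame
    z = lra / np.linalg.norm(lra)                 # local reference axis
    height = local @ z                            # signed height along the LRA
    planar = local - np.outer(height, z)          # projection onto the plane
    r = np.linalg.norm(planar, axis=1)            # plane-radial distance
    # Arbitrary in-plane frame for measuring azimuth.
    x = np.cross(z, [1.0, 0.0, 0.0])
    if np.linalg.norm(x) < 1e-6:                  # z was parallel to the x-axis
        x = np.cross(z, [0.0, 1.0, 0.0])
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    azimuth = np.arctan2(planar @ y, planar @ x)  # in [-pi, pi]

    # One height-azimuth histogram per plane-radial ring.
    images = np.zeros((n_radial, n_height, n_azimuth))
    ring = np.minimum((r / radius * n_radial).astype(int), n_radial - 1)
    for k in range(n_radial):
        m = ring == k
        images[k], _, _ = np.histogram2d(
            height[m], azimuth[m],
            bins=(n_height, n_azimuth),
            range=((-radius, radius), (-np.pi, np.pi)))
    return images
```

Each resulting `(n_height, n_azimuth)` image can then be treated as a channel of a multi-channel input to a 2D CNN, which matches the abstract's point that these images embed easily into standard 2D architectures.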

2.
Comput Biol Med; 157: 106751, 2023 May.
Article in English | MEDLINE | ID: mdl-36934534

ABSTRACT

Accurate segmentation of brain tumors plays an important role in MRI-based diagnosis and treatment monitoring. However, the extent of the lesion is usually inconsistent across patients, with large structural differences, and brain tumor MR images are characterized by low contrast and blur, so current deep learning algorithms often cannot achieve accurate segmentation. To address this problem, we propose a novel end-to-end brain tumor segmentation algorithm that integrates an improved 3D U-Net network and super-resolution image reconstruction into one framework. In addition, a coordinate attention module is embedded before the upsampling operation of the backbone network, which enhances the capture of local texture features and global location features. To demonstrate the segmentation results of the proposed algorithm on different brain tumor MR images, we trained and evaluated it on the BraTS datasets and compared it with other deep learning algorithms using Dice similarity scores. On the BraTS2021 dataset, the proposed algorithm achieves Dice similarity scores of 89.61%, 88.30%, and 91.05%, and 95% Hausdorff distances of 1.414 mm, 7.810 mm, and 4.583 mm for enhancing tumors, tumor cores, and whole tumors, respectively. The experimental results show that our method outperforms the baseline 3D U-Net and yields good performance on different datasets, indicating that it is robust when segmenting brain tumor MR images whose structures vary considerably.
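The Dice similarity score used in the evaluation above is a standard overlap metric between a predicted and a reference mask, 2|A∩B| / (|A|+|B|). A minimal sketch, with `eps` added here only as an illustrative guard against two empty masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

For BraTS-style evaluation this is computed per region (enhancing tumor, tumor core, whole tumor) by first binarizing the multi-class label map into the corresponding region mask.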


Subject(s)
Brain Neoplasms, Humans, Algorithms, Brain Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted, Magnetic Resonance Imaging