Results 1 - 2 of 2
1.
Comput Methods Programs Biomed; 240: 107688, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37487310

ABSTRACT

BACKGROUND AND OBJECTIVE: Because of the depth-of-focus (DOF) limitations of microscope optical systems, it is often difficult to obtain fully sharp biomedical images under high-magnification microscopy. Multifocus microscopic biomedical image fusion (MFBIF) can effectively solve this problem. Considering both information richness and visual authenticity, this paper proposes a transformer network for MFBIF called TransFusion-Net.

METHODS: TransFusion-Net consists of two modules. The first is an interlayer cross-attention module, which produces feature mappings that capture the long-range dependencies among multiple partially focused source images. The second is a spatial attention upsampling network (SAU-Net) module, which extracts global semantic information after further spatial attention is applied. TransFusion-Net can therefore take multiple partially focused microscope images as input simultaneously and exploit the strong correlations between the source images to produce accurate fusion results in an end-to-end manner.

RESULTS: The fusion results were compared quantitatively and qualitatively with those of eight state-of-the-art algorithms. In the quantitative experiments, five evaluation metrics (QAB/F, QMI, QAVG, QCB, and PSNR) were used, and the proposed method achieved values of 0.6574, 8.4572, 5.6305, 0.7341, and 89.5685, respectively, exceeding those of the current state-of-the-art algorithms. In the qualitative experiments, difference images were used for further validation, and the near-zero residuals visually confirmed the adequacy of the fusion. Additional fusion results on multifocus biomedical microscopy images further demonstrate the reliability of the proposed method, which yields high-quality fused images.

CONCLUSION: Multifocus biomedical microscopic image fusion can be achieved accurately and effectively by devising a deep convolutional neural network with joint cross-attention and spatial attention mechanisms.


Subject(s)
Algorithms, Benchmarking, Reproducibility of Results, Electric Power Supplies, Microscopy, Image Processing, Computer-Assisted
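The record above describes the architecture only in prose. As a rough, hypothetical sketch (not the authors' TransFusion-Net implementation), the following PyTorch code illustrates how cross-attention between the feature maps of two partially focused source images could be combined with a simple spatial-attention gate before upsampling; the class name, channel sizes, and the pooling-based gate are assumptions for illustration only.

# Hypothetical sketch (not the authors' TransFusion-Net): cross-attention
# between feature maps of two partially focused source images, followed by
# a simple spatial-attention gate and bilinear upsampling.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Spatial attention: a 1-channel gate computed from channel-pooled statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid()
        )
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                    align_corners=False)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_a.shape
        # Flatten spatial dimensions into token sequences: (B, H*W, C).
        qa = feat_a.flatten(2).transpose(1, 2)
        kb = feat_b.flatten(2).transpose(1, 2)
        # Cross-attention: queries from one source, keys/values from the other,
        # capturing long-range dependencies between the two focus planes.
        fused, _ = self.attn(qa, kb, kb)
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        # Spatial attention over pooled statistics (mean and max across channels).
        stats = torch.cat([fused.mean(1, keepdim=True),
                           fused.amax(1, keepdim=True)], dim=1)
        fused = fused * self.spatial_gate(stats)
        return self.upsample(fused)

if __name__ == "__main__":
    block = CrossAttentionFusion(channels=64)
    a = torch.randn(1, 64, 32, 32)
    b = torch.randn(1, 64, 32, 32)
    print(block(a, b).shape)  # torch.Size([1, 64, 64, 64])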
2.
Zhongguo Yi Liao Qi Xie Za Zhi; 46(4): 377-381, 2022 Jul 30.
Article in Chinese | MEDLINE | ID: mdl-35929150

ABSTRACT

In order to better assist doctors in diagnosing dry eye and to improve ophthalmologists' ability to assess the condition of the meibomian glands, a meibomian gland image segmentation and enhancement method based on a Mobile-U-Net network is proposed. First, MobileNet is used as the encoder of the U-Net for downsampling; the extracted features are then fused with the decoder features to guide image segmentation. Second, the segmented meibomian gland region is enhanced to help doctors judge the condition. Third, a large number of meibomian gland images were collected to train and validate the semantic segmentation network, and a clarity evaluation index was used to verify the gland enhancement effect. The experimental results show that the similarity coefficient of the proposed method is stable at 92.71%, and the image clarity index is better than that of comparable dry eye detection instruments on the market.


Subject(s)
Deep Learning, Dry Eye Syndromes, Diagnostic Imaging, Humans, Image Processing, Computer-Assisted, Meibomian Glands/diagnostic imaging
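Again, only a prose description of the network is given above. The sketch below is a hypothetical, simplified Mobile-U-Net in PyTorch: a MobileNet-style encoder built from depthwise-separable convolutions, with U-Net skip connections into the decoder and a one-channel gland segmentation head. The layer count, channel widths, and sigmoid output are assumptions, not the published network.

# Simplified sketch (not the paper's Mobile-U-Net): a MobileNet-style encoder
# built from depthwise-separable convolutions, with U-Net skip connections
# into the decoder, producing a 1-channel gland segmentation mask.
import torch
import torch.nn as nn

def dw_sep(in_ch, out_ch, stride=1):
    """Depthwise-separable conv block, the basic MobileNet building unit."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MobileUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = dw_sep(in_ch, 32)          # full resolution
        self.enc2 = dw_sep(32, 64, stride=2)   # 1/2 resolution
        self.enc3 = dw_sep(64, 128, stride=2)  # 1/4 resolution
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = dw_sep(128, 64)            # after concat with enc2 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = dw_sep(64, 32)             # after concat with enc1 skip
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))    # per-pixel gland probability

if __name__ == "__main__":
    net = MobileUNet()
    print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])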