Results 1 - 2 of 2
1.
Opt Lett ; 48(9): 2496-2499, 2023 May 01.
Article in English | MEDLINE | ID: mdl-37126308

ABSTRACT

Lowering the excitation power to reduce phototoxicity and photobleaching, and then enhancing the fluorescence signal computationally, is a useful way to support long-term observation in fluorescence microscopy. However, invalid features, such as the near-zero-gradient dark backgrounds common in fluorescence images, degrade neural networks because network training is local. This makes it difficult to extend mature deep learning-based image enhancement methods directly to fluorescence imaging. To reduce this negative optimization effect, we previously designed Kindred-Nets together with a mixed fine-tuning scheme, but the mapping learned from the fine-tuning dataset may not fully apply to fluorescence images. In this work, we propose a new, to the best of our knowledge, deep global enhancement framework for low-excitation fluorescence imaging, named Deep-Gamma, that is completely different from our previous scheme. It contains GammaAtt, a self-attention module that computes attention weights from global features, thus avoiding negative optimization. Moreover, in contrast to classical self-attention modules that output multidimensional attention matrices, GammaAtt outputs only a few scalar parameters, which significantly reduces the optimization difficulty and thus allows easy convergence on a small-scale fluorescence microscopy dataset. As shown by both simulations and experiments, Deep-Gamma provides higher-quality enhanced fluorescence images than other state-of-the-art methods. We envision Deep-Gamma as a future deep enhancement modality for low-excitation fluorescence imaging with significant potential in medical imaging applications. This work is open source and available at https://github.com/ZhiboXiao/Deep-Gamma.
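
The abstract describes GammaAtt as computing attention from global features and emitting a handful of scalar parameters instead of a dense attention map. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (which lives in the linked repository); the module name GammaAttSketch, the layer sizes, and the gamma-correction readout are all our assumptions.

    # Hypothetical sketch of a GammaAtt-style module; the real Deep-Gamma
    # code is at https://github.com/ZhiboXiao/Deep-Gamma.
    import torch
    import torch.nn as nn

    class GammaAttSketch(nn.Module):
        """Attention over global features that yields a few scalar
        enhancement parameters instead of a dense attention map."""
        def __init__(self, channels: int = 32, n_params: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),  # collapse to one global descriptor,
            )                             # so dark backgrounds cannot dominate locally
            self.to_params = nn.Sequential(
                nn.Flatten(),
                nn.Linear(channels, n_params),
                nn.Softplus(),            # keep the gamma-like parameters positive
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            gammas = self.to_params(self.features(x))  # shape (B, n_params)
            # Apply a global, parameterized power-law (gamma) enhancement.
            g = gammas.mean(dim=1).view(-1, 1, 1, 1)
            return x.clamp(min=1e-6) ** (1.0 / g)

    low_light = torch.rand(2, 1, 64, 64) * 0.1   # dim fluorescence-like frames
    enhanced = GammaAttSketch()(low_light)
    print(enhanced.shape)                        # torch.Size([2, 1, 64, 64])

Reading out a few positive scalars from a globally pooled descriptor, rather than fitting a per-pixel attention map, is plausibly what allows convergence on a small dataset.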

2.
IEEE J Biomed Health Inform ; 27(12): 5860-5871, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738185

ABSTRACT

Multimodal volumetric segmentation and fusion are two valuable techniques for surgical treatment planning, image-guided interventions, tumor growth detection, radiotherapy map generation, and related tasks. In recent years, deep learning has demonstrated excellent capability in both tasks, yet existing methods face bottlenecks. On the one hand, recent segmentation studies, especially the U-Net-style series, have reached a performance ceiling on segmentation tasks. On the other hand, a ground truth for multimodal fusion is almost impossible to capture, owing to the differing physical principles of the imaging modalities. Hence, most existing studies of multimodal medical image fusion, which fuse only two modalities at a time with hand-crafted proportions, are subjective and task-specific. To address these concerns, this work proposes an integration of multimodal segmentation and fusion, named SegCoFusion, which consists of FDNet, a novel feature frequency dividing network (sketched after this record), and a segmentation part that uses a dual-single-path feature supplementing strategy to optimize the segmentation inputs and couple them with the fusion part. Focusing on multimodal brain tumor volumetric fusion and segmentation, qualitative and quantitative results demonstrate that SegCoFusion breaks the ceiling of both segmentation and fusion methods. Moreover, compared with state-of-the-art fusion methods on 2D two-modality fusion tasks, our method achieves better fusion performance. The proposed SegCoFusion therefore offers a novel perspective: improving volumetric fusion performance by cooperating with segmentation and enhancing lesion awareness.


Subjects
Brain Neoplasms; Neurosurgical Procedures; Humans; Physical Examination; Upper Extremity; Image Processing, Computer-Assisted
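
FDNet's "feature frequency dividing" suggests splitting features into a smooth low-frequency base and a high-frequency detail residual before fusing per band. Below is a hedged PyTorch sketch under that assumption; the pooling-based split and the per-band fusion rule are illustrative stand-ins, not the paper's method.

    # Illustrative frequency-dividing split and two-modality fusion;
    # all names and the fusion rule here are our assumptions.
    import torch
    import torch.nn as nn

    class FrequencyDivideSketch(nn.Module):
        def __init__(self, kernel_size: int = 5):
            super().__init__()
            self.pool = nn.AvgPool2d(kernel_size, stride=1,
                                     padding=kernel_size // 2)

        def forward(self, feat: torch.Tensor):
            low = self.pool(feat)   # smoothed, low-frequency component
            high = feat - low       # residual detail, high-frequency component
            return low, high

    def fuse_two_modalities(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        """Fuse per band: average the low bands, keep the stronger
        high-band response at each pixel (a simple, common rule)."""
        divider = FrequencyDivideSketch()
        a_low, a_high = divider(a)
        b_low, b_high = divider(b)
        high = torch.where(a_high.abs() >= b_high.abs(), a_high, b_high)
        return 0.5 * (a_low + b_low) + high

    t1 = torch.rand(1, 1, 64, 64)   # e.g., a T1-weighted MRI slice
    t2 = torch.rand(1, 1, 64, 64)   # e.g., a T2-weighted MRI slice
    print(fuse_two_modalities(t1, t2).shape)  # torch.Size([1, 1, 64, 64])

Splitting by frequency before fusing lets each band use the rule that suits it: structure averages safely, while detail is better kept than blurred away.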