Results 1 - 6 of 6

1.
BMC Ophthalmol ; 23(1): 451, 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37953270

ABSTRACT

BACKGROUND: The purpose of this study was to investigate retinal layer changes in patients with age-related macular degeneration (AMD) treated with anti-vascular endothelial growth factor (anti-VEGF) agents and to evaluate whether these changes may affect treatment response. METHODS: This study included 496 patients with AMD or polypoidal choroidal vasculopathy (PCV) who were treated with anti-VEGF agents and followed up for at least 6 months. A comprehensive analysis of the retinal layers affecting visual acuity was conducted. Because a single averaged thickness can mask regional differences that converge toward the mean, each retinal layer was divided into separate regions and the layer thickness was analyzed per region. The labeled data will be publicly available for further research. RESULTS: Compared to baseline, significant improvement in visual acuity was observed at the 6-month follow-up. A statistically significant reduction in central retinal thickness and in the thickness of each separate retinal layer was also observed (p < 0.05). Among all retinal layers, the thickness from the external limiting membrane to the retinal pigment epithelium/Bruch's membrane (ELM to RPE/BrM) showed the greatest reduction. Furthermore, the subregional assessment revealed that the ELM to RPE/BrM thickness decreased more than that of the other layers in every region. CONCLUSION: Treatment with anti-VEGF agents effectively reduced the thickness of each separate retinal layer as well as the retina as a whole, and anti-VEGF treatment may act preferentially at the edema site. These findings could inform the development of more precise and targeted therapies for AMD.
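The subregional analysis described above (dividing each layer into regions rather than relying on one global average that converges toward the mean) can be sketched as follows; the grid size and thickness values are illustrative, not the study's actual data:

```python
from statistics import mean

def regional_thickness(thickness_map, n_rows, n_cols):
    """Split a 2-D thickness map into an n_rows x n_cols grid and
    return the mean thickness per region, so localized changes are
    not masked by a single global average."""
    h, w = len(thickness_map), len(thickness_map[0])
    regions = {}
    for r in range(n_rows):
        for c in range(n_cols):
            rows = range(r * h // n_rows, (r + 1) * h // n_rows)
            cols = range(c * w // n_cols, (c + 1) * w // n_cols)
            regions[(r, c)] = mean(thickness_map[i][j]
                                   for i in rows for j in cols)
    return regions

# Hypothetical 4x4 ELM-to-RPE/BrM thickness map (micrometres)
tmap = [
    [80, 82, 120, 125],
    [81, 83, 122, 126],
    [79, 80, 118, 121],
    [78, 81, 119, 120],
]
per_region = regional_thickness(tmap, 2, 2)
```

A global mean over `tmap` would blur the clear difference between the left half (about 80 µm) and the right half (about 120 µm), which is exactly what the per-region breakdown preserves.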


Subjects
Macular Degeneration , Ranibizumab , Humans , Ranibizumab/therapeutic use , Angiogenesis Inhibitors/therapeutic use , Vascular Endothelial Growth Factor A , Retina , Macular Degeneration/drug therapy , Intravitreal Injections , Tomography, Optical Coherence , Retrospective Studies
2.
IEEE Trans Med Imaging ; 43(4): 1323-1336, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38015687

ABSTRACT

Medical imaging provides many valuable clues to anatomical structure and pathological characteristics. However, image degradation is a common issue in clinical practice that can adversely affect observation and diagnosis by both physicians and algorithms. Although extensive enhancement models have been developed, they require thorough pre-training before deployment and fail to exploit the potential value of inference data after deployment. In this paper, we propose an algorithm for source-free unsupervised domain adaptive medical image enhancement (SAME), which adapts and optimizes enhancement models using test data during the inference phase. A structure-preserving enhancement network is first constructed to learn a robust source model from synthesized training data. A teacher-student model is then initialized from the source model and performs source-free unsupervised domain adaptation (SFUDA) by knowledge distillation with the test data. Additionally, a pseudo-label picker is developed to boost the knowledge distillation of enhancement tasks. Experiments were conducted on ten datasets from three medical image modalities to validate the advantages of the proposed algorithm, and setting analyses and ablation studies were also carried out to interpret the effectiveness of SAME. The remarkable enhancement performance and the benefits for downstream tasks demonstrate the potential and generalizability of SAME. The code is available at https://github.com/liamheng/Annotation-free-Medical-Image-Enhancement.
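Teacher-student schemes of the kind described above commonly keep the teacher as an exponential moving average (EMA) of the student's weights during adaptation. The following is a minimal sketch of that update on plain weight lists; the decay value is an illustrative choice, not a detail taken from the paper:

```python
def ema_update(teacher, student, decay=0.99):
    """Update teacher weights in place as an exponential moving
    average of the student weights, a common way to keep a stable
    teacher during test-time adaptation."""
    for i, (t, s) in enumerate(zip(teacher, student)):
        teacher[i] = decay * t + (1.0 - decay) * s
    return teacher

teacher = [1.0, 0.0]
student = [0.0, 1.0]
ema_update(teacher, student, decay=0.9)
# teacher has moved slightly toward the student
```

The student is trained on the unlabeled test data while the slowly-moving teacher supplies distillation targets, which is what lets adaptation proceed without any source data.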


Subjects
Algorithms , Image Enhancement , Humans , Image Processing, Computer-Assisted
3.
Biomed Opt Express ; 15(6): 3699-3714, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38867787

ABSTRACT

Multi-modal eye disease screening improves diagnostic accuracy by providing lesion information from different sources. However, existing multi-modal automatic diagnosis methods tend to focus on the specificity of each modality and ignore the spatial correlation between images. This paper proposes a novel cross-modal retinal disease diagnosis network (CRD-Net) that mines correlated features across modal images to aid the diagnosis of multiple retinal diseases. Specifically, our model introduces a cross-modal attention (CMA) module to query and adaptively attend to the lesion-relevant features in the different modal images. In addition, we propose multiple loss functions that fuse features according to modality correlation and train a multi-modal retinal image classification network to achieve a more accurate diagnosis. Experimental evaluation on three publicly available datasets shows that our CRD-Net outperforms existing single-modal and multi-modal methods, demonstrating its superior performance.
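A cross-modal attention block of the kind described above is typically a scaled dot-product attention in which queries come from one modality and keys/values from another. This is a generic sketch of that mechanism on plain feature vectors, not the paper's exact CMA module:

```python
from math import exp, sqrt

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention: features of one modality
    (queries) attend to features of another modality (keys/values),
    so each query position pools the most correlated cross-modal
    information."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d)
                  for k in keys]
        m = max(scores)                      # numerically stable softmax
        weights = [exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

When a query aligns strongly with one key, the softmax weight concentrates there and the output is dominated by the matching modality's value vector.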

4.
Int J Comput Assist Radiol Surg ; 18(10): 1769-1781, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37199827

ABSTRACT

PURPOSE: Automatic surgical instrument segmentation is a crucial step for robot-assisted surgery. Encoder-decoder methods often fuse high-level and low-level features directly through skip connections to supply detailed information. However, fusing irrelevant information also increases misclassification and wrong segmentation, especially in complex surgical scenes. Uneven illumination often makes instruments resemble the background tissue, which greatly increases the difficulty of automatic surgical instrument segmentation. This paper proposes a novel network to address these problems. METHODS: We propose to guide the network to select effective features for instrument segmentation with a context-guided bidirectional attention network (CGBA-Net). A guidance connection attention (GCA) module is inserted into the network to adaptively filter out irrelevant low-level features. Moreover, we propose a bidirectional attention (BA) module for the GCA module that captures both local information and local-global dependencies in surgical scenes to provide accurate instrument features. RESULTS: The superiority of our CGBA-Net is verified by multi-instrument segmentation on two publicly available datasets from different surgical scenarios: an endoscopic vision dataset (EndoVis 2018) and a cataract surgery dataset. Extensive experimental results demonstrate that our CGBA-Net outperforms state-of-the-art methods on both datasets, and ablation studies on these datasets prove the effectiveness of our modules. CONCLUSION: The proposed CGBA-Net improves the accuracy of multi-instrument segmentation, accurately classifying and segmenting the instruments, and the proposed modules effectively provide instrument-related features to the network.
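Filtering a skip connection with high-level context, as the GCA module does, amounts to gating the low-level features before fusion. The following is an illustrative stand-in for that idea on scalar feature values, not the published module:

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def gated_skip(low_feats, high_feats):
    """Suppress irrelevant low-level skip-connection features with a
    gate derived from the high-level (context) features: a strong
    context response passes the detail through, a weak one damps it."""
    gates = [sigmoid(h) for h in high_feats]
    return [g * l for g, l in zip(gates, low_feats)]
```

The gated features are then fused with the decoder features as usual, so only context-approved detail reaches the final prediction.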


Subjects
Cataract Extraction , Ophthalmology , Robotic Surgical Procedures , Humans , Lighting , Surgical Instruments , Image Processing, Computer-Assisted
5.
Comput Biol Med ; 146: 105628, 2022 07.
Article in English | MEDLINE | ID: mdl-35609472

ABSTRACT

Medical image segmentation is fundamental for computer-aided diagnosis and surgery. Various attention modules have been proposed to improve segmentation results, but many have limitations for medical image segmentation, such as heavy computation and weak framework applicability. To address these problems, we propose a new attention module named FGAM (Feature Guided Attention Module), a simple but pluggable and effective module for medical image segmentation. FGAM exploits the representational power of the encoder and decoder features. Specifically, the shallow decoder layers contain abundant information, which FGAM treats as a queryable feature dictionary. The module contains a parameter-free activator and can be removed after training from the various encoder-decoder networks it is plugged into. The efficacy of FGAM is demonstrated on various encoder-decoder models across five datasets, including four publicly available datasets and one in-house dataset.
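A parameter-free attention of the kind described above can be realized by re-weighting features according to their similarity to a feature dictionary, with no learned weights involved, so the operation can be dropped after training without leaving parameters behind. This sketch uses cosine similarity and is only an illustration of that principle, not the published FGAM:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return num / (den + 1e-12)

def dictionary_attention(encoder_feats, dictionary_feats):
    """Scale each encoder feature vector by its best similarity to
    the dictionary (here standing in for shallow decoder features).
    Parameter-free: nothing is learned, so it is removable."""
    out = []
    for f in encoder_feats:
        w = max(cosine(f, d) for d in dictionary_feats)
        out.append([w * x for x in f])
    return out
```

Features that resemble nothing in the dictionary are attenuated toward zero, while well-matched features pass through almost unchanged.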


Subjects
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Attention , Image Processing, Computer-Assisted/methods
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2672-2675, 2021 11.
Article in English | MEDLINE | ID: mdl-34891802

ABSTRACT

Surgical instrument segmentation is critical for computer-aided surgery systems. Most deep-learning-based algorithms use either multi-scale information or multi-level information alone, which may lead to ambiguous semantic information. In this paper, we propose a new neural network that extracts both multi-scale and multi-level features on a U-Net backbone. Specifically, a cascaded, double-convolution feature pyramid is fed into the U-Net. We then propose a DFP (Dilation Feature-Pyramid) module for the decoder that extracts multi-scale and multi-level information. The proposed algorithm is evaluated on two publicly available datasets, and extensive experiments show that it is superior to competing methods on five evaluation metrics.
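The basic operation behind a dilation feature pyramid is the dilated convolution: kernel taps are spaced apart, enlarging the receptive field at each rate without extra parameters. Shown here in one dimension for clarity; the kernel and dilation rates are illustrative, not the paper's configuration:

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D dilated convolution (valid padding): taps are spaced
    `dilation` samples apart, so larger rates see a wider context
    with the same number of weights."""
    k = len(kernel)
    span = (k - 1) * dilation
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span)]

def dilation_pyramid(signal, kernel, rates=(1, 2, 4)):
    """Collect responses at several dilation rates: a toy
    multi-scale pyramid over one signal."""
    return {r: dilated_conv1d(signal, kernel, r) for r in rates}
```

Concatenating (or fusing) the per-rate responses gives the decoder simultaneous access to fine and coarse context, which is the multi-scale half of what the DFP module combines with multi-level features.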


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Semantics , Surgical Instruments