Results 1 - 7 of 7
1.
Comput Biol Med; 177: 108602, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38805809

ABSTRACT

High-quality 3D corneal reconstruction from AS-OCT images has demonstrated significant potential in computer-aided diagnosis, enabling comprehensive observation of corneal thickness, precise assessment of morphological characteristics, and localization and quantification of keratitis-affected regions. However, it faces two main challenges: (1) prevalent medical image segmentation networks often struggle to accurately segment low-contrast corneal regions, a vital pre-processing step for 3D corneal reconstruction, and (2) no existing reconstruction method can be directly applied to AS-OCT sequences with 180-degree scanning. To address these challenges, we propose CSCM-CCA-Net, a simple yet efficient network for accurate corneal segmentation. The network incorporates two key techniques: cascade spatial and channel-wise multifusion (CSCM), which captures intricate contextual interdependencies and effectively extracts low-contrast and obscure corneal features; and criss-cross augmentation (CCA), which enhances shape-preserved feature representation to improve segmentation accuracy. Based on the obtained corneal segmentation results, we reconstruct the 3D volume data and generate a topographic map of corneal thickness through corneal image alignment. Additionally, we design a transfer function based on analysis of the intensity and gradient histograms to exploit more internal cues for better visualization results. Experimental results on the CORNEA benchmark demonstrate the impressive performance of the proposed method in terms of both corneal segmentation and 3D reconstruction. Furthermore, we compare CSCM-CCA-Net with state-of-the-art medical image segmentation approaches on three challenging fundus segmentation datasets (DRIVE, CHASEDB1, FIVES), highlighting its superiority in segmentation accuracy. The code and models will be made available at https://github.com/qianguiping/CSCM-CCA-Net.
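Editor's note: as a rough illustration of the kind of spatial and channel-wise fusion the CSCM block describes, the following PyTorch sketch combines a channel-attention gate with a spatial-attention gate; the module name, structure, and hyperparameters are assumptions for illustration only, not the authors' released implementation.

```python
# Minimal sketch of a spatial- and channel-wise fusion block (one plausible
# reading of the CSCM idea; NOT the published CSCM-CCA-Net code).
import torch
import torch.nn as nn

class SpatialChannelFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: squeeze channels, then re-weight locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # channel-wise re-weighting
        x = x * self.spatial_gate(x)   # spatial re-weighting
        return x

# Usage: feats = SpatialChannelFusion(64)(torch.randn(1, 64, 128, 128))
```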


Subject(s)
Cornea; Humans; Cornea/diagnostic imaging; Tomography, Optical Coherence/methods; Imaging, Three-Dimensional/methods; Algorithms; Image Processing, Computer-Assisted/methods
2.
Artif Intell Med; 150: 102837, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38553151

ABSTRACT

The thickness of the choroid is considered an important indicator in clinical diagnosis, so accurate choroid segmentation in retinal OCT images is crucial for monitoring various ophthalmic diseases. This remains challenging, however, due to blurry boundaries and interference from other lesions. To address these issues, we propose a novel prior-guided and knowledge-diffusive network (PGKD-Net) that fully utilizes retinal structural information to highlight choroidal region features and boost segmentation performance. It is composed of two parts: a Prior-mask Guided Network (PG-Net) for coarse segmentation and a Knowledge Diffusive Network (KD-Net) for fine segmentation. In addition, we design two novel feature enhancement modules, Multi-Scale Context Aggregation (MSCA) and Multi-Level Feature Fusion (MLFF). The MSCA module captures long-distance dependencies between features from different receptive fields and improves the model's ability to learn global context. The MLFF module integrates the cascaded context knowledge learned from PG-Net to benefit fine-level segmentation. Comprehensive experiments are conducted to evaluate the performance of the proposed PGKD-Net. Experimental results show that the proposed method achieves superior segmentation accuracy over other state-of-the-art methods. Our code is publicly available at: https://github.com/yzh-hdu/choroid-segmentation.
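Editor's note: a common way to aggregate context from multiple receptive fields, as the MSCA description suggests, is a set of parallel dilated convolutions fused by a 1x1 convolution; the sketch below is such a block under assumed dilation rates and is not the published PGKD-Net module.

```python
# Hedged sketch of a multi-scale context aggregation block using parallel
# dilated convolutions (an assumption, not the paper's exact MSCA design).
import torch
import torch.nn as nn

class MultiScaleContextAggregation(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per receptive-field size.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        # Fuse the concatenated branches back to the input width.
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(ctx) + x  # residual connection keeps local detail

# Usage: y = MultiScaleContextAggregation(32)(torch.randn(1, 32, 96, 96))
```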


Subject(s)
Choroid; Learning; Choroid/diagnostic imaging; Retina/diagnostic imaging; Image Processing, Computer-Assisted
4.
Sci Data; 10(1): 380, 2023 Jun 14.
Article in English | MEDLINE | ID: mdl-37316638

ABSTRACT

During tooth replacement, pediatric patients present more complex tooth development than adults, and dentists must determine their conditions manually with the help of preoperative dental panoramic radiographs. To the best of our knowledge, there is no international public dataset of children's teeth and only a few datasets of adults' teeth, which limits the development of deep learning algorithms for tooth segmentation and automatic disease analysis. We therefore collected dental panoramic radiographs and case records from 106 pediatric patients aged 2 to 13 years and annotated them with the efficient and intelligent interactive segmentation annotation software EISeg (Efficient Interactive Segmentation) and the image annotation software LabelMe. We propose the world's first dataset of children's dental panoramic radiographs with segmentation and detection annotations for caries segmentation and dental disease detection. In addition, another 93 dental panoramic radiographs of pediatric patients, together with our three internationally published adult dental datasets totalling 2,692 images, were collected and assembled into a segmentation dataset suitable for deep learning.
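Editor's note: since the annotations were produced with LabelMe, a minimal sketch of converting a standard LabelMe JSON file into a binary segmentation mask may be useful; the file keys follow the generic LabelMe format and the target label name ("caries") is a hypothetical example, not this dataset's documented schema.

```python
# Hedged sketch: rasterize LabelMe polygon annotations into a binary mask.
import json
import numpy as np
from PIL import Image, ImageDraw

def labelme_to_mask(json_path: str, target_label: str = "caries") -> np.ndarray:
    with open(json_path) as f:
        ann = json.load(f)
    # Standard LabelMe keys: imageWidth, imageHeight, shapes[{label, points}].
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        if shape["label"] == target_label:
            pts = [tuple(p) for p in shape["points"]]
            draw.polygon(pts, outline=1, fill=1)  # fill the annotated region
    return np.array(mask, dtype=np.uint8)
```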


Subject(s)
Dental Caries Susceptibility; Stomatognathic Diseases; Adolescent; Child; Child, Preschool; Humans; Algorithms; Knowledge; Radiography, Panoramic
5.
IEEE J Biomed Health Inform; 27(7): 3525-3536, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126620

ABSTRACT

Precise and rapid categorization of images in the B-scan ultrasound modality is vital for diagnosing ocular diseases, yet distinguishing various diseases in ultrasound still challenges experienced ophthalmologists. We therefore develop a novel contrastive disentangled network (CDNet) to tackle fine-grained image categorization (FGIC) of ocular abnormalities in ultrasound images, including intraocular tumor (IOT), retinal detachment (RD), posterior scleral staphyloma (PSS), and vitreous hemorrhage (VH). The three essential components of CDNet are a weakly-supervised lesion localization module (WSLL), a contrastive multi-zoom (CMZ) strategy, and a hyperspherical contrastive disentangled loss (HCD-Loss). These components facilitate feature disentanglement for fine-grained recognition at both the input and output levels. The proposed CDNet is validated on our ZJU Ocular Ultrasound Dataset (ZJUOUSD), consisting of 5213 samples. Furthermore, the generalization ability of CDNet is validated on two public and widely used chest X-ray FGIC benchmarks. Quantitative and qualitative results demonstrate the efficacy of the proposed CDNet, which achieves state-of-the-art performance in the FGIC task.
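Editor's note: the HCD-Loss is not specified in the abstract; as a rough analogue, a contrastive loss computed on embeddings normalized to the unit hypersphere (NT-Xent style) is sketched below, with the temperature and pairing scheme being assumptions rather than the paper's definition.

```python
# Minimal sketch of a contrastive loss on the unit hypersphere
# (an analogue of the idea, not the paper's HCD-Loss).
import torch
import torch.nn.functional as F

def hyperspherical_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N samples."""
    z1 = F.normalize(z1, dim=1)  # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching views are positives; all other pairs act as negatives.
    return F.cross_entropy(logits, targets)
```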


Subject(s)
Face; Ophthalmologists; Humans; Benchmarking; Neuroimaging; Thorax
6.
J Pers Med; 13(1), 2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36675750

ABSTRACT

Eyelid tumors occur in the eye and its appendages, affect vision and appearance, can cause blindness and disability, and some have a high lethality rate. Pathological images of eyelid tumors are characterized by very large pixel dimensions, multiple scales, and similar features. Fine-grained classification of these images is difficult and time-consuming, and addressing it is important for improving the efficiency and quality of pathological diagnosis. The morphologies of basal cell carcinoma (BCC), meibomian gland carcinoma (MGC), and cutaneous melanoma (CM) in eyelid tumors are very similar, and these categories are easily misdiagnosed for one another. In addition, the diseased area, which is decisive for the diagnosis, usually occupies only a small portion of the entire pathology section, and screening the area of interest is a tedious and time-consuming task. In this paper, we apply deep learning techniques to the pathological images of eyelid tumors. Inspired by the knowledge distillation process, we propose the Multiscale-Attention-Crop-ResNet (MAC-ResNet) network model to automatically classify the three malignant tumors, and we use U-Net to automatically localize lesion regions in whole slide images (WSI). The final accuracies of MAC-ResNet on the three eyelid tumor classification problems were 96.8%, 94.6%, and 90.8%, respectively.
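Editor's note: the abstract mentions being inspired by knowledge distillation; the standard soft-label distillation loss is sketched below for reference, with the temperature and loss weighting chosen as illustrative assumptions rather than MAC-ResNet's actual training recipe.

```python
# Hedged sketch of the classic knowledge distillation loss
# (soft teacher targets + hard labels); values of T and alpha are assumed.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 4.0, alpha: float = 0.7) -> torch.Tensor:
    # KL divergence between softened teacher and student distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # standard supervised term
    return alpha * soft + (1 - alpha) * hard
```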

7.
J Inequal Appl; 2018(1): 140, 2018.
Article in English | MEDLINE | ID: mdl-30137727

ABSTRACT

Inequalities are frequently used for solving practical engineering problems. There are two key issues in bounding inequalities: finding the bounds and proving them. Taking Wilker-type inequalities as an example, this paper presents a two-point Padé approximant based method for finding the bounds and also provides a new way to prove them. It not only recovers the estimates obtained by Mortici's method but also improves estimates obtained by prevailing methods. In principle, the approach can be applied to other inequalities.
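Editor's note: for context, the classical Wilker inequality (the prototype of the "Wilker type" bounds) and the two-endpoint matching condition that characterizes a two-point Padé approximant can be written as follows; the expansion orders p and q are generic placeholders, not values taken from the paper.

```latex
% Classical Wilker inequality:
\[
  \left(\frac{\sin x}{x}\right)^{2} + \frac{\tan x}{x} > 2,
  \qquad 0 < x < \frac{\pi}{2}.
\]
% A two-point Pade approximant R(x) = P_m(x)/Q_n(x) of a target function f
% is a rational function whose expansions match f at both endpoints of the
% interval, which is what makes it suitable for two-sided bounds:
\[
  f(x) - R(x) = O\!\left(x^{p}\right) \ \text{as } x \to 0^{+},
  \qquad
  f(x) - R(x) = O\!\left(\left(\tfrac{\pi}{2} - x\right)^{q}\right)
  \ \text{as } x \to \tfrac{\pi}{2}^{-}.
\]
```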
