Results 1 - 6 of 6
1.
Sci Data ; 11(1): 627, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38871784

ABSTRACT

Infectious keratitis is among the major causes of blindness worldwide. Anterior segment optical coherence tomography (AS-OCT) images allow characterization of cross-sectional corneal structures in keratitis, revealing the severity of inflammation, and can also provide 360-degree information on the anterior chamber. Developing image analysis methods for such cases, particularly deep learning methods, requires a large number of annotated images, but to date no open-access AS-OCT image repository of this kind exists. For this reason, this work provides a dataset containing a total of 1168 AS-OCT images of patients with keratitis, including 768 full-frame images (6 patients). Each image has associated segmentation labels for lesions and the cornea, and full-frame images additionally carry iris labels. This dataset offers an opportunity to advance image analysis of AS-OCT images in both two dimensions (2D) and three dimensions (3D) and would aid the development of artificial intelligence-based keratitis management.


Subjects
Deep Learning , Keratitis , Optical Coherence Tomography , Humans , Keratitis/diagnostic imaging , Three-Dimensional Imaging , Cornea/diagnostic imaging , Computer-Assisted Image Processing
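Entry 1 describes a released AS-OCT keratitis dataset with per-image segmentation labels. The snippet below is a minimal sketch of how such image/label pairs might be loaded for analysis; the directory layout and file names are assumptions made for illustration, not taken from the actual dataset release.

```python
# Illustrative sketch only: the directory layout and file names below are
# hypothetical, not the released dataset's actual structure.
from pathlib import Path

import numpy as np
from PIL import Image

DATA_ROOT = Path("asoct_keratitis")     # hypothetical dataset root
IMG_DIR = DATA_ROOT / "images"          # hypothetical image folder
MASK_DIR = DATA_ROOT / "labels"         # hypothetical segmentation-label folder


def load_pair(name: str) -> tuple[np.ndarray, np.ndarray]:
    """Load one AS-OCT image and its multi-class mask (cornea / lesion / iris)."""
    image = np.asarray(Image.open(IMG_DIR / f"{name}.png").convert("L"))
    mask = np.asarray(Image.open(MASK_DIR / f"{name}.png"))
    return image, mask


if __name__ == "__main__":
    img, msk = load_pair("patient01_frame001")   # hypothetical file name
    print(img.shape, np.unique(msk))             # image size and class indices
```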
2.
Comput Biol Med ; 177: 108602, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38805809

ABSTRACT

High-quality 3D corneal reconstruction from AS-OCT images has demonstrated significant potential in computer-aided diagnosis, enabling comprehensive observation of corneal thickness, precise assessment of morphological characteristics, and localization and quantification of keratitis-affected regions. However, it faces two main challenges: (1) prevalent medical image segmentation networks often struggle to accurately process low-contrast corneal regions, a vital pre-processing step for 3D corneal reconstruction, and (2) no reconstruction methods can be directly applied to AS-OCT sequences with 180-degree scanning. To address these challenges, we propose CSCM-CCA-Net, a simple yet efficient network for accurate corneal segmentation. The network incorporates two key techniques: cascade spatial and channel-wise multifusion (CSCM), which captures intricate contextual interdependencies and effectively extracts low-contrast and obscure corneal features, and criss-cross augmentation (CCA), which enhances shape-preserving feature representation to improve segmentation accuracy. Based on the obtained corneal segmentation results, we reconstruct the 3D volume data and generate a topographic map of corneal thickness through corneal image alignment. Additionally, we design a transfer function based on an analysis of the intensity and gradient histograms to explore more internal cues for better visualization results. Experimental results on the CORNEA benchmark demonstrate the impressive performance of the proposed method in terms of both corneal segmentation and 3D reconstruction. Furthermore, we compare CSCM-CCA-Net with state-of-the-art medical image segmentation approaches on three challenging fundus segmentation datasets (DRIVE, CHASEDB1, FIVES), highlighting its superiority in segmentation accuracy. The code and models will be made available at https://github.com/qianguiping/CSCM-CCA-Net.


Subjects
Cornea , Humans , Cornea/diagnostic imaging , Optical Coherence Tomography/methods , Three-Dimensional Imaging/methods , Algorithms , Computer-Assisted Image Processing/methods
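Entry 2 names a cascade spatial and channel-wise multifusion (CSCM) module for low-contrast corneal segmentation. The sketch below only illustrates the generic idea of combining channel-wise and spatial attention on a feature map; it is not the authors' CSCM module, whose reference implementation is promised at the linked repository.

```python
# Generic channel-plus-spatial attention fusion (illustrative, not CSCM-CCA-Net).
import torch
import torch.nn as nn


class ChannelSpatialFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel gate over the H x W plane.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)


features = torch.randn(2, 64, 128, 128)            # dummy AS-OCT feature map
print(ChannelSpatialFusion(64)(features).shape)    # torch.Size([2, 64, 128, 128])
```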
3.
Food Chem X ; 23: 101560, 2024 Oct 30.
Article in English | MEDLINE | ID: mdl-39007115

ABSTRACT

Mustard sprouts are a new vegetable product gaining attention for their high content of health-promoting compounds such as glucosinolates. This study investigated the effects of different light qualities (white, red, and blue), alone and in combination with 100 µmol L-1 melatonin, on the growth and health-promoting substance content of mustard sprouts. The results showed that the white light + melatonin treatment promoted the accumulation of glucosinolates in sprouts (a 47.89% increase compared with white light alone). The edible fresh weight of sprouts treated with red light + melatonin was the highest, followed by the white light + melatonin treatment. In addition, sprouts treated with blue light + melatonin contained more ascorbic acid, flavonoids, and total phenolics. Therefore, the combined treatment of light quality (especially white light) and melatonin can provide a new strategy for improving the quality of mustard sprouts.

4.
IEEE J Biomed Health Inform ; 27(7): 3525-3536, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126620

ABSTRACT

Precise and rapid categorization of images in the B-scan ultrasound modality is vital for diagnosing ocular diseases. Nevertheless, distinguishing among various diseases in ultrasound images still challenges experienced ophthalmologists. Thus, a novel contrastive disentangled network (CDNet) is developed in this work, aiming to tackle the fine-grained image categorization (FGIC) challenges of ocular abnormalities in ultrasound images, including intraocular tumor (IOT), retinal detachment (RD), posterior scleral staphyloma (PSS), and vitreous hemorrhage (VH). The three essential components of CDNet are the weakly-supervised lesion localization module (WSLL), the contrastive multi-zoom (CMZ) strategy, and the hyperspherical contrastive disentangled loss (HCD-Loss). These components facilitate feature disentanglement for fine-grained recognition at both the input and output levels. The proposed CDNet is validated on our ZJU Ocular Ultrasound Dataset (ZJUOUSD), consisting of 5213 samples. Furthermore, the generalization ability of CDNet is validated on two public and widely used chest X-ray FGIC benchmarks. Quantitative and qualitative results demonstrate the efficacy of the proposed CDNet, which achieves state-of-the-art performance in the FGIC task.


Subjects
Face , Ophthalmologists , Humans , Benchmarking , Neuroimaging , Thorax
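Entry 4 names a hyperspherical contrastive disentangled loss (HCD-Loss). The sketch below shows a generic contrastive loss computed on unit-normalized ("hyperspherical") embeddings of two views of the same samples, in the spirit of that component; it is an illustrative stand-in, not the authors' exact formulation.

```python
# Generic NT-Xent-style contrastive loss on unit-normalized embeddings
# (illustrative stand-in for a hyperspherical contrastive loss, not HCD-Loss).
import torch
import torch.nn.functional as F


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views (e.g. zoom levels) of N samples."""
    z1 = F.normalize(z1, dim=1)           # project onto the unit hypersphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature    # cosine similarities of all pairs
    targets = torch.arange(z1.size(0))    # matching indices are the positives
    return F.cross_entropy(logits, targets)


view_a, view_b = torch.randn(8, 128), torch.randn(8, 128)   # dummy crops
print(contrastive_loss(view_a, view_b).item())
```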
5.
Article in English | MEDLINE | ID: mdl-36136924

ABSTRACT

Eyelid malignant melanoma (MM) is a rare disease with high mortality, and accurate diagnosis is important but challenging. In clinical practice, the diagnosis of MM is currently performed manually by pathologists, which is subjective and prone to bias. Owing to the heavy manual annotation workload, most pathological whole slide image (WSI) datasets are only partially labeled (without region annotations) and therefore cannot be used directly for supervised deep learning. For these reasons, it is of great practical significance to design a labor-saving diagnosis method with high data utilization. In this paper, a self-supervised learning (SSL) based framework for automatically detecting eyelid MM is proposed. The framework consists of a self-supervised model for detecting MM areas at the patch level and a second model for classifying lesion types at the slide level. A squeeze-excitation (SE) attention structure and a feature-projection (FP) structure are integrated to boost learning of pathological image details and improve model performance. In addition, the framework provides high-quality, reliable visual heatmaps that highlight likely lesion areas to assist the evaluation and diagnosis of eyelid MM. Extensive experimental results on different datasets show that the proposed method outperforms other state-of-the-art SSL and fully supervised methods at both the patch and slide levels when only a subset of WSIs is annotated, and it remains comparable to supervised methods when all WSIs are fully annotated. To the best of our knowledge, this is the first SSL method for automatic diagnosis of eyelid MM, and it has great potential to reduce the workload of human annotation in clinical practice.
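Entry 5 integrates a squeeze-excitation (SE) attention structure into its self-supervised framework. Below is a minimal sketch of a standard SE block for reference; the authors' exact placement and hyper-parameters are not specified in the abstract, so this is only the generic building block.

```python
# Standard squeeze-and-excitation (SE) block; generic reference implementation,
# not the authors' specific configuration.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pool over H x W; excite: per-channel weights.
        weights = self.fc(x.mean(dim=(2, 3)))     # (N, C)
        return x * weights[:, :, None, None]      # re-weight channels


patches = torch.randn(4, 256, 32, 32)   # dummy patch-level feature maps
print(SEBlock(256)(patches).shape)      # torch.Size([4, 256, 32, 32])
```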

6.
IEEE J Biomed Health Inform ; 26(4): 1684-1695, 2022 04.
Article in English | MEDLINE | ID: mdl-34797767

ABSTRACT

Accurate evaluation of the treatment result on X-ray images is a significant and challenging step in root canal therapy, since incorrect interpretation of the results hampers timely follow-up, which is crucial to the patient's treatment outcome. Currently, the evaluation is performed manually, which is time-consuming, subjective, and error-prone. In this article, we aim to automate this process by leveraging advances in computer vision and artificial intelligence to provide an objective and accurate method for root canal therapy result assessment. A novel anatomy-guided multi-branch Transformer (AGMB-Transformer) network is proposed, which first extracts a set of anatomy features and then uses them to guide a multi-branch Transformer network for evaluation. Specifically, we design a polynomial curve fitting segmentation strategy, aided by landmark detection, to extract the anatomy features. Moreover, a branch fusion module and a multi-branch structure including our progressive Transformer and Group Multi-Head Self-Attention (GMHSA) are designed to focus on both global and local features for an accurate diagnosis. To facilitate the research, we have collected a large-scale root canal therapy evaluation dataset of 245 root canal therapy X-ray images, and the experimental results show that our AGMB-Transformer improves the diagnosis accuracy from 57.96% to 90.20% compared with the baseline network. The proposed AGMB-Transformer achieves a highly accurate evaluation of root canal therapy. To the best of our knowledge, this work is the first to perform automatic root canal therapy evaluation and has important clinical value for reducing the workload of endodontists.


Subjects
Artificial Intelligence , Dental Radiography , Algorithms , Humans , Root Canal Therapy
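Entry 6 extracts anatomy features through polynomial curve fitting guided by landmark detection. The sketch below illustrates only the generic step of fitting a polynomial through detected landmark points; the landmark coordinates are made up for demonstration and the degree of the polynomial is an assumption.

```python
# Fit a polynomial curve through detected landmarks (illustrative only; the
# coordinates and polynomial degree below are hypothetical).
import numpy as np

# Hypothetical (x, y) landmark positions, in pixels, along an anatomical boundary.
landmarks_x = np.array([40.0, 80.0, 120.0, 160.0, 200.0])
landmarks_y = np.array([310.0, 260.0, 240.0, 255.0, 300.0])

# Fit a low-degree polynomial y = p(x) through the landmarks.
coeffs = np.polyfit(landmarks_x, landmarks_y, deg=2)

# Evaluate the fitted curve densely to obtain a smooth boundary for segmentation.
xs = np.linspace(landmarks_x.min(), landmarks_x.max(), 200)
ys = np.polyval(coeffs, xs)
print(coeffs, ys[:3])
```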