Results 1 - 5 of 5
1.
Artif Intell Med; 143: 102637, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37673569

ABSTRACT

Accurate airway segmentation from computed tomography (CT) images is critical for planning navigation bronchoscopy and for quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle with airway segmentation, particularly for small airway branches, because labeled data are limited, and they fail to meet clinical requirements in COPD. We propose a two-stage framework with a novel 3D contextual transformer for segmenting the overall airway and small airway branches in CT images. The method consists of two training stages that share the same modified 3D U-Net. The novel 3D contextual transformer block is integrated into both the encoder and decoder paths of the network to effectively capture contextual and long-range information. In the first training stage, the network segments the overall airway using the overall airway mask. To improve segmentation further, we generate intrapulmonary airway branch labels and, in the second training stage, train the network to focus on producing small airway branches. Extensive experiments were performed on an in-house dataset and multiple public datasets. Quantitative and qualitative analyses demonstrate that our method extracts significantly more branches and greater airway tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
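For a concrete picture of the building block described above, the following is a minimal, simplified sketch of a 3D contextual-transformer-style block in PyTorch. It is illustrative only: the layer sizes, names, and fusion scheme are assumptions, not the authors' published implementation (see the linked repository for that).

import torch
import torch.nn as nn

class CoTBlock3D(nn.Module):
    """Simplified 3D contextual-transformer-style block (illustrative sketch).

    Static context is gathered with a 3x3x3 convolution over the keys; a dynamic
    attention map is predicted from [static keys, input] and applied to the
    values, and the two contexts are summed.
    """
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # static contextual keys: local-neighborhood encoding
        self.key_embed = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size, padding=pad, bias=False),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # value projection
        self.value_embed = nn.Sequential(
            nn.Conv3d(channels, channels, 1, bias=False),
            nn.BatchNorm3d(channels),
        )
        # attention predicted from concatenated [static keys, input]
        self.attention = nn.Sequential(
            nn.Conv3d(2 * channels, channels // 2, 1, bias=False),
            nn.BatchNorm3d(channels // 2),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 2, channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k_static = self.key_embed(x)                      # static context
        v = self.value_embed(x)                           # values
        attn = self.attention(torch.cat([k_static, x], dim=1))
        k_dynamic = torch.sigmoid(attn) * v               # dynamic context
        return k_static + k_dynamic                       # fused output

if __name__ == "__main__":
    block = CoTBlock3D(channels=32)
    patch = torch.randn(1, 32, 16, 64, 64)                # B, C, D, H, W
    print(block(patch).shape)                             # (1, 32, 16, 64, 64)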


Subject(s)
Pulmonary Disease, Chronic Obstructive; Humans; Pulmonary Disease, Chronic Obstructive/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed
2.
Med Image Anal; 90: 102957, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716199

ABSTRACT

Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have pushed pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, however, little effort has been directed at quantitative comparison of newly emerged algorithms, despite the maturity of deep learning based approaches and extensive clinical interest in resolving finer details of distal airways for early intervention in pulmonary disease. Thus far, publicly available annotated datasets are extremely limited, hindering the development of data-driven methods and detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling (ATM'22) challenge, held as an official event at the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation, comprising 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from multiple sites and includes a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the full challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedded with topological continuity enhancement generally achieved superior performance. The ATM'22 challenge remains open: the training data and gold-standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).
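As an illustration of the kind of quantitative evaluation such a challenge relies on, below is a minimal sketch of two commonly reported airway-tree metrics: voxel-wise Dice and a tree-length detected rate measured against a precomputed ground-truth centerline. The metric names and exact definitions here are assumptions for illustration; the official ATM'22 evaluation code is distributed via the challenge homepage.

import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Voxel-wise Dice overlap between binary prediction and ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def tree_length_detected_rate(pred: np.ndarray, gt_centerline: np.ndarray) -> float:
    """Fraction of ground-truth centerline voxels covered by the prediction.

    `gt_centerline` is a precomputed binary skeleton of the reference airway
    tree; computing it (e.g. by 3D skeletonization) is outside this sketch.
    """
    pred, gt_centerline = pred.astype(bool), gt_centerline.astype(bool)
    covered = np.logical_and(pred, gt_centerline).sum()
    return covered / (gt_centerline.sum() + 1e-8)

if __name__ == "__main__":
    # toy example: the ground truth doubles as its own "centerline"
    rng = np.random.default_rng(0)
    gt = rng.random((64, 64, 64)) > 0.95
    pred = gt.copy()
    print(dice_coefficient(pred, gt), tree_length_detected_rate(pred, gt))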


Subject(s)
Lung Diseases; Trees; Humans; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Algorithms; Lung/diagnostic imaging
3.
Med Biol Eng Comput; 61(10): 2649-2663, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37420036

ABSTRACT

Transformer-based methods have revolutionized multiple computer vision tasks. Inspired by this, we propose a transformer-based network with a channel-enhanced attention module to exploit contextual and spatial information in non-contrast (NC) and contrast-enhanced (CE) computed tomography (CT) images for pulmonary vessel segmentation and artery-vein separation. The proposed network employs a 3D contextual transformer module in the encoder and decoder and a double attention module in the skip connections to produce high-quality vessel and artery-vein segmentations. Extensive experiments were conducted on an in-house dataset and the ISICDM2021 challenge dataset. The in-house dataset includes 56 NC CT scans with vessel annotations, and the challenge dataset consists of 14 NC and 14 CE CT scans with vessel and artery-vein annotations. For vessel segmentation, the Dice score is 0.840 for CE CT and 0.867 for NC CT. For artery-vein separation, the proposed method achieves Dice scores of 0.758 for CE images and 0.602 for NC images. Quantitative and qualitative results demonstrate that the proposed method achieves high accuracy for pulmonary vessel segmentation and artery-vein separation, providing useful support for further research on the vascular system in CT images. The code is available at https://github.com/wuyanan513/Pulmonary-Vessel-Segmentation-and-Artery-vein-Separation .
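To make the "double attention module in the skip connections" more concrete, here is a minimal sketch of a generic channel-plus-spatial attention gate applied to skip features. This is a CBAM-style design assumed for illustration, not the authors' implementation (which is available in the linked repository).

import torch
import torch.nn as nn

class DoubleAttention3D(nn.Module):
    """Channel attention followed by spatial attention on a 3D skip feature."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # channel attention: squeeze spatial dims, excite channels
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # spatial attention: one map over D*H*W from channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        x = skip * self.channel_gate(skip)                        # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)  # avg + max over C
        return x * self.spatial_gate(pooled)                      # reweight voxels

if __name__ == "__main__":
    gate = DoubleAttention3D(channels=16)
    feat = torch.randn(1, 16, 8, 32, 32)
    print(gate(feat).shape)                                       # (1, 16, 8, 32, 32)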


Subject(s)
Electric Power Supplies; Tomography, X-Ray Computed; Arteries; Image Processing, Computer-Assisted
4.
BMC Genomics; 24(1): 27, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36650452

ABSTRACT

BACKGROUND: As an economically important crop, tea is strongly nitrogen (N)-dependent. However, the physiological and molecular mechanisms underlying the response to N deficiency in tea are not fully understood. The tea cultivar "Chunlv2" [Camellia sinensis (L.) O. Kuntze] was cultured in 6 L pottery pots containing clean river sand with a nutrient solution containing 0 mM (N deficiency) or 3 mM (control) NH4NO3. RESULTS: N deficiency significantly decreased N content, dry weight, chlorophyll (Chl) content, L-theanine, and the activities of N metabolism-related enzymes, but increased the content of total flavonoids and polyphenols in tea leaves. N deficiency also delayed the sprouting of tea buds. Using RNA-seq and subsequent bioinformatics analysis, 3050 up-regulated and 2688 down-regulated differentially expressed genes (DEGs) were identified in tea leaves in response to N deficiency, whereas only 1025 genes were up-regulated and 744 down-regulated in roots. Gene ontology (GO) term enrichment analysis showed that 205 DEGs in tea leaves were enriched in seven GO terms and 152 DEGs in tea roots were enriched in 11 GO terms at P < 0.05. In tea leaves, most GO-enriched DEGs were involved in chlorophyll a/b binding, photosynthetic performance, and transport activities, whereas most of the DEGs in tea roots were involved in the metabolism of carbohydrates and plant hormones within the biological process GO category. N deficiency significantly increased the expression of phosphate transporter genes, indicating that N deficiency might impair phosphorus metabolism in tea leaves. Furthermore, some DEGs, such as probable anion transporter 3 and high-affinity nitrate transporter 2.7, may hold potential for improving tolerance to N deficiency in tea plants and merit further study. CONCLUSIONS: Our results indicate that N deficiency inhibited the growth of tea plants, possibly through altered N metabolism and changed expression of DEGs involved in photosynthetic performance, transport activity, and oxidation-reduction processes.
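As background on the GO enrichment step mentioned above, here is a minimal sketch of a one-sided hypergeometric over-representation test for a single GO term. The gene counts below are placeholders, and the paper's actual enrichment pipeline is not specified beyond the P < 0.05 threshold.

from scipy.stats import hypergeom

def go_enrichment_p(n_genome: int, n_term_genome: int,
                    n_deg: int, n_term_deg: int) -> float:
    """One-sided hypergeometric P-value for over-representation of a GO term.

    n_genome       -- annotated genes in the background
    n_term_genome  -- background genes annotated with the term
    n_deg          -- differentially expressed genes (DEGs)
    n_term_deg     -- DEGs annotated with the term
    """
    # P(X >= n_term_deg) when drawing n_deg genes without replacement
    return hypergeom.sf(n_term_deg - 1, n_genome, n_term_genome, n_deg)

if __name__ == "__main__":
    # toy numbers, for illustration only
    p = go_enrichment_p(n_genome=30000, n_term_genome=120,
                        n_deg=3050, n_term_deg=40)
    print(f"enrichment P = {p:.3g}")   # keep terms with P < 0.05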


Subject(s)
Camellia sinensis; Camellia sinensis/metabolism; Chlorophyll A; Nitrogen/metabolism; Tea/metabolism; Plant Leaves/metabolism; Plant Proteins/genetics; Plant Proteins/metabolism; Gene Expression Regulation, Plant
5.
Phys Med Biol; 67(24), 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36322995

ABSTRACT

Objective. Diabetic retinopathy (DR) grading is primarily performed by assessing fundus images. Many types of lesions, such as microaneurysms, hemorrhages, and soft exudates, can be present simultaneously in a single image. However, their sizes may be small, making it difficult to differentiate adjacent DR grades even with deep convolutional neural networks (CNNs). Recently, vision transformers have shown comparable or even superior performance to CNNs, and they also learn visual representations different from those of CNNs. Inspired by this finding, we propose a two-path contextual transformer with Xception network (CoT-XNet) to improve the accuracy of DR grading. Approach. The representations learned by CoT through one path and those learned by the Xception network through another path are concatenated before the fully connected layer. Dedicated pre-processing, data resampling, and test-time augmentation strategies are also implemented. The performance of CoT-XNet is evaluated on the publicly available DDR, APTOS2019, and EyePACS datasets, which together include over 50 000 images. Ablation experiments and comprehensive comparisons with various state-of-the-art (SOTA) models have also been performed. Main results. Our proposed CoT-XNet outperforms the available SOTA models, with accuracy and Kappa of 83.10% and 0.8496, 84.18% and 0.9000, and 84.10% and 0.7684, respectively, on the three datasets listed above. Class activation maps of the CoT and Xception networks are different and complementary in most images. Significance. By concatenating the different visual representations learned by the CoT and Xception networks, CoT-XNet can accurately grade DR from fundus images and shows good generalizability. CoT-XNet will promote the application of artificial intelligence-based systems to DR screening of large-scale populations.
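To illustrate the two-path fusion idea, the following is a minimal PyTorch sketch in which pooled features from two backbones are concatenated before a shared classification layer. The backbone stand-ins and head sizes are placeholders, not the published CoT-XNet configuration.

import torch
import torch.nn as nn

class TwoPathGrader(nn.Module):
    """Concatenate features from two backbone paths before one classifier."""
    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module,
                 feat_a: int, feat_b: int, num_grades: int = 5):
        super().__init__()
        self.backbone_a = backbone_a      # e.g. a CoT-style network
        self.backbone_b = backbone_b      # e.g. an Xception-style network
        self.classifier = nn.Linear(feat_a + feat_b, num_grades)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        fa = self.backbone_a(image)       # (B, feat_a) pooled features
        fb = self.backbone_b(image)       # (B, feat_b) pooled features
        return self.classifier(torch.cat([fa, fb], dim=1))

if __name__ == "__main__":
    # stand-in backbones: global-average-pooled conv features
    def tiny_backbone(out_dim):
        return nn.Sequential(nn.Conv2d(3, out_dim, 3, padding=1),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())
    model = TwoPathGrader(tiny_backbone(32), tiny_backbone(48), 32, 48)
    fundus = torch.randn(2, 3, 224, 224)
    print(model(fundus).shape)            # (2, 5) grade logits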


Subject(s)
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Artificial Intelligence