Results 1 - 20 of 26
1.
Heliyon ; 10(10): e30779, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38779006

ABSTRACT

Background and objective: Spatial interaction between tumor-infiltrating lymphocytes (TILs) and tumor cells is valuable in predicting the effectiveness of immune response and prognosis amongst patients with lung adenocarcinoma (LUAD). Recent evidence suggests that the spatial distance between tumor cells and lymphocytes also influences immune responses, but distance analysis based on hematoxylin and eosin (H&E)-stained whole-slide images (WSIs) remains insufficient. To address this issue, we explore the relationship between this distance and prognosis in patients with LUAD. Methods: We recruited patients with resectable LUAD from three independent cohorts in this multi-center study. We proposed a simple but effective deep learning-driven workflow to automatically segment different cell types in the tumor region using the HoVer-Net model, and quantified the spatial distance (DIST) between tumor cells and lymphocytes based on H&E-stained WSIs. The association of DIST with disease-free survival (DFS) was explored in the discovery set (D1, n = 276) and the two validation sets (V1, n = 139; V2, n = 115). Results: In multivariable analysis, the low DIST group was associated with significantly better DFS in the discovery set (D1, HR, 0.61; 95% CI, 0.40-0.94; p = 0.027) and the two validation sets (V1, HR, 0.54; 95% CI, 0.32-0.91; p = 0.022; V2, HR, 0.44; 95% CI, 0.24-0.81; p = 0.009). By integrating DIST with clinicopathological factors, the integrated model (full model) had better discrimination for DFS in the discovery set (C-index, D1, 0.745 vs. 0.723) and the two validation sets (V1, 0.621 vs. 0.596; V2, 0.671 vs. 0.650). Furthermore, the computerized DIST was associated with immune phenotypes such as the immune-desert and inflamed phenotypes.
Conclusions: Integrating DIST with clinicopathological factors improved the prognostic stratification of patients with resectable LUAD and is expected to assist physicians in individualized treatment decisions.
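The abstract does not give the exact formula behind DIST, but a natural statistic of this kind is the mean nearest-lymphocyte distance over tumor-cell centroids. A minimal NumPy sketch under that assumption (the function name and brute-force pairwise computation are illustrative; a KD-tree would be preferable at WSI scale):

```python
import numpy as np

def mean_tumor_lymphocyte_distance(tumor_xy, lymph_xy):
    """Mean distance from each tumor-cell centroid to its nearest lymphocyte;
    lower values mean lymphocytes sit closer to tumor cells on average."""
    tumor_xy = np.asarray(tumor_xy, dtype=float)   # (N, 2) centroids
    lymph_xy = np.asarray(lymph_xy, dtype=float)   # (M, 2) centroids
    diff = tumor_xy[:, None, :] - lymph_xy[None, :, :]
    nearest = np.sqrt((diff ** 2).sum(-1)).min(axis=1)  # nearest lymphocyte per tumor cell
    return float(nearest.mean())

# Toy example: two tumor cells, each with a lymphocyte at unit distance.
dist = mean_tumor_lymphocyte_distance([[0, 0], [10, 0]], [[0, 1], [10, 1]])  # -> 1.0
```

A cohort-level cutoff (e.g., the median) would then split patients into the low- and high-DIST groups compared in the survival analysis.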

2.
Comput Methods Programs Biomed ; 244: 107997, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38176329

ABSTRACT

BACKGROUND AND OBJECTIVE: Liver cancer seriously threatens human health. In clinical diagnosis, contrast-enhanced computed tomography (CECT) images provide important supplementary information for accurate liver tumor segmentation. However, most existing methods for automatic liver tumor segmentation focus only on single-phase image features, and existing multi-modal methods achieve limited segmentation performance owing to redundant fusion features. In addition, the spatial misalignment of multi-phase images causes feature interference. METHODS: In this paper, we propose a phase attention network (PA-Net) to adequately aggregate multi-phase information from CT images and improve segmentation performance for liver tumors. Specifically, we design a PA module that generates attention weight maps voxel by voxel to efficiently fuse multi-phase CT image features and avoid feature redundancy. To solve the problem of feature interference in the multi-phase segmentation task, we design a new learning strategy and demonstrate its effectiveness experimentally. RESULTS: We conduct comparative experiments on an in-house clinical dataset and achieve state-of-the-art segmentation performance among multi-phase methods. In addition, our method improves the mean Dice score by 3.3% compared with the single-phase method based on nnUNet, and our learning strategy improves the mean Dice score by 1.51% compared with the ML strategy. CONCLUSION: The experimental results show that our method is superior to existing multi-phase liver tumor segmentation methods and provides a scheme for dealing with missing modalities in multi-modal tasks. Moreover, our proposed learning strategy makes more effective use of arterial-phase image information and proves the most effective for liver tumor segmentation with thick-slice CT images. The source code is released at https://github.com/Houjunfeng203934/PA-Net.
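The core idea of voxel-wise attention fusion across phases can be sketched in NumPy: a softmax over per-phase scores yields a weight map per voxel, and the fused feature is the weighted sum. This is a hand-rolled illustration, not the PA-Net code; in the paper the attention logits come from a learned sub-network, here they are simply inputs:

```python
import numpy as np

def phase_attention_fuse(features, logits):
    """Fuse per-phase feature volumes with a voxel-wise softmax over phases.

    features: (P, D, H, W) -- one feature volume per CT phase.
    logits:   (P, D, H, W) -- unnormalized attention scores per phase and voxel.
    Returns the (D, H, W) weighted sum, so redundant phases can be down-weighted
    voxel by voxel instead of being concatenated wholesale.
    """
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)                    # softmax across phases
    return (w * features).sum(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4, 8, 8))                    # 3 phases, toy volume
fused = phase_attention_fuse(feats, rng.normal(size=(3, 4, 8, 8)))
```

With all-zero logits every phase gets equal weight and the fusion reduces to a plain average, which makes the behavior easy to sanity-check.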


Subjects
Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Veins; Arteries; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
3.
IEEE J Biomed Health Inform ; 28(3): 1472-1483, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38090824

ABSTRACT

Stroke is a leading cause of disability and fatality worldwide, with ischemic stroke being the most common type. Digital subtraction angiography (DSA) images, the gold standard during interventional procedures, can accurately show the contours and blood flow of cerebral vessels. Segmenting cerebral vessels in DSA images can effectively help physicians assess lesions. However, due to disturbances in imaging parameters and changes in imaging scale, accurate cerebral vessel segmentation in DSA images remains a challenging task. In this paper, we propose a novel Edge Regularization Network (ERNet) to segment cerebral vessels in DSA images. Specifically, ERNet applies erosion and dilation to the original binary vessel annotation to generate pseudo-ground truths of False Negatives and False Positives, which serve as constraints to refine the coarse predictions based on their mapping relationship with the original vessels. In addition, we exploit a Hybrid Fusion Module based on convolution and transformers to extract local features and build long-range dependencies. Moreover, to support and advance open research in the field of ischemic stroke, we introduce FPDSA, the first pixel-level semantic segmentation dataset for cerebral vessels. Extensive experiments on FPDSA illustrate the leading performance of ERNet.
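The erosion/dilation pseudo-ground-truth idea can be sketched with plain NumPy morphology. The 3x3 structuring element and single-pass operations below are illustrative choices, not the paper's exact settings: eroding the annotation mimics an under-segmented (False-Negative-prone) prediction, dilating it mimics an over-segmented (False-Positive-prone) one.

```python
import numpy as np

def dilate(mask):
    """One pass of 3x3 binary dilation, pure NumPy."""
    p = np.pad(mask.astype(bool), 1)
    h, w = mask.shape
    out = np.zeros((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask):
    """One pass of 3x3 binary erosion (image border treated as background)."""
    p = np.pad(mask.astype(bool), 1)
    h, w = mask.shape
    out = np.ones((h, w), dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

gt = np.zeros((7, 7), dtype=bool)
gt[2:5, 1:6] = True         # toy vessel annotation (3x5 block)
fn_pseudo = erode(gt)       # under-segmented map: thin parts dropped
fp_pseudo = dilate(gt)      # over-segmented map: spurious border pixels added
```

At training time such pseudo-targets would constrain the network to recover what erosion removed and suppress what dilation added.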


Assuntos
AVC Isquêmico , Acidente Vascular Cerebral , Humanos , Angiografia Digital/métodos , Processamento de Imagem Assistida por Computador/métodos
4.
iScience ; 26(9): 107635, 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37664636

ABSTRACT

The increased amount of tertiary lymphoid structures (TLSs) is associated with a favorable prognosis in patients with lung adenocarcinoma (LUAD). However, evaluating TLSs manually is an experience-dependent and time-consuming process, which limits its clinical application. In this multi-center study, we developed an automated computational workflow for quantifying the TLS density in the tumor region of routine hematoxylin and eosin (H&E)-stained whole-slide images (WSIs). The association between the computerized TLS density and disease-free survival (DFS) was further explored in 802 patients with resectable LUAD of three cohorts. Additionally, a Cox proportional hazard regression model, incorporating clinicopathological variables and the TLS density, was established to assess its prognostic ability. The computerized TLS density was an independent prognostic biomarker in patients with resectable LUAD. The integration of the TLS density with clinicopathological variables could support individualized clinical decision-making by improving prognostic stratification.
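The TLS density metric above is, at its core, a structure count normalized by tumor area. A minimal sketch of that normalization (the function names and the 0.5 µm/pixel default are illustrative, not taken from the paper):

```python
def pixel_area_mm2(n_pixels, microns_per_pixel):
    """Convert a tumor-mask pixel count to mm^2 at a given scan resolution."""
    return n_pixels * (microns_per_pixel / 1000.0) ** 2

def tls_density(n_tls, n_tumor_pixels, microns_per_pixel=0.5):
    """Detected TLS count per mm^2 of segmented tumor region."""
    area = pixel_area_mm2(n_tumor_pixels, microns_per_pixel)
    return n_tls / area

# 4,000,000 tumor pixels at 0.5 um/pixel is exactly 1 mm^2 of tissue,
# so 3 detected TLSs give a density of 3 per mm^2.
density = tls_density(3, 4_000_000, 0.5)  # -> 3.0
```

The resulting density would then enter the Cox model alongside the clinicopathological variables described above.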

5.
Sensors (Basel) ; 23(16)2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37631742

ABSTRACT

Infrared and visible image fusion aims to generate a single fused image that not only contains rich texture details and salient objects, but also facilitates downstream tasks. However, existing works mainly focus on learning different modality-specific or shared features, and ignore the importance of modeling cross-modality features. To address these challenges, we propose Dual-branch Progressive learning for infrared and visible image fusion with a complementary self-Attention and Convolution (DPACFuse) network. On the one hand, we propose Cross-Modality Feature Extraction (CMEF) to enhance information interaction and the extraction of common features across modalities. In addition, we introduce a high-frequency gradient convolution operation to extract fine-grained information and suppress high-frequency information loss. On the other hand, to alleviate the CNN issues of insufficient global information extraction and computation overheads of self-attention, we introduce the ACmix, which can fully extract local and global information in the source image with a smaller computational overhead than pure convolution or pure self-attention. Extensive experiments demonstrated that the fused images generated by DPACFuse not only contain rich texture information, but can also effectively highlight salient objects. Additionally, our method achieved approximately 3% improvement over the state-of-the-art methods in MI, Qabf, SF, and AG evaluation indicators. More importantly, our fused images enhanced object detection and semantic segmentation by approximately 10%, compared to using infrared and visible images separately.

6.
Med Image Anal ; 88: 102867, 2023 08.
Article in English | MEDLINE | ID: mdl-37348167

ABSTRACT

High-throughput nuclear segmentation and classification of whole-slide images (WSIs) is crucial to biological analysis, clinical diagnosis, and precision medicine. With the advances of CNN algorithms and continuously growing datasets, considerable progress has been made in nuclear segmentation and classification. However, few works consider how to reasonably handle nuclear heterogeneity in the following two aspects: imbalanced data distribution and diversified morphological characteristics. The minority classes may be dominated by the majority classes due to the imbalanced data distribution, and the diversified morphological characteristics may lead to fragile segmentation results. In this study, a cost-Sensitive MultI-task LEarning (SMILE) framework is proposed to tackle the data heterogeneity problem. Based on the most popular multi-task learning backbone in nuclei segmentation and classification, we propose a multi-task correlation attention (MTCA) that performs feature interaction across multiple highly relevant tasks to learn better feature representations. A cost-sensitive learning strategy is proposed to address the imbalanced data distribution by increasing the penalty for misclassifying the minority classes. Furthermore, we propose a novel post-processing step based on a coarse-to-fine marker-controlled watershed scheme to alleviate fragile segmentation when nuclei are large with unclear contours. Extensive experiments show that the proposed method achieves state-of-the-art performance on the CoNSeP and MoNuSAC 2020 datasets. The code is available at: https://github.com/panxipeng/nuclear_segandcls.
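A standard way to realize the cost-sensitive idea (the abstract does not give SMILE's exact weighting scheme, so this is a generic sketch) is to weight the loss inversely to class frequency, so errors on rare nucleus types cost more:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """w_c = N / (C * n_c): rarer classes get proportionally larger weights,
    and the sample-weighted average of w over the dataset equals 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_ce(probs, labels, class_weights):
    """Class-weighted cross-entropy: minority-class errors are penalized more."""
    p = probs[np.arange(len(labels)), labels]
    return float(np.mean(class_weights[labels] * -np.log(np.clip(p, 1e-12, 1.0))))

labels = np.array([0] * 8 + [1] * 2)        # imbalanced toy labels (8:2)
w = inverse_frequency_weights(labels, 2)    # -> array([0.625, 2.5])
```

With these weights, a confident mistake on the rare class contributes four times the loss of the same mistake on the common class.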


Assuntos
Algoritmos , Aprendizagem , Humanos , Núcleo Celular , Processamento de Imagem Assistida por Computador , Medicina de Precisão
7.
Comput Methods Programs Biomed ; 238: 107617, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37235970

ABSTRACT

BACKGROUND AND OBJECTIVE: A high degree of lymphocyte infiltration is related to superior outcomes amongst patients with lung adenocarcinoma. Recent evidence indicates that the spatial interactions between tumours and lymphocytes also influence anti-tumour immune responses, but spatial analysis at the cellular level remains insufficient. METHODS: We proposed an artificial intelligence-quantified Tumour-Lymphocyte Spatial Interaction score (TLSI-score), calculated as the ratio of the number of spatially adjacent tumour-lymphocyte pairs to the number of tumour cells, based on a topological cell graph constructed from H&E-stained whole-slide images. The association of the TLSI-score with disease-free survival (DFS) was explored in 529 patients with lung adenocarcinoma across three independent cohorts (D1, 275; V1, 139; V2, 115). RESULTS: After adjusting for pTNM stage and other clinicopathologic risk factors, a higher TLSI-score was independently associated with longer DFS in the three cohorts [D1, adjusted hazard ratio (HR), 0.674; 95% confidence interval (CI) 0.463-0.983; p = 0.040; V1, adjusted HR, 0.408; 95% CI 0.223-0.746; p = 0.004; V2, adjusted HR, 0.294; 95% CI 0.130-0.666; p = 0.003]. By integrating the TLSI-score with clinicopathologic risk factors, the integrated model (full model) improves the prediction of DFS in the three independent cohorts (C-index, D1, 0.716 vs. 0.701; V1, 0.666 vs. 0.645; V2, 0.708 vs. 0.662). CONCLUSIONS: The TLSI-score shows the second-highest relative contribution to the prognostic prediction model, next to the pTNM stage. The TLSI-score can assist in characterising the tumour microenvironment and is expected to promote individualized treatment and follow-up decision-making in clinical practice.
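One reading of the TLSI-score is the fraction of tumour cells with at least one adjacent lymphocyte. The sketch below uses a hypothetical distance-based adjacency rule in place of the paper's cell-graph edges (the 30-pixel radius is invented for illustration):

```python
import numpy as np

def tlsi_score(tumor_xy, lymph_xy, radius=30.0):
    """Fraction of tumor cells that have at least one lymphocyte within
    `radius`; a distance threshold stands in for cell-graph adjacency."""
    tumor_xy = np.asarray(tumor_xy, dtype=float)
    lymph_xy = np.asarray(lymph_xy, dtype=float)
    d = np.sqrt(((tumor_xy[:, None, :] - lymph_xy[None, :, :]) ** 2).sum(-1))
    return float((d <= radius).any(axis=1).mean())

# One of the two tumor cells has a lymphocyte within the radius.
score = tlsi_score([[0, 0], [100, 0]], [[0, 10]], radius=30.0)  # -> 0.5
```

A higher score indicates tighter tumour-lymphocyte mixing, consistent with the favourable-prognosis association reported above.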


Assuntos
Adenocarcinoma de Pulmão , Adenocarcinoma , Neoplasias Pulmonares , Humanos , Intervalo Livre de Doença , Inteligência Artificial , Adenocarcinoma de Pulmão/cirurgia , Adenocarcinoma/cirurgia , Linfócitos , Prognóstico , Neoplasias Pulmonares/cirurgia , Estudos Retrospectivos , Microambiente Tumoral
8.
Transl Vis Sci Technol ; 12(4): 8, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37026984

ABSTRACT

Purpose: Accurate identification of corneal layers with in vivo confocal microscopy (IVCM) is essential for the correct assessment of corneal lesions. This project aims to obtain reliable automated identification of corneal layers from IVCM images. Methods: A total of 7957 IVCM images were included for model training and testing. Scanning depth information and pixel information of IVCM images were used to build the classification system. First, two base classifiers based on convolutional neural networks and K-nearest neighbors were constructed. Second, two hybrid strategies, namely a weighted voting method and the light gradient boosting machine (LightGBM) algorithm, were used to fuse the results from the two base classifiers and obtain the final classification. Finally, the confidence of the prediction results was stratified to help identify model errors. Results: Both hybrid systems outperformed the two base classifiers. The weighted area under the curve, weighted precision, weighted recall, and weighted F1 score were 0.9841, 0.9096, 0.9145, and 0.9111 for the weighted voting hybrid system, and 0.9794, 0.9039, 0.9055, and 0.9034 for the LightGBM stacking hybrid system, respectively. More than one-half of the misclassified samples were found using the confidence stratification method. Conclusions: The proposed hybrid approach could effectively integrate the scanning depth and pixel information of IVCM images, allowing accurate identification of corneal layers for grossly normal IVCM images. The confidence stratification approach was useful for identifying the system's misclassifications. Translational Relevance: The proposed hybrid approach lays important groundwork for the automatic identification of corneal layers in IVCM images.
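The weighted voting strategy is a standard ensemble rule: blend the two base classifiers' probability vectors with fixed weights and take the argmax. A minimal sketch (the equal weighting is illustrative; the paper's tuned weights are not given in the abstract):

```python
import numpy as np

def weighted_vote(prob_a, prob_b, w_a=0.5):
    """Blend two base classifiers' class-probability vectors with weights
    summing to 1; return the fused vector and the predicted class."""
    fused = w_a * np.asarray(prob_a, dtype=float) \
        + (1.0 - w_a) * np.asarray(prob_b, dtype=float)
    return fused, int(np.argmax(fused))

# The CNN leans toward class 0, the KNN toward class 1; the blend decides.
fused, cls = weighted_vote([0.7, 0.3], [0.2, 0.8])  # fused -> [0.45, 0.55], class 1
```

The LightGBM alternative replaces this fixed-weight rule with a learned stacking model over the same base outputs, and the maximum fused probability can double as the confidence used for stratification.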


Assuntos
Córnea , Transtornos da Visão , Humanos , Córnea/diagnóstico por imagem , Transtornos da Visão/patologia , Algoritmos , Microscopia Confocal/métodos , Redes Neurais de Computação
9.
IEEE Trans Med Imaging ; 42(8): 2451-2461, 2023 08.
Article in English | MEDLINE | ID: mdl-37027751

ABSTRACT

Brain tumor segmentation (BTS) in magnetic resonance image (MRI) is crucial for brain tumor diagnosis, cancer management and research purposes. With the great success of the ten-year BraTS challenges as well as the advances of CNN and Transformer algorithms, a lot of outstanding BTS models have been proposed to tackle the difficulties of BTS in different technical aspects. However, existing studies hardly consider how to fuse the multi-modality images in a reasonable manner. In this paper, we leverage the clinical knowledge of how radiologists diagnose brain tumors from multiple MRI modalities and propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS. Instead of directly concatenating all the modalities, we re-organize the input modalities by separating them into two groups according to the imaging principle of MRI. A dual-branch hybrid encoder with the proposed modality-correlated cross-attention block (MCCA) is designed to extract the multi-modality image features. The proposed model inherits the strengths from both Transformer and CNN with the local feature representation ability for precise lesion boundaries and long-range feature extraction for 3D volumetric images. To bridge the gap between Transformer and CNN features, we propose a Trans&CNN Feature Calibration block (TCFC) in the decoder. We compare the proposed model with six CNN-based models and six transformer-based models on the BraTS 2021 challenge dataset. Extensive experiments demonstrate that the proposed model achieves state-of-the-art brain tumor segmentation performance compared with all the competitors.


Assuntos
Neoplasias Encefálicas , Insuficiência Renal Crônica , Humanos , Neoplasias Encefálicas/diagnóstico por imagem , Encéfalo , Algoritmos , Calibragem , Processamento de Imagem Assistida por Computador
10.
IEEE Trans Med Imaging ; 42(6): 1696-1706, 2023 06.
Article in English | MEDLINE | ID: mdl-37018705

ABSTRACT

Ultrasonography is an important routine examination for breast cancer diagnosis, due to its non-invasive, radiation-free, and low-cost properties. However, diagnostic accuracy for breast cancer is still limited by the modality's inherent limitations, so a precise diagnosis using breast ultrasound (BUS) images would be significantly useful. Many learning-based computer-aided diagnostic methods have been proposed for breast cancer diagnosis/lesion classification. However, most of them require a pre-defined region of interest (ROI) and then classify the lesion inside the ROI. Conventional classification backbones, such as VGG16 and ResNet50, can achieve promising classification results with no ROI requirement, but these models lack interpretability, restricting their use in clinical practice. In this study, we propose a novel ROI-free model for breast cancer diagnosis in ultrasound images with interpretable feature representations. We leverage the anatomical prior knowledge that malignant and benign tumors have different spatial relationships between different tissue layers, and propose a HoVer-Transformer to formulate this prior knowledge. The proposed HoVer-Trans block extracts the inter- and intra-layer spatial information horizontally and vertically. We conduct and release an open dataset, GDPH&SYSUCC, for breast cancer diagnosis in BUS. The proposed model is evaluated on three datasets by comparison with four CNN-based models and three vision transformer models via five-fold cross-validation. It achieves state-of-the-art classification performance (GDPH&SYSUCC AUC: 0.924, ACC: 0.893, Spec: 0.836, Sens: 0.926) with the best model interpretability. Meanwhile, our proposed model outperforms two senior sonographers in breast cancer diagnosis when only one BUS image is given (GDPH&SYSUCC-AUC ours: 0.924 vs. reader1: 0.825 vs. reader2: 0.820).


Assuntos
Neoplasias da Mama , Feminino , Humanos , Neoplasias da Mama/diagnóstico por imagem , Ultrassonografia , Ultrassonografia Mamária , Diagnóstico por Computador/métodos
11.
Int Ophthalmol ; 43(7): 2203-2214, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36595127

ABSTRACT

PURPOSE: Fungal keratitis is a common cause of blindness worldwide. Timely identification of the causative fungal genera is essential for clinical management. In vivo confocal microscopy (IVCM) provides useful information on pathogenic genera. This study attempted to apply deep learning (DL) to establish an automated method for identifying pathogenic fungal genera using IVCM images. METHODS: Deep learning networks were trained, validated, and tested using a data set of 3364 IVCM images collected from 100 eyes of 100 patients with culture-proven filamentous fungal keratitis. Two transfer learning approaches were investigated: one was a combined framework that extracted features with a DL network and adopted a decision tree (DT) as the classifier; the other was a fully supervised DL model that used DL-based fully connected layers to implement the classification. RESULTS: The DL classifier model showed better performance than the DT classifier model on an independent testing set. The DL classifier model achieved an area under the receiver operating characteristic curve (AUC) of 0.887 with an accuracy of 0.817, sensitivity of 0.791, specificity of 0.831, G-mean of 0.811, and F1 score of 0.749 in identifying Fusarium, and an AUC of 0.827 with an accuracy of 0.757, sensitivity of 0.756, specificity of 0.759, G-mean of 0.757, and F1 score of 0.716 in identifying Aspergillus. CONCLUSION: The DL model can classify Fusarium and Aspergillus by automatically learning effective features in IVCM images. The automated IVCM image analysis suggests a noninvasive identification of Fusarium and Aspergillus with clear potential application in the early diagnosis and management of fungal keratitis.


Assuntos
Úlcera da Córnea , Infecções Oculares Fúngicas , Ceratite , Humanos , Inteligência Artificial , Úlcera da Córnea/diagnóstico , Ceratite/diagnóstico , Ceratite/microbiologia , Fungos , Infecções Oculares Fúngicas/diagnóstico , Infecções Oculares Fúngicas/microbiologia , Microscopia Confocal/métodos
12.
J Transl Med ; 20(1): 595, 2022 12 14.
Article in English | MEDLINE | ID: mdl-36517832

ABSTRACT

BACKGROUND: Tumor histomorphology analysis plays a crucial role in predicting the prognosis of resectable lung adenocarcinoma (LUAD). Computer-extracted image texture features have been previously shown to be correlated with outcome. However, a comprehensive, quantitative, and interpretable predictor remains to be developed. METHODS: In this multi-center study, we included patients with resectable LUAD from four independent cohorts. An automated pipeline was designed for extracting texture features from the tumor region in hematoxylin and eosin (H&E)-stained whole slide images (WSIs) at multiple magnifications. A multi-scale pathology image texture signature (MPIS) was constructed with the discriminative texture features in terms of overall survival (OS) selected by the LASSO method. The prognostic value of MPIS for OS was evaluated through univariable and multivariable analysis in the discovery set (n = 111) and the three external validation sets (V1, n = 115; V2, n = 116; and V3, n = 246). We constructed a Cox proportional hazards model incorporating clinicopathological variables and MPIS to assess whether MPIS could improve prognostic stratification. We also performed histo-genomics analysis to explore the associations between texture features and biological pathways. RESULTS: A set of eight texture features was selected to construct MPIS. In multivariable analysis, a higher MPIS was associated with significantly worse OS in the discovery set (HR 5.32, 95%CI 1.72-16.44; P = 0.0037) and the three external validation sets (V1: HR 2.63, 95%CI 1.10-6.29, P = 0.0292; V2: HR 2.99, 95%CI 1.34-6.66, P = 0.0075; V3: HR 1.93, 95%CI 1.15-3.23, P = 0.0125). The model that integrated clinicopathological variables and MPIS had better discrimination for OS compared to the clinicopathological variables-based model in the discovery set (C-index, 0.837 vs. 0.798) and the three external validation sets (V1: 0.704 vs. 0.679; V2: 0.728 vs. 0.666; V3: 0.696 vs. 0.669). 
Furthermore, the identified texture features were associated with biological pathways, such as cytokine activity, structural constituent of cytoskeleton, and extracellular matrix structural constituent. CONCLUSIONS: MPIS was an independent prognostic biomarker that was robust and interpretable. Integration of MPIS with clinicopathological variables improved prognostic stratification in resectable LUAD and might help enhance the quality of individualized postoperative care.
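Once the LASSO step has selected features and fixed their coefficients, the MPIS itself is just a linear signature. A minimal sketch (the three features and coefficients below are invented for illustration; the paper selects eight features and does not report coefficients in the abstract):

```python
import numpy as np

def mpis(texture_features, coefficients):
    """Linear risk signature: weighted sum of the LASSO-selected texture
    features; higher values correspond to higher predicted risk."""
    return float(np.dot(texture_features, coefficients))

# Hypothetical 3-feature patient vector and 3 hypothetical LASSO coefficients.
score = mpis([1.2, -0.5, 2.0], [0.4, -0.8, 0.1])  # -> 1.08
high_risk = score > 0.0  # e.g., dichotomize at a cohort-derived cutoff
```

In practice the features would first be standardized with the discovery-cohort statistics, and the continuous score would enter the Cox model alongside the clinicopathological variables.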


Assuntos
Adenocarcinoma de Pulmão , Neoplasias Pulmonares , Humanos , Prognóstico , Estudos Retrospectivos , Modelos de Riscos Proporcionais , Neoplasias Pulmonares/diagnóstico por imagem , Neoplasias Pulmonares/cirurgia
13.
iScience ; 25(12): 105605, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36505920

ABSTRACT

A high abundance of tumor-infiltrating lymphocytes (TILs) has a positive impact on the prognosis of patients with lung adenocarcinoma (LUAD). We aimed to develop and validate an artificial intelligence-driven pathological scoring system for assessing TILs on H&E-stained whole-slide images of LUAD. Deep learning-based methods were applied to calculate the densities of lymphocytes in cancer epithelium (DLCE) and cancer stroma (DLCS), and a risk score (WELL score) was built through linear weighting of DLCE and DLCS. Association between WELL score and patient outcome was explored in 793 patients with stage I-III LUAD in four cohorts. WELL score was an independent prognostic factor for overall survival and disease-free survival in the discovery cohort and validation cohorts. The prognostic prediction model-integrated WELL score demonstrated better discrimination performance than the clinicopathologic model in the four cohorts. This artificial intelligence-based workflow and scoring system could promote risk stratification for patients with resectable LUAD.

14.
Sensors (Basel) ; 22(21)2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36365942

ABSTRACT

Low-illumination images exhibit low brightness, blurry details, and color casts, which produce an unnatural visual experience and degrade downstream visual applications. Data-driven approaches show tremendous potential for brightening images while preserving visual naturalness, but these methods can introduce halo artifacts, noise amplification, over-/under-enhancement, and color deviation. To mitigate these challenging issues, this paper presents a frequency-division and multiscale learning network named FDMLNet, comprising two subnets, DetNet and StruNet. The design first applies a guided filter to separate the high and low frequencies of authentic images; DetNet and StruNet are then developed to process them, respectively, to fully explore their information at different frequencies. In StruNet, a feasible feature extraction module (FFEM), composed of a multiscale learning block (MSL) and a dual-branch channel attention mechanism (DCAM), is injected to promote its multiscale representation ability. In addition, three FFEMs are connected in a new dense connectivity scheme to utilize multilevel features. Extensive quantitative and qualitative experiments on public benchmarks demonstrate that FDMLNet outperforms state-of-the-art approaches, benefiting from its stronger multiscale feature expression and extraction ability.
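The frequency-division step can be illustrated with a simple smoothing filter: the smoothed image is the low-frequency structure layer (StruNet's input), and the residual is the high-frequency detail layer (DetNet's input). The box filter below is a crude stand-in for the edge-preserving guided filter the paper actually uses:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding -- a simple stand-in for the
    guided filter that extracts the low-frequency structure layer."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

img = np.arange(25, dtype=float).reshape(5, 5)
low = box_blur(img)   # structure layer (StruNet input)
high = img - low      # detail/residual layer (DetNet input)
```

By construction the two layers sum back to the original image, so the network's two branches jointly see all of the input's information.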


Assuntos
Algoritmos , Aumento da Imagem , Aumento da Imagem/métodos
16.
IEEE J Biomed Health Inform ; 26(9): 4623-4634, 2022 09.
Article in English | MEDLINE | ID: mdl-35788455

ABSTRACT

Vessel segmentation is critical for disease diagnosis and surgical planning. Recently, the vessel segmentation method based on deep learning has achieved outstanding performance. However, vessel segmentation remains challenging due to thin vessels with low contrast that easily lose spatial information in the traditional U-shaped segmentation network. To alleviate this problem, we propose a novel and straightforward full-resolution network (FR-UNet) that expands horizontally and vertically through a multiresolution convolution interactive mechanism while retaining full image resolution. In FR-UNet, the feature aggregation module integrates multiscale feature maps from adjacent stages to supplement high-level contextual information. The modified residual blocks continuously learn multiresolution representations to obtain a pixel-level accuracy prediction map. Moreover, we propose the dual-threshold iterative algorithm (DTI) to extract weak vessel pixels for improving vessel connectivity. The proposed method was evaluated on retinal vessel datasets (DRIVE, CHASE_DB1, and STARE) and coronary angiography datasets (DCA1 and CHUAC). The results demonstrate that FR-UNet outperforms state-of-the-art methods by achieving the highest Sen, AUC, F1, and IOU on most of the above-mentioned datasets with fewer parameters, and that DTI enhances vessel connectivity while greatly improving sensitivity. The code is available at: https://github.com/lseventeen/FR-UNet.
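The dual-threshold iterative (DTI) idea resembles hysteresis thresholding: keep confidently predicted vessel pixels as seeds and iteratively absorb weakly predicted neighbors, which reconnects thin, low-contrast segments. The sketch below is a generic hysteresis implementation under that reading, not the paper's released code (thresholds are illustrative):

```python
import numpy as np

def dual_threshold_iterative(prob, t_high=0.7, t_low=0.3):
    """Grow from strong vessel pixels (>= t_high) into weak ones (>= t_low)
    through 8-connected neighbors until no pixel is added."""
    strong = prob >= t_high
    weak = prob >= t_low
    out = strong.copy()
    h, w = prob.shape
    while True:
        p = np.pad(out, 1)
        grown = np.zeros_like(out)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                grown |= p[dy:dy + h, dx:dx + w]
        grown &= weak                 # only weak pixels may join
        if (grown == out).all():
            return out
        out = grown

# Weak pixels bridging to a strong seed survive; the isolated weak pixel does not.
vessel = dual_threshold_iterative(np.array([[0.8, 0.4, 0.4, 0.1, 0.4]]))
```

This recovers connectivity that a single hard threshold at 0.7 would destroy, without admitting weak responses that have no confident support.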


Assuntos
Algoritmos , Vasos Retinianos , Angiografia Coronária , Humanos , Processamento de Imagem Assistida por Computador/métodos , Vasos Retinianos/diagnóstico por imagem
17.
Biomed Res Int ; 2022: 7966553, 2022.
Article in English | MEDLINE | ID: mdl-35845926

ABSTRACT

Automatic tissue segmentation in whole-slide images (WSIs) is a critical task for accurate diagnosis and risk stratification of lung cancer in hematoxylin and eosin (H&E)-stained histopathological images. Classifying patches and stitching the classification results enables fast tissue segmentation of WSIs. However, owing to tumour heterogeneity, large intraclass variability and small interclass variability make the classification task challenging. In this paper, we propose a novel bilinear convolutional neural network (Bilinear-CNN)-based model with a bilinear convolutional module and a soft attention module to tackle this problem. This method investigates intraclass semantic correspondence and focuses on the more distinguishable features that make feature output variations relatively large between classes. The performance of the Bilinear-CNN-based model is compared with other state-of-the-art methods on a histopathological classification dataset consisting of 107.7k patches of lung cancer. We further evaluate our proposed algorithm on an additional dataset from colorectal cancer. Extensive experiments show that the performance of our proposed method is superior to previous state-of-the-art methods, and its interpretability is demonstrated by Grad-CAM.
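The classify-then-stitch pipeline reduces, at its last step, to writing each patch's predicted class over that patch's footprint in a coarse label map. A minimal sketch of the stitching step (grid layout and patch size are toy values):

```python
import numpy as np

def stitch_patch_labels(patch_labels, grid_shape, patch_size):
    """Stitch per-patch class predictions into a coarse WSI label map by
    repeating each patch's label over its patch footprint."""
    grid = np.asarray(patch_labels).reshape(grid_shape)
    return np.kron(grid, np.ones((patch_size, patch_size), dtype=grid.dtype))

# A 2x2 grid of patches, each covering 2x2 pixels in the downsampled map.
seg = stitch_patch_labels([0, 1, 2, 3], (2, 2), 2)
```

Because each patch is classified independently, the whole map can be produced with batched inference, which is what makes this route fast compared with dense per-pixel segmentation of a gigapixel WSI.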


Assuntos
Processamento de Imagem Assistida por Computador , Neoplasias Pulmonares , Algoritmos , Atenção , Humanos , Processamento de Imagem Assistida por Computador/métodos , Neoplasias Pulmonares/diagnóstico por imagem , Redes Neurais de Computação
18.
Med Image Anal ; 80: 102481, 2022 08.
Article in English | MEDLINE | ID: mdl-35653901

ABSTRACT

Cells and nuclei deliver massive information about the microenvironment. An automatic nuclei segmentation approach can reduce pathologists' workload and allow precise analysis of the microenvironment for biological and clinical research. Existing deep learning models have achieved outstanding performance under the supervision of a large amount of labeled data. However, when data from an unseen domain arrive, we still have to prepare a certain degree of manual annotation for training on each domain. Unfortunately, obtaining histopathological annotations is extremely difficult: it is highly expertise-dependent and time-consuming. In this paper, we attempt to build a generalized nuclei segmentation model with less data dependency and more generalizability. To this end, we propose a meta multi-task learning (Meta-MTL) model for nuclei segmentation that requires fewer training samples. Model-agnostic meta-learning is applied as the outer optimization algorithm for the segmentation model, and a contour-aware multi-task learning model serves as the inner model. A feature fusion and interaction block (FFIB) is proposed to allow feature communication across both tasks. Extensive experiments prove that our proposed Meta-MTL model improves model generalization and obtains performance comparable with state-of-the-art models using fewer training samples. Our model can also perform fast adaptation on an unseen domain with only a few manual annotations. Code is available at https://github.com/ChuHan89/Meta-MTL4NucleiSegmentation.


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Humans
19.
Eur Radiol ; 32(12): 8213-8225, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35704112

ABSTRACT

OBJECTIVES: To investigate whether breast edema characteristics at preoperative T2-weighted imaging (T2WI) could help evaluate axillary lymph node (ALN) burden in patients with early-stage breast cancer. METHODS: This retrospective study included women with clinical T1 and T2 stage breast cancer and a preoperative MRI examination in two independent cohorts from May 2014 to December 2020. Low (< 3 LNs+) and high (≥ 3 LNs+) pathological ALN (pALN) burden were recorded as the endpoint. The breast edema score (BES) was evaluated at T2WI. Univariable and multivariable analyses were performed with logistic regression. The added predictive value of BES was examined using the area under the curve (AUC), net reclassification improvement (NRI), and integrated discrimination improvement (IDI). RESULTS: A total of 1092 patients were included in this study. BES was identified as an independent predictor of pALN burden in the primary (n = 677) and validation (n = 415) cohorts. When added to MRI-reported ALN status, BES significantly improved prediction of pALN burden (AUC: 0.65 vs 0.71, p < 0.001; IDI = 0.045, p < 0.001; continuous NRI = 0.159, p = 0.050). These results were confirmed in the validation cohort (AUC: 0.64 vs 0.69, p = 0.009; IDI = 0.050, p < 0.001; continuous NRI = 0.213, p = 0.047). Furthermore, BES was positively correlated with biologically invasive clinicopathological factors (p < 0.05). CONCLUSIONS: In individuals with early-stage breast cancer, preoperative MRI characteristics of breast edema could be a promising predictor of pALN burden, which may aid in treatment planning. KEY POINTS: • In this retrospective study of 1092 patients with early-stage breast cancer from two cohorts, the MRI characteristic of breast edema had independent and additive predictive value for assessing axillary lymph node burden.
• Breast edema characteristics at T2WI positively correlated with biologically invasive clinicopathological factors, which may be useful for preoperative diagnosis and treatment planning for individual patients with breast cancer.
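The added-value metrics reported in this abstract (AUC and IDI) have simple closed forms that can be sketched in numpy. The risk scores below are made-up toy data, not values from the study; the functions implement the standard rank-based AUC and the standard IDI definition.

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive case scores higher than a negative one."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()   # ties count half
    return wins / (len(pos) * len(neg))

def idi(p_old, p_new, labels):
    """Integrated discrimination improvement: gain in mean predicted risk
    for events minus the gain in mean predicted risk for non-events."""
    ev, ne = labels == 1, labels == 0
    return (p_new[ev].mean() - p_old[ev].mean()) \
         - (p_new[ne].mean() - p_old[ne].mean())

labels = np.array([1, 1, 1, 0, 0, 0, 0])                  # toy outcomes
p_old = np.array([0.6, 0.4, 0.5, 0.5, 0.3, 0.2, 0.4])     # base model risks
p_new = np.array([0.7, 0.6, 0.6, 0.4, 0.2, 0.2, 0.3])     # base model + BES
print(round(auc(p_old, labels), 3), round(auc(p_new, labels), 3))
print(round(idi(p_old, p_new, labels), 3))
```

A positive IDI means the extended model pushes predicted risks up for events and down for non-events, which is exactly the pattern the study reports after adding BES.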


Subjects
Breast Diseases , Breast Neoplasms , Humans , Female , Retrospective Studies , Breast Neoplasms/complications , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Lymphatic Metastasis/pathology , Axilla/pathology , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Breast Diseases/pathology , Magnetic Resonance Imaging/methods , Edema/diagnostic imaging , Edema/pathology
20.
Med Image Anal ; 80: 102487, 2022 08.
Article in English | MEDLINE | ID: mdl-35671591

ABSTRACT

Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have achieved outstanding performance with dense pixel-level annotations, but drawing such labels on giga-pixel whole slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, greatly reducing the annotation effort. We propose a two-step model consisting of a classification phase and a segmentation phase. In the classification phase, a CAM-based model generates pseudo masks from patch-level labels. In the segmentation phase, tissue semantic segmentation is achieved by our proposed Multi-Layer Pseudo-Supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we also introduce a new weakly supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct experiments on two datasets, where our model outperforms five state-of-the-art WSSS approaches and achieves quantitative and qualitative results comparable to the fully supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual pixel-level labeling on 100 randomly sampled patches, patch-level labeling reduces the annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
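The pseudo-mask generation step in the classification phase can be sketched with numpy. This follows the standard class activation mapping (CAM) recipe; the shapes, threshold, and variable names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def cam_pseudo_mask(feature_maps, class_weights, threshold=0.5):
    """Class activation map from a patch-level classifier: weight each
    feature channel by the classifier weight for the target class, sum
    over channels, normalize to [0, 1], and threshold into a binary
    pseudo segmentation mask."""
    # feature_maps: (channels, H, W) from the last conv layer
    # class_weights: (channels,) final-layer weights for the target class
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                  # keep positive evidence only
    cam = cam / (cam.max() + 1e-12)             # normalize to [0, 1]
    return (cam >= threshold).astype(np.uint8)  # binary pseudo mask

rng = np.random.default_rng(1)
feats = rng.random((16, 7, 7))   # toy conv features for one patch
w_cls = rng.random(16)           # toy classifier weights for one tissue class
mask = cam_pseudo_mask(feats, w_cls)
print(mask.shape, mask.dtype)    # (7, 7) uint8
```

In the weakly supervised setting, masks produced this way (upsampled to patch resolution) stand in for the missing pixel-level labels when training the segmentation network.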


Subjects
Image Processing, Computer-Assisted , Supervised Machine Learning , Humans , Image Processing, Computer-Assisted/methods , Semantics