Results 1 - 20 of 38
1.
J Mol Biol ; 436(12): 168610, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38754773

ABSTRACT

Proteins are the executors of organismal functions, and the transition from RNA to protein is subject to post-transcriptional regulation; considering both RNA and surface protein expression simultaneously can therefore provide additional evidence of biological processes. Cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) can measure both RNA and protein expression in single cells, but these experiments are expensive and time-consuming. Given the lack of computational tools for predicting surface proteins, we used datasets obtained with CITE-seq technology to design a deep generative prediction method based on diffusion models and to draw biological discoveries from the prediction results. Our method, scDM, predicts protein expression values from the RNA expression values of individual cells; it encodes the data into the model in a novel way and generates predicted samples by adding Gaussian noise and then gradually removing it, learning the data distribution during the modelling process. Comprehensive evaluation across different datasets demonstrated that our predictions yielded satisfactory results and further demonstrated the effectiveness of incorporating information from single-cell multiomics data into diffusion models for biological studies. We also found that jointly analysing the predicted surface protein expression and cancer cell drug scores could provide new directions for discovering therapeutic drug targets.
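
The add-noise-then-denoise mechanism described here is the standard diffusion-model recipe; a minimal NumPy sketch of the forward noising and one reverse (denoising) step follows. All names are illustrative, and the true noise stands in for scDM's learned noise predictor, whose details the abstract does not give:

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0): scale the clean signal and add Gaussian noise."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

def reverse_step(xt, t, eps_pred, alpha, alpha_bar, rng):
    """One ancestral sampling step of the reverse (denoising) process."""
    coef = (1.0 - alpha[t]) / np.sqrt(1.0 - alpha_bar[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alpha[t])
    if t > 0:
        sigma = np.sqrt(1.0 - alpha[t])  # simple variance choice
        return mean + sigma * rng.standard_normal(xt.shape)
    return mean

# Linear noise schedule over T steps.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)            # stand-in for a cell's protein vector
xt, eps = forward_diffuse(x0, T - 1, alpha_bars, rng)
# With the true noise used as a perfect "prediction", one step moves xt back toward x0.
x_prev = reverse_step(xt, T - 1, eps, alphas, alpha_bars, rng)
```

In the actual method, a network trained on CITE-seq RNA/protein pairs would supply `eps_pred`; the sketch only shows the update rules.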

2.
Phys Med Biol ; 69(11)2024 May 14.
Article in English | MEDLINE | ID: mdl-38636502

ABSTRACT

Medical image segmentation is a crucial field of computer vision. Obtaining correct pathological areas can help clinicians analyze patient conditions more precisely. We have observed that both CNN-based and attention-based neural networks often produce rough segmentation results around the edges of the regions of interest, which significantly impacts the accuracy of delineating the pathological areas. Without altering the original data or model architecture, further refining the initial segmentation outcomes can effectively address this issue and lead to more satisfactory results. Recently, diffusion models have demonstrated outstanding results in image generation, showcasing their powerful ability to model distributions. We believe this ability can greatly enhance the accuracy of the reshaped results. This research proposes ERSegDiff, a neural network based on the diffusion model for reshaping segmentation borders. The diffusion model is trained to fit the distribution of the target edge area and is then used to modify the segmentation edge to produce more accurate segmentation results. By incorporating prior knowledge into the diffusion model, we can help it more accurately model the edge probability distribution of the samples. Moreover, we introduce the edge concern module, which leverages attention mechanisms to produce feature weights and further refine the segmentation outcomes. To validate our approach, we employed the COVID-19 and ISIC-2018 datasets for lung segmentation and skin cancer segmentation tasks, respectively. Compared with the baseline model, ERSegDiff improved the Dice score by 3-4% and 2-4%, respectively, and achieved state-of-the-art scores compared with several mainstream neural networks, such as Swin UNETR.


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Diffusion , COVID-19/diagnostic imaging
3.
Med Biol Eng Comput ; 62(5): 1427-1440, 2024 May.
Article in English | MEDLINE | ID: mdl-38233683

ABSTRACT

In recent years, predicting gene mutations on whole slide imaging (WSI) has gained prominence. The primary challenge is extracting global information and achieving unbiased semantic aggregation. To address this challenge, we propose a novel Transformer-based aggregation model, employing a self-learning weight aggregation mechanism to mitigate semantic bias caused by the abundance of features in WSI. Additionally, we adopt a random patch training method, which enhances model learning richness by randomly extracting feature vectors from WSI, thus addressing the issue of limited data. To demonstrate the model's effectiveness in predicting gene mutations, we leverage the lung adenocarcinoma dataset from Shandong Provincial Hospital for prior knowledge learning. Subsequently, we assess TP53, CSMD3, LRP1B, and TTN gene mutations using lung adenocarcinoma tissue pathology images and clinical data from The Cancer Genome Atlas (TCGA). The results indicate a notable increase in the AUC (Area Under the ROC Curve) value, averaging 4%, attesting to the model's performance improvement. Our research offers an efficient model to explore the correlation between pathological image features and molecular characteristics in lung adenocarcinoma patients. This model introduces a novel approach to clinical genetic testing, expected to enhance the efficiency of identifying molecular features and genetic testing in lung adenocarcinoma patients, ultimately providing more accurate and reliable results for related studies.
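
The random-patch training and self-learning weight aggregation described above can be illustrated with a small NumPy sketch: sample a random subset of patch feature vectors from a slide, score each with a learned query, and pool with softmax weights. The query vector `w` is a random stand-in for a learned parameter; nothing here is taken from the paper's code:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_random_patches(patch_feats, n_sample, w, rng):
    """Randomly sample patch feature vectors from a WSI, then pool them with
    self-learned attention weights so no single region dominates the
    slide-level representation."""
    idx = rng.choice(len(patch_feats), size=n_sample, replace=False)
    sampled = patch_feats[idx]          # (n_sample, d)
    weights = softmax(sampled @ w)      # one normalized weight per patch
    return weights @ sampled            # (d,) slide-level embedding

rng = np.random.default_rng(1)
feats = rng.standard_normal((500, 64))  # 500 patch embeddings, d = 64
w = rng.standard_normal(64)             # stand-in for a learned query vector
slide_vec = aggregate_random_patches(feats, 100, w, rng)
```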


Subjects
Adenocarcinoma of Lung , Adenocarcinoma , Lung Neoplasms , Humans , Adenocarcinoma of Lung/genetics , Mutation/genetics , Adenocarcinoma/genetics , Electric Power Supplies , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/genetics
4.
IEEE J Biomed Health Inform ; 28(3): 1587-1598, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38215328

ABSTRACT

Accurate segmentation of brain tumors in MRI images is imperative for precise clinical diagnosis and treatment. However, existing medical image segmentation methods exhibit errors that can be categorized into two types: random errors and systematic errors. Random errors, arising from various unpredictable effects, pose challenges in terms of detection and correction. Conversely, systematic errors, attributable to systematic effects, can be effectively addressed through machine learning techniques. In this paper, we propose a corrective diffusion model for accurate MRI brain tumor segmentation that corrects systematic errors. This marks the first application of the diffusion model to correcting systematic segmentation errors. Additionally, we introduce the Vector Quantized Variational Autoencoder (VQ-VAE) to compress the original data into a discrete codebook. This not only reduces the dimensionality of the training data but also enhances the stability of the corrective diffusion model. Furthermore, we propose the Multi-Fusion Attention Mechanism, which effectively enhances segmentation performance on brain tumor images and improves the flexibility and reliability of the corrective diffusion model. Our model is evaluated on the BRATS2019, BRATS2020, and Jun Cheng datasets. Experimental results demonstrate the effectiveness of our model over state-of-the-art methods in brain tumor segmentation.
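
The VQ-VAE's role here is to map continuous latents onto a discrete codebook. The core quantization step, replacing each latent with its nearest codebook entry, can be sketched in NumPy (codebook size and dimensions are arbitrary, not taken from the paper):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Replace each latent vector with its nearest codebook entry (the
    discrete bottleneck of a VQ-VAE). Returns the quantized latents and
    their integer code indices."""
    # Squared distances between every latent and every codebook vector.
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d.argmin(axis=1)
    return codebook[codes], codes

rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 16))  # 512-entry discrete codebook
z = rng.standard_normal((10, 16))          # encoder outputs for 10 positions
zq, codes = vector_quantize(z, codebook)
```

During training, the codebook itself is updated and a straight-through estimator carries gradients past the non-differentiable `argmin`; the sketch shows only the lookup.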


Subjects
Brain Neoplasms , Image Processing, Computer-Assisted , Humans , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Algorithms , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging
5.
Med Phys ; 51(2): 1178-1189, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37528654

ABSTRACT

BACKGROUND: Accurate medical image segmentation is crucial for disease diagnosis and surgical planning. Transformer networks offer a promising alternative for medical image segmentation because they can learn global features through self-attention mechanisms. To further enhance performance, many researchers have incorporated more Transformer layers into their models. However, this approach often increases the number of model parameters significantly, causing a potential rise in complexity. Moreover, medical image segmentation datasets usually have few samples, which raises the risk of model overfitting. PURPOSE: This paper aims to design a medical image segmentation model that has fewer parameters and can effectively alleviate overfitting. METHODS: We design a MultiIB-Transformer structure consisting of a single Transformer layer and multiple information bottleneck (IB) blocks. The Transformer layer is used to capture long-distance spatial relationships and extract global feature information. The IB block is used to compress noise and improve model robustness. The advantage of this structure is that it needs only one Transformer layer to achieve state-of-the-art (SOTA) performance, significantly reducing the number of model parameters. In addition, we designed a new skip connection structure that needs only two 1×1 convolutions, so the high-resolution feature map can effectively carry both semantic and spatial information, thereby alleviating the semantic gap. RESULTS: On the Breast Ultrasound Images (BUSI) dataset, the proposed model achieves IoU and F1 scores of 67.75 and 87.78. On the Synapse multi-organ segmentation dataset, the parameter count (Param), Hausdorff Distance (HD), and Dice Similarity Coefficient (DSC) are 22.30, 20.04, and 81.83, respectively. CONCLUSIONS: Our proposed model (MultiIB-TransUNet) achieved superior results with fewer parameters compared to other models.
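
Because a 1×1 convolution is just a per-pixel linear map over channels, the two-convolution skip connection described above can be sketched in a few lines of NumPy. Channel and spatial sizes here are invented for illustration; the exact fusion rule (summation) is an assumption, since the abstract does not specify it:

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution as a per-pixel linear map over channels:
    x is (C_in, H, W), w is (C_out, C_in)."""
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def skip_connection(enc_feat, dec_feat, w1, w2):
    """Project the encoder (spatial) and decoder (semantic) features with one
    1x1 convolution each, then combine them so the high-resolution map
    carries both kinds of information."""
    return conv1x1(enc_feat, w1) + conv1x1(dec_feat, w2)

rng = np.random.default_rng(0)
enc = rng.standard_normal((32, 56, 56))    # encoder features
dec = rng.standard_normal((64, 56, 56))    # upsampled decoder features
w1 = rng.standard_normal((64, 32)) * 0.1
w2 = rng.standard_normal((64, 64)) * 0.1
fused = skip_connection(enc, dec, w1, w2)
```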


Subjects
Learning , Ultrasonography, Mammary , Female , Humans , Ultrasonography , Research Personnel , Tomography, X-Ray Computed , Image Processing, Computer-Assisted
6.
Med Biol Eng Comput ; 62(3): 901-912, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38087041

ABSTRACT

Breast cancer pathological image segmentation (BCPIS) holds significant value in assisting physicians with quantifying tumor regions and providing treatment guidance. However, achieving fine-grained semantic segmentation remains a major challenge for this technology. The complex and diverse morphologies of breast cancer tissue structures result in high costs for manual annotation, thereby limiting the sample size and annotation quality of the dataset. These practical issues have a significant impact on the segmentation performance. To overcome these challenges, this study proposes a semi-supervised learning model based on classification-guided segmentation. The model first utilizes a multi-scale convolutional network to extract rich semantic information and then employs a multi-expert cross-layer joint learning strategy, integrating a small number of labeled samples to iteratively provide the model with class-generated multi-cue pseudo-labels and real labels. Given the complexity of the breast cancer samples and the limited sample quantity, an innovative approach of augmenting additional unlabeled data was adopted to overcome this limitation. Experimental results demonstrate that, although the proposed model falls slightly behind supervised segmentation models, it still exhibits significant progress and innovation. The semi-supervised model in this study achieves outstanding performance, with an IoU (Intersection over Union) value of 71.53%. Compared to other semi-supervised methods, the model developed in this study demonstrates a performance advantage of approximately 3%. Furthermore, the research findings indicate a significant correlation between the classification and segmentation tasks in breast cancer pathological images, and the guidance of a multi-expert system can significantly enhance the fine-grained effects of semi-supervised semantic segmentation.


Subjects
Neoplasms , Physicians , Humans , Expert Systems , Semantics , Supervised Machine Learning , Image Processing, Computer-Assisted
7.
Neurocomputing (Amst) ; 544: None, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37528990

ABSTRACT

Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has a potential for filling this gap and achieving high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, which leverages a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that predicts a refined segmentation and error in the coarse segmentation simultaneously, where a consistency between these two predictions is introduced for regularization. Our TISS-Net was validated with two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for Vestibular Schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.

8.
Front Public Health ; 11: 1118628, 2023.
Article in English | MEDLINE | ID: mdl-36817881

ABSTRACT

Introduction: Modifiable lifestyle factors are considered key to the control of cardiometabolic diseases. This study aimed to explore the association between multiple lifestyle factors and cardiometabolic multimorbidity. Methods: A total of 14,968 participants were included in this cross-sectional exploratory study (mean age 54.33 years, range 45-91; 49.6% male). Pearson's Chi-square test, logistic regression, and latent class analysis were employed. Results: We found that men with 4-5 high-risk lifestyle factors had a 2.54-fold higher risk (95% CI: 1.60-4.04) of developing multimorbidity compared to men with zero high-risk lifestyle factors. In an analysis of dietary behavior, we found that over-eating (OR = 1.94, P < 0.001) and intra-meal water drinking (OR = 2.15, P < 0.001) contributed more strongly to the development of cardiometabolic multimorbidity in women than in men. In an analysis of taste preferences, men appeared more sensitive to the association between taste preferences and cardiometabolic multimorbidity risk, particularly for smoky (OR = 1.71, P < 0.001), hot (OR = 1.62, P < 0.001), and spicy (OR = 1.38, P < 0.001) tastes. Furthermore, "smoking and physical activity" and "physical activity and alcohol consumption" were men's most common high-risk lifestyle patterns, while "physical activity and dietary intake" was women's most common high-risk lifestyle pattern. A total of four common high-risk dietary behavior patterns were found in both males and females. Conclusions: This research reveals that the likelihood of cardiometabolic multimorbidity increases as high-risk lifestyle factors accumulate. Taste preferences and unhealthy dietary behaviors were found to be associated with an increased risk of developing cardiometabolic multimorbidity, and this association differed between genders. Several common lifestyle and dietary behavior patterns suggest that patients with cardiometabolic multimorbidity may achieve better health outcomes if those with certain high-risk lifestyle patterns are identified and managed.


Subjects
Cardiovascular Diseases , Multimorbidity , Humans , Male , Female , Middle Aged , Aged , Aged, 80 and over , Risk Factors , Cross-Sectional Studies , Cardiovascular Diseases/etiology , Life Style
9.
Med Image Anal ; 83: 102687, 2023 01.
Article in English | MEDLINE | ID: mdl-36436356

ABSTRACT

Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can be an abnormality depicted on breast images (e.g., mammography or breast ultrasonography). However, reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is transforming breast imaging diagnosis by introducing a second opinion for physicians. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black-box nature, which means that domain experts cannot understand why the algorithms predict a label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) to accomplish simultaneous breast tumor classification and segmentation. MIB-Net maximizes the mutual information between the latent representations and class labels while minimizing information shared by the latent representations and inputs. In contrast to existing models, our MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net implements multitask learning and further proposes a dual prior knowledge guidance strategy to enhance deep task correlation. Our evaluations are carried out on three breast image datasets in different modalities. Our results show that the proposed framework is not only able to help physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
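
The stated objective, maximizing I(Z;Y) while minimizing I(Z;X), is commonly realized with a variational bound: a cross-entropy term that keeps the latent predictive of the label plus a KL-to-prior penalty that limits how much of the input the latent encodes. The NumPy sketch below shows that generic variational IB loss, not MIB-Net's exact formulation (all shapes and the beta value are illustrative):

```python
import numpy as np

def vib_loss(logits, labels, mu, log_var, beta):
    """Variational information-bottleneck objective:
    cross-entropy (proxy for maximizing I(Z;Y)) plus a
    KL( N(mu, sigma^2) || N(0, I) ) penalty (bound on I(Z;X))."""
    # Numerically stable log-softmax cross-entropy over classes.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels].mean()
    # Closed-form KL divergence to a standard normal, averaged over samples.
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()
    return ce + beta * kl

rng = np.random.default_rng(0)
loss = vib_loss(rng.standard_normal((8, 2)),      # class logits
                rng.integers(0, 2, 8),            # labels
                rng.standard_normal((8, 16)) * 0.1,  # latent means
                np.zeros((8, 16)),                # latent log-variances
                beta=1e-3)
```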


Subjects
Breast Neoplasms , Female , Humans , Breast Neoplasms/diagnostic imaging
10.
Diagnostics (Basel) ; 12(12)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36553140

ABSTRACT

In computer-aided diagnosis methods for breast cancer, deep learning has been shown to be an effective way to distinguish whether lesions are present in tissues. However, traditional methods only classify masses as benign or malignant according to their presence or absence, without considering the contextual features between them and their adjacent tissues. Furthermore, for contrast-enhanced spectral mammography (CESM), existing studies have only performed feature extraction on a single image per breast. In this paper, we propose a multi-input deep learning network for automatic breast cancer classification. Specifically, we simultaneously input four images of each breast with different feature information into the network. Then, we process the feature maps in both horizontal and vertical directions, preserving the pixel-level contextual information within the neighborhood of the tumor during the pooling operation. Furthermore, we design a novel loss function according to information bottleneck theory to optimize our multi-input network and ensure that the common information in the multiple input images can be fully utilized. Our experiments on 488 images (256 benign and 232 malignant) from 122 patients show that the method's accuracy, precision, sensitivity, specificity, and F1-score are 0.8806, 0.8803, 0.8810, 0.8801, and 0.8806, respectively. The qualitative, quantitative, and ablation experiment results show that our method significantly improves the accuracy of breast cancer classification and reduces the false positive rate of diagnosis. It can reduce misdiagnosis rates and unnecessary biopsies, helping doctors determine accurate clinical diagnoses of breast cancer from multiple CESM images.

11.
Front Oncol ; 12: 943874, 2022.
Article in English | MEDLINE | ID: mdl-36568197

ABSTRACT

Introduction: Breast cancer is a heterogeneous tumor. The tumor microenvironment (TME) has an important effect on the proliferation, metastasis, treatment, and prognosis of breast cancer. Methods: In this study, we calculated the relative proportions of tumor-infiltrating immune cells (TIICs) in the breast cancer TME and used a consensus clustering algorithm to cluster the breast cancer subtypes. We also developed a multi-layer perceptron (MLP) classifier based on a deep learning framework to detect breast cancer subtypes, with 70% of the breast cancer research cohort used for model training and 30% for validation. Results: By performing the K-means clustering algorithm, the research cohort was clustered into two subtypes. Kaplan-Meier survival estimate analysis showed significant differences in overall survival (OS) between the two identified subtypes. Estimating the difference in the relative proportions of TIICs showed that the two subtypes differed significantly in multiple immune cells, such as CD8, CD4, and regulatory T cells. Further, the expression levels of immune checkpoint molecules (PDL1, CTLA4, LAG3, TIGIT, CD27, IDO1, ICOS) and tumor mutational burden (TMB) also showed significant differences between the two subtypes, indicating their clinical value. Finally, we identified a 38-gene signature and developed a multilayer perceptron (MLP) classifier that combined the multi-gene signature to identify breast cancer subtypes. The results showed that the classifier had an accuracy of 93.56% and can be robustly used for breast cancer subtype diagnosis. Conclusion: Identification of breast cancer subtypes based on the immune signature in the tumor microenvironment can assist clinicians to effectively and accurately assess the progression of breast cancer and formulate different treatment strategies for different subtypes.
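
The subtype-discovery step rests on K-means clustering of immune-cell profiles; a compact NumPy sketch of plain K-means follows (the data are synthetic and the dimensions arbitrary, standing in for per-sample TIIC proportion vectors):

```python
import numpy as np

def kmeans(x, k, iters, rng):
    """Plain K-means: assign each sample to the nearest centroid, then move
    each centroid to the mean of its assigned samples."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
# Two synthetic "subtypes" of 30 samples each in a 5-dimensional feature space.
x = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(5, 1, (30, 5))])
labels, centers = kmeans(x, k=2, iters=10, rng=rng)
```

Consensus clustering, as used in the paper, repeats such clustering over resampled data and aggregates the co-assignment frequencies; the sketch shows only the base step.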

12.
Front Physiol ; 13: 946099, 2022.
Article in English | MEDLINE | ID: mdl-36035486

ABSTRACT

Quantitative estimation of growth patterns is important for the diagnosis of lung adenocarcinoma and prediction of prognosis. However, the growth patterns of lung adenocarcinoma tissue depend strongly on the spatial organization of cells. Deep learning for lung tumor histopathological image analysis often uses convolutional neural networks to automatically extract features, ignoring this spatial relationship. In this paper, a novel fully automated framework is proposed for growth pattern evaluation in lung adenocarcinoma. Specifically, the proposed method uses graph convolutional networks to extract cell structural features; that is, cells are extracted and graph structures are constructed from histopathological image data that lack an inherent graph structure. A deep neural network is then used to extract the global semantic features of histopathological images to complement the cell structural features obtained in the previous step. Finally, the structural and semantic features are fused to achieve growth pattern prediction. Experimental studies on several datasets validate our design, demonstrating that methods based on the spatial organization of cells are appropriate for the analysis of growth patterns.
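
Constructing a graph from image data that has no inherent graph structure typically means connecting each detected cell centroid to its nearest neighbours. A NumPy sketch of that step follows (k and the coordinates are arbitrary; the paper's actual graph construction rule is not given in the abstract):

```python
import numpy as np

def build_cell_graph(centroids, k):
    """Connect each detected cell to its k nearest neighbours, turning the
    spatial organization of cells into an adjacency matrix a GCN can consume."""
    d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # no self-loops
    adj = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k closest cells
    for i, nbrs in enumerate(nn):
        adj[i, nbrs] = 1.0
    return np.maximum(adj, adj.T)          # symmetrize: undirected graph

rng = np.random.default_rng(0)
cells = rng.uniform(0, 1000, size=(50, 2))  # 50 cell centroids in a patch
A = build_cell_graph(cells, k=5)
```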

13.
Int J Comput Assist Radiol Surg ; 17(4): 639-648, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35149953

ABSTRACT

PURPOSE: Micropapillary adenocarcinoma is a distinctive histological subtype of lung adenocarcinoma with poor prognosis. Computer-aided diagnosis methods have the potential to help with its early diagnosis, but the implementation of existing methods largely relies on massive amounts of manually labeled data and consumes a lot of time and energy. To tackle these problems, we propose a framework that applies a semi-supervised learning method to detect micropapillary adenocarcinoma, aiming to make better use of labeled and unlabeled data. METHODS: The framework consists of a teacher model and a student model. The teacher model is first obtained using the labeled data. Then, it makes predictions on unlabeled data as pseudo-labels for the student. Finally, high-quality pseudo-labels are selected and combined with the labeled data to train the student model. During the learning process of the student model, augmentation is added so that the student model generalizes better than the teacher model. RESULTS: Experiments were conducted on our own whole-slide micropapillary lung adenocarcinoma histopathology image dataset, from which we selected 3527 patches. In supervised learning, our detector achieves a precision of 0.762 and recall of 0.884. In semi-supervised learning, our method achieves a precision of 0.775 and recall of 0.896, which is superior to other methods. CONCLUSION: We proposed a semi-supervised learning framework for micropapillary adenocarcinoma detection that makes better use of both labeled and unlabeled data. In addition, the detector we designed improves detection accuracy and speed and achieves promising results in detecting micropapillary adenocarcinoma.
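
The teacher-student selection of high-quality pseudo-labels can be sketched as a simple confidence filter over the teacher's softmax outputs. The threshold here is illustrative; the abstract does not state the paper's actual selection rule:

```python
import numpy as np

def select_pseudo_labels(teacher_probs, threshold):
    """Keep only unlabeled samples whose teacher prediction is confident;
    these high-quality pseudo-labels are merged with the labeled set to
    train the student."""
    conf = teacher_probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), teacher_probs.argmax(axis=1)[keep]

rng = np.random.default_rng(0)
# Synthetic teacher softmax outputs for 200 unlabeled patches, 2 classes.
probs = rng.dirichlet(np.ones(2) * 0.5, size=200)
idx, labels = select_pseudo_labels(probs, threshold=0.9)
```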


Subjects
Adenocarcinoma of Lung , Lung Neoplasms , Adenocarcinoma of Lung/diagnosis , Diagnosis, Computer-Assisted , Humans , Lung Neoplasms/diagnostic imaging , Research Design , Supervised Machine Learning
14.
IEEE/ACM Trans Comput Biol Bioinform ; 19(6): 3272-3280, 2022.
Article in English | MEDLINE | ID: mdl-34559661

ABSTRACT

T-cell epitope prediction has long been a challenge in immunoinformatics and bioinformatics. Studying the specific recognition between T-cell receptor (TCR) and peptide-major histocompatibility complex (p-MHC) complexes can help us better understand the immune mechanism, and it also makes a significant contribution to developing vaccines and targeted drugs. Meanwhile, more advanced methods are needed for distinguishing TCRs that bind different epitopes. In this paper, we introduce a hybrid model composed of bidirectional long short-term memory networks (BiLSTM), attention, and convolutional neural networks (CNN) that can identify the binding of TCRs to epitopes. The BiLSTM can more completely extract forward and backward amino acid information in the sequence, the attention mechanism can focus on amino acids at certain positions in complex sequences to capture the most important features, and the CNN is then used to further extract salient features to predict TCR-epitope binding. On the McPAS dataset, the AUC (area under the ROC curve) is 0.974 for naive TCR-epitope binding and 0.887 for specific TCR-epitope binding. The model achieves better prediction results than other existing models (TCRGP, ERGO, NetTCR), and several experiments are used to analyze its advantages. The algorithm is available at https://github.com/bijingshu/BiAttCNN.git.
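
The attention step, scoring each position of the BiLSTM output and pooling with the resulting weights, can be sketched in NumPy. The hidden states and query vector below are random stand-ins for learned quantities; this illustrates the mechanism, not the paper's exact architecture:

```python
import numpy as np

def attention_pool(h, w):
    """Score each per-residue hidden state h[i] (shape (L, d)) against a
    learned query w, softmax the scores, and return the weighted sum, so
    the model focuses on the most informative amino acid positions."""
    scores = h @ w
    scores = scores - scores.max()          # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()
    return a @ h, a                         # context vector and weights

rng = np.random.default_rng(0)
h = rng.standard_normal((12, 32))   # BiLSTM outputs for a 12-residue peptide
w = rng.standard_normal(32)         # stand-in for a learned query vector
context, attn = attention_pool(h, w)
```

In the full model, the context (or reweighted sequence) would then be passed to the CNN for further feature extraction.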


Subjects
Peptides , Receptors, Antigen, T-Cell , Receptors, Antigen, T-Cell/metabolism , Epitopes, T-Lymphocyte/chemistry , Neural Networks, Computer , Algorithms
15.
Med Phys ; 49(2): 966-977, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34860417

ABSTRACT

PURPOSE: Contrast-enhanced spectral mammography (CESM) is an effective tool for diagnosing breast cancer with the benefit of its multiple types of images. However, few models simultaneously utilize this feature in deep learning-based breast cancer classification methods. To combine multiple features of CESM and thus aid physicians in making accurate diagnoses, we propose a hybrid approach that takes advantage of both fusion and classification models. METHODS: We evaluated the proposed method on a CESM dataset obtained from 95 patients aged 21 to 74 years, with a total of 760 images. The framework consists of two main parts: a generative adversarial network-based image fusion module and a Res2Net-based classification module. The aim of the fusion module is to generate a fused image that combines the characteristics of dual-energy subtracted (DES) and low-energy (LE) images, and the classification module is developed to classify the fused image as benign or malignant. RESULTS: Based on the experimental results, the fused images contained complementary information from both image types (DES and LE), and the classification model achieved accurate results. In terms of quantitative indicators, the entropy of the fused images was 2.63, and the classification model achieved an accuracy of 94.784%, precision of 95.016%, recall of 95.912%, specificity of 0.945, F1_score of 0.955, and area under the curve of 0.947 on the test dataset. CONCLUSIONS: We conducted extensive comparative experiments and analyses on our in-house dataset and demonstrated that our method produces promising results in the fusion of CESM images and is more accurate than state-of-the-art methods in the classification of fused CESM images.
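
Shannon entropy of the intensity histogram is the fusion-quality indicator quoted above (2.63 for the fused images). A small NumPy sketch shows how such a value is computed (the random test image is synthetic; a fused CESM image would be passed in the same way):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's intensity histogram,
    a common proxy for how much information a fused image retains."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins: 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128))  # synthetic 8-bit image
h = image_entropy(img)
```

A uniformly random 8-bit image approaches the 8-bit maximum of 8 bits; real medical images score lower because their histograms are concentrated.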


Subjects
Breast Neoplasms , Contrast Media , Adult , Aged , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Mammography , Middle Aged , Young Adult
16.
Front Oncol ; 12: 1044026, 2022.
Article in English | MEDLINE | ID: mdl-36698401

ABSTRACT

Introduction: Manual inspection of histopathological images is important in clinical cancer diagnosis. Pathologists implement pathological diagnosis and prognostic evaluation through microscopic examination of histopathological slices, an entire process that is time-consuming, laborious, and challenging. The modern use of whole-slide imaging, which scans histopathology slides into digital slices, together with analysis using computer-aided diagnosis, addresses an essential problem. Methods: To address the difficulty of labeling histopathological data and improve the flexibility of histopathological analysis in clinical applications, we herein propose a semi-supervised learning algorithm coupled with a consistency regularization strategy, called "Semi-supervised Histopathology Analysis Network" (Semi-His-Net), for automated normal-versus-tumor and subtype classifications. Specifically, when given disturbed versions of the same image, the model should predict similar outputs. Based on this, the model itself can assign artificial labels to unlabeled data for subsequent training, thereby effectively reducing the amount of labeled data required. Results: Semi-His-Net classifies patches from breast cancer histopathological images into normal tissue and three different tumor subtypes, achieving an accuracy of 90%. The average AUC of cross-classification between tumors reached 0.893. Discussion: To overcome the limitations of visual inspection of histopathology images by pathologists, such as long turnaround time and low repeatability, we have developed a deep learning-based framework (Semi-His-Net) for automatic classification of the subtypes contained in whole pathological images. This learning-based framework has great potential to improve the efficiency and repeatability of histopathological image diagnosis.
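
Consistency regularization penalizes disagreement between the model's predictions on two perturbed views of the same image. A minimal NumPy sketch follows; the mean-squared-error form is one common choice, and the paper's exact loss is not given in the abstract:

```python
import numpy as np

def consistency_loss(p_a, p_b):
    """Mean squared difference between class-probability predictions on two
    disturbed versions of the same image; driving it to zero enforces the
    'similar outputs for disturbed inputs' constraint."""
    return float(((p_a - p_b) ** 2).mean())

rng = np.random.default_rng(0)
# Synthetic softmax outputs for 16 patches over 4 classes (weak augmentation),
# and a slightly perturbed copy standing in for the strong-augmentation view.
p_weak = rng.dirichlet(np.ones(4), size=16)
p_strong = p_weak + rng.normal(0, 0.01, p_weak.shape)
loss = consistency_loss(p_weak, p_strong)
```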

17.
Biochem Biophys Res Commun ; 560: 199-204, 2021 06 30.
Article in English | MEDLINE | ID: mdl-34000469

ABSTRACT

The specific identification and elimination of cancer cells has been a great challenge in the past few decades. In this study, the circular dichroism (CD) of cells was measured by a self-designed system using a folate-conjugated chiral nano-sensor. A novel method was established to distinguish cancer cells from normal cells according to the chirality of the cells, based on their CD signals. After a period of interaction between the nano-sensor and the cells, a sharp weakening of CD signals was induced in cancer cells, while those of normal cells remained unchanged. The biocompatibility of the nano-sensor was evaluated, and the results showed that it exhibited significant cytotoxic activity against cancer cells while causing no obvious damage to normal cells. Notably, the research indicated that the nano-sensor may selectively cause apoptosis in cancer cells and thus has the potential to act as an antitumor agent.


Subjects
Cadmium Compounds , Neoplasms/therapy , Quantum Dots/chemistry , Sulfides , Tellurium , Apoptosis , Breast Neoplasms/therapy , Cell Line, Tumor , Circular Dichroism , Female , Folic Acid , Humans
18.
Int J Comput Assist Radiol Surg ; 16(6): 979-988, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33966155

ABSTRACT

PURPOSE: Contrast-enhanced spectral mammography (CESM) is an efficient tool for detecting breast cancer because of its image characteristics. However, among deep learning-based methods for breast cancer classification, few models integrate both its multiview and multimodal features. To effectively utilize the image features of CESM and help physicians improve diagnostic accuracy, we propose a multiview multimodal network (MVMM-Net). METHODS: Experiments were carried out on an in-house CESM dataset of 760 images from 95 patients aged 21-74 years. The framework consists of three main stages: model input, image feature extraction, and image classification. The first stage preprocesses the CESM images to utilize their multiview and multimodal features effectively. In the feature extraction stage, a deep learning-based network extracts CESM image features. The last stage integrates the different features for classification using the MVMM-Net model. RESULTS: The proposed method based on the Res2Net50 backbone achieves an accuracy of 96.591%, sensitivity of 96.396%, specificity of 96.350%, precision of 96.833%, F1 score of 0.966, and AUC of 0.966 on the test set. Comparative experiments illustrate that classification performance can be improved by using multiview multimodal features. CONCLUSION: We propose a deep learning classification model that combines multiple features of CESM. The experimental results indicate that our method is more precise than state-of-the-art methods and produces accurate results for the classification of CESM images.
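The multiview multimodal fusion described above can be illustrated with a toy late-fusion sketch: a feature vector is extracted per view/modality and the vectors are concatenated into one joint representation for the classifier. The "backbone" here is a fixed random projection standing in for a real CNN (the paper uses Res2Net50); all names are hypothetical.

```python
import numpy as np

def extract_features(image, dim=4):
    # Stand-in for a CNN backbone: a fixed random projection of the
    # flattened image to a dim-sized feature vector.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((image.size, dim))
    return image.ravel() @ w

def fuse_views(views):
    # Late fusion: concatenate per-view / per-modality features
    # (e.g. CC and MLO views, low-energy and recombined images)
    # into one joint vector fed to the classification head.
    return np.concatenate([extract_features(v) for v in views])
```

With four input images (two views times two modalities) and a 4-dimensional feature per image, the fused representation has 16 dimensions.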


Subjects
Breast Neoplasms/diagnosis , Breast/diagnostic imaging , Contrast Media/pharmacology , Mammography/methods , Multimodal Imaging/methods , Adult , Aged , Female , Humans , Middle Aged , Young Adult
19.
Eur Radiol ; 31(9): 7162-7171, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33665717

ABSTRACT

OBJECTIVES: The aim of this study was to determine the invasiveness of ground-glass nodules (GGNs) using a 3D multi-task deep learning network. METHODS: We propose a novel architecture based on 3D multi-task learning to determine the invasiveness of GGNs. In total, 770 patients with 909 GGNs who underwent lung CT scans were enrolled. The patients were divided into a training set (n = 626) and a test set (n = 144). In the test set, invasiveness was classified by deep learning into three categories: atypical adenomatous hyperplasia (AAH) and adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive pulmonary adenocarcinoma (IA). Furthermore, binary classifications (AAH/AIS/MIA vs. IA) were made by two thoracic radiologists and compared with the deep learning results. RESULTS: In the three-category classification task, the sensitivity, specificity, and accuracy were 65.41%, 82.21%, and 64.9%, respectively. In the binary classification task, the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) were 69.57%, 95.24%, 87.42%, and 0.89, respectively. In the radiologists' visual assessment of GGN invasiveness in the binary task, the sensitivity, specificity, and accuracy were 58.93%, 90.51%, and 81.35% for the senior radiologist and 76.79%, 55.47%, and 61.66% for the junior radiologist. CONCLUSIONS: The proposed multi-task deep learning model achieved good classification results in determining the invasiveness of GGNs. It may help to select patients with invasive lesions who need surgery and to choose appropriate surgical methods. KEY POINTS: • The proposed multi-task model achieved good classification results for the invasiveness of GGNs. • The proposed network includes a classification branch and a segmentation branch to learn global and regional features, respectively.
• The multi-task model could assist doctors in selecting patients with invasive lesions who need surgery and choosing appropriate surgical methods.
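A joint objective of the kind the key points describe — a classification branch for global features plus a segmentation branch for regional features — is typically trained as a weighted sum of the two losses. The sketch below is an illustrative toy, not the paper's actual loss; the Dice formulation and the weight `lam` are assumptions.

```python
import numpy as np

def cross_entropy(class_probs, label):
    # Classification branch: negative log-probability of the true class.
    return -float(np.log(class_probs[label]))

def dice_loss(pred_mask, true_mask, eps=1e-6):
    # Segmentation branch: 1 minus the Dice overlap of the masks.
    inter = (pred_mask * true_mask).sum()
    return 1.0 - (2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def multitask_loss(class_probs, label, pred_mask, true_mask, lam=0.5):
    # Joint objective: the classification term captures global
    # invasiveness features, the segmentation term regional nodule
    # features; lam balances the two branches.
    return cross_entropy(class_probs, label) + lam * dice_loss(pred_mask, true_mask)
```

A perfect prediction on both branches drives the joint loss to zero, while errors in either branch raise it.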


Subjects
Adenocarcinoma in Situ , Adenocarcinoma of Lung , Adenocarcinoma , Lung Neoplasms , Adenocarcinoma/diagnostic imaging , Humans , Lung Neoplasms/diagnostic imaging , Neoplasm Invasiveness , Retrospective Studies
20.
IEEE Trans Cybern ; 51(4): 2153-2165, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31869812

ABSTRACT

Automatic pancreas segmentation is crucial to the diagnostic assessment of diabetes or pancreatic cancer. However, the relatively small size of the pancreas in the upper body, as well as large variations of its location and shape in the retroperitoneum, make the segmentation task challenging. To alleviate these challenges, in this article, we propose a cascaded multitask 3-D fully convolutional network (FCN) to automatically segment the pancreas. Our cascaded network is composed of two parts. The first part focuses on quickly locating the pancreas region, and the second part uses a multitask FCN with dense connections to refine the segmentation map for fine voxel-wise segmentation. In particular, our multitask FCN with dense connections simultaneously performs voxel-wise segmentation and skeleton extraction of the pancreas. These two tasks are complementary: the extracted skeleton provides rich information about the shape and size of the pancreas in the retroperitoneum, which boosts the segmentation. The multitask FCN is also designed to share low- and mid-level features across the tasks. A feature consistency module is further introduced to enhance the connection and fusion of different levels of feature maps. Evaluations on two pancreas datasets demonstrate the robustness of our proposed method in correctly segmenting the pancreas in various settings. Our experimental results outperform both baseline and state-of-the-art methods. Moreover, the ablation study shows that our proposed parts/modules are critical for effective multitask learning.
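The two-part cascade described above (coarsely locate the pancreas, then refine inside the cropped region) can be sketched as follows. This is a 2D toy for illustration only — the paper's network operates on 3D volumes — and `fine_model` is a placeholder for the multitask refinement FCN.

```python
import numpy as np

def locate_roi(coarse_mask, margin=1):
    # Stage 1: bounding box around the coarse pancreas prediction,
    # padded by a small margin and clipped to the image bounds.
    ys, xs = np.nonzero(coarse_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, coarse_mask.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, coarse_mask.shape[1])
    return y0, y1, x0, x1

def cascade_segment(image, coarse_mask, fine_model):
    # Stage 2: run the fine (multitask) model only on the cropped ROI,
    # then paste its output back into a full-size mask.
    y0, y1, x0, x1 = locate_roi(coarse_mask)
    out = np.zeros_like(coarse_mask)
    out[y0:y1, x0:x1] = fine_model(image[y0:y1, x0:x1])
    return out
```

Restricting the fine model to the ROI is what lets the cascade cope with the pancreas occupying only a small fraction of the scan.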


Subjects
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Pancreas/diagnostic imaging , Humans , Pancreatic Neoplasms/diagnostic imaging