Results 1 - 10 of 10
1.
Biomed Eng Online ; 23(1): 52, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38851691

ABSTRACT

Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with data-driven feature extraction and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research focus. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using the keywords "multi-organ segmentation" and "deep learning", which yielded 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed within the scope of this review. We summarized the two main aspects of multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised, and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion sections, we outlined current trends in multi-organ segmentation.
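Segmentation accuracy in this literature is most commonly reported as the per-organ Dice overlap between predicted and ground-truth label maps. A minimal sketch (the toy 8x8 label maps are purely illustrative):

```python
import numpy as np

def dice_per_organ(pred, gt, n_organs):
    """Per-organ Dice overlap between predicted and ground-truth
    label maps (0 = background, 1..n_organs = organ labels)."""
    scores = []
    for k in range(1, n_organs + 1):
        p, g = pred == k, gt == k
        denom = p.sum() + g.sum()
        scores.append(2.0 * (p & g).sum() / denom if denom else 1.0)
    return scores

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                    # ground-truth "organ" mask
pred = np.zeros_like(gt)
pred[3:7, 3:7] = 1                  # prediction shifted by one voxel
scores = dice_per_organ(pred, gt, 1)
```

A perfect prediction scores 1.0 per organ; papers typically average this over organs and test cases.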


Subjects
Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Automation
2.
Front Oncol ; 14: 1275769, 2024.
Article in English | MEDLINE | ID: mdl-38746682

ABSTRACT

Background: Whole Slide Image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns. Methods: In response to these challenges, we introduce MAMILNet, an innovative multi-scale attentional multi-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its exceptional generalizability across diverse cancer types and prediction tasks. This model considers whole slides as "bags" and individual patches as "instances." By adopting this approach, MAMILNet effectively eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload for pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy, facilitating the aggregation of test outcomes from various magnifications. Results: Our assessment of MAMILNet covers 1171 cases spanning a wide range of cancer types, showcasing its effectiveness on complex prediction tasks. Remarkably, MAMILNet achieved impressive results in distinct domains: for breast cancer tumor detection, the Area Under the Curve (AUC) was 0.8872, with an Accuracy of 0.8760. In the realm of lung cancer typing diagnosis, it achieved an AUC of 0.9551 and an Accuracy of 0.9095. Furthermore, in predicting drug therapy responses for ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an Accuracy of 0.7341. Conclusion: The outcomes of this study underscore the potential of MAMILNet in driving the advancement of precision medicine and individualized treatment planning within the field of oncology.
By effectively addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in enhancing healthcare outcomes for cancer patients. The framework's success in accurately detecting breast tumors, diagnosing lung cancer types, and predicting ovarian cancer therapy responses highlights its significant contribution to the field and paves the way for improved patient care.
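The "bags"/"instances" formulation above is the standard attention-based multi-instance learning setup, in which a learned attention score weights each patch before pooling into a slide-level representation. A minimal single-scale numpy sketch (the feature sizes and random parameters are illustrative assumptions, not MAMILNet's actual architecture):

```python
import numpy as np

def attention_mil_pool(instance_feats, V, w):
    """Pool a bag of patch embeddings into one slide embedding.

    instance_feats: (n_patches, d) patch features from a WSI "bag".
    V: (d, h) projection; w: (h,) scoring vector -- learned in practice.
    Returns the attention-weighted bag embedding and per-patch weights.
    """
    scores = np.tanh(instance_feats @ V) @ w      # (n_patches,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                                  # softmax over patches
    return a @ instance_feats, a                  # (d,), (n_patches,)

rng = np.random.default_rng(0)
bag = rng.normal(size=(100, 32))                  # 100 patches, 32-d each
V, w = rng.normal(size=(32, 16)), rng.normal(size=16)
slide_embedding, weights = attention_mil_pool(bag, V, w)
```

Because no patch-level labels enter this pooling step, only a slide-level label is needed to train the downstream bag classifier.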

3.
Bioengineering (Basel) ; 11(5)2024 May 09.
Article in English | MEDLINE | ID: mdl-38790338

ABSTRACT

In the study of deep learning classification of medical images, deep learning models are applied to analyze images with the aim of assisting diagnosis and preoperative assessment. Currently, most research classifies and predicts normal and cancer cells by feeding single-parameter images into trained models. However, for ovarian cancer (OC), identifying its different subtypes is crucial for predicting disease prognosis. In particular, the need to distinguish high-grade serous carcinoma from clear cell carcinoma preoperatively through non-invasive means has not been fully addressed. This study proposes a deep learning (DL) method based on the fusion of multi-parametric magnetic resonance imaging (mpMRI) data, aimed at improving the accuracy of preoperative ovarian cancer subtype classification. By constructing a new network architecture that integrates features from multiple sequences, the method achieves high-precision discrimination between high-grade serous carcinoma and clear cell carcinoma, with an AUC of 91.62% and an AP of 95.13% in the classification of ovarian cancer subtypes.
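At its simplest, multi-parametric fusion concatenates an embedding from each MRI sequence before a shared classification head. A toy sketch of that idea (the sequence names, embedding sizes, and linear head are assumptions; the study's fusion architecture is more elaborate):

```python
import numpy as np

def fuse_sequences(embeddings, W, b):
    """Concatenate per-sequence MRI embeddings (e.g. T1, T2, DWI)
    and apply a linear classification head; returns a subtype logit."""
    z = np.concatenate(embeddings)
    return float(z @ W + b)

rng = np.random.default_rng(3)
t1, t2, dwi = (rng.normal(size=8) for _ in range(3))   # 8-d per sequence
W, b = rng.normal(size=24), 0.0                        # head over 3 x 8 dims
logit = fuse_sequences([t1, t2, dwi], W, b)
p_hgsc = 1 / (1 + np.exp(-logit))   # probability of high-grade serous subtype
```

In practice the per-sequence embeddings come from trained encoders, and the fusion can happen at intermediate feature maps rather than final vectors.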

4.
Am J Pathol ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38762117

ABSTRACT

The evaluation of morphologic features, such as inflammation, gastric atrophy, and intestinal metaplasia, is crucial for diagnosing gastritis. However, artificial intelligence analysis for nontumor diseases like gastritis is limited. Previous deep learning models have omitted important morphologic indicators and cannot simultaneously diagnose gastritis indicators or provide interpretable labels. To address this, an attention-based multi-instance multilabel learning network (AMMNet) was developed to simultaneously achieve the multilabel diagnosis of activity, atrophy, and intestinal metaplasia with only slide-level weak labels. To evaluate AMMNet's real-world performance, a diagnostic test was designed to observe improvements in junior pathologists' diagnostic accuracy and efficiency with and without AMMNet assistance. In this study of 1096 patients from seven independent medical centers, AMMNet performed well in assessing activity [area under the curve (AUC), 0.93], atrophy (AUC, 0.97), and intestinal metaplasia (AUC, 0.93). The false-negative rates of these indicators were only 0.04, 0.08, and 0.18, respectively, and junior pathologists had lower false-negative rates with model assistance (0.15 versus 0.10). Furthermore, AMMNet reduced the time required per whole slide image from 5.46 to only 2.85 minutes, enhancing diagnostic efficiency. In block-level clustering analysis, AMMNet effectively visualized task-related patches within whole slide images, improving interpretability. These findings highlight AMMNet's effectiveness in accurately evaluating gastritis morphologic indicators on multicenter data sets. Using multi-instance multilabel learning strategies to support routine diagnostic pathology deserves further evaluation.

5.
IEEE J Biomed Health Inform ; 28(2): 964-975, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37494153

ABSTRACT

Histopathology image classification is an important clinical task, and current deep learning-based whole-slide image (WSI) classification methods typically cut WSIs into small patches and cast the problem as multi-instance learning. The mainstream approach is to train a bag-level classifier, but its performance on both slide classification and positive patch localization is limited because instance-level information is not fully explored. In this article, we propose a negative instance-guided, self-distillation framework to directly train an instance-level classifier end-to-end. Instead of depending only on the self-supervised training of the teacher and student classifiers in a typical self-distillation framework, we feed true negative instances into the student classifier to guide it to better distinguish positive and negative instances. In addition, we propose a prediction bank that constrains the distribution of pseudo instance labels generated by the teacher classifier, preventing self-distillation from degenerating into classifying all instances as negative. We conduct extensive experiments and analysis on three publicly available pathological datasets, CAMELYON16, PANDA, and TCGA, as well as an in-house pathological dataset for cervical cancer lymph node metastasis prediction. The results show that our method outperforms existing methods by a large margin. Code will be publicly available.
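The prediction-bank idea can be illustrated with a toy pseudo-labeling step: banked teacher scores are used to lower the decision threshold whenever too few instances would be labeled positive, which blocks the all-negative collapse. A numpy sketch (the bank size, 0.5 base threshold, and 5% positive-ratio floor are illustrative assumptions):

```python
import numpy as np
from collections import deque

def pseudo_label(teacher_probs, bank, min_pos_ratio=0.05):
    """Threshold teacher scores into instance pseudo labels, using a
    prediction bank to keep the label distribution from collapsing
    to all-negative (the degeneration noted in the abstract)."""
    bank.extend(teacher_probs)                 # running score history
    thresh = 0.5
    # If too few banked scores clear 0.5, lower the cut-off to the
    # (1 - min_pos_ratio) quantile so some instances stay positive.
    if np.mean(np.asarray(bank) > thresh) < min_pos_ratio:
        thresh = float(np.quantile(np.asarray(bank), 1 - min_pos_ratio))
    return (teacher_probs > thresh).astype(int)

bank = deque(maxlen=1000)
scores = np.array([0.1, 0.2, 0.15, 0.3, 0.45])   # near-degenerate teacher
labels = pseudo_label(scores, bank)
```

With a fixed 0.5 threshold every instance above would be pseudo-labeled negative; the quantile fallback keeps the highest-scoring instance positive.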


Subjects
Self-Management , Uterine Cervical Neoplasms , Humans , Female , Distillation , Image Processing, Computer-Assisted , Lymphatic Metastasis
6.
IEEE Trans Med Imaging ; 42(12): 3919-3931, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738201

ABSTRACT

Unsupervised domain adaptation (UDA) aims to train a model on a labeled source domain and adapt it to an unlabeled target domain. In the medical image segmentation field, most existing UDA methods rely on adversarial learning to address the domain gap between different image modalities. However, this process is complicated and inefficient. In this paper, we propose a simple yet effective UDA method based on both frequency and spatial domain transfer under a multi-teacher distillation framework. In the frequency domain, we introduce the non-subsampled contourlet transform for identifying domain-invariant and domain-variant frequency components (DIFs and DVFs) and replace the DVFs of the source domain images with those of the target domain images while keeping the DIFs unchanged to narrow the domain gap. In the spatial domain, we propose a batch momentum update-based histogram matching strategy to minimize the domain-variant image style bias. Additionally, we propose a dual contrastive learning module at both image and pixel levels to learn structure-related information. Our proposed method outperforms state-of-the-art methods on two cross-modality medical image segmentation datasets (cardiac and abdominal). Code is available at https://github.com/slliuEric/FSUDA.
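The frequency-domain transfer step can be illustrated with a simplified Fourier analog: swap the low-frequency amplitude (image "style") of a source image for the target's while keeping the source phase (structure). The paper itself identifies domain-variant components with a non-subsampled contourlet transform, so the FFT band swap below is only a stand-in, with the band radius an assumed hyperparameter:

```python
import numpy as np

def swap_low_freq_amplitude(src, tgt, radius=3):
    """Replace the low-frequency FFT amplitude of a source image with
    the target's, keeping the source phase intact -- a toy stand-in
    for swapping domain-variant frequency components."""
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(tgt))
    amp, phase = np.abs(fs), np.angle(fs)
    c = np.array(src.shape) // 2
    sl = (slice(c[0] - radius, c[0] + radius + 1),
          slice(c[1] - radius, c[1] + radius + 1))
    amp[sl] = np.abs(ft)[sl]            # borrow target "style" band
    mixed = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(1)
src, tgt = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))
out = swap_low_freq_amplitude(src, tgt)
```

Swapping an image's low band with its own leaves it unchanged, which is a handy sanity check for this kind of transform.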


Subjects
Heart , Image Processing, Computer-Assisted , Motion (Physics)
7.
Mod Pathol ; 36(12): 100316, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37634868

ABSTRACT

We developed a deep learning framework to accurately predict the lymph node status of patients with cervical cancer based on hematoxylin and eosin-stained pathological sections of the primary tumor. In total, 1524 hematoxylin and eosin-stained whole slide images (WSIs) of primary cervical tumors from 564 patients were used in this retrospective, proof-of-concept study. Primary tumor sections (1161 WSIs) were obtained from 405 patients who underwent radical cervical cancer surgery at the Fudan University Shanghai Cancer Center (FUSCC) between 2008 and 2014; 165 and 240 patients were negative and positive for lymph node metastasis, respectively (including 166 with positive pelvic lymph nodes alone and 74 with positive pelvic and para-aortic lymph nodes). We constructed and trained a multi-instance deep convolutional neural network based on a multiscale attention mechanism, in which an internal independent test set (100 patients, 228 WSIs) from the FUSCC cohort and an external independent test set (159 patients, 363 WSIs) from the Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma cohort of the Cancer Genome Atlas program database were used to evaluate the predictive performance of the network. In predicting the occurrence of lymph node metastasis, our network achieved areas under the receiver operating characteristic curve of 0.87 in the cross-validation set, 0.84 in the internal independent test set of the FUSCC cohort, and 0.75 in the external test set of the Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma cohort of the Cancer Genome Atlas program. For patients with positive pelvic lymph node metastases, we retrained the network to predict whether they also had para-aortic lymph node metastases. Our network achieved areas under the receiver operating characteristic curve of 0.91 in the cross-validation set and 0.88 in the test set of the FUSCC cohort. 
Deep learning analysis based on pathological images of primary foci is very likely to provide new ideas for preoperatively assessing cervical cancer lymph node status; its true value must be validated with cervical biopsy specimens and large multicenter datasets.


Subjects
Adenocarcinoma , Carcinoma, Squamous Cell , Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Carcinoma, Squamous Cell/pathology , Uterine Cervical Neoplasms/pathology , Lymphatic Metastasis/pathology , Retrospective Studies , Eosine Yellowish-(YS) , Hematoxylin , China , Lymph Nodes/pathology , Adenocarcinoma/pathology
8.
Radiol Med ; 128(6): 726-733, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37233906

ABSTRACT

Computer-aided diagnosis of chest X-ray (CXR) images can help reduce the huge workload of radiologists and avoid inter-observer variability in large-scale early disease screening. Recently, most state-of-the-art studies employ deep learning techniques to address this problem through multi-label classification. However, existing methods still suffer from low classification accuracy and poor interpretability for each diagnostic task. This study aims to propose a novel transformer-based deep learning model for automated CXR diagnosis with high performance and reliable interpretability. We introduce a novel transformer architecture into this problem and utilize the unique query structure of the transformer to capture the global and local information of the images and the correlation between labels. In addition, we propose a new loss function to help the model find correlations between the labels in CXR images. To achieve accurate and reliable interpretability, we generate heatmaps using the proposed transformer model and compare them with the true pathogenic regions labeled by physicians. The proposed model achieves a mean AUC of 0.831 on the ChestX-ray14 dataset and 0.875 on the PadChest dataset, outperforming existing state-of-the-art methods. The attention heatmaps show that our model focuses on the areas corresponding to the physician-labeled pathogenic regions. The proposed model effectively improves the performance of CXR multi-label classification and the interpretability of label correlations, thus providing new evidence and methods for automated clinical diagnosis.
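Multi-label CXR classification attaches one sigmoid output per finding and trains with binary cross-entropy; the paper's additional label-correlation loss term is not reproduced in this minimal numpy sketch:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Mean binary cross-entropy over (n_images, n_findings) logits,
    one independent sigmoid per finding (e.g. the 14 ChestX-ray14
    labels); targets are 0/1 indicators per finding."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12                        # guard against log(0)
    return float(-np.mean(targets * np.log(p + eps)
                          + (1 - targets) * np.log(1 - p + eps)))

logits = np.array([[2.0, -2.0],        # confident, correct predictions
                   [-1.0, 3.0]])
targets = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
loss = multilabel_bce(logits, targets)
```

Because each finding gets its own sigmoid rather than a shared softmax, an image can carry several findings at once, which is exactly the multi-label setting described above.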


Subjects
Diagnosis, Computer-Assisted , Radiologists , Humans , X-Rays , Radiography , Thorax
9.
Phys Med Biol ; 67(20)2022 10 14.
Article in English | MEDLINE | ID: mdl-36084627

ABSTRACT

Histopathological images contain abundant phenotypic information and pathological patterns, which are the gold standard for disease diagnosis and essential for predicting patient prognosis and treatment outcome. In recent years, computer-automated analysis techniques for histopathological images have become urgently needed in clinical practice, and deep learning methods represented by convolutional neural networks have gradually become the mainstream in the field of digital pathology. However, obtaining large amounts of fine-grained annotated data in this field is very expensive and difficult, which hinders the further development of traditional supervised algorithms that depend on large-scale annotation. More recent studies have started to move away from the traditional supervised paradigm; the most representative are studies on the weakly supervised learning paradigm based on weak annotation, the semi-supervised learning paradigm based on limited annotation, and the self-supervised learning paradigm based on pathological image representation learning. These new methods have driven a new wave of automatic pathological image diagnosis and analysis targeted at annotation efficiency. With a survey of over 130 papers, we present a comprehensive and systematic review of the latest studies on weakly supervised, semi-supervised, and self-supervised learning in the field of computational pathology from both technical and methodological perspectives. Finally, we present the key challenges and future trends for these techniques.


Subjects
Deep Learning , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Supervised Machine Learning
10.
Brain Sci ; 12(7)2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35884664

ABSTRACT

Intravenous thrombolysis is the most commonly used drug therapy for patients with acute ischemic stroke, but it is often accompanied by the complication of intracerebral hemorrhagic transformation (HT). This study aimed to build a reliable model for pretreatment prediction of HT. Specifically, 5400 radiomics features were extracted from 20 regions of interest (ROIs) in multiparametric MRI images of 71 patients. A minimal set of all-relevant features was then selected by LASSO from all ROIs and used to build a radiomics model with a random forest (RF). To explore the significance of normal ROIs, we also built a model based only on abnormal ROIs. In addition, a model combining clinical factors and radiomics features was built. Finally, the models were tested on an independent validation cohort. The radiomics model with 14 All-ROIs features achieved pretreatment prediction of HT (AUC = 0.871, accuracy = 0.848), significantly outperforming the model with only 14 Abnormal-ROIs features (AUC = 0.831, accuracy = 0.818). Moreover, combining clinical factors with radiomics features further improved prediction performance (AUC = 0.911, accuracy = 0.894). The combined model could therefore greatly assist clinicians in diagnosis. Furthermore, we found that although the normal ROIs contain no lesions, they still provide characteristic information for the prediction of HT.
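The select-then-classify radiomics pipeline can be sketched with a small coordinate-descent LASSO that keeps only the features with nonzero coefficients (a toy stand-in for the study's LASSO step; the synthetic 71x50 feature matrix, penalty, and iteration count are illustrative, and the downstream random forest is omitted):

```python
import numpy as np

def lasso_select(X, y, lam=0.05, iters=200):
    """Coordinate-descent LASSO on standardized features; returns the
    indices of features with nonzero coefficients -- the 'minimal set
    of all-relevant features' step before the classifier."""
    X = (X - X.mean(0)) / X.std(0)
    y = y - y.mean()
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(0)
    for _ in range(iters):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]          # partial residual
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0) / col_sq[j]
    return np.flatnonzero(w)

rng = np.random.default_rng(2)
X = rng.normal(size=(71, 50))       # 71 patients, 50 radiomics features
y = X[:, 3] - 2 * X[:, 7] + 0.1 * rng.normal(size=71)   # 2 true features
kept = lasso_select(X, y)
```

The kept indices would then feed a random forest (or any classifier) trained only on the selected columns, mirroring the LASSO-then-RF design described above.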
