ABSTRACT
Immunohistochemistry (IHC) is used to guide treatment decisions in multiple cancer types. For treatment with checkpoint inhibitors, programmed death ligand 1 (PD-L1) IHC is used as a companion diagnostic. However, the scoring of PD-L1 is complicated by its expression in cancer and immune cells. Separation of cancer and noncancer regions is needed to calculate tumor proportion scores (TPS) of PD-L1, which is based on the percentage of PD-L1-positive cancer cells. Evaluation of PD-L1 expression requires highly experienced pathologists and is often challenging and time-consuming. Here, we used a multi-institutional cohort of 77 lung cancer cases stained centrally with the PD-L1 22C3 clone. We developed a 4-step pipeline for measuring TPS that includes the coregistration of hematoxylin and eosin, PD-L1, and negative control (NC) digital slides for exclusion of necrosis, segmentation of cancer regions, and quantification of PD-L1+ cells. As cancer segmentation is a challenging step for TPS generation, we trained DeepLab V3 in the Visiopharm software package to outline cancer regions in PD-L1 and NC images and evaluated the model performance by mean intersection over union (mIoU) against manual outlines. Only 14 cases were required to accomplish a mIoU of 0.82 for cancer segmentation in hematoxylin-stained NC cases. For PD-L1-stained slides, a model trained on PD-L1 tiles augmented by registered NC tiles achieved a mIoU of 0.79. In segmented cancer regions from whole slide images, the digital TPS achieved an accuracy of 75% against the manual TPS scores from the pathology report. Major reasons for algorithmic inaccuracies include the inclusion of immune cells in cancer outlines and poor nuclear segmentation of cancer cells. Our transparent and stepwise approach and performance metrics can be applied to any IHC assay to provide pathologists with important insights on when to apply and how to evaluate commercial automated IHC scoring systems.
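For reference, the two quantities this pipeline reports are simple to make concrete. A minimal sketch in Python (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def tumor_proportion_score(n_pdl1_pos_cancer_cells, n_viable_cancer_cells):
    """TPS: percentage of viable cancer cells showing PD-L1 staining."""
    return 100.0 * n_pdl1_pos_cancer_cells / n_viable_cancer_cells

def mean_iou(pred, gt):
    """Mean IoU over the cancer and background classes of binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    ious = []
    for p, g in ((pred, gt), (~pred, ~gt)):
        union = np.logical_or(p, g).sum()
        if union:
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# e.g. 240 PD-L1+ cancer cells out of 800 viable cancer cells -> TPS = 30.0
print(tumor_proportion_score(240, 800))
```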
Subject(s)
B7-H1 Antigen , Immunohistochemistry , Lung Neoplasms , Machine Learning , Humans , B7-H1 Antigen/metabolism , B7-H1 Antigen/analysis , Immunohistochemistry/methods , Lung Neoplasms/metabolism , Lung Neoplasms/pathology , Artificial Intelligence , Biomarkers, Tumor/metabolism , Biomarkers, Tumor/analysis
ABSTRACT
Colorectal cancer (CRC) is a common malignant tumor that seriously threatens human health, and its indistinct boundaries make accurate identification difficult. With the widespread adoption of convolutional neural networks (CNNs) in image processing, leveraging CNNs for automatic classification and segmentation holds immense potential for improving the efficiency of CRC recognition and reducing treatment costs. This paper explains why CNNs are needed in the clinical diagnosis of CRC, provides a detailed overview of research advances in CNNs and their improved variants for CRC classification and segmentation, summarizes the ideas and common methods for optimizing network performance, and discusses the challenges CNNs face and future development trends in their application to CRC classification and segmentation, thereby promoting their utilization in clinical diagnosis.
Subject(s)
Colorectal Neoplasms , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Colorectal Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Algorithms
ABSTRACT
In the domain of Computer-Aided Diagnosis (CAD) systems, accurate identification of cancer lesions is paramount, given the life-threatening nature of cancer and the complexities inherent in its manifestation. The task is particularly arduous because cancerous regions often have vague boundaries, compounded by noise and heterogeneity in lesion appearance, making precise segmentation a critical yet challenging endeavor. This study introduces an innovative iterative feedback mechanism tailored for nuanced detection of cancer lesions across a variety of medical imaging modalities, offering a refining phase that adjusts detection results. The core of our approach is eliminating the need for an initial segmentation mask, a common limitation of iterative segmentation methods. Instead, the feedback used to refine segmentation is derived directly from the encoder-decoder architecture of our neural network model, allowing more dynamic and accurate lesion identification. To further enhance accuracy, we employ a multi-scale feedback attention mechanism to guide and refine the predicted mask over subsequent iterations. In parallel, we introduce a weighted feedback loss function that combines global and iteration-specific loss terms, refining parameter estimation and improving overall segmentation precision. We conducted comprehensive experiments across three categories of medical imaging: colonoscopy, ultrasonography, and dermoscopic images. The results demonstrate that our method competes favorably with and often surpasses current state-of-the-art methods in various scenarios, including both standard and challenging out-of-domain tasks, evidencing the robustness and versatility of our approach across a spectrum of medical imaging contexts. Our source code can be found at https://github.com/dewamsa/EfficientFeedbackNetwork.
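The abstract does not give the exact form of the weighted feedback loss; one plausible reading, combining a global term on the final mask with weighted iteration-specific terms, is sketched below (PyTorch, all names hypothetical):

```python
import torch.nn.functional as F

def weighted_feedback_loss(iter_logits, final_logits, target, iter_weights,
                           global_weight=1.0):
    """Global loss on the final mask plus weighted per-iteration losses.

    iter_logits: list of mask logits from each refinement iteration.
    target: ground-truth binary mask as a float tensor of matching shape.
    """
    loss = global_weight * F.binary_cross_entropy_with_logits(final_logits, target)
    for w, logits in zip(iter_weights, iter_logits):
        loss = loss + w * F.binary_cross_entropy_with_logits(logits, target)
    return loss
```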
Subject(s)
Neural Networks, Computer , Humans , Neoplasms/diagnostic imaging , Feedback , Image Interpretation, Computer-Assisted/methods , Diagnosis, Computer-Assisted/methods , Algorithms
ABSTRACT
BACKGROUND AND OBJECTIVE: Effective segmentation of esophageal squamous cell carcinoma lesions in CT scans is significant for auxiliary diagnosis and treatment. However, accurate lesion segmentation remains challenging due to the irregular form and small size of the esophagus, the inconsistency of its spatio-temporal structure, and the low contrast between the esophagus and its peripheral tissues in medical images. The objective of this study is to improve the segmentation of esophageal squamous cell carcinoma lesions. METHODS: It is critical for a segmentation network to extract 3D discriminative features that distinguish esophageal cancers from visually similar adjacent tissues and organs. In this work, an efficient HRU-Net (High-Resolution U-Net) architecture was developed for esophageal cancer segmentation in CT slices. Following the idea of localization first and segmentation later, HRU-Net locates the esophageal region before segmentation. In addition, a Resolution Fusion Module (RFM) was designed to integrate information from adjacent-resolution feature maps, obtaining strong semantic information while preserving high-resolution features. RESULTS: Compared with five other typical methods, the devised HRU-Net generates superior segmentation results. CONCLUSIONS: The proposed HRU-Net improves segmentation accuracy for esophageal squamous cell carcinoma and performed best among the compared models. The designed method may improve the efficiency of clinical diagnosis of esophageal squamous cell carcinoma lesions.
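The RFM is described only at a high level; a minimal sketch of one plausible realization, upsampling the coarse map, projecting its channels, and adding it to the fine map (PyTorch, hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionFusionModule(nn.Module):
    """Fuse adjacent-resolution feature maps: upsample the coarse map,
    project its channels, and add it to the fine map."""
    def __init__(self, coarse_channels, fine_channels):
        super().__init__()
        self.project = nn.Conv2d(coarse_channels, fine_channels, kernel_size=1)

    def forward(self, fine, coarse):
        up = F.interpolate(coarse, size=fine.shape[2:], mode="bilinear",
                           align_corners=False)
        return fine + self.project(up)

# e.g. fuse a 32x32 coarse map (128 ch) into a 64x64 fine map (64 ch)
rfm = ResolutionFusionModule(128, 64)
out = rfm(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
```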
Subject(s)
Esophageal Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/radiotherapy , Tomography, X-Ray Computed/methods , Esophageal Squamous Cell Carcinoma/diagnostic imaging , Esophageal Squamous Cell Carcinoma/radiotherapy , Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
ABSTRACT
Liver cancer must be accurately segmented from abdominal CT images for diagnosis and treatment planning. However, the similarity in gray values between the liver and surrounding tissues poses a challenge. To address this, a novel sparse deep belief network coupled with an extended local fuzzy active contour model for liver cancer segmentation from abdominal CT images (SDBN-ELFAC-LCS-CT) is proposed. The method incorporates dynamic adaptive pooling and residual modules into the SDBN to improve feature selection and generalization ability, and 3D reconstruction is performed to refine the segmentation results. The proposed SDBN-ELFAC-LCS-CT approach is implemented in MATLAB. It achieves Dice coefficients up to 96.16% higher and volumetric overlap errors 75.88%, 88.75%, and 71.16% lower than those of existing models, namely basic ensembles of vanilla-style deep learning models for liver cancer segmentation from CT images (BEVS-LCS-CT), a 3D sparse deep belief network with an enriched seagull optimization approach for liver segmentation (3DBN-ESOA-LCS-CT), and an iterative convolutional encoder-decoder network with multi-scale context learning for liver segmentation (ICEDN-LCS-CT), respectively.
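The two reported metrics are standard; for reference, a minimal sketch of both on binary masks:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) on binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def volumetric_overlap_error(pred, gt):
    """VOE = 1 - |A∩B| / |A∪B|; 0 means perfect overlap."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 - inter / union  # often quoted as a percentage
```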
Subject(s)
Abdomen , Liver Neoplasms , Humans , Liver Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
Precise segmentation of the prostate gland and prostate cancer (PCa) enables the fusion of magnetic resonance imaging (MRI) and ultrasound (US) imaging to guide robotic prostate biopsy systems. This segmentation, applied to preoperative MRI, is crucial for accurate image registration and automatic localization of the biopsy target. Nevertheless, delineating local prostate lesions in MRI remains a challenging and time-consuming task, even for experienced physicians. This work therefore develops a parallel dual-pyramid network that combines convolutional neural networks (CNNs) and a tokenized multi-layer perceptron (MLP) for automatic segmentation of the prostate gland and clinically significant PCa (csPCa) in MRI. The proposed network consists of two stages: the first segments the prostate, while the second uses the partition from the first stage to detect the cancerous regions. Both stages share a similar architecture, combining CNN and tokenized MLP as the feature extraction backbone to create a pyramid-structured network for feature encoding and decoding. By employing CNN layers at different scales, the network generates scale-aware local semantic features, which are integrated into feature maps and input to an MLP layer from a global perspective. This facilitates complementarity between local and global information, capturing richer semantic features. Additionally, the network incorporates an interactive hybrid attention module to enhance perception of the target area. Experimental results demonstrate the superiority of the proposed network over other state-of-the-art image segmentation methods for segmenting the prostate gland and csPCa tissue in MRI.
Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Prostatic Neoplasms/diagnostic imaging
ABSTRACT
Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false-positive rate, increasing diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance but require a large model size and high complexity; they also lack feature interpretability and are perceived as "black boxes" in the medical field. This work proposes the PCa-RadHop pipeline, which aims to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: Stage-1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false-positive rate, Stage-2 refines the predictions by including more contextual information and radiomics features from each detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show that the proposed method is competitive with other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients, while maintaining an orders-of-magnitude smaller model size and complexity.
Subject(s)
Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Male , Magnetic Resonance Imaging/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Algorithms
ABSTRACT
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
ABSTRACT
Early detection of lung tumors is critical for better treatment outcomes, and CT scans can reveal lung nodules too small to be picked up by conventional X-rays. CT imaging has advantages, but it also exposes the patient to ionizing radiation, which raises the possibility of malignancy, particularly with repeated imaging procedures. Access to high-quality CT scans and the associated sophisticated analysis tools may be restricted in resource-limited settings due to their high cost and limited availability. Overcoming these weaknesses will require an array of creative technological innovations. This paper aims to design a heuristic- and deep learning-aided lung cancer classification using CT images. The collected images undergo segmentation, performed by Shuffling Atrous Convolution (SAC)-based ResUnet++ (SACRUnet++). Lung cancer classification is then performed by the Adaptive Residual Attention Network (ARAN), taking the segmented images as input. The parameters of ARAN are optimally tuned using the Improved Garter Snake Optimization Algorithm (IGSOA). The developed lung cancer classification was compared to conventional classification models and showed high accuracy.
Subject(s)
Deep Learning , Lung Neoplasms , Tomography, X-Ray Computed , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/classification , Humans , Algorithms , Image Processing, Computer-Assisted/methods
ABSTRACT
Prostate cancer is one of the deadliest cancers among human beings. To better diagnose prostate cancer, prostate lesion segmentation has become very important, but progress is slow because prostate lesions are small in size, irregular in shape, and blurred in contour. Automatic prostate lesion segmentation from mp-MRI is therefore highly significant and a challenging task. However, most existing multi-step segmentation methods based on voxel-level classification are time-consuming and may introduce errors in different steps that accumulate. To decrease computation time, harness richer 3D spatial features, and fuse the multi-level contextual information of mp-MRI, we present an automatic segmentation method in which all steps are optimized jointly as one step to form our end-to-end convolutional neural network. The proposed end-to-end network, DMSA-V-Net, consists of two parts: (1) a 3D V-Net used as the backbone network, the first attempt at employing a 3D convolutional neural network for clinically significant (CS) prostate lesion segmentation; (2) a deep multi-scale attention mechanism introduced into the 3D V-Net, which focuses on the ROI while suppressing the redundant background. As a merit, the attention can adaptively re-align the context information between the feature maps at different scales and the high-level saliency maps. We performed five-fold cross-validation on data from 97 patients. The results show a Dice of 0.7014 and a sensitivity of 0.8652, demonstrating that our segmentation approach is more accurate than other methods.
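Both reported metrics and the cross-validation protocol are standard; a hedged sketch (the split below is a generic patient-level five-fold split, not the paper's exact partition):

```python
import numpy as np
from sklearn.model_selection import KFold

def sensitivity(pred, gt):
    """Voxel-wise true-positive rate: TP / (TP + FN)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn)

# Patient-level five-fold split: no patient appears in both train and test.
patient_ids = np.arange(97)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(patient_ids):
    pass  # train on patients in train_idx; report Dice/sensitivity on test_idx
```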
Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
BACKGROUND: Computer-aided diagnosis is of great significance for improving the diagnostic accuracy of pancreatic cancer, which has an insidious course and no obvious early symptoms. However, segmentation of pancreatic cancer is challenging because the tumors vary in size, with the smallest about 0.5 cm in diameter, and most have irregular shapes and unclear boundaries. PURPOSE: In this study, we developed a deep learning architecture, Multi-Scale Channel Attention Unet (MSCA-Unet), for pancreatic tumor segmentation, and collected CT images of 419 patients from The Affiliated Hospital of Qingdao University along with a public dataset. We embedded the multi-scale network into the encoder to extract semantic information at different scales, and into the decoder to provide supplemental information that counteracts the loss of information and the drift of the localized tumor caused by upsampling and skip connections. METHODS: We adopted a channel attention unit after the multi-scale convolution to emphasize the informative channels, which was observed to accelerate the positioning process, reduce false positives, and improve the accuracy of outlining very small, irregular pancreatic tumors. RESULTS: Our network outperformed the other current mainstream segmentation networks, achieving a Dice index of 68.03%, a Jaccard index of 59.31%, and an FPR of 1.36% on the private dataset Task-01 without data pre-processing. Compared with the other pancreatic tumor segmentation networks on the public dataset Task-02, our network produced the best Dice index, 80.12%, with the assistance of the data pre-processing scheme. CONCLUSIONS: This study strategically utilizes the multi-scale convolution and channel attention mechanism of the architecture to provide a dedicated network for segmentation of small and irregular pancreatic tumors.
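The abstract does not specify the internals of the channel attention unit; a standard squeeze-and-excitation-style unit consistent with the description might look like this (PyTorch sketch, names and reduction ratio assumed):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style unit: pool each channel globally, score it with a small
    bottleneck MLP, and rescale the feature map channel-wise."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: reweight channels

y = ChannelAttention(64)(torch.randn(2, 64, 32, 32))
```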
Subject(s)
Pancreatic Neoplasms , Humans , Pancreatic Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Universities , Image Processing, Computer-Assisted
ABSTRACT
Lung cancer is one of the leading causes of mortality worldwide. Lung image analysis and segmentation are among the primary steps in early diagnosis of cancer. Manual medical image segmentation is a very time-consuming task for radiation oncologists. To address this problem, we develop in this work a complete system for early diagnosis of lung cancer in CT scan imaging. The proposed system is composed of two main parts: a segmentation part built on top of the UNETR network, and a classification part that classifies the segmentation output as either benign or malignant, built on top of a self-supervised network. Extensive training and testing experiments were performed using the Decathlon dataset, yielding new state-of-the-art performance: a segmentation accuracy of 97.83% and a classification accuracy of 98.77%. The proposed system presents a powerful new tool for early diagnosis and treatment of lung cancer using 3D CT scan data.
ABSTRACT
This study aimed to investigate the robustness of a deep learning (DL) fusion model for low training-to-test ratio (TTR) datasets in the segmentation of gross tumor volumes (GTVs) in three-dimensional planning computed tomography (CT) images for lung cancer stereotactic body radiotherapy (SBRT). A total of 192 patients with lung cancer (solid tumor, 118; part-solid tumor, 53; ground-glass opacity, 21) who underwent SBRT were included in this study. Regions of interest containing the GTVs were cropped from planning CT images based on GTV centroids. Three DL models, 3D U-Net, V-Net, and dense V-Net, were trained to segment the GTV regions. Nine fusion models were constructed from logical AND, logical OR, and voting of the outputs of two or three of the DL models. TTR was defined as the ratio of the number of cases in the training dataset to that in the test dataset. The Dice similarity coefficients (DSCs) and Hausdorff distances (HDs) of the 12 models were assessed at TTRs of 1.00 (training data : validation data : test data = 40:20:40), 0.791 (35:20:45), 0.531 (31:10:59), 0.291 (20:10:70), and 0.116 (10:5:85). The voting fusion model achieved the highest DSCs, 0.829 to 0.798 across all TTRs, among the 12 models, whereas the other models showed DSCs of 0.818 to 0.804 at a TTR of 1.00 and 0.788 to 0.742 at a TTR of 0.116; its HD of 5.40 ± 3.00 to 6.07 ± 3.26 mm was also better than that of any single DL model. The findings suggest that the proposed voting fusion model is a robust approach for low-TTR datasets in segmenting GTVs in planning CT images for lung cancer SBRT.
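The voting fusion of three binary outputs and the TTR definition are simple to make concrete; an illustrative sketch (names are mine, not the paper's):

```python
import numpy as np

def voting_fusion(masks):
    """Majority vote over binary GTV masks from the three DL models:
    a voxel is foreground when at least two models agree."""
    stacked = np.stack([np.asarray(m, np.uint8) for m in masks])
    return (stacked.sum(axis=0) >= 2).astype(np.uint8)

# TTR = training cases / test cases, e.g. the 40:20:40 split -> 40/40 = 1.00
ttr = 40 / 40
```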
Subject(s)
Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnosis , Datasets as Topic , Computer Simulation , Male , Female , Adult , Middle Aged , Aged , Aged, 80 and over
ABSTRACT
Normal lung cells incur genetic damage over time, which causes unchecked cell growth and ultimately leads to lung cancer. Nearly 85% of lung cancer cases are caused by smoking, but there is evidence that beta-carotene supplements and arsenic in drinking water may also raise the risk of developing the illness. Asbestos, polycyclic aromatic hydrocarbons, arsenic, radon gas, nickel, chromium, and hereditary factors are further lung cancer-causing agents. Deep learning approaches are therefore employed to speed up the crucial procedure of diagnosing lung cancer, and their effectiveness has increased when used to examine cancer histopathology slides. First, the data is gathered from a standard benchmark dataset. Pre-processing of the collected images is accomplished using a Gabor filter. The pre-processed images are then segmented via the modified expectation maximization (MEM) algorithm. Next, features are extracted from the segmented images using the histogram of oriented gradients (HOG) scheme. Finally, lung cancer classification is performed by an improved graph neural network (IGNN), whose parameters are optimized by the green anaconda optimization (GAO) algorithm with accuracy maximization as the major objective function. The IGNN classifies lung cancer as normal, adenocarcinoma, or squamous cell carcinoma. Compared with existing methods across distinct performance measures, the simulation findings demonstrate the superiority of the introduced method.
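The pre-processing and feature-extraction steps map onto standard library calls; a hedged sketch using scikit-image (the filter frequency and HOG parameters are placeholders, not the paper's values, and the MEM segmentation step is omitted):

```python
from skimage.filters import gabor
from skimage.feature import hog

def preprocess_and_extract(gray_image):
    """Gabor filtering for texture enhancement, then HOG descriptors.
    Expects a 2D grayscale image."""
    real, _imag = gabor(gray_image, frequency=0.6)   # Gabor response, real part
    return hog(real, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```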
Subject(s)
Arsenic , Boidae , Lung Neoplasms , Humans , Animals , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Neural Networks, Computer , Algorithms
ABSTRACT
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architectures based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances, and densities of tumors caused by the high heterogeneity of breast cancer, and the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetics prior and feature refinement to generate sufficiently adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior, expressed by the time intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss, which encodes prior knowledge of contrast agent kinetic heterogeneity that is important for optimizing our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach outperforms recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore the generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
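The TIC underlying the DCP loss is just the region-mean signal intensity over the DCE time points; a minimal sketch of its extraction (the loss itself is not specified in enough detail in the abstract to reproduce):

```python
import numpy as np

def time_intensity_curve(dce_volume, roi_mask):
    """Mean intensity of the ROI at each DCE-MRI time point.

    dce_volume: array of shape (T, H, W); roi_mask: boolean mask (H, W).
    """
    return np.array([frame[roi_mask].mean() for frame in dce_volume])
```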
Subject(s)
Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Contrast Media , Image Interpretation, Computer-Assisted/methods , Algorithms , Reproducibility of Results , Magnetic Resonance Imaging/methods , Hemodynamics
ABSTRACT
Automated segmentation of pancreatic cancer is vital for clinical diagnosis and treatment. However, the small size and inconspicuous boundaries of these tumors limit segmentation performance, which is further exacerbated for deep learning techniques by the few training samples available, given the high threshold of image acquisition and annotation. To alleviate the issues caused by small-scale datasets, we collect idle multi-parametric MRIs of pancreatic cancer from different studies to construct a relatively large dataset for enhancing CT pancreatic cancer segmentation, and propose a deep learning segmentation model with a dual meta-learning framework. It integrates common knowledge of tumors obtained from the idle MRIs with salient knowledge from CT images, making high-level features more discriminative. Specifically, random intermediate modalities between MRI and CT are first generated to smoothly fill the gap in visual appearance and provide rich intermediate representations for the ensuing meta-learning scheme. Subsequently, we employ intermediate-modality-based model-agnostic meta-learning to capture and transfer commonalities. Finally, a meta-optimizer is utilized to adaptively learn the salient features within the CT data, alleviating the interference due to internal differences. Comprehensive experimental results demonstrate that our method achieves promising segmentation performance, with a maximum Dice score of 64.94% on our private dataset, and outperforms state-of-the-art methods on a public pancreatic cancer CT dataset. The proposed method is an effective pancreatic cancer segmentation framework that can be easily integrated into other segmentation networks, and it thus promises to be a potential paradigm for alleviating data scarcity challenges using idle data.
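The abstract does not state how the random intermediate modalities are generated; the simplest realization consistent with the description is a random convex blend of registered, intensity-normalized MRI and CT samples. A sketch under that assumption:

```python
import numpy as np

def random_intermediate_modality(mri_slice, ct_slice, rng=None):
    """Blend registered, intensity-normalized MRI and CT slices with a
    random mixing ratio to form an intermediate-appearance sample."""
    rng = rng or np.random.default_rng()
    lam = rng.uniform(0.0, 1.0)
    return lam * mri_slice + (1.0 - lam) * ct_slice
```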
Subject(s)
Image Processing, Computer-Assisted , Pancreatic Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Pancreatic Neoplasms/diagnostic imaging
ABSTRACT
Breast cancer is the leading cause of death for women globally. In clinical practice, pathologists visually scan enormous gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and easily neglected, because tiny metastatic foci may be missed in visual examination by medical doctors. However, the literature poorly explores the detection of isolated tumor cells, which could be recognized as a viable marker for determining the prognosis of T1N0M0 breast cancer patients. To address these issues, we present a deep learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used hematoxylin-eosin-stained (H&E) whole-slide images (WSIs) in minutes. A quantitative evaluation was conducted using 188 WSIs, comprising 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, the latter used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, performing significantly better in precision, recall, F1-score, and mIoU (p<0.001) than eight deep learning approaches: two recently published models (v3_DCNN and Xception-65), three variants of Deeplabv3+ with three different backbones, and U-Net, SegNet, and FCN. Importantly, the proposed system is shown to be capable of identifying tiny metastatic foci in challenging cases with high probabilities of misdiagnosis on visual inspection, where the baseline approaches tend to fail. For computational time comparison, the proposed method takes 2.4 min to process a WSI using four NVIDIA GeForce GTX 1080Ti GPU cards and 9.6 min using a single card, and is notably faster than the baseline methods (4 times faster than U-Net and SegNet, 5 times faster than FCN, 2 times faster than the three variants of Deeplabv3+, 1.4 times faster than v3_DCNN, and 41 times faster than Xception-65).
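The reported pixel-level metrics are standard; for reference, a minimal sketch of their computation from binary masks:

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-wise precision, recall, and F1 for binary metastasis masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```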
ABSTRACT
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
ABSTRACT
Hepatocellular carcinoma (HCC), as the most common type of primary malignant liver cancer, has become a leading cause of cancer deaths in recent years. Accurate segmentation of HCC lesions is critical for tumor load assessment, surgery planning, and postoperative examination. As the appearance of HCC lesions varies greatly across patients, traditional manual segmentation is a very tedious and time-consuming process, the accuracy of which is also difficult to ensure. Therefore, a fully automated and reliable HCC segmentation system is in high demand. In this work, we present a novel hybrid neural network based on multi-task learning and ensemble learning techniques for accurate HCC segmentation of hematoxylin and eosin (H&E)-stained whole slide images (WSIs). First, three task-specific branches are integrated to enlarge the feature space, based on which the network is able to learn more general features and thus reduce the risk of overfitting. Second, an ensemble learning scheme is leveraged to perform feature aggregation, in which selective kernel modules (SKMs) and spatial and channel-wise squeeze-and-excitation modules (scSEMs) are adopted for capturing the features from different spaces and scales. Our proposed method achieves state-of-the-art performance on three publicly available datasets, with segmentation accuracies of 0.797, 0.923, and 0.765 in the PAIP, CRAG, and UHCMC&CWRU datasets, respectively, which demonstrates its effectiveness in addressing the HCC segmentation problem. To the best of our knowledge, this is also the first work on the pixel-wise HCC segmentation of H&E-stained WSIs.
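The scSEM named here is a published module (concurrent spatial and channel squeeze-and-excitation); a compact PyTorch rendering of that standard design, not of this paper's exact configuration:

```python
import torch
import torch.nn as nn

class SCSE(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation: the channel
    branch reweights channels, the spatial branch reweights pixels, and
    the two recalibrated maps are summed."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

y = SCSE(64)(torch.randn(2, 64, 32, 32))
```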
Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Carcinoma, Hepatocellular/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Staining and Labeling
ABSTRACT
This paper addresses the problem of liver cancer segmentation in Whole Slide Images (WSIs). We propose a multi-scale image processing method based on an automatic end-to-end deep neural network algorithm for segmenting cancerous areas. A seven-level Gaussian pyramid representation of the histopathological image is built to provide texture information at different scales. In this work, several neural architectures were compared using the original image level for the training procedure. The proposed method applies U-Net to seven levels of various resolutions (pyramidal subsampling); the predictions at the different levels are combined through a voting mechanism, and the final segmentation result is generated at the original image level. Partial color normalization and a weighted overlapping method were applied in preprocessing and prediction, respectively. The results show the effectiveness of the proposed multi-scale approach, which achieved better scores than state-of-the-art methods.
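A hedged sketch of the pyramid-and-vote mechanism using scikit-image; the `predict` callable stands in for the per-level U-Net, and the majority threshold is an assumption, since the abstract does not specify the voting rule:

```python
import numpy as np
from skimage.transform import pyramid_gaussian, resize

def multiscale_vote(rgb_image, predict, levels=7):
    """Segment each level of a Gaussian pyramid and majority-vote the
    per-level binary masks at the original resolution."""
    votes = np.zeros(rgb_image.shape[:2], dtype=np.int32)
    for level in pyramid_gaussian(rgb_image, max_layer=levels - 1,
                                  downscale=2, channel_axis=-1):
        mask = predict(level)  # binary mask at this level's resolution
        votes += resize(mask.astype(float), rgb_image.shape[:2], order=0) > 0.5
    return (votes > levels // 2).astype(np.uint8)  # e.g. >=4 of 7 levels
```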