Results 1 - 11 of 11
1.
J Imaging Inform Med; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980627

ABSTRACT

Accurate image classification and retrieval are of importance for clinical diagnosis and treatment decision-making. The recent contrastive language-image pre-training (CLIP) model has shown remarkable proficiency in understanding natural images. Drawing inspiration from CLIP, the pathology-dedicated PathCLIP has been developed, utilizing over 200,000 image-text pairs in training. While the performance of PathCLIP is impressive, its robustness under a wide range of image corruptions remains unknown. Therefore, we conduct an extensive evaluation to analyze the performance of PathCLIP on various corrupted images from the osteosarcoma and WSSS4LUAD datasets. In our experiments, we introduce eleven corruption types, including brightness, contrast, defocus, resolution, saturation, hue, markup, deformation, incompleteness, rotation, and flipping, at various settings. Through experiments, we find that PathCLIP surpasses OpenAI-CLIP and the pathology language-image pre-training (PLIP) model in zero-shot classification. It is relatively robust to image corruptions involving contrast, saturation, incompleteness, and orientation. Among the eleven corruptions, hue, markup, deformation, defocus, and resolution can cause relatively severe performance fluctuations in PathCLIP. This indicates that ensuring image quality is crucial before conducting a clinical test. Additionally, we assess the robustness of PathCLIP in the task of image-to-image retrieval, revealing that PathCLIP performs less effectively than PLIP on osteosarcoma but better on WSSS4LUAD under diverse corruptions. Overall, PathCLIP presents impressive zero-shot classification and retrieval performance for pathology images, but appropriate care needs to be taken when using it.
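The corruption protocol described above, perturbing images at controlled severities before re-running zero-shot classification, can be sketched for a few of the eleven corruption types. This is an illustrative approximation, not the paper's exact corruption parameters; the image is modeled as a 2-D list of grayscale floats in [0, 1]:

```python
def corrupt(img, kind, severity=1.0):
    """Apply one corruption type to a grayscale image given as a 2-D
    list of floats in [0, 1]. Severity semantics vary per corruption."""
    if kind == "brightness":          # multiply intensities, clip to [0, 1]
        return [[min(1.0, p * severity) for p in row] for row in img]
    if kind == "contrast":            # scale deviations around the global mean
        mean = sum(map(sum, img)) / (len(img) * len(img[0]))
        return [[min(1.0, max(0.0, (p - mean) * severity + mean))
                 for p in row] for row in img]
    if kind == "flipping":            # horizontal mirror
        return [row[::-1] for row in img]
    if kind == "rotation":            # 90-degree clockwise rotation
        return [list(r) for r in zip(*img[::-1])]
    raise ValueError(f"unknown corruption: {kind}")
```

Sweeping `severity` over a range and re-classifying each corrupted copy reproduces the style of robustness evaluation the study describes.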

2.
Article in English | MEDLINE | ID: mdl-38848032

ABSTRACT

PURPOSE: In pathology images, different stains highlight different glomerular structures, so a supervised deep learning-based glomerular instance segmentation model trained on individual stains performs poorly on other stains. However, it is difficult to obtain a training set with multiple stains because labeling pathology images is very time-consuming and tedious. Therefore, in this paper, we propose an unsupervised stain augmentation-based method for segmentation of glomerular instances. METHODS: In this study, we realized conversion between different staining methods such as PAS, MT, and PASM by contrastive unpaired translation (CUT), thus improving the staining diversity of the training set. Moreover, we replaced the backbone of Mask R-CNN with Swin Transformer to further improve the efficiency of feature extraction and thus achieve better performance in the instance segmentation task. RESULTS: To validate the method presented in this paper, we constructed a dataset from 216 WSIs of the three stains. After conducting in-depth experiments, we verified that the instance segmentation method based on stain augmentation outperforms existing methods across all metrics for PAS, PASM, and MT stains. Furthermore, ablation experiments were performed to further demonstrate the effectiveness of the proposed module. CONCLUSION: This study demonstrates the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks in the pathology image domain to further explore the potential of applying stain augmentation techniques in different domains of pathology image analysis.
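The stain-augmentation idea can be sketched as a dataset-expansion loop. Here `translate` is a hypothetical callable standing in for a trained CUT generator (not the authors' code); the segmentation mask is reused unchanged because stain translation preserves tissue geometry:

```python
import random

STAINS = ["PAS", "PASM", "MT"]

def augment_stains(samples, translate):
    """samples: list of (image, mask, stain) triples. For each sample, add
    a copy translated to a randomly chosen *other* stain; the mask is
    reused unchanged because stain translation preserves tissue geometry.
    `translate(image, target_stain)` stands in for a trained CUT generator."""
    out = list(samples)
    for img, mask, stain in samples:
        target = random.choice([s for s in STAINS if s != stain])
        out.append((translate(img, target), mask, target))
    return out
```

Training an instance segmentation model on the expanded set is what gives the method its cross-stain robustness.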

3.
Mod Pathol; 37(2): 100398, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38043788

ABSTRACT

Immunohistochemistry (IHC) is a well-established and commonly used staining method for clinical diagnosis and biomedical research. In most IHC images, the target protein is conjugated with a specific antibody and stained using diaminobenzidine (DAB), resulting in a brown coloration, whereas hematoxylin serves as a blue counterstain for cell nuclei. The protein expression level is quantified through the H-score, calculated from DAB staining intensity within the target cell region. Traditionally, this process requires evaluation by 2 expert pathologists, which is both time-consuming and subjective. To enhance the efficiency and accuracy of this process, we have developed an automatic algorithm for quantifying the H-score of IHC images. To characterize protein expression in specific cell regions, a deep learning model for region recognition was trained based on hematoxylin staining only, achieving pixel accuracy for each class ranging from 0.92 to 0.99. Within the desired area, the algorithm categorizes DAB intensity of each pixel as negative, weak, moderate, or strong staining and calculates the final H-score based on the percentage of each intensity category. Overall, this algorithm takes an IHC image as input and directly outputs the H-score within a few seconds, significantly enhancing the speed of IHC image analysis. This automated tool provides H-score quantification with precision and consistency comparable to experienced pathologists but at a significantly reduced cost during IHC diagnostic workups. It holds significant potential to advance biomedical research reliant on IHC staining for protein expression quantification.
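The final scoring step follows the conventional H-score formula, weighting the weak, moderate, and strong intensity percentages by 1, 2, and 3 for a 0-300 range. A minimal sketch over intensity counts (the region recognition and per-pixel intensity categorization are assumed to have already run):

```python
def h_score(counts):
    """H-score from DAB intensity-category counts within the target region.
    counts: dict with keys 'negative', 'weak', 'moderate', 'strong'.
    Returns a value in [0, 300]."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    pct = {k: 100.0 * v / total for k, v in counts.items()}
    # Conventional weighting: 1 x %weak + 2 x %moderate + 3 x %strong
    return 1 * pct["weak"] + 2 * pct["moderate"] + 3 * pct["strong"]
```

For example, a region that is entirely strong staining scores 300, while an entirely negative region scores 0.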


Subject(s)
Deep Learning , Humans , Immunohistochemistry , Hematoxylin/metabolism , Algorithms , Cell Nucleus/metabolism
4.
Semin Diagn Pathol; 40(2): 109-119, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36890029

ABSTRACT

Over the past decade, many new cancer treatments have been developed and made available to patients. However, in most cases, these treatments only benefit a specific subgroup of patients, making the selection of treatment for a specific patient an essential but challenging task for oncologists. Although some biomarkers were found to be associated with treatment response, manual assessment is time-consuming and subjective. With the rapid development and expanded implementation of artificial intelligence (AI) in digital pathology, many biomarkers can be quantified automatically from histopathology images. This approach allows for a more efficient and objective assessment of biomarkers, aiding oncologists in formulating personalized treatment plans for cancer patients. This review presents an overview and summary of the recent studies on biomarker quantification and treatment response prediction using hematoxylin-eosin (H&E) stained pathology images. These studies have shown that an AI-based digital pathology approach can be practical and will become increasingly important in improving the selection of cancer treatments for patients.


Subject(s)
Deep Learning , Neoplasms , Humans , Artificial Intelligence , Precision Medicine/methods , Neoplasms/therapy , Neoplasms/pathology
5.
Front Bioinform; 3: 1296667, 2023.
Article in English | MEDLINE | ID: mdl-38323039

ABSTRACT

Introduction: Prostate cancer is a highly heterogeneous disease, presenting varying levels of aggressiveness and response to treatment. Angiogenesis is one of the hallmarks of cancer, providing oxygen and nutrient supply to tumors. Microvessel density has previously been correlated with higher Gleason score and poor prognosis. Manual segmentation of blood vessels (BVs) in microscopy images is challenging, time-consuming, and may be prone to inter-rater variability. In this study, an automated pipeline is presented for BV detection and distribution analysis in multiplexed prostate cancer images. Methods: A deep learning model was trained to segment BVs by combining CD31, CD34, and collagen IV images. In addition, the trained model was used to analyze the size and distribution patterns of BVs in relation to disease progression in a cohort of prostate cancer patients (N = 215). Results: The model was capable of accurately detecting and segmenting BVs, as compared to ground truth annotations provided by two reviewers. The precision (P), recall (R), and Dice similarity coefficient (DSC) were 0.93 (SD 0.04), 0.97 (SD 0.02), and 0.71 (SD 0.07) with respect to reviewer 1, and 0.95 (SD 0.05), 0.94 (SD 0.07), and 0.70 (SD 0.08) with respect to reviewer 2. BV count was significantly associated with 5-year recurrence (adjusted p = 0.0042), while both count and area of blood vessels were significantly associated with Gleason grade (adjusted p = 0.032 and 0.003, respectively). Discussion: The proposed methodology is anticipated to streamline and standardize BV analysis, offering additional insights into the biology of prostate cancer, with broad applicability to other cancers.
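The segmentation metrics reported above can be computed directly from binary masks. A minimal sketch over flattened 0/1 masks (not the study's evaluation code):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def precision_recall(pred, truth):
    """Precision and recall of a predicted mask against a ground-truth mask."""
    tp = sum(p & t for p, t in zip(pred, truth))
    fp = sum(p & (1 - t) for p, t in zip(pred, truth))
    fn = sum((1 - p) & t for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```

In practice the per-image scores would then be averaged over the test set to yield the mean and SD values reported per reviewer.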

6.
Cancers (Basel); 14(5), 2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35267505

ABSTRACT

With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive up-to-date review of the deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, which is one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning methods for various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.

7.
Diagn Pathol; 15(1): 100, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32723384

ABSTRACT

BACKGROUND: Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of the unique colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible way to evaluate such slides is to apply digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel. METHODS: Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE, a deep autoencoder that segments stained objects based on color; (2) U-Net, a convolutional neural network (CNN) trained to segment cells based on color, texture, and shape; and (3) ensemble methods that employ both ColorAE and U-Net, collectively referred to as ColorAE:U-Net. We assessed the performance of our methods using: structural similarity and DICE score to evaluate the segmentation results of ColorAE against traditional color deconvolution; and F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used the prediction results for spatial analysis (nearest neighbor).
RESULTS: We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect 6 different classes of cells with comparable performance; (3) combinations of ColorAE and U-Net into ensemble methods outperform either ColorAE or U-Net alone; and (4) ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME). We developed a suite of scalable deep learning methods to analyze 6 distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also present a use case, wherein we apply the ColorAE:U-Net ensemble method across 3 mIHC WSIs and use the predictions to quantify all stained cell populations and perform nearest neighbor spatial analysis. Thus, we provide proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.
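The abstract does not specify the ensemble rule used to combine ColorAE and U-Net, but the simplest ways to merge two binary segmentation masks, union or intersection voting, can be sketched as:

```python
def ensemble(mask_a, mask_b, mode="union"):
    """Combine two binary masks (flat lists of 0/1). Union voting favors
    sensitivity (a pixel counts if either model fires); intersection
    voting favors precision (both models must agree)."""
    if mode == "union":
        return [a | b for a, b in zip(mask_a, mask_b)]
    if mode == "intersection":
        return [a & b for a, b in zip(mask_a, mask_b)]
    raise ValueError(mode)
```

This is an illustrative stand-in; the paper's actual ensemble may combine the models differently.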


Subject(s)
Biomarkers, Tumor/analysis , Deep Learning , Image Processing, Computer-Assisted/methods , Immunohistochemistry/methods , Carcinoma, Pancreatic Ductal/immunology , Carcinoma, Pancreatic Ductal/pathology , Humans , Pancreatic Neoplasms/immunology , Pancreatic Neoplasms/pathology
8.
Distrib Parallel Databases; 37(2): 251-272, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31217669

ABSTRACT

Recent advances in the systematic analysis of high-resolution whole slide images have increased the efficiency of diagnosis, prognosis, and prediction for cancer and other important diseases. Due to the enormous size and dimensions of whole slide images, the analysis requires extensive computing resources, which are not commonly available. Images have to be tiled for processing due to computer memory limitations, which leads to inaccurate results when objects crossing tile boundaries are ignored. Thus, we propose MaReIA, a generic and highly scalable cloud-based image analysis framework for whole slide images. The framework enables parallelized integration of image analysis steps, such as segmentation and aggregation of micro-structures, in a single pipeline, and the generation of final objects manageable by databases. The core concept relies on the abstraction of objects in whole slide images as different classes of spatial geometries, which in turn can be handled as text-based records in MapReduce. The framework applies an overlapping partitioning scheme to images and parallelizes tiling and image segmentation on a MapReduce architecture. It further provides robust object normalization and graceful handling of boundary objects, with an efficient spatial-indexing-based matching method to generate accurate results. Our benchmark experiments on Amazon EMR show that MaReIA is highly scalable, generic, and extremely cost-effective.
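The overlapping partitioning scheme can be sketched as follows: each tile extends past its neighbor by an overlap margin, so an object crossing a tile boundary appears whole in at least one tile, and duplicates in the overlap zone are later reconciled by spatial matching. This is an illustrative sketch, not the MaReIA implementation:

```python
def tile_bounds(width, height, tile, overlap):
    """Partition an image into overlapping tiles, returned as
    (x0, y0, x1, y1) pixel bounds. Adjacent tiles share an
    `overlap`-pixel margin so boundary-crossing objects appear whole
    in at least one tile."""
    assert 0 <= overlap < tile
    step = tile - overlap              # advance less than a full tile
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in range(0, height, step)
            for x in range(0, width, step)]
```

Each tile can then be segmented independently (e.g., as a MapReduce map task), with a reduce step deduplicating objects that fall in the shared margins.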

9.
Pattern Recognit; 86: 188-200, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30631215

ABSTRACT

We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects nuclei in image patches and encodes them into sparse feature maps that capture both the location and appearance of the nuclei. A primary contribution of our work is the development of an unsupervised detection network that exploits the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully-supervised annotation cost.

10.
Cytometry A; 91(11): 1078-1087, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28976721

ABSTRACT

Neoadjuvant treatment (NAT) of breast cancer (BCa) is an option for patients with locally advanced disease. It has been compared with standard adjuvant therapy with the aim of improving prognosis and surgical outcome. Moreover, the response of the tumor to the therapy provides useful information for patient management. Pathological examination of tissue sections after surgery is the gold standard for estimating residual tumor, and the assessment of cellularity is an important component of tumor burden assessment. In current clinical practice, tumor cellularity is manually estimated by pathologists on hematoxylin and eosin (H&E) stained slides, the quality and reliability of which might be impaired by inter-observer variability, potentially affecting prognostic power assessment in NAT trials. This procedure is also qualitative and time-consuming. In this paper, we describe a method for automatically assessing cellularity. A pipeline to automatically segment nuclei and estimate residual cancer cellularity from within patches and whole slide images (WSIs) of BCa was developed. We compared the performance of our proposed pipeline in estimating residual cancer cellularity with that of two expert pathologists. We found an intraclass correlation coefficient (ICC) of 0.89 (95% CI [0.70, 0.95]) between pathologists, 0.74 (95% CI [0.70, 0.77]) between pathologist #1 and the proposed method, and 0.75 (95% CI [0.71, 0.79]) between pathologist #2 and the proposed method. We also successfully applied our proposed technique on a WSI to locate areas with a high concentration of residual cancer. The main advantage of our approach is that it is fully automatic and can be used to find areas with high cellularity in WSIs. This provides a first step in developing an automatic technique for post-NAT tumor response assessment from pathology slides. © 2017 International Society for Advancement of Cytometry.
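The ICC reported above is commonly computed with a two-way random-effects, absolute-agreement, single-rater model, ICC(2,1); whether that is the exact variant used here is an assumption. A minimal pure-Python sketch (libraries such as pingouin provide tested implementations):

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single rater.
    ratings: list of per-subject score lists, one score per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]                 # per subject
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]  # per rater
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                                  # between subjects
    ms_c = ss_cols / (k - 1)                                  # between raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement between two raters yields 1.0; systematic rater offsets lower the score because the model demands absolute agreement, not just consistency.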


Subject(s)
Breast Neoplasms/pathology , Cell Tracking/methods , Neoplasm Recurrence, Local/pathology , Neoplasm, Residual/pathology , Breast Neoplasms/diagnosis , Breast Neoplasms/drug therapy , Eosine Yellowish-(YS)/pharmacology , Female , Hematoxylin/pharmacology , Humans , Neoadjuvant Therapy , Neoplasm Recurrence, Local/diagnosis , Neoplasm Recurrence, Local/drug therapy , Prognosis , Reproducibility of Results
11.
Int J Comput Biol Drug Des; 9(1-2): 102-119, 2016.
Article in English | MEDLINE | ID: mdl-27034719

ABSTRACT

Three-dimensional (3D) high resolution microscopic images have high potential for improving the understanding of both normal and disease processes where structural changes or spatial relationship of disease features are significant. In this paper, we develop a complete framework applicable to 3D pathology analytical imaging, with an application to whole slide images of sequential liver slices for 3D vessel structure analysis. The analysis workflow consists of image registration, segmentation, vessel cross-section association, interpolation, and volumetric rendering. To identify biologically-meaningful correspondence across adjacent slides, we formulate a similarity function for four association cases. The optimal solution is then obtained by constrained Integer Programming. We quantitatively and qualitatively compare our vessel reconstruction results with human annotations. Validation results indicate a satisfactory concordance as measured both by region-based and distance-based metrics. These results demonstrate a promising 3D vessel analysis framework for whole slide images of liver tissue sections.
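The cross-section association step is solved optimally in the paper by constrained integer programming over a similarity function. As a simplified stand-in, a greedy matcher over a pairwise similarity callable (hypothetical, not the authors' formulation) illustrates the idea of pairing vessel cross-sections across adjacent slides:

```python
def associate(slide_a, slide_b, similarity, threshold=0.5):
    """Greedily pair vessel cross-sections between adjacent slides by
    descending similarity, never reusing a cross-section, and dropping
    pairs below `threshold` (unmatched sections mean a vessel starts,
    ends, or branches). The paper instead solves this association
    optimally with constrained integer programming."""
    pairs = sorted(((similarity(a, b), i, j)
                    for i, a in enumerate(slide_a)
                    for j, b in enumerate(slide_b)), reverse=True)
    used_a, used_b, matches = set(), set(), []
    for score, i, j in pairs:
        if score >= threshold and i not in used_a and j not in used_b:
            matches.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return matches
```

Chaining the matches across all consecutive slide pairs yields the vessel tracks that interpolation and volumetric rendering then turn into 3D structures.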
