Results 1 - 3 of 3
1.
Lab Invest; 97(12): 1508-1515, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28805805

ABSTRACT

Pathologists have taken on increasing responsibility for quantitating immunohistochemistry (IHC) biomarkers, with the expectation of high between-reader reproducibility because results drive clinical decision-making, especially the selection of patient therapy. Digital imaging-based quantitation of IHC clinical slides offers a potential aid for improvement; however, its clinical adoption has been limited, potentially due to the conventional field-of-view annotation approach. In this study, we implemented a novel, solely morphology-based whole tumor section annotation strategy to maximize between-reader concordance of image analysis quantitation. We first compare the field-of-view image analysis annotation approach to digital and manual read modalities across multiple clinical studies (~120 cases per study) and biomarkers (ER, PR, HER2, Ki-67, and p53 IHC), and then compare a subset of the same cases (~40 cases each from the ER, PR, HER2, and Ki-67 studies) using the whole tumor section annotation approach to understand the incremental value of each modality. For each biomarker, between-reader concordance with field-of-view image analysis was similar to that of the conventional scoring modalities: ER field-of-view image analysis: 95.3% (95% CI 92.0-98.2%) vs digital read: 92.0% (87.8-95.8%) vs manual read: 94.9% (91.4-97.8%); PR field-of-view image analysis: 94.1% (90.3-97.2%) vs digital read: 94.0% (90.2-97.1%) vs manual read: 94.4% (90.9-97.2%); Ki-67 field-of-view image analysis: 86.8% (82.1-91.4%) vs digital read: 76.6% (70.9-82.2%) vs manual read: 85.6% (80.4-90.4%); p53 field-of-view image analysis: 81.7% (76.4-86.8%) vs digital read: 80.6% (75.0-86.0%) vs manual read: 78.8% (72.2-83.3%); and HER2 field-of-view image analysis: 93.8% (90.0-97.2%) vs digital read: 91.0% (86.6-94.9%) vs manual read: 87.2% (82.1-91.9%). Analysis of the subset of the same cases using the whole tumor section image analysis approach showed significant improvement in between-pathologist concordance over both field-of-view image analysis and manual read (HER2 100% (97-100%), P=0.013 vs field-of-view image analysis and P=0.013 vs manual read; Ki-67 100% (96.9-100%), P=0.040 and 0.012; ER 98.3% (94.1-99.5%), P=0.232 and 0.181; and PR 96.6% (91.5-98.7%), P=0.012 and 0.257). Overall, whole tumor section image analysis significantly improves between-pathologist reproducibility and is the optimal approach for clinical-based image analysis algorithms.
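
The concordance figures above are pairwise percent agreement between readers with 95% confidence intervals. As a minimal sketch of how such a figure can be computed (the Wilson score interval is assumed here, since the abstract does not state which CI method was used, and the example scores are hypothetical):

```python
import math

def concordance_with_ci(reader_a, reader_b, z=1.96):
    """Percent agreement between two readers' categorical scores,
    with a Wilson score 95% CI (the CI method is an assumption)."""
    assert len(reader_a) == len(reader_b)
    n = len(reader_a)
    p = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, (center - half, center + half)

# Hypothetical example: two readers agree on 114 of 120 cases
scores_a = ["pos"] * 60 + ["neg"] * 60
scores_b = ["pos"] * 57 + ["neg"] * 3 + ["neg"] * 57 + ["pos"] * 3
p, (lo, hi) = concordance_with_ci(scores_a, scores_b)
print(f"concordance {p:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```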


Subject(s)
Biomarkers, Tumor/analysis; Breast Neoplasms/chemistry; Breast Neoplasms/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Immunohistochemistry/methods; Biomarkers, Tumor/chemistry; Female; Humans; Ki-67 Antigen/analysis; Ki-67 Antigen/chemistry; Tumor Suppressor Protein p53/analysis; Tumor Suppressor Protein p53/chemistry
2.
ArXiv; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38351940

ABSTRACT

Together with the molecular knowledge of genes and proteins, biological images promise to significantly enhance the scientific understanding of complex cellular systems and to advance predictive and personalized therapeutic products for human health. For this potential to be realized, quality-assured image data must be shared among labs at a global scale to be compared, pooled, and reanalyzed, thus unleashing untold potential beyond the original purpose for which the data were generated. There are two broad sets of requirements to enable image data sharing in the life sciences. One set is articulated in the companion White Paper entitled "Enabling Global Image Data Sharing in the Life Sciences," which is published in parallel and addresses the need to build the cyberinfrastructure for sharing the digital array data (arXiv:2401.13023 [q-bio.OT], https://doi.org/10.48550/arXiv.2401.13023). In this White Paper, we detail the second set of requirements, which involves collecting, managing, presenting, and propagating the contextual information essential to assess the quality, understand the content, interpret the scientific implications, and reuse image data in the context of the experimental details. We start by providing an overview of the main lessons learned to date through international community activities, which have recently made considerable progress toward generating community standard practices for imaging Quality Control (QC) and metadata. We then provide a clear set of recommendations for amplifying this work. The driving goal is to address remaining challenges and democratize access to common practices and tools for a spectrum of biomedical researchers, regardless of their expertise, access to resources, and geographical location.
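
As an illustration only, the kind of contextual and quality-control information discussed above can be captured in a simple machine-readable record that travels with the pixel data; all field names below are hypothetical and are not drawn from the White Paper or from any specific community standard:

```python
import json

# Hypothetical, minimal record of contextual and QC information that could
# accompany a shared image dataset; field names are illustrative only.
image_record = {
    "dataset_id": "example-0001",
    "acquisition": {
        "instrument": "confocal microscope (example)",
        "objective_na": 1.4,
        "pixel_size_um": 0.065,
        "channels": ["DAPI", "GFP"],
    },
    "quality_control": {
        "illumination_flatness_checked": True,
        "saturated_pixel_fraction": 0.002,
    },
    "provenance": {
        "lab": "example lab",
        "protocol_doi": None,  # link to the experimental protocol, if available
    },
}

print(json.dumps(image_record, indent=2))
```

Real community standards are considerably richer; the point is only that quality, content, and provenance metadata are recorded alongside the image data so they can be reused and reinterpreted.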

3.
J Pathol Inform; 2: S3, 2011.
Article in English | MEDLINE | ID: mdl-22811959

ABSTRACT

A computer-assisted system for histological prostate cancer diagnosis can assist pathologists in two stages: (i) locating cancer regions in a large digitized tissue biopsy, and (ii) assigning Gleason grades to the regions detected in stage (i). Most previous studies on this topic have primarily addressed the second stage by classifying preselected tissue regions. In this paper, we address the first stage by presenting a cancer detection approach for the whole slide tissue image. We propose a novel method to extract a cytological feature, namely the presence of cancer nuclei (nuclei with prominent nucleoli) in the tissue, and apply this feature to detect cancer regions. Additionally, conventional image texture features, which have been widely used in the literature, are also considered. A performance comparison among the proposed cytological-textural feature combination method, the texture-based method, and the cytological feature-based method demonstrates the robustness of the extracted cytological feature. At a false positive rate of 6%, the proposed method achieves a sensitivity of 78% on a dataset comprising six training images (each approximately 4,000×7,000 pixels) and 11 whole-slide test images (each approximately 5,000×23,000 pixels). All images are at 20× magnification.
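
The reported operating point (78% sensitivity at a 6% false positive rate) corresponds to reading a single point off the detector's ROC curve. A minimal sketch under that interpretation (the helper function and the synthetic scores below are illustrative, not taken from the paper):

```python
import numpy as np
from sklearn.metrics import roc_curve

def sensitivity_at_fpr(y_true, y_score, target_fpr=0.06):
    """Sensitivity (TPR) at the operating point whose false positive
    rate is the largest value not exceeding target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = fpr <= target_fpr
    return float(tpr[ok].max()) if ok.any() else 0.0

# Synthetic region-level labels and detection scores (illustration only)
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(200), np.zeros(800)])
y_score = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 800)])
print(f"sensitivity at 6% FPR: {sensitivity_at_fpr(y_true, y_score):.2f}")
```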
