ABSTRACT
BACKGROUND: The Ki67 Index has been extensively studied as a prognostic biomarker in breast cancer. However, its clinical adoption is largely hampered by the lack of a standardized method for assessing Ki67, which limits inter-laboratory reproducibility. The computation of the Ki67 Index must be standardized before it can be used effectively in clinical practice. METHOD: In this study, we develop a systematic approach towards standardization of the Ki67 Index. We first create the ground truth, consisting of tumor-positive and tumor-negative nuclei, by registering adjacent breast tissue sections stained with Ki67 and H&E. The registration is followed by segmentation of positive and negative nuclei within tumor regions of the Ki67 images. The true Ki67 Index is then approximated with a linear model of the ratio of the area of positive nuclei to the total area of tumor nuclei. RESULTS: When tested on 75 images of Ki67-stained breast cancer biopsies, the proposed method yielded an average root mean square error of 3.34. In comparison, an expert pathologist achieved an average root mean square error of 9.98, and an existing automated approach produced an average root mean square error of 5.64. CONCLUSIONS: We show that it is possible to approximate the true Ki67 Index accurately without detecting individual nuclei, and we statistically demonstrate the weaknesses of commonly adopted approaches that use tumor and non-tumor regions together while compensating for the latter with higher-order approximations.
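As an illustration of the area-based approximation described above, the sketch below fits a linear model mapping the positive-to-total area ratio to the Ki67 Index and scores it with root mean square error. All data values are hypothetical, and the closed-form least-squares fit is a stand-in for whatever fitting procedure the study actually used.

```python
import math

# Hypothetical training data: area ratio (positive nuclei area / total
# tumor nuclei area) paired with a "true" Ki67 Index. Illustrative only;
# these are NOT values from the study.
area_ratios = [0.10, 0.25, 0.40, 0.55, 0.70]
true_ki67 = [12.0, 27.0, 43.0, 56.0, 72.0]

def fit_linear(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

a, b = fit_linear(area_ratios, true_ki67)

def predict_ki67(area_positive, area_negative):
    """Approximate the Ki67 Index from total stained areas alone,
    without detecting individual nuclei."""
    ratio = area_positive / (area_positive + area_negative)
    return a * ratio + b

def rmse(pred, truth):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

preds = [a * x + b for x in area_ratios]
print(round(rmse(preds, true_ki67), 2))
```

The point of the linear model is that the area ratio alone carries enough signal to recover the index, sidestepping the error-prone step of counting individual nuclei.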
Subjects
Biomarkers, Tumor/genetics, Breast Neoplasms/genetics, Ki-67 Antigen/genetics, Prognosis, Biopsy, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Cell Proliferation/genetics, Female, Humans, Image Processing, Computer-Assisted
ABSTRACT
The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. Such areas are typically identified by visual inspection, which leads to a subjective evaluation with high intra- and inter-observer variability. This process is also tedious and time-consuming. The aim of this study is to develop a deep-learning-based software tool, called DeepFocus, which can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open-source library that exploits data flow graphs for efficient numerical computation. DeepFocus was trained using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus achieved an average accuracy of 93.2% (± 9.6%), a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, hence improving the overall image quality for pathologists and image analysis algorithms.
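DeepFocus itself is a trained convolutional network and is not reproduced here. As a minimal illustration of the underlying task of patch-level blur detection, the sketch below scores patches with a classical variance-of-Laplacian focus measure and a hypothetical threshold; this is a deliberately simple stand-in, not the DeepFocus model.

```python
def laplacian_variance(patch):
    """Classical sharpness score: variance of a 4-neighbour Laplacian
    over the patch interior. Out-of-focus patches score low."""
    h, w = len(patch), len(patch[0])
    responses = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (patch[i - 1][j] + patch[i + 1][j] + patch[i][j - 1]
                   + patch[i][j + 1] - 4 * patch[i][j])
            responses.append(lap)
    m = sum(responses) / len(responses)
    return sum((r - m) ** 2 for r in responses) / len(responses)

def is_blurry(patch, threshold=1.0):
    # Threshold is hypothetical; in practice it would be calibrated.
    return laplacian_variance(patch) < threshold

# A sharp checkerboard patch vs. a flat (maximally blurred) patch.
sharp = [[(i + j) % 2 * 255 for j in range(8)] for i in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(is_blurry(sharp), is_blurry(flat))  # → False True
```

A learned model like DeepFocus replaces this hand-crafted statistic with features trained on slides scanned at multiple focal planes, which is what lets it generalize across stains and tissue types.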
Subjects
Deep Learning, Neural Networks, Computer, Algorithms, Area Under Curve, Image Processing, Computer-Assisted, ROC Curve
ABSTRACT
In pathology, immunohistochemical (IHC) staining of tissue sections is regularly used to diagnose and grade malignant tumors. Typically, IHC stain interpretation is rendered by a trained pathologist using a manual method that consists of counting each positively and negatively stained cell under a microscope. This manual enumeration suffers from poor reproducibility even in the hands of expert pathologists. To facilitate this process, we propose a novel method for creating artificial datasets with known ground truth, which allows us to analyze recall, precision, accuracy, and intra- and inter-observer variability in a systematic manner, enabling comparisons between different computer analysis approaches. Our method employs a conditional Generative Adversarial Network that uses a database of Ki67-stained tissues of breast cancer patients to generate synthetic digital slides. Our experiments show that the synthetic images are indistinguishable from real images. Six readers (three pathologists and three image analysts) attempted to differentiate 15 real from 15 synthetic images, and the probability that the average reader would correctly classify an image as synthetic or real more than 50% of the time was only 44.7%.
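As a rough illustration of the reader-study statistic (not the authors' actual analysis), a binomial model gives the probability that a reader labels more than half of the 30 images correctly. Notably, even a reader guessing at chance exceeds 50% correct about 43% of the time, so the reported 44.7% is close to what pure guessing would produce.

```python
from math import comb

def prob_more_than_half_correct(p, n):
    """P(X > n/2) for X ~ Binomial(n, p): the chance a reader labels
    more than half of n images correctly when each call is right
    with probability p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# 30 images total (15 real + 15 synthetic), reader guessing at chance.
print(round(prob_more_than_half_correct(0.5, 30), 3))  # → 0.428
```

A reported figure near this chance-level baseline is exactly what one expects when the synthetic images are indistinguishable from real ones.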
Subjects
Antigens, Neoplasm/analysis, Image Processing, Computer-Assisted/methods, Ki-67 Antigen/analysis, Neural Networks, Computer, Phantoms, Imaging, Female, Humans, Immunohistochemistry, Observer Variation
ABSTRACT
Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used to evaluate computer algorithms for the enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real whole slide images (WSIs), and 2) systematic placement of the extracted nuclei and clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that approximates the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei.
The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
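Lin's concordance correlation coefficient used in the evaluation above can be computed as in the sketch below; the reader and truth values here are hypothetical, purely to exercise the formula, and are not data from the study.

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    sets of readings, penalizing both poor correlation and systematic
    bias (shift or scale) between them."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical positive-to-total ratios: true phantom values vs. one
# reader's estimates (illustrative only).
truth = [0.10, 0.20, 0.35, 0.50, 0.75]
reader = [0.12, 0.18, 0.40, 0.47, 0.72]
print(round(lins_ccc(truth, reader), 3))
```

Unlike Pearson correlation, the CCC drops below 1 for a reader who is perfectly correlated with the truth but consistently over- or under-estimates, which is why it suits this agreement task.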
Subjects
Cell Nucleus/pathology, Image Processing, Computer-Assisted/methods, Immunohistochemistry/methods, Algorithms, Humans, Phantoms, Imaging, Reproducibility of Results
ABSTRACT
INTRODUCTION: With the growing popularity of telemedicine and tele-diagnostics, clinical validation of new devices is essential. This study sought to investigate whether high-definition digital still images of the eardrum provide sufficient information to make a correct diagnosis, as compared with the gold-standard view provided by clinical microscopy. METHODS: Twelve fellowship-trained ear physicians (neurotologists) reviewed the same set of 210 digital otoscope eardrum images. Participants diagnosed each image as normal or, if abnormal, selected from seven types of ear pathology. The diagnostic percentage correct for each pathology was compared with a gold standard of diagnosis by clinical microscopy with adjunct audiometry and/or tympanometry. Participants also rated their degree of confidence in each diagnosis. RESULTS: Overall correctness of diagnosis for ear pathologies ranged from 48.6% to 100%, depending on the type of pathology. Neurotologists were 72% correct in identifying eardrums as normal. Reviewers' confidence in diagnosis varied substantially among types of pathology, as well as among participants. DISCUSSION: High-definition digital still images of eardrums provided sufficient information for neurotologists to make correct diagnoses for some pathologies. However, some conditions, such as middle ear effusion, were more difficult to diagnose from a still image alone. Reviewers' levels of confidence did not generally correlate with diagnostic ability.
Subjects
Ear Diseases/diagnosis, Microscopy/methods, Otoscopy/methods, Tympanic Membrane/pathology, Acoustic Impedance Tests/methods, Ear Canal/pathology, Female, Humans, Neurotology/instrumentation, Otitis Media with Effusion/diagnosis, Otolaryngology/instrumentation, Telemedicine
ABSTRACT
Pathology Image Informatics Platform (PIIP) is an NCI/NIH-sponsored project intended for managing, annotating, sharing, and quantitatively analyzing digital pathology imaging data. It expands on an existing, freely available pathology image viewer, Sedeen. The goal of this project is to develop and embed several commonly used image analysis applications into the Sedeen viewer to create a freely available resource for the digital pathology and cancer research communities. Thus far, new plugins have been developed and incorporated into the platform for out-of-focus detection, region-of-interest transformation, and IHC slide analysis. Our biomarker quantification and nuclear segmentation algorithms, written in MATLAB, have also been integrated into the viewer. This article describes the viewing software and the mechanism for extending its functionality with plugins, brief descriptions of which are provided as examples to guide users who want to use this platform. PIIP project materials, including a video describing its usage and applications, and links to the Sedeen Viewer, plugins, and user manuals, are freely available through the project web page: http://pathiip.org. Cancer Res; 77(21); e83-86. ©2017 AACR.