Results 1 - 20 of 25
1.
Am J Surg Pathol ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809272

ABSTRACT

The detection of lymph node metastases is essential for breast cancer staging, although it is a tedious and time-consuming task where the sensitivity of pathologists is suboptimal. Artificial intelligence (AI) can help pathologists detect lymph node metastases, which could help alleviate workload issues. We studied how pathologists' performance varied when aided by AI. An AI algorithm was trained using more than 32 000 breast sentinel lymph node whole slide images (WSIs) matched with their corresponding pathology reports from more than 8000 patients. The algorithm highlighted areas suspicious of harboring metastasis. Three pathologists were asked to review a dataset comprising 167 breast sentinel lymph node WSIs, of which 69 harbored cancer metastases of different sizes, enriched for challenging cases. Ninety-eight slides were benign. The pathologists read the dataset twice, both digitally, with and without AI assistance, randomized for slide and reading orders to reduce bias, separated by a 3-week washout period. Their slide-level diagnosis was recorded, and they were timed during their reads. The average reading time per slide was 129 seconds during the unassisted phase versus 58 seconds during the AI-assisted phase, resulting in an overall efficiency gain of 55% (P<0.001). These efficiency gains applied to both benign and malignant WSIs. Two of the 3 reading pathologists experienced significant sensitivity improvements, from 74.5% to 93.5% (P≤0.006). This study highlights that AI can help pathologists shorten their reading times by more than half and also improve their metastasis detection rate.
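The reported 55% figure follows directly from the per-slide reading times; a minimal sketch (the 129 s and 58 s values come from the abstract, the function name is ours):

```python
def efficiency_gain(t_unassisted: float, t_assisted: float) -> float:
    """Fractional reduction in average reading time per slide."""
    return (t_unassisted - t_assisted) / t_unassisted

# 129 s unassisted vs 58 s AI-assisted, as reported
print(f"{efficiency_gain(129, 58):.0%}")  # 55%
```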

2.
Lancet Digit Health ; 6(2): e114-e125, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38135556

ABSTRACT

BACKGROUND: The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose ([18F]FDG)-PET-CT. To aid imaging specialists in dealing with high scan volumes, we aimed to train a deep learning artificial intelligence algorithm to classify [18F]FDG-PET-CT scans of patients with lymphoma with or without hypermetabolic tumour sites. METHODS: In this retrospective analysis we collected 16 583 [18F]FDG-PET-CTs of 5072 patients with lymphoma who had undergone PET-CT before or after treatment at the Memorial Sloan Kettering Cancer Center, New York, NY, USA. Using maximum intensity projection (MIP), three-dimensional (3D) PET, and 3D CT data, our ResNet34-based deep learning model (Lymphoma Artificial Reader System [LARS]) for [18F]FDG-PET-CT binary classification (Deauville 1-3 vs 4-5), was trained on 80% of the dataset, and tested on 20% of this dataset. For external testing, 1000 [18F]FDG-PET-CTs were obtained from a second centre (Medical University of Vienna, Vienna, Austria). Seven model variants were evaluated, including MIP-based LARS-avg (optimised for accuracy) and LARS-max (optimised for sensitivity), and 3D PET-CT-based LARS-ptct. Following expert curation, areas under the curve (AUCs), accuracies, sensitivities, and specificities were calculated. FINDINGS: In the internal test cohort (3325 PET-CTs, 1012 patients), LARS-avg achieved an AUC of 0·949 (95% CI 0·942-0·956), accuracy of 0·890 (0·879-0·901), sensitivity of 0·868 (0·851-0·885), and specificity of 0·913 (0·899-0·925); LARS-max achieved an AUC of 0·949 (0·942-0·956), accuracy of 0·868 (0·858-0·879), sensitivity of 0·909 (0·896-0·924), and specificity of 0·826 (0·808-0·843); and LARS-ptct achieved an AUC of 0·939 (0·930-0·948), accuracy of 0·875 (0·864-0·887), sensitivity of 0·836 (0·817-0·855), and specificity of 0·915 (0·901-0·927).
In the external test cohort (1000 PET-CTs, 503 patients), LARS-avg achieved an AUC of 0·953 (0·938-0·966), accuracy of 0·907 (0·888-0·925), sensitivity of 0·874 (0·843-0·904), and specificity of 0·949 (0·921-0·960); LARS-max achieved an AUC of 0·952 (0·937-0·965), accuracy of 0·898 (0·878-0·916), sensitivity of 0·899 (0·871-0·926), and specificity of 0·897 (0·871-0·922); and LARS-ptct achieved an AUC of 0·932 (0·915-0·948), accuracy of 0·870 (0·850-0·891), sensitivity of 0·827 (0·793-0·863), and specificity of 0·913 (0·889-0·937). INTERPRETATION: Deep learning accurately distinguishes between [18F]FDG-PET-CT scans of lymphoma patients with and without hypermetabolic tumour sites. Deep learning might therefore be potentially useful to rule out the presence of metabolically active disease in such patients, or serve as a second reader or decision support tool. FUNDING: National Institutes of Health-National Cancer Institute Cancer Center Support Grant.


Subjects
Deep Learning , Lymphoma , United States , Humans , Positron Emission Tomography Computed Tomography/methods , Fluorodeoxyglucose F18 , Retrospective Studies , Artificial Intelligence , Radiopharmaceuticals , Lymphoma/diagnostic imaging
3.
J Pathol Inform ; 14: 100160, 2023.
Article in English | MEDLINE | ID: mdl-36536772

ABSTRACT

Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns to predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation from pathologists is generally a bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time. Instead of annotating all pixels from cancer and non-cancer regions on giga-pixel whole slide images, an iterative process of annotating mislabeled regions from a segmentation model and training/fine-tuning the model with the additional annotation reduces annotation time. In particular, starting from a pretrained segmentation model reduces the time further than annotating from scratch. Starting from a pretrained breast cancer segmentation model, we trained an accurate ovarian cancer segmentation model with 3.5 hours of manual annotation, achieving an intersection-over-union of 0.74, recall of 0.86, and precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we attempted to train an additional classification deep learning model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
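The segmentation metrics quoted above (intersection-over-union, recall, precision) can be computed from binary masks roughly as follows; this is a generic sketch, not the authors' code:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise intersection-over-union, recall, and precision
    for a pair of binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # correctly predicted cancer pixels
    fp = np.logical_and(pred, ~truth).sum()  # over-segmented pixels
    fn = np.logical_and(~pred, truth).sum()  # missed cancer pixels
    return tp / (tp + fp + fn), tp / (tp + fn), tp / (tp + fp)

pred = np.array([[1, 1], [0, 1]])
truth = np.array([[1, 0], [0, 1]])
iou, recall, precision = segmentation_metrics(pred, truth)
# tp=2, fp=1, fn=0 -> iou=2/3, recall=1.0, precision=2/3
```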

4.
Arch Pathol Lab Med ; 147(10): 1178-1185, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36538386

ABSTRACT

CONTEXT: Prostate cancer diagnosis rests on accurate assessment of tissue by a pathologist. The application of artificial intelligence (AI) to digitized whole slide images (WSIs) can aid pathologists in cancer diagnosis, but robust, diverse evidence in a simulated clinical setting is lacking. OBJECTIVE: To compare the diagnostic accuracy of pathologists reading WSIs of prostatic biopsy specimens with and without AI assistance. DESIGN: Eighteen pathologists, 2 of whom were genitourinary subspecialists, evaluated 610 prostate needle core biopsy WSIs prepared at 218 institutions, with the option for deferral. Two evaluations were performed sequentially for each WSI: initially without assistance, and immediately thereafter aided by Paige Prostate (PaPr), a deep learning-based system that provides a WSI-level binary classification of suspicious for cancer or benign and pinpoints the location that has the greatest probability of harboring cancer on suspicious WSIs. Pathologists' changes in sensitivity and specificity between the assisted and unassisted modalities were assessed, together with the impact of PaPr output on the assisted reads. RESULTS: Using PaPr, pathologists improved their sensitivity and specificity across all histologic grades and tumor sizes. Accuracy gains on both benign and cancerous WSIs could be attributed to PaPr, which correctly classified 100% of the WSIs showing corrected diagnoses in the PaPr-assisted phase. CONCLUSIONS: This study demonstrates the effectiveness and safety of an AI tool for pathologists in simulated diagnostic practice, bridging the gap between computational pathology research and its clinical application, and resulted in the first US Food and Drug Administration authorization of an AI system in pathology.


Subjects
Artificial Intelligence , Prostatic Neoplasms , Male , Humans , Prostate/pathology , Image Interpretation, Computer-Assisted/methods , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/pathology , Biopsy, Needle
5.
Am J Pathol ; 193(3): 341-349, 2023 03.
Article in English | MEDLINE | ID: mdl-36563747

ABSTRACT

Osteosarcoma is the most common primary bone cancer, whose standard treatment includes pre-operative chemotherapy followed by resection. Chemotherapy response is used for prognosis and management of patients. Necrosis is routinely assessed after chemotherapy from histology slides on resection specimens, where necrosis ratio is defined as the ratio of necrotic tumor/overall tumor. Patients with necrosis ratio ≥90% are known to have a better outcome. Manual microscopic review of necrosis ratio from multiple glass slides is semiquantitative and can have intraobserver and interobserver variability. In this study, an objective and reproducible deep learning-based approach was proposed to estimate necrosis ratio with outcome prediction from scanned hematoxylin and eosin whole slide images (WSIs). To conduct the study, 103 osteosarcoma cases with 3134 WSIs were collected. Deep Multi-Magnification Network was trained to segment multiple tissue subtypes, including viable tumor and necrotic tumor at a pixel level and to calculate case-level necrosis ratio from multiple WSIs. Necrosis ratio estimated by the segmentation model highly correlates with necrosis ratio from pathology reports manually assessed by experts. Furthermore, patients were successfully stratified to predict overall survival with P = 2.4 × 10⁻⁶ and progression-free survival with P = 0.016. This study indicates that deep learning can support pathologists as an objective tool to analyze osteosarcoma from histology for assessing treatment response and predicting patient outcome.
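As a rough illustration of the case-level computation described above (the pixel counts are hypothetical; the authors aggregate segmented pixels across all WSIs of a case):

```python
def necrosis_ratio(necrotic_px: int, viable_px: int) -> float:
    """Case-level necrosis ratio: necrotic tumor / overall tumor."""
    return necrotic_px / (necrotic_px + viable_px)

# hypothetical pixel counts summed over all WSIs of one case
ratio = necrosis_ratio(necrotic_px=9_300_000, viable_px=700_000)
good_responder = ratio >= 0.90   # the >=90% cutoff cited above
# ratio = 0.93 -> good responder
```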


Subjects
Bone Neoplasms , Deep Learning , Osteosarcoma , Humans , Bone Neoplasms/drug therapy , Bone Neoplasms/pathology , Prognosis , Necrosis/pathology , Osteosarcoma/drug therapy , Osteosarcoma/pathology
6.
J Invest Dermatol ; 142(1): 97-103, 2022 01.
Article in English | MEDLINE | ID: mdl-34265329

ABSTRACT

Basal cell carcinoma (BCC) is the most common skin cancer, with over 2 million cases diagnosed annually in the United States. Conventionally, BCC is diagnosed by naked eye examination and dermoscopy. Suspicious lesions are either removed or biopsied for histopathological confirmation, thus lowering the specificity of noninvasive BCC diagnosis. Recently, reflectance confocal microscopy, a noninvasive diagnostic technique that can image skin lesions at cellular-level resolution, has been shown to improve specificity in BCC diagnosis and to reduce the number needed to biopsy by a factor of 2 to 3. In this study, we developed and evaluated a deep learning-based artificial intelligence model to automatically detect BCC in reflectance confocal microscopy images. The proposed model achieved an area under the receiver operating characteristic curve of 89.7% (stack level) and 88.3% (lesion level), a performance on par with that of reflectance confocal microscopy experts. Furthermore, the model achieved an area under the curve of 86.1% on a held-out test set from international collaborators, demonstrating the reproducibility and generalizability of the proposed automated diagnostic approach. These results provide a clear indication that the clinical deployment of decision support systems for the detection of BCC in reflectance confocal microscopy images has the potential to optimize the evaluation and diagnosis of patients with skin cancer.


Subjects
Carcinoma, Basal Cell/diagnosis , Deep Learning/standards , Skin Neoplasms/diagnosis , Adult , Aged , Aged, 80 and over , Artificial Intelligence , Automation , Biopsy , Dermoscopy/methods , Female , Humans , Male , Microscopy, Confocal , Middle Aged , Models, Biological , Physical Examination , Reproducibility of Results
7.
J Am Med Inform Assoc ; 28(9): 1874-1884, 2021 08 13.
Article in English | MEDLINE | ID: mdl-34260720

ABSTRACT

OBJECTIVE: Broad adoption of digital pathology (DP) is still lacking, and examples for DP connecting diagnostic, research, and educational use cases are missing. We blueprint a holistic DP solution at a large academic medical center ubiquitously integrated into clinical workflows; research applications including molecular, genetic, and tissue databases; and educational processes. MATERIALS AND METHODS: We built a vendor-agnostic, integrated viewer for reviewing, annotating, sharing, and quality assurance of digital slides in a clinical or research context. It is the first homegrown viewer to receive New York State provisional approval, in 2020, for primary diagnosis and remote sign-out during the COVID-19 (coronavirus disease 2019) pandemic. We further introduce an interconnected Honest Broker for BioInformatics Technology (HoBBIT) to systematically compile and share large-scale DP research datasets including anonymized images, redacted pathology reports, and clinical data of patients with consent. RESULTS: The solution has been operationally used over 3 years by 926 pathologists and researchers evaluating 288 903 digital slides. A total of 51% of these were reviewed within 1 month after scanning. Seamless integration of the viewer into 4 hospital systems clearly increases the adoption of DP. HoBBIT directly impacts the translation of knowledge in pathology into effective new health measures, including artificial intelligence-driven detection models for prostate cancer, basal cell carcinoma, and breast cancer metastases, developed and validated on thousands of cases. CONCLUSIONS: We highlight major challenges and lessons learned when going digital to provide orientation for other pathologists. Building interconnected solutions will not only increase adoption of DP, but also facilitate next-generation computational pathology at scale for enhanced cancer research.


Subjects
COVID-19 , Medical Informatics/trends , Neoplasms , Pathology, Clinical , Academic Medical Centers , Artificial Intelligence , COVID-19/diagnosis , Humans , Male , Neoplasms/diagnosis , Pandemics , Pathology, Clinical/trends
8.
J Pathol Inform ; 12: 9, 2021.
Article in English | MEDLINE | ID: mdl-34012713

ABSTRACT

BACKGROUND: The development of artificial intelligence (AI) in pathology frequently relies on digitally annotated whole slide images (WSI). The creation of these annotations - manually drawn by pathologists in digital slide viewers - is time-consuming and expensive. At the same time, pathologists routinely annotate glass slides with a pen to outline cancerous regions, for example, for molecular assessment of the tissue. These pen annotations are currently considered artifacts and excluded from computational modeling. METHODS: We propose a novel method to segment and fill hand-drawn pen annotations and convert them into a digital format to make them accessible for computational models. Our method is implemented in Python as an open source, publicly available software tool. RESULTS: Our method is able to extract pen annotations from WSI and save them as annotation masks. On a data set of 319 WSI with pen markers, we validate our algorithm segmenting the annotations with an overall Dice metric of 0.942, precision of 0.955, and recall of 0.943. Processing all images takes 15 min, in contrast to 5 h of manual digital annotation time. Further, the approach is robust against text annotations. CONCLUSIONS: We envision that our method can take advantage of already pen-annotated slides in scenarios in which the annotations would be helpful for training computational models. We conclude that, considering the large archives of many pathology departments that are currently being digitized, our method will help to collect large numbers of training samples from those data.
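The Dice metric used above to validate the extracted masks is a standard overlap measure; a minimal sketch on toy binary masks (not the authors' implementation):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

a = np.array([1, 1, 0, 1])
b = np.array([1, 0, 0, 1])
# overlap of 2 pixels, mask sizes 3 and 2 -> 2*2 / (3+2) = 0.8
score = dice(a, b)
```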

9.
Mod Pathol ; 34(8): 1487-1494, 2021 08.
Article in English | MEDLINE | ID: mdl-33903728

ABSTRACT

The surgical margin status of breast lumpectomy specimens for invasive carcinoma and ductal carcinoma in situ (DCIS) guides clinical decisions, as positive margins are associated with higher rates of local recurrence. The "cavity shave" method of margin assessment has the benefits of allowing the surgeon to orient shaved margins intraoperatively and the pathologist to assess one inked margin per specimen. We studied whether a deep convolutional neural network, a deep multi-magnification network (DMMN), could accurately segment carcinoma from benign tissue in whole slide images (WSIs) of shave margin slides, and therefore serve as a potential screening tool to improve the efficiency of microscopic evaluation of these specimens. Applying the pretrained DMMN model, or the initial model, to a validation set of 408 WSIs (348 benign, 60 with carcinoma) achieved an area under the curve (AUC) of 0.941. After additional manual annotations and fine-tuning of the model, the updated model achieved an AUC of 0.968 with sensitivity set at 100% and corresponding specificity of 78%. We applied the initial model and updated model to a testing set of 427 WSIs (374 benign, 53 with carcinoma) which showed AUC values of 0.900 and 0.927, respectively. Using the pixel classification threshold selected from the validation set, the model achieved a sensitivity of 92% and specificity of 78%. The four false-negative classifications resulted from two small foci of DCIS (1 mm, 0.5 mm) and two foci of well-differentiated invasive carcinoma (3 mm, 1.5 mm). This proof-of-principle study demonstrates that a DMMN machine learning model can segment invasive carcinoma and DCIS in surgical margin specimens with high accuracy and has the potential to be used as a screening tool for pathologic assessment of these specimens.
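The operating point described above (sensitivity fixed at 100%, specificity read off at that threshold) can be located on a validation set roughly as follows; the scores and labels are toy values and the function name is ours:

```python
import numpy as np

def screening_operating_point(scores, labels):
    """Pick the highest threshold that keeps sensitivity at 100%
    (i.e., the lowest-scoring true cancer), and report the
    specificity achieved at that threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thr = scores[labels == 1].min()            # lowest-scoring cancer slide
    negatives = scores[labels == 0]
    specificity = float((negatives < thr).mean())
    return thr, specificity

scores = [0.05, 0.10, 0.40, 0.80, 0.95, 0.30]  # model suspicion scores
labels = [0,    0,    1,    1,    1,    0]     # 1 = carcinoma present
thr, spec = screening_operating_point(scores, labels)
# thr = 0.40; all three benign scores fall below it -> specificity = 1.0
```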


Subjects
Breast Neoplasms/pathology , Carcinoma, Ductal, Breast/pathology , Deep Learning , Image Interpretation, Computer-Assisted/methods , Margins of Excision , Carcinoma, Intraductal, Noninfiltrating/pathology , Female , Humans , Mastectomy, Segmental , Neoplasm, Residual/diagnosis
10.
J Pathol ; 254(2): 147-158, 2021 06.
Article in English | MEDLINE | ID: mdl-33904171

ABSTRACT

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data. The algorithm was employed to classify slides into two categories: benign (no further review needed) or suspicious (additional histologic and/or immunohistochemical analysis required). We assessed the sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of a local pathologist, two central pathologists, and Paige Prostate in the diagnosis of 600 transrectal ultrasound-guided prostate needle core biopsy regions ('part-specimens') from 100 consecutive patients, and ascertained the impact of Paige Prostate on diagnostic accuracy and efficiency. Paige Prostate displayed high sensitivity (0.99; CI 0.96-1.0), NPV (1.0; CI 0.98-1.0), and specificity (0.93; CI 0.90-0.96) at the part-specimen level. At the patient level, Paige Prostate displayed optimal sensitivity (1.0; CI 0.93-1.0) and NPV (1.0; CI 0.91-1.0) at a specificity of 0.78 (CI 0.64-0.89). The 27 part-specimens considered by Paige Prostate as suspicious, whose final diagnosis was benign, were found to comprise atrophy (n = 14), atrophy and apical prostate tissue (n = 1), apical/benign prostate tissue (n = 9), adenosis (n = 2), and post-atrophic hyperplasia (n = 1). Paige Prostate resulted in the identification of four additional patients whose diagnoses were upgraded from benign/suspicious to malignant. Additionally, this AI-based test provided an estimated 65.5% reduction of the diagnostic time for the material analyzed.
Given its optimal sensitivity and NPV, Paige Prostate has the potential to be employed for the automated identification of patients whose histologic slides could forgo full histopathologic review. In addition to providing incremental improvements in diagnostic accuracy and efficiency, this AI-based system identified patients whose prostate cancers were not initially diagnosed by three experienced histopathologists. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
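For intuition on how a high NPV arises from the reported sensitivity and specificity, predictive values can be derived from prevalence via Bayes' rule; the 30% prevalence below is a hypothetical figure for illustration, not from the study:

```python
def ppv_npv(sens: float, spec: float, prevalence: float):
    """Positive/negative predictive values from sensitivity, specificity,
    and disease prevalence (Bayes' rule on the 2x2 table)."""
    tp = sens * prevalence              # true positive fraction
    fn = (1 - sens) * prevalence        # false negative fraction
    tn = spec * (1 - prevalence)        # true negative fraction
    fp = (1 - spec) * (1 - prevalence)  # false positive fraction
    return tp / (tp + fp), tn / (tn + fn)

# part-specimen-level sensitivity/specificity as reported; prevalence assumed
ppv, npv = ppv_npv(sens=0.99, spec=0.93, prevalence=0.30)
# a near-perfect sensitivity drives the NPV above 0.99
```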


Subjects
Artificial Intelligence , Prostatic Neoplasms/diagnosis , Aged , Aged, 80 and over , Biopsy , Biopsy, Large-Core Needle , Humans , Machine Learning , Male , Middle Aged , Pathologists , Prostate/pathology , Prostatic Neoplasms/pathology
11.
Comput Med Imaging Graph ; 88: 101866, 2021 03.
Article in English | MEDLINE | ID: mdl-33485058

ABSTRACT

Pathologic analysis of surgical excision specimens for breast carcinoma is important to evaluate the completeness of surgical excision and has implications for future treatment. This analysis is performed manually by pathologists reviewing histologic slides prepared from formalin-fixed tissue. In this paper, we present Deep Multi-Magnification Network trained by partial annotation for automated multi-class tissue segmentation by a set of patches from multiple magnifications in digitized whole slide images. Our proposed architecture with multi-encoder, multi-decoder, and multi-concatenation outperforms other single and multi-magnification-based architectures by achieving the highest mean intersection-over-union, and can be used to facilitate pathologists' assessments of breast cancer.
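One way to picture the multi-magnification input described above: concentric crops around the same tissue center, downsampled to a common patch size. This is a simplified sketch of the idea, not the published architecture's input pipeline:

```python
import numpy as np

def multi_magnification_patches(region: np.ndarray, size: int = 4):
    """Emulate views of the same tissue center at decreasing magnification:
    crop concentric windows, then subsample each to a common patch size."""
    h, w = region.shape[:2]
    patches = []
    for factor in (1, 2, 4):                 # e.g. 20x, 10x, 5x fields of view
        win = size * factor
        y0, x0 = (h - win) // 2, (w - win) // 2
        crop = region[y0:y0 + win, x0:x0 + win]
        patches.append(crop[::factor, ::factor])   # naive subsampling
    return patches

region = np.arange(16 * 16).reshape(16, 16)
patches = multi_magnification_patches(region)
# each patch is size x size, covering progressively wider context
```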


Subjects
Breast Neoplasms , Neural Networks, Computer , Breast , Breast Neoplasms/diagnostic imaging , Female , Humans
13.
Mod Pathol ; 33(10): 2058-2066, 2020 10.
Article in English | MEDLINE | ID: mdl-32393768

ABSTRACT

Prostate cancer (PrCa) is the second most common cancer among men in the United States. The gold standard for detecting PrCa is the examination of prostate needle core biopsies. Diagnosis can be challenging, especially for small, well-differentiated cancers. Recently, machine learning algorithms have been developed for detecting PrCa in whole slide images (WSIs) with high test accuracy. However, the impact of these artificial intelligence systems on pathologic diagnosis is not known. To address this, we investigated how pathologists interact with Paige Prostate Alpha, a state-of-the-art PrCa detection system, in WSIs of prostate needle core biopsies stained with hematoxylin and eosin. Three AP board-certified pathologists assessed 304 anonymized prostate needle core biopsy WSIs in 8 hours. The pathologists classified each WSI as benign or cancerous. After ~4 weeks, pathologists were tasked with re-reviewing each WSI with the aid of Paige Prostate Alpha. For each WSI, Paige Prostate Alpha was used to perform cancer detection and, for WSIs where cancer was detected, the system marked the area where cancer was detected with the highest probability. The original diagnosis for each slide was rendered by genitourinary pathologists and incorporated any ancillary studies requested during the original diagnostic assessment. The pathologists and Paige Prostate Alpha were then measured against this ground truth. Without Paige Prostate Alpha, pathologists had an average sensitivity of 74% and an average specificity of 97%. With Paige Prostate Alpha, the average sensitivity for pathologists significantly increased to 90% with no statistically significant change in specificity. With Paige Prostate Alpha, pathologists more often correctly classified smaller, lower grade tumors, and spent less time analyzing each WSI.
Future studies will investigate if similar benefit is yielded when such a system is used to detect other forms of cancer in a setting that more closely emulates real practice.


Subjects
Deep Learning , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Pathology, Clinical/methods , Prostatic Neoplasms/diagnosis , Biopsy, Large-Core Needle , Humans , Male
14.
Mod Pathol ; 33(11): 2169-2185, 2020 11.
Article in English | MEDLINE | ID: mdl-32467650

ABSTRACT

Pathologists are responsible for rapidly providing a diagnosis on critical health issues. Challenging cases benefit from additional opinions of pathologist colleagues. In addition to on-site colleagues, there is an active worldwide community of pathologists on social media for complementary opinions. Such access to pathologists worldwide has the capacity to improve diagnostic accuracy and generate broader consensus on next steps in patient care. From Twitter we curate 13,626 images from 6,351 tweets from 25 pathologists from 13 countries. We supplement the Twitter data with 113,161 images from 1,074,484 PubMed articles. We develop machine learning and deep learning models to (i) accurately identify histopathology stains, (ii) discriminate between tissues, and (iii) differentiate disease states. The area under the receiver operating characteristic curve (AUROC) is 0.805-0.996 for these tasks. We repurpose the disease classifier to search for similar disease states given an image and clinical covariates. We report precision@k=1 of 0.7618 ± 0.0018 versus 0.397 ± 0.004 by chance (mean ± stdev). The classifiers find that texture and tissue are important clinico-visual features of disease. Deep features trained only on natural images (e.g., cats and dogs) substantially improved search performance, while pathology-specific deep features and cell nuclei features further improved search to a lesser extent. We implement a social media bot (@pathobot on Twitter) to use the trained classifiers to aid pathologists in obtaining real-time feedback on challenging cases. If a social media post containing pathology text and images mentions the bot, the bot generates quantitative predictions of disease state (normal/artifact/infection/injury/nontumor, preneoplastic/benign/low-grade-malignant-potential, or malignant) and lists similar cases across social media and PubMed.
Our project has become a globally distributed expert system that facilitates pathological diagnosis and brings expertise to underserved regions or hospitals with less expertise in a particular disease. This is the first pan-tissue pan-disease (i.e., from infection to malignancy) method for prediction and search on social media, and the first pathology study prospectively tested in public on social media. We will share data through http://pathobotology.org. We expect our project to cultivate a more connected world of physicians and improve patient care worldwide.
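precision@k, the retrieval metric reported above, simply measures how many of the top-k search results share the query's disease state; a generic sketch, not the authors' evaluation code:

```python
def precision_at_k(retrieved_labels, query_label, k=1):
    """Fraction of the top-k retrieved cases sharing the query's label."""
    top_k = retrieved_labels[:k]
    return sum(lbl == query_label for lbl in top_k) / k

# toy ranked search results for a query whose true state is "malignant"
ranked = ["malignant", "benign", "malignant"]
p_at_1 = precision_at_k(ranked, "malignant", k=1)   # 1.0
p_at_3 = precision_at_k(ranked, "malignant", k=3)   # 2/3
```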


Subjects
Deep Learning , Pathology , Social Media , Algorithms , Humans , Pathologists
15.
Nat Med ; 25(8): 1301-1309, 2019 08.
Article in English | MEDLINE | ID: mdl-31308507

ABSTRACT

The development of decision support systems for pathology and their deployment in clinical practice have been hindered by the need for large manually annotated datasets. To overcome this problem, we present a multiple instance learning-based deep learning system that uses only the reported diagnoses as labels for training, thereby avoiding expensive and time-consuming pixel-wise manual annotations. We evaluated this framework at scale on a dataset of 44,732 whole slide images from 15,187 patients without any form of data curation. Tests on prostate cancer, basal cell carcinoma and breast cancer metastases to axillary lymph nodes resulted in areas under the curve above 0.98 for all cancer types. Its clinical application would allow pathologists to exclude 65-75% of slides while retaining 100% sensitivity. Our results show that this system has the ability to train accurate classification models at unprecedented scale, laying the foundation for the deployment of computational decision support systems in clinical practice.
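A common decision rule in multiple instance learning of this kind is max-pooling over tile-level probabilities: a slide is flagged if any tile looks suspicious. This sketch illustrates the rule in the abstract's spirit, not the paper's exact aggregation:

```python
import numpy as np

def slide_prediction(tile_probs: np.ndarray, threshold: float = 0.5) -> int:
    """Max-pooling MIL rule: a slide is called suspicious if its single
    most suspicious tile crosses the threshold."""
    return int(tile_probs.max() >= threshold)

benign_tiles = np.array([0.01, 0.12, 0.30])  # no tile crosses 0.5
cancer_tiles = np.array([0.02, 0.97, 0.10])  # one tile is highly suspicious
# benign slide -> 0, cancer slide -> 1
```

Because the slide label is the only supervision, this rule lets the reported diagnosis train tile-level models without any pixel-wise annotation.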


Subjects
Breast Neoplasms/pathology , Carcinoma, Basal Cell/pathology , Deep Learning , Prostatic Neoplasms/pathology , Decision Support Systems, Clinical , Female , Humans , Male , Neoplasm Grading
16.
Acad Radiol ; 25(8): 1038-1045, 2018 08.
Article in English | MEDLINE | ID: mdl-29428210

ABSTRACT

RATIONALE AND OBJECTIVES: The objective of this study was to develop and validate a predictive magnetic resonance imaging (MRI) activity score for ileocolonic Crohn disease activity based on both subjective and semiautomatic MRI features. MATERIALS AND METHODS: An MRI activity score (the "virtual gastrointestinal tract [VIGOR]" score) was developed from 27 validated magnetic resonance enterography datasets, including subjective radiologist observation of mural T2 signal and semiautomatic measurements of bowel wall thickness, excess volume, and dynamic contrast enhancement (initial slope of increase). A second subjective score was developed based on only radiologist observations. For validation, two observers applied both scores and three existing scores to a prospective dataset of 106 patients (59 women, median age 33) with known Crohn disease, using the endoscopic Crohn's Disease Endoscopic Index of Severity (CDEIS) as a reference standard. RESULTS: The VIGOR score (17.1 × initial slope of increase + 0.2 × excess volume + 2.3 × mural T2) and other activity scores all had comparable correlation to the CDEIS scores (observer 1: r = 0.58 and 0.59, and observer 2: r = 0.34-0.40 and 0.43-0.51, respectively). The VIGOR score, however, improved interobserver agreement compared to the other activity scores (intraclass correlation coefficient = 0.81 vs 0.44-0.59). A diagnostic accuracy of 80%-81% was seen for the VIGOR score, similar to the other scores. CONCLUSIONS: The VIGOR score achieves comparable accuracy to conventional MRI activity scores, but with significantly improved reproducibility, favoring its use for disease monitoring and therapy evaluation.
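The published VIGOR formula is a simple weighted sum and can be applied directly; the input values below are hypothetical, chosen only to show the arithmetic:

```python
def vigor_score(initial_slope: float, excess_volume: float, mural_t2: float) -> float:
    """VIGOR = 17.1 x initial slope of increase
             +  0.2 x excess volume
             +  2.3 x mural T2 score (per the published formula)."""
    return 17.1 * initial_slope + 0.2 * excess_volume + 2.3 * mural_t2

# hypothetical inputs: slope 0.5, excess volume 10, mural T2 grade 2
score = vigor_score(0.5, 10.0, 2.0)   # 8.55 + 2.0 + 4.6 = 15.15
```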


Subjects
Colon/diagnostic imaging , Crohn Disease/diagnostic imaging , Ileum/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Adult , Female , Humans , Male , Observer Variation , Prospective Studies , Reproducibility of Results , Severity of Illness Index
17.
Sci Rep ; 6: 24146, 2016 Apr 07.
Article in English | MEDLINE | ID: mdl-27052161

ABSTRACT

Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility.


Subjects
DNA Copy Number Variations , Genetic Heterogeneity , Genetic Predisposition to Disease/genetics , In Situ Hybridization, Fluorescence/methods , Mutation , Neoplasms/genetics , Aged , Computational Biology/methods , Endometrial Neoplasms/genetics , Endometrial Neoplasms/metabolism , Endometrial Neoplasms/pathology , Female , Humans , Immunohistochemistry , Kaplan-Meier Estimate , Male , Middle Aged , Neoplasm Staging , Neoplasms/metabolism , Neoplasms/pathology , Ovarian Neoplasms/genetics , Ovarian Neoplasms/metabolism , Ovarian Neoplasms/pathology , PTEN Phosphohydrolase/genetics , PTEN Phosphohydrolase/metabolism , Prostatic Neoplasms/genetics , Prostatic Neoplasms/metabolism , Prostatic Neoplasms/pathology , Receptor, ErbB-2/genetics , Receptor, ErbB-2/metabolism , Stomach Neoplasms/genetics , Stomach Neoplasms/metabolism , Stomach Neoplasms/pathology
18.
Comput Med Imaging Graph ; 46 Pt 2: 197-208, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26362074

ABSTRACT

Computerized evaluation of histological preparations of prostate tissues involves identification of tissue components such as stroma (ST), benign/normal epithelium (BN) and prostate cancer (PCa). Image classification approaches have been developed to identify and classify glandular regions in digital images of prostate tissues; however, their success has been limited by difficulties in cellular segmentation and tissue heterogeneity. We hypothesized that utilizing image pixels to generate intensity histograms of hematoxylin (H) and eosin (E) stains deconvoluted from H&E images numerically captures the architectural difference between glands and stroma. In addition, we postulated that joint histograms of local binary patterns and local variance (LBPxVAR) can be used as sensitive textural features to differentiate benign/normal tissue from cancer. Here we utilized a machine learning approach comprising a support vector machine (SVM) followed by a random forest (RF) classifier to digitally stratify prostate tissue into ST, BN and PCa areas. Two pathologists manually annotated 210 images of low- and high-grade tumors from slides that were selected from 20 radical prostatectomies and digitized at high resolution. The 210 images were split into training (n=19) and test (n=191) sets. Local intensity histograms of H and E were used to train a SVM classifier to separate ST from epithelium (BN+PCa). The performance of SVM prediction was evaluated by measuring the accuracy of delineating epithelial areas. The Jaccard J=59.5 ± 14.6 and Rand Ri=62.0 ± 7.5 indices indicated significantly better prediction when compared to a reference method (Chen et al., Clinical Proteomics 2013, 10:18) based on the averaged values from the test set.
To distinguish BN from PCa we trained a RF classifier with LBPxVAR and local intensity histograms and obtained separate performance values for BN and PCa: JBN=35.2 ± 24.9, OBN=49.6 ± 32, JPCa=49.5 ± 18.5, OPCa=72.7 ± 14.8 and Ri=60.6 ± 7.6 in the test set. Our pixel-based classification does not rely on the detection of lumens, which is prone to error and limited in high-grade cancers, and it has the potential to aid in clinical studies in which the quantification of tumor content is necessary to prognosticate the course of the disease. The image dataset with ground-truth annotation is available for public use to stimulate further research in this area.
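The LBPxVAR feature described above is a joint histogram of local binary pattern codes and locally measured variance. A minimal NumPy sketch of such a joint histogram, with illustrative bin counts and variance quantisation (the paper's exact binning is not specified in the abstract):

```python
import numpy as np


def lbp_var_joint_histogram(lbp_codes, local_variance,
                            n_lbp_bins=10, n_var_bins=4):
    """Normalised joint histogram of LBP codes and quantised local variance.

    lbp_codes: integer array of per-pixel LBP codes in [0, n_lbp_bins).
    local_variance: float array, same shape, of per-pixel local variance.
    Variance is quantised into n_var_bins levels by its own quartiles so the
    joint distribution is contrast-sensitive but scale-robust.
    """
    var_bins = np.digitize(local_variance,
                           np.quantile(local_variance, [0.25, 0.5, 0.75]))
    hist, _, _ = np.histogram2d(lbp_codes.ravel(), var_bins.ravel(),
                                bins=[n_lbp_bins, n_var_bins],
                                range=[[0, n_lbp_bins], [0, n_var_bins]])
    return (hist / hist.sum()).ravel()  # flat feature vector for a classifier
```

The resulting fixed-length vector can be fed directly to a random-forest classifier, matching the BN-versus-PCa stage of the pipeline described above.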


Subjects
Epithelial Cells/pathology , Microscopy/methods , Pattern Recognition, Automated/methods , Prostatic Neoplasms/pathology , Prostatic Neoplasms/surgery , Stromal Cells/pathology , Algorithms , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Male , Prostatectomy/methods , Reproducibility of Results , Sensitivity and Specificity , Treatment Outcome
19.
J Pathol Inform ; 4(Suppl): S2, 2013.
Article in English | MEDLINE | ID: mdl-23766938

ABSTRACT

BACKGROUND: Histological tissue analysis often involves manual cell counting and staining estimation of cancerous cells. These assessments are extremely time consuming, highly subjective and prone to error, since immunohistochemically stained cancer tissues usually show high variability in cell sizes, morphological structures and staining quality. To facilitate reproducible analysis in clinical practice as well as for cancer research, objective computer-assisted staining estimation is highly desirable. METHODS: We employ machine learning algorithms such as randomized decision trees and support vector machines for nucleus detection and classification. Superpixels, obtained by segmenting the tissue image, are classified into foreground and background, and thereafter into malignant and benign, learning from the user's feedback. As a fast alternative without nucleus classification, the existing color deconvolution method is incorporated. RESULTS: Our program TMARKER connects already available workflows for computational pathology and immunohistochemical tissue rating with modern active learning algorithms from machine learning and computer vision. On a test dataset of human renal clear cell carcinoma and prostate carcinoma, the performance of the used algorithms is equivalent to that of two independent pathologists for nucleus detection and classification. CONCLUSION: We present a novel, free and operating-system-independent software package for computational cell counting and staining estimation, supporting IHC-stained tissue analysis in the clinic and for research. Proprietary toolboxes for similar tasks are expensive, bound to specific commercial hardware (e.g. a microscope) and mostly not quantitatively validated in terms of performance and reproducibility. We are confident that the presented software package will prove valuable for the scientific community, and we anticipate a broader application domain owing to the possibility of interactively learning models for new image types.
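The interactive loop of classifying superpixels while "learning from the user's feedback" is a form of uncertainty-sampling active learning. A sketch of one such round, using a random forest as the randomized-decision-tree learner; this mirrors the general idea, not TMARKER's actual implementation, and all names here are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def active_learning_round(features, labels, labelled_idx, n_queries=5):
    """Fit a random forest on the superpixels the user has labelled so far,
    then return the indices of the most uncertain unlabelled superpixels
    (predicted probability closest to 0.5) to ask the user about next."""
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(features[labelled_idx], labels[labelled_idx])

    unlabelled = np.setdiff1d(np.arange(len(features)), labelled_idx)
    proba = clf.predict_proba(features[unlabelled])[:, 1]
    uncertainty = np.abs(proba - 0.5)           # small = model is unsure
    query = unlabelled[np.argsort(uncertainty)[:n_queries]]
    return clf, query
```

Each round the user labels only the queried superpixels, so the model improves quickly with little annotation effort, which is the practical appeal of the approach in a clinical setting.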

20.
EMBO Mol Med ; 4(8): 808-24, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22678923

ABSTRACT

Type II endometrial carcinomas are a highly aggressive group of tumour subtypes that are frequently associated with inactivation of the TP53 tumour suppressor gene. We show that mice with endometrium-specific deletion of Trp53 initially exhibited histological changes that are identical to known precursor lesions of type II endometrial carcinomas in humans and later developed carcinomas representing all type II subtypes. The mTORC1 signalling pathway was frequently activated in these precursor lesions and tumours, suggesting a genetic cooperation between this pathway and Trp53 deficiency in tumour initiation. Consistent with this idea, analyses of 521 human endometrial carcinomas identified frequent mTORC1 pathway activation in type I as well as type II endometrial carcinoma subtypes. mTORC1 pathway activation and p53 expression or mutation status each independently predicted poor patient survival. We suggest that molecular alterations in p53 and the mTORC1 pathway play different roles in the initiation of the different endometrial cancer subtypes, but that combined p53 inactivation and mTORC1 pathway activation are unifying pathogenic features among histologically diverse subtypes of late stage aggressive endometrial tumours.


Subjects
Endometrial Neoplasms/pathology , Tumor Suppressor Protein p53/metabolism , Animals , Disease Models, Animal , Endometrial Neoplasms/mortality , Female , Gene Deletion , Humans , Mechanistic Target of Rapamycin Complex 1 , Mice , Multiprotein Complexes , Proteins/metabolism , Survival Analysis , TOR Serine-Threonine Kinases , Tumor Suppressor Protein p53/genetics