Results 1 - 20 of 41
1.
Am J Pathol; 193(3): 341-349, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36563747

ABSTRACT

Osteosarcoma is the most common primary bone cancer, whose standard treatment includes pre-operative chemotherapy followed by resection. Chemotherapy response is used for prognosis and management of patients. Necrosis is routinely assessed after chemotherapy from histology slides on resection specimens, where the necrosis ratio is defined as the ratio of necrotic tumor to overall tumor. Patients with a necrosis ratio ≥90% are known to have a better outcome. Manual microscopic review of necrosis ratio from multiple glass slides is semiquantitative and can have intraobserver and interobserver variability. In this study, an objective and reproducible deep learning-based approach was proposed to estimate necrosis ratio with outcome prediction from scanned hematoxylin and eosin whole slide images (WSIs). To conduct the study, 103 osteosarcoma cases with 3134 WSIs were collected. A Deep Multi-Magnification Network was trained to segment multiple tissue subtypes, including viable tumor and necrotic tumor, at a pixel level and to calculate case-level necrosis ratio from multiple WSIs. Necrosis ratio estimated by the segmentation model highly correlates with necrosis ratio from pathology reports manually assessed by experts. Furthermore, patients were successfully stratified to predict overall survival with P = 2.4 × 10⁻⁶ and progression-free survival with P = 0.016. This study indicates that deep learning can support pathologists as an objective tool to analyze osteosarcoma from histology for assessing treatment response and predicting patient outcome.
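As a rough illustration of the case-level necrosis ratio described above (necrotic tumor pixels divided by all tumor pixels, pooled over every WSI of a case), the sketch below aggregates per-slide counts from segmentation label masks. The label values, mask format, and use of the 90% cutoff are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Assumed label convention for the per-pixel segmentation output (illustrative only).
VIABLE_TUMOR = 1
NECROTIC_TUMOR = 2

def case_necrosis_ratio(masks):
    """Compute a case-level necrosis ratio from per-WSI label masks.

    masks: iterable of 2D integer arrays, one per whole slide image of the case.
    Returns necrotic tumor pixels / (necrotic + viable tumor pixels).
    """
    necrotic = 0
    viable = 0
    for mask in masks:
        necrotic += int(np.sum(mask == NECROTIC_TUMOR))
        viable += int(np.sum(mask == VIABLE_TUMOR))
    total_tumor = necrotic + viable
    if total_tumor == 0:
        return float("nan")  # no tumor segmented in any slide
    return necrotic / total_tumor

# Toy example: two small masks standing in for two WSIs of one case.
wsi_masks = [
    np.array([[0, 1, 2], [2, 2, 1]]),
    np.array([[2, 2, 0], [1, 0, 0]]),
]
ratio = case_necrosis_ratio(wsi_masks)
print(f"necrosis ratio = {ratio:.2f}, good responder = {ratio >= 0.90}")
```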


Subjects
Bone Neoplasms, Deep Learning, Osteosarcoma, Humans, Bone Neoplasms/drug therapy, Bone Neoplasms/pathology, Prognosis, Necrosis/pathology, Osteosarcoma/drug therapy, Osteosarcoma/pathology
2.
J Pathol; 254(2): 147-158, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33904171

ABSTRACT

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data. The algorithm was employed to classify slides into two categories: benign (no further review needed) or suspicious (additional histologic and/or immunohistochemical analysis required). We assessed the sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of a local pathologist, two central pathologists, and Paige Prostate in the diagnosis of 600 transrectal ultrasound-guided prostate needle core biopsy regions ('part-specimens') from 100 consecutive patients, and ascertained the impact of Paige Prostate on diagnostic accuracy and efficiency. Paige Prostate displayed high sensitivity (0.99; CI 0.96-1.0), NPV (1.0; CI 0.98-1.0), and specificity (0.93; CI 0.90-0.96) at the part-specimen level. At the patient level, Paige Prostate displayed optimal sensitivity (1.0; CI 0.93-1.0) and NPV (1.0; CI 0.91-1.0) at a specificity of 0.78 (CI 0.64-0.89). The 27 part-specimens considered suspicious by Paige Prostate whose final diagnosis was benign were found to comprise atrophy (n = 14), atrophy and apical prostate tissue (n = 1), apical/benign prostate tissue (n = 9), adenosis (n = 2), and post-atrophic hyperplasia (n = 1). Paige Prostate resulted in the identification of four additional patients whose diagnoses were upgraded from benign/suspicious to malignant. Additionally, this AI-based test provided an estimated 65.5% reduction of the diagnostic time for the material analyzed. Given its optimal sensitivity and NPV, Paige Prostate has the potential to be employed for the automated identification of patients whose histologic slides could forgo full histopathologic review. In addition to providing incremental improvements in diagnostic accuracy and efficiency, this AI-based system identified patients whose prostate cancers were not initially diagnosed by three experienced histopathologists. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
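The part-specimen-level figures above are standard confusion-matrix quantities. A minimal sketch of how sensitivity, specificity, PPV, and NPV can be computed from per-specimen binary calls follows; the vectors are synthetic placeholders, not the study's data.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV for binary labels (1 = cancer/suspicious)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Synthetic example: 10 part-specimens, ground truth vs. an automated call.
truth = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
calls = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
print(binary_metrics(truth, calls))
```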


Subjects
Artificial Intelligence, Prostatic Neoplasms/diagnosis, Aged, Aged 80 and over, Biopsy, Large-Core Needle Biopsy, Humans, Machine Learning, Male, Middle Aged, Pathologists, Prostate/pathology, Prostatic Neoplasms/pathology
3.
Mod Pathol; 34(8): 1487-1494, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33903728

ABSTRACT

The surgical margin status of breast lumpectomy specimens for invasive carcinoma and ductal carcinoma in situ (DCIS) guides clinical decisions, as positive margins are associated with higher rates of local recurrence. The "cavity shave" method of margin assessment has the benefits of allowing the surgeon to orient shaved margins intraoperatively and the pathologist to assess one inked margin per specimen. We studied whether a deep convolutional neural network, a deep multi-magnification network (DMMN), could accurately segment carcinoma from benign tissue in whole slide images (WSIs) of shave margin slides, and therefore serve as a potential screening tool to improve the efficiency of microscopic evaluation of these specimens. Applying the pretrained DMMN model, or the initial model, to a validation set of 408 WSIs (348 benign, 60 with carcinoma) achieved an area under the curve (AUC) of 0.941. After additional manual annotations and fine-tuning of the model, the updated model achieved an AUC of 0.968 with sensitivity set at 100% and corresponding specificity of 78%. We applied the initial model and updated model to a testing set of 427 WSIs (374 benign, 53 with carcinoma) which showed AUC values of 0.900 and 0.927, respectively. Using the pixel classification threshold selected from the validation set, the model achieved a sensitivity of 92% and specificity of 78%. The four false-negative classifications resulted from two small foci of DCIS (1 mm, 0.5 mm) and two foci of well-differentiated invasive carcinoma (3 mm, 1.5 mm). This proof-of-principle study demonstrates that a DMMN machine learning model can segment invasive carcinoma and DCIS in surgical margin specimens with high accuracy and has the potential to be used as a screening tool for pathologic assessment of these specimens.
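The operating point described above, with sensitivity fixed at 100% on the validation set and the corresponding specificity then carried over to testing, can be obtained by sweeping the slide-level score threshold. A hedged sketch using scikit-learn is shown below; the scores and cohort splits are illustrative only, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_full_sensitivity(y_val, scores_val):
    """Return the highest score threshold that still classifies every
    positive validation slide as positive (sensitivity = 1.0)."""
    fpr, tpr, thresholds = roc_curve(y_val, scores_val)
    ok = tpr >= 1.0                 # operating points with 100% sensitivity
    return thresholds[ok][0]        # highest (least permissive) such threshold

# Illustrative validation scores (not the study's data).
rng = np.random.default_rng(0)
y_val = np.concatenate([np.ones(60), np.zeros(348)])
scores_val = np.concatenate([rng.uniform(0.4, 1.0, 60), rng.uniform(0.0, 0.7, 348)])
thr = threshold_at_full_sensitivity(y_val, scores_val)

# Apply the frozen threshold to an illustrative test cohort.
y_test = np.concatenate([np.ones(53), np.zeros(374)])
scores_test = np.concatenate([rng.uniform(0.3, 1.0, 53), rng.uniform(0.0, 0.7, 374)])
pred_test = scores_test >= thr
sens = pred_test[y_test == 1].mean()
spec = (~pred_test[y_test == 0]).mean()
print(f"threshold={thr:.3f}  test sensitivity={sens:.2f}  test specificity={spec:.2f}")
```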


Subjects
Breast Neoplasms/pathology, Ductal Breast Carcinoma/pathology, Deep Learning, Computer-Assisted Image Interpretation/methods, Margins of Excision, Noninfiltrating Intraductal Carcinoma/pathology, Female, Humans, Segmental Mastectomy, Residual Neoplasm/diagnosis
4.
Mod Pathol; 33(10): 2058-2066, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32393768

ABSTRACT

Prostate cancer (PrCa) is the second most common cancer among men in the United States. The gold standard for detecting PrCa is the examination of prostate needle core biopsies. Diagnosis can be challenging, especially for small, well-differentiated cancers. Recently, machine learning algorithms have been developed for detecting PrCa in whole slide images (WSIs) with high test accuracy. However, the impact of these artificial intelligence systems on pathologic diagnosis is not known. To address this, we investigated how pathologists interact with Paige Prostate Alpha, a state-of-the-art PrCa detection system, in WSIs of prostate needle core biopsies stained with hematoxylin and eosin. Three AP board-certified pathologists assessed 304 anonymized prostate needle core biopsy WSIs in 8 hours. The pathologists classified each WSI as benign or cancerous. After ~4 weeks, pathologists were tasked with re-reviewing each WSI with the aid of Paige Prostate Alpha. For each WSI, Paige Prostate Alpha was used to perform cancer detection and, for WSIs where cancer was detected, the system marked the area where cancer was detected with the highest probability. The original diagnosis for each slide was rendered by genitourinary pathologists and incorporated any ancillary studies requested during the original diagnostic assessment. The pathologists and Paige Prostate Alpha were measured against this ground truth. Without Paige Prostate Alpha, pathologists had an average sensitivity of 74% and an average specificity of 97%. With Paige Prostate Alpha, the average sensitivity for pathologists significantly increased to 90% with no statistically significant change in specificity. With Paige Prostate Alpha, pathologists more often correctly classified smaller, lower grade tumors, and spent less time analyzing each WSI. Future studies will investigate whether similar benefit is yielded when such a system is used to detect other forms of cancer in a setting that more closely emulates real practice.
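The abstract does not state which statistical test was used for the reported sensitivity change. For paired reads of the same slides with and without assistance, McNemar's test on the discordant pairs is one standard option; the sketch below uses statsmodels with synthetic data and is shown only as an illustration, not as the study's method.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired correctness of the same reader on the same cancer-containing WSIs,
# once unassisted and once AI-assisted (synthetic 0/1 vectors, 1 = correct call).
rng = np.random.default_rng(1)
unassisted_correct = rng.random(150) < 0.74
assisted_correct = unassisted_correct | (rng.random(150) < 0.60)  # assistance rescues some misses

# 2x2 table of (unassisted correct?, assisted correct?) counts.
table = np.array([
    [np.sum(unassisted_correct & assisted_correct), np.sum(unassisted_correct & ~assisted_correct)],
    [np.sum(~unassisted_correct & assisted_correct), np.sum(~unassisted_correct & ~assisted_correct)],
])
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"unassisted sensitivity={unassisted_correct.mean():.2f}  "
      f"assisted sensitivity={assisted_correct.mean():.2f}  p={result.pvalue:.4f}")
```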


Subjects
Deep Learning, Computer-Assisted Diagnosis/methods, Computer-Assisted Image Interpretation/methods, Clinical Pathology/methods, Prostatic Neoplasms/diagnosis, Large-Core Needle Biopsy, Humans, Male
5.
Mod Pathol; 33(11): 2169-2185, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32467650

ABSTRACT

Pathologists are responsible for rapidly providing a diagnosis on critical health issues. Challenging cases benefit from additional opinions of pathologist colleagues. In addition to on-site colleagues, there is an active worldwide community of pathologists on social media for complementary opinions. Such access to pathologists worldwide has the capacity to improve diagnostic accuracy and generate broader consensus on next steps in patient care. From Twitter we curate 13,626 images from 6,351 tweets from 25 pathologists from 13 countries. We supplement the Twitter data with 113,161 images from 1,074,484 PubMed articles. We develop machine learning and deep learning models to (i) accurately identify histopathology stains, (ii) discriminate between tissues, and (iii) differentiate disease states. The area under the receiver operating characteristic curve (AUROC) is 0.805-0.996 for these tasks. We repurpose the disease classifier to search for similar disease states given an image and clinical covariates. We report precision@k=1 of 0.7618 ± 0.0018 (chance: 0.397 ± 0.004; mean ± SD). The classifiers find that texture and tissue are important clinico-visual features of disease. Deep features trained only on natural images (e.g., cats and dogs) substantially improved search performance, while pathology-specific deep features and cell nuclei features further improved search to a lesser extent. We implement a social media bot (@pathobot on Twitter) to use the trained classifiers to aid pathologists in obtaining real-time feedback on challenging cases. If a social media post containing pathology text and images mentions the bot, the bot generates quantitative predictions of disease state (normal/artifact/infection/injury/nontumor, preneoplastic/benign/low-grade-malignant-potential, or malignant) and lists similar cases across social media and PubMed. Our project has become a globally distributed expert system that facilitates pathological diagnosis and brings expertise to underserved regions or hospitals with less expertise in a particular disease. This is the first pan-tissue pan-disease (i.e., from infection to malignancy) method for prediction and search on social media, and the first pathology study prospectively tested in public on social media. We will share data through http://pathobotology.org. We expect our project to cultivate a more connected world of physicians and improve patient care worldwide.
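Precision@k for the search task above is the fraction of the k retrieved images that share the query's disease state. A self-contained sketch with made-up feature vectors follows; it is not the authors' pipeline.

```python
import numpy as np

def precision_at_k(query_vec, query_label, gallery_vecs, gallery_labels, k=1):
    """Fraction of the k nearest gallery images whose label matches the query."""
    dists = np.linalg.norm(gallery_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    return np.mean(gallery_labels[nearest] == query_label)

# Synthetic deep-feature vectors for three disease states (0, 1, 2).
rng = np.random.default_rng(42)
gallery_labels = rng.integers(0, 3, size=300)
gallery_vecs = rng.normal(size=(300, 64)) + gallery_labels[:, None]  # crude class separation

query_label = 1
query_vec = rng.normal(size=64) + query_label
print("precision@1 =", precision_at_k(query_vec, query_label, gallery_vecs, gallery_labels, k=1))
print("precision@5 =", precision_at_k(query_vec, query_label, gallery_vecs, gallery_labels, k=5))
```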


Subjects
Deep Learning, Pathology, Social Media, Algorithms, Humans, Pathologists
7.
Lancet Digit Health; 6(2): e114-e125, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38135556

ABSTRACT

BACKGROUND: The rising global cancer burden has led to an increasing demand for imaging tests such as [18F]fluorodeoxyglucose ([18F]FDG)-PET-CT. To aid imaging specialists in dealing with high scan volumes, we aimed to train a deep learning artificial intelligence algorithm to classify [18F]FDG-PET-CT scans of patients with lymphoma with or without hypermetabolic tumour sites. METHODS: In this retrospective analysis we collected 16 583 [18F]FDG-PET-CTs of 5072 patients with lymphoma who had undergone PET-CT before or after treatment at the Memorial Sloan Kettering Cancer Center, New York, NY, USA. Using maximum intensity projection (MIP), three-dimensional (3D) PET, and 3D CT data, our ResNet34-based deep learning model (Lymphoma Artificial Reader System [LARS]) for [18F]FDG-PET-CT binary classification (Deauville 1-3 vs 4-5) was trained on 80% of the dataset and tested on 20% of this dataset. For external testing, 1000 [18F]FDG-PET-CTs were obtained from a second centre (Medical University of Vienna, Vienna, Austria). Seven model variants were evaluated, including MIP-based LARS-avg (optimised for accuracy) and LARS-max (optimised for sensitivity), and 3D PET-CT-based LARS-ptct. Following expert curation, areas under the curve (AUCs), accuracies, sensitivities, and specificities were calculated. FINDINGS: In the internal test cohort (3325 PET-CTs, 1012 patients), LARS-avg achieved an AUC of 0·949 (95% CI 0·942-0·956), accuracy of 0·890 (0·879-0·901), sensitivity of 0·868 (0·851-0·885), and specificity of 0·913 (0·899-0·925); LARS-max achieved an AUC of 0·949 (0·942-0·956), accuracy of 0·868 (0·858-0·879), sensitivity of 0·909 (0·896-0·924), and specificity of 0·826 (0·808-0·843); and LARS-ptct achieved an AUC of 0·939 (0·930-0·948), accuracy of 0·875 (0·864-0·887), sensitivity of 0·836 (0·817-0·855), and specificity of 0·915 (0·901-0·927). In the external test cohort (1000 PET-CTs, 503 patients), LARS-avg achieved an AUC of 0·953 (0·938-0·966), accuracy of 0·907 (0·888-0·925), sensitivity of 0·874 (0·843-0·904), and specificity of 0·949 (0·921-0·960); LARS-max achieved an AUC of 0·952 (0·937-0·965), accuracy of 0·898 (0·878-0·916), sensitivity of 0·899 (0·871-0·926), and specificity of 0·897 (0·871-0·922); and LARS-ptct achieved an AUC of 0·932 (0·915-0·948), accuracy of 0·870 (0·850-0·891), sensitivity of 0·827 (0·793-0·863), and specificity of 0·913 (0·889-0·937). INTERPRETATION: Deep learning accurately distinguishes between [18F]FDG-PET-CT scans of lymphoma patients with and without hypermetabolic tumour sites. Deep learning might therefore be potentially useful to rule out the presence of metabolically active disease in such patients, or serve as a second reader or decision support tool. FUNDING: National Institutes of Health-National Cancer Institute Cancer Center Support Grant.
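A maximum intensity projection (MIP), one of the three input representations named above, collapses the 3D PET volume by keeping the brightest voxel along each ray. A minimal sketch is shown; the array shape and orientation are assumptions.

```python
import numpy as np

def maximum_intensity_projection(volume, axis=1):
    """Collapse a 3D PET volume (z, y, x) into a 2D MIP along the chosen axis."""
    return volume.max(axis=axis)

# Synthetic volume with a bright "lesion" to make the projection visible.
volume = np.random.default_rng(0).normal(1.0, 0.2, size=(128, 96, 96))
volume[60:66, 40:46, 50:56] += 8.0                            # hypermetabolic focus
coronal_mip = maximum_intensity_projection(volume, axis=1)    # (z, x) image
sagittal_mip = maximum_intensity_projection(volume, axis=2)   # (z, y) image
print(coronal_mip.shape, sagittal_mip.shape, float(coronal_mip.max()))
```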


Subjects
Deep Learning, Lymphoma, United States, Humans, Positron Emission Tomography Computed Tomography/methods, Fluorodeoxyglucose F18, Retrospective Studies, Artificial Intelligence, Radiopharmaceuticals, Lymphoma/diagnostic imaging
8.
Am J Surg Pathol; 48(7): 846-854, 2024 Jul 1.
Article in English | MEDLINE | ID: mdl-38809272

ABSTRACT

The detection of lymph node metastases is essential for breast cancer staging, although it is a tedious and time-consuming task where the sensitivity of pathologists is suboptimal. Artificial intelligence (AI) can help pathologists detect lymph node metastases, which could help alleviate workload issues. We studied how pathologists' performance varied when aided by AI. An AI algorithm was trained using more than 32 000 breast sentinel lymph node whole slide images (WSIs) matched with their corresponding pathology reports from more than 8000 patients. The algorithm highlighted areas suspicious of harboring metastasis. Three pathologists were asked to review a dataset comprising 167 breast sentinel lymph node WSIs, of which 69 harbored cancer metastases of different sizes, enriched for challenging cases. Ninety-eight slides were benign. The pathologists read the dataset twice, both digitally, with and without AI assistance, randomized for slide and reading orders to reduce bias, separated by a 3-week washout period. Their slide-level diagnosis was recorded, and they were timed during their reads. The average reading time per slide was 129 seconds during the unassisted phase versus 58 seconds during the AI-assisted phase, resulting in an overall efficiency gain of 55% (P < 0.001). These efficiency gains applied to both benign and malignant WSIs. Two of the 3 reading pathologists experienced significant sensitivity improvements, from 74.5% to 93.5% (P ≤ 0.006). This study highlights that AI can help pathologists shorten their reading times by more than half and also improve their metastasis detection rate.


Subjects
Artificial Intelligence, Breast Neoplasms, Lymphatic Metastasis, Sentinel Lymph Node Biopsy, Humans, Breast Neoplasms/pathology, Breast Neoplasms/diagnosis, Female, Lymphatic Metastasis/diagnosis, Lymphatic Metastasis/pathology, Computer-Assisted Image Interpretation, Pathologists, Reproducibility of Results, Predictive Value of Tests, Observer Variation, Sentinel Lymph Node/pathology, Algorithms, Workflow
9.
PLoS Pathog; 7(1): e1001257, 2011 Jan 13.
Article in English | MEDLINE | ID: mdl-21249178

ABSTRACT

Prions, the agents causing transmissible spongiform encephalopathies, colonize the brain of hosts after oral, parenteral, intralingual, or even transdermal uptake. However, prions are not generally considered to be airborne. Here we report that inbred and crossbred wild-type mice, as well as tga20 transgenic mice overexpressing PrP(C), efficiently develop scrapie upon exposure to aerosolized prions. NSE-PrP transgenic mice, which express PrP(C) selectively in neurons, were also susceptible to airborne prions. Aerogenic infection occurred also in mice lacking B- and T-lymphocytes, NK-cells, follicular dendritic cells or complement components. Brains of diseased mice contained PrP(Sc) and transmitted scrapie when inoculated into further mice. We conclude that aerogenic exposure to prions is very efficacious and can lead to direct invasion of neural pathways without an obligatory replicative phase in lymphoid organs. This previously unappreciated risk for airborne prion transmission may warrant re-thinking on prion biosafety guidelines in research and diagnostic laboratories.


Subjects
Aerosols, Immunocompetence/immunology, Immunocompromised Host/immunology, Prions/pathogenicity, Scrapie/immunology, Animals, Newborn Animals, Brain/immunology, Brain/metabolism, Brain/pathology, Female, Inhalation Exposure, Longevity, Male, Mice, Inbred Mice, Knockout Mice, SCID Mice, Transgenic Mice, Neurons/immunology, Neurons/metabolism, Neurons/pathology, Scrapie/metabolism, Scrapie/transmission, Species Specificity
11.
Neurobiol Aging; 130: 80-83, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37473581

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is a devastating neuromuscular disease with limited therapeutic options. Biomarkers are needed for early disease detection, clinical trial design, and personalized medicine. Early evidence suggests that specific morphometric features in ALS primary skin fibroblasts may be used as biomarkers; however, this hypothesis has not been rigorously tested in conclusively large fibroblast populations. Here, we imaged ALS-relevant organelles (mitochondria, endoplasmic reticulum, lysosomes) and proteins (TAR DNA-binding protein 43, Ras GTPase-activating protein-binding protein 1, heat-shock protein 60) at baseline and under stress perturbations and tested their predictive power on a total set of 443 human fibroblast lines from ALS and healthy individuals. Machine learning approaches were able to confidently predict stress perturbation states (ROC-AUC ∼0.99) but not disease groups or clinical features (ROC-AUC 0.58-0.64). Our findings indicate that multivariate models using patient-derived fibroblast morphometry can accurately predict different stressors but are insufficient to develop viable ALS biomarkers.


Subjects
Amyotrophic Lateral Sclerosis, Humans, Amyotrophic Lateral Sclerosis/diagnosis, Amyotrophic Lateral Sclerosis/metabolism, Biomarkers, Endoplasmic Reticulum/metabolism, Machine Learning, Fibroblasts/metabolism
12.
J Pathol Inform; 14: 100160, 2023.
Article in English | MEDLINE | ID: mdl-36536772

ABSTRACT

Deep learning has been widely used to analyze digitized hematoxylin and eosin (H&E)-stained histopathology whole slide images. Automated cancer segmentation using deep learning can be used to diagnose malignancy and to find novel morphological patterns to predict molecular subtypes. To train pixel-wise cancer segmentation models, manual annotation from pathologists is generally a bottleneck due to its time-consuming nature. In this paper, we propose Deep Interactive Learning with a pretrained segmentation model from a different cancer type to reduce manual annotation time. Instead of annotating all pixels from cancer and non-cancer regions on giga-pixel whole slide images, an iterative process of annotating mislabeled regions from a segmentation model and training/finetuning the model with the additional annotation can reduce the time. In particular, employing a pretrained segmentation model can reduce the annotation time further compared with starting from scratch. We trained an accurate ovarian cancer segmentation model from a pretrained breast segmentation model with 3.5 hours of manual annotation, achieving an intersection-over-union of 0.74, a recall of 0.86, and a precision of 0.84. With automatically extracted high-grade serous ovarian cancer patches, we attempted to train an additional classification deep learning model to predict BRCA mutation. The segmentation model and code have been released at https://github.com/MSKCC-Computational-Pathology/DMMN-ovary.
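The segmentation quality figures quoted above (intersection-over-union, recall, precision) are computed per class from a predicted mask and a reference mask, as in the sketch below with toy masks rather than the paper's data.

```python
import numpy as np

def segmentation_scores(pred, truth, positive_class=1):
    """Pixel-level IoU, recall, and precision for one class."""
    p = (np.asarray(pred) == positive_class)
    t = (np.asarray(truth) == positive_class)
    intersection = np.sum(p & t)
    union = np.sum(p | t)
    return {
        "iou": intersection / union if union else float("nan"),
        "recall": intersection / np.sum(t) if np.sum(t) else float("nan"),
        "precision": intersection / np.sum(p) if np.sum(p) else float("nan"),
    }

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                  # reference cancer region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                   # prediction shifted by one row
print(segmentation_scores(pred, truth))
```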

13.
JMIR Res Protoc; 12: e49204, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37971801

ABSTRACT

BACKGROUND: The increasing use of smartphones, wearables, and connected devices has enabled the increasing application of digital technologies for research. Remote digital study platforms comprise a patient-interfacing digital application that enables multimodal data collection from a mobile app and connected sources. They offer an opportunity to recruit at scale, acquire data longitudinally at a high frequency, and engage study participants at any time of the day in any place. Few published descriptions of centralized digital research platforms provide a framework for their development. OBJECTIVE: This study aims to serve as a road map for those seeking to develop a centralized digital research platform. We describe the technical and functional aspects of the ehive app, the centralized digital research platform of the Hasso Plattner Institute for Digital Health at Mount Sinai Hospital, New York, New York. We then provide information about ongoing studies hosted on ehive, including usership statistics and data infrastructure. Finally, we discuss our experience with ehive in the broader context of the current landscape of digital health research platforms. METHODS: The ehive app is a multifaceted and patient-facing central digital research platform that permits the collection of e-consent for digital health studies. An overview of its development, its e-consent process, and the tools it uses for participant recruitment and retention are provided. Data integration with the platform and the infrastructure supporting its operations are discussed; furthermore, a description of its participant- and researcher-facing dashboard interfaces and the e-consent architecture is provided. RESULTS: The ehive platform was launched in 2020 and has successfully hosted 8 studies, namely 6 observational studies and 2 clinical trials. Approximately 1484 participants downloaded the app across 36 states in the United States. The use of recruitment methods such as bulk messaging through the EPIC electronic health records and standard email portals enables broad recruitment. Light-touch engagement methods, used in an automated fashion through the platform, maintain high degrees of engagement and retention. The ehive platform demonstrates the successful deployment of a central digital research platform that can be modified across study designs. CONCLUSIONS: Centralized digital research platforms such as ehive provide a novel tool that allows investigators to expand their research beyond their institution, engage in large-scale longitudinal studies, and combine multimodal data streams. The ehive platform serves as a model for groups seeking to develop similar digital health research programs. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/49204.

14.
Arch Pathol Lab Med; 147(10): 1178-1185, 2023 Oct 1.
Article in English | MEDLINE | ID: mdl-36538386

ABSTRACT

CONTEXT: Prostate cancer diagnosis rests on accurate assessment of tissue by a pathologist. The application of artificial intelligence (AI) to digitized whole slide images (WSIs) can aid pathologists in cancer diagnosis, but robust, diverse evidence in a simulated clinical setting is lacking. OBJECTIVE: To compare the diagnostic accuracy of pathologists reading WSIs of prostatic biopsy specimens with and without AI assistance. DESIGN: Eighteen pathologists, 2 of whom were genitourinary subspecialists, evaluated 610 prostate needle core biopsy WSIs prepared at 218 institutions, with the option for deferral. Two evaluations were performed sequentially for each WSI: initially without assistance, and immediately thereafter aided by Paige Prostate (PaPr), a deep learning-based system that provides a WSI-level binary classification of suspicious for cancer or benign and pinpoints the location that has the greatest probability of harboring cancer on suspicious WSIs. Pathologists' changes in sensitivity and specificity between the assisted and unassisted modalities were assessed, together with the impact of PaPr output on the assisted reads. RESULTS: Using PaPr, pathologists improved their sensitivity and specificity across all histologic grades and tumor sizes. Accuracy gains on both benign and cancerous WSIs could be attributed to PaPr, which correctly classified 100% of the WSIs showing corrected diagnoses in the PaPr-assisted phase. CONCLUSIONS: This study demonstrates the effectiveness and safety of an AI tool for pathologists in simulated diagnostic practice, bridging the gap between computational pathology research and its clinical application, and resulted in the first US Food and Drug Administration authorization of an AI system in pathology.


Subjects
Artificial Intelligence, Prostatic Neoplasms, Male, Humans, Prostate/pathology, Computer-Assisted Image Interpretation/methods, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Needle Biopsy
15.
J Instrum; 17(6), 2022 Jun.
Article in English | MEDLINE | ID: mdl-38938475

ABSTRACT

AIMS: Clinical radiographic imaging is seated upon the principle of differential keV photon transmission through an object. At clinical x-ray energies the scattering of photons causes signal noise and is utilized solely for transmission measurements. However, scatter, particularly Compton scatter, is characterizable. In this work we hypothesized that modern radiation sources and detectors paired with deep learning techniques can use scattered photon information constructively to resolve superimposed attenuators in planar x-ray imaging. METHODS: We simulated a monoenergetic x-ray imaging system consisting of a pencil beam x-ray source directed at an imaging target positioned in front of a high spatial- and energy-resolution detector array. The setup maximizes information capture of transmitted photons by measuring off-axis scatter location and energy. The signal was analyzed by a convolutional neural network, and a description of the scattering material along the axis of the beam was derived. The system was virtually designed and tested using Monte Carlo processing of simple phantoms consisting of 10 pseudo-randomly stacked air/bone/water materials, and the network was trained by solving a classification problem. RESULTS: From our simulations we were able to resolve traversed material depth information to a high degree within our simple imaging task. The average accuracy of the material identification along the beam was 0.91±0.01, with slightly higher accuracy towards the entrance/exit peripheral surfaces of the object. The average sensitivity and specificity were 0.91 and 0.95, respectively. CONCLUSIONS: Our work provides proof of principle that deep learning techniques can be used to analyze scattered photon patterns, which can constructively contribute to the information content in radiography, here used to infer depth information in a traditional 2D planar setup. This principle, and our results, demonstrate that the information in Compton scattered photons may provide a basis for further development. The work was limited by simple testing scenarios and did not yet integrate complexities or optimizations. The ability to scale performance to the clinic remains unexplored and requires further study.

16.
J Invest Dermatol; 142(1): 97-103, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34265329

ABSTRACT

Basal cell carcinoma (BCC) is the most common skin cancer, with over 2 million cases diagnosed annually in the United States. Conventionally, BCC is diagnosed by naked eye examination and dermoscopy. Suspicious lesions are either removed or biopsied for histopathological confirmation, thus lowering the specificity of noninvasive BCC diagnosis. Recently, reflectance confocal microscopy, a noninvasive diagnostic technique that can image skin lesions at cellular-level resolution, has been shown to improve specificity in BCC diagnosis and to reduce the number needed to biopsy by 2-3 times. In this study, we developed and evaluated a deep learning-based artificial intelligence model to automatically detect BCC in reflectance confocal microscopy images. The proposed model achieved an area under the receiver operating characteristic curve of 89.7% (stack level) and 88.3% (lesion level), a performance on par with that of reflectance confocal microscopy experts. Furthermore, the model achieved an area under the curve of 86.1% on a held-out test set from international collaborators, demonstrating the reproducibility and generalizability of the proposed automated diagnostic approach. These results provide a clear indication that the clinical deployment of decision support systems for the detection of BCC in reflectance confocal microscopy images has the potential for optimizing the evaluation and diagnosis of patients with skin cancer.


Subjects
Basal Cell Carcinoma/diagnosis, Deep Learning/standards, Skin Neoplasms/diagnosis, Adult, Aged, Aged 80 and over, Artificial Intelligence, Automation, Biopsy, Dermoscopy/methods, Female, Humans, Male, Confocal Microscopy, Middle Aged, Biological Models, Physical Examination, Reproducibility of Results
17.
Acta Neuropathol Commun; 10(1): 131, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36127723

ABSTRACT

Age-related cognitive impairment is multifactorial, with numerous underlying and frequently co-morbid pathological correlates. Amyloid beta (Aβ) plays a major role in Alzheimer's type age-related cognitive impairment, in addition to other etiopathologies such as Aβ-independent hyperphosphorylated tau, cerebrovascular disease, and myelin damage, which also warrant further investigation. Classical methods, even in the setting of the gold standard of postmortem brain assessment, involve semi-quantitative ordinal staging systems that often correlate poorly with clinical outcomes, due to imperfect cognitive measurements and preconceived notions regarding the neuropathologic features that should be chosen for study. Improved approaches are needed to identify histopathological changes correlated with cognition in an unbiased way. We used a weakly supervised multiple instance learning algorithm on whole slide images of human brain autopsy tissue sections from a group of elderly donors to predict the presence or absence of cognitive impairment (n = 367 with cognitive impairment, n = 349 without). Attention analysis allowed us to pinpoint the underlying subregional architecture and cellular features that the models used for the prediction in both brain regions studied, the medial temporal lobe and frontal cortex. Despite noisy labels of cognition, our trained models were able to predict the presence of cognitive impairment with a modest accuracy that was significantly greater than chance. Attention-based interpretation studies of the features most associated with cognitive impairment in the top performing models suggest that they identified myelin pallor in the white matter. Our results demonstrate a scalable platform with interpretable deep learning to identify unexpected aspects of pathology in cognitive impairment that can be translated to the study of other neurobiological disorders.
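A compact sketch of the attention-weighted pooling at the heart of attention-based multiple instance learning is given below: patch features go in, and a slide-level prediction plus per-patch attention weights come out. It follows a standard attention-pooling formulation and is not the authors' exact architecture; dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Slide-level classifier over a bag of patch features with attention pooling."""
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                        # (n_patches, feat_dim)
        scores = self.attention(patch_feats)               # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)             # attention over patches
        slide_feat = (weights * patch_feats).sum(dim=0)    # weighted average, (feat_dim,)
        return self.classifier(slide_feat), weights.squeeze(1)

# One "slide" with 1000 patch embeddings (e.g., from a frozen CNN backbone).
model = AttentionMIL()
patches = torch.randn(1000, 512)
logits, attn = model(patches)
print(logits.shape, attn.shape, float(attn.sum()))   # attention weights sum to 1
```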


Assuntos
Disfunção Cognitiva , Aprendizado Profundo , Idoso , Peptídeos beta-Amiloides/metabolismo , Encéfalo/patologia , Disfunção Cognitiva/patologia , Humanos , Bainha de Mielina/patologia
18.
J Pathol Inform; 12: 31, 2021.
Article in English | MEDLINE | ID: mdl-34760328

ABSTRACT

BACKGROUND: Web-based digital slide viewers for pathology commonly use OpenSlide and OpenSeadragon (OSD) to access, visualize, and navigate whole-slide images (WSI). Their standard settings represent WSI as deep zoom images (DZI), a generic image pyramid structure that differs from the proprietary pyramid structure in the WSI files. The transformation from WSI to DZI is an additional, time-consuming step when rendering digital slides in the viewer, and inefficiency of digital slide viewers is a major criticism of digital pathology. AIMS: To increase the efficiency of digital slide visualization by serving tiles directly from the native WSI pyramid, making the transformation from WSI to DZI obsolete. METHODS: We implemented a new flexible tile source for OSD that accepts arbitrary native pyramid structures instead of DZI levels. We measured its performance on a data set of 8104 WSI reviewed by 207 pathologists over 40 days in a web-based digital slide viewer used for routine diagnostics. RESULTS: The new FlexTileSource accelerates the display of a field of view by 67 ms in general, and by 117 ms if the block size of the WSI and the tile size of the viewer are increased to 1024 px. We provide the code of our open-source library freely at https://github.com/schuefflerlab/openseadragon. CONCLUSIONS: This is the first study to quantify visualization performance of a web-based slide viewer at scale, taking the block size and tile size of digital slides into account. Quantifying performance will make it possible to compare and improve web-based viewers and thereby facilitate the adoption of digital pathology.
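The core idea above, serving viewer tiles directly from the levels already stored in the WSI file rather than converting to DZI first, can be illustrated with OpenSlide, which exposes the native pyramid. The Python sketch below is a simplified stand-in for the JavaScript tile source described in the paper; the tile addressing scheme and the file path are assumptions.

```python
from openslide import OpenSlide

def read_native_tile(slide_path, level, col, row, tile_size=1024):
    """Read one tile directly from a native pyramid level of a WSI.

    (col, row) index tiles within the chosen level; OpenSlide expects the
    read location in level-0 coordinates, so we scale by the downsample factor.
    """
    slide = OpenSlide(slide_path)
    downsample = slide.level_downsamples[level]
    x0 = int(col * tile_size * downsample)
    y0 = int(row * tile_size * downsample)
    tile = slide.read_region((x0, y0), level, (tile_size, tile_size)).convert("RGB")
    slide.close()
    return tile

# Example usage (path is a placeholder):
# tile = read_native_tile("/data/slides/example.svs", level=2, col=3, row=5)
# tile.save("tile_L2_3_5.jpg", quality=85)
```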

19.
J Pathol Inform; 12: 9, 2021.
Article in English | MEDLINE | ID: mdl-34012713

ABSTRACT

BACKGROUND: The development of artificial intelligence (AI) in pathology frequently relies on digitally annotated whole slide images (WSI). The creation of these annotations, manually drawn by pathologists in digital slide viewers, is time-consuming and expensive. At the same time, pathologists routinely annotate glass slides with a pen to outline cancerous regions, for example, for molecular assessment of the tissue. These pen annotations are currently considered artifacts and excluded from computational modeling. METHODS: We propose a novel method to segment and fill hand-drawn pen annotations and convert them into a digital format to make them accessible for computational models. Our method is implemented in Python as an open-source, publicly available software tool. RESULTS: Our method is able to extract pen annotations from WSI and save them as annotation masks. On a data set of 319 WSI with pen markers, we validate our algorithm, segmenting the annotations with an overall Dice metric of 0.942, precision of 0.955, and recall of 0.943. Processing all images takes 15 min, in contrast to 5 h of manual digital annotation time. Further, the approach is robust against text annotations. CONCLUSIONS: We envision that our method can take advantage of already pen-annotated slides in scenarios in which the annotations would be helpful for training computational models. We conclude that, considering the large archives of many pathology departments that are currently being digitized, our method will help to collect large numbers of training samples from those data.
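One simple way to approximate the pen-extraction step described above is to threshold a low-resolution thumbnail for saturated ink colors and then fill the outlined region morphologically. The scikit-image sketch below is a rough stand-in, not the published implementation; the thresholds and object sizes are assumptions.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.morphology import remove_small_objects, binary_closing, disk
from scipy.ndimage import binary_fill_holes

def pen_annotation_mask(thumbnail_rgb, sat_thresh=0.5, val_thresh=0.85):
    """Segment hand-drawn pen marks on a low-resolution slide thumbnail and
    fill the enclosed area to obtain an annotation mask.

    thumbnail_rgb: float array in [0, 1], shape (H, W, 3).
    Pen ink is assumed to be strongly saturated and darker than the H&E tissue.
    """
    hsv = rgb2hsv(thumbnail_rgb)
    ink = (hsv[..., 1] > sat_thresh) & (hsv[..., 2] < val_thresh)  # saturated, not too bright
    ink = binary_closing(ink, disk(3))                  # bridge small gaps in the stroke
    ink = remove_small_objects(ink, min_size=200)       # drop specks and text fragments
    return binary_fill_holes(ink)                       # fill the outlined region

# Toy example: a dark blue rectangle "drawn" on a pale pink background.
thumb = np.ones((200, 200, 3)) * [0.95, 0.85, 0.88]
thumb[40:45, 40:160] = thumb[155:160, 40:160] = [0.1, 0.1, 0.6]
thumb[40:160, 40:45] = thumb[40:160, 155:160] = [0.1, 0.1, 0.6]
mask = pen_annotation_mask(thumb)
print(mask.shape, mask.sum())   # filled interior is much larger than the stroke alone
```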

20.
Comput Med Imaging Graph; 88: 101866, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33485058

ABSTRACT

Pathologic analysis of surgical excision specimens for breast carcinoma is important to evaluate the completeness of surgical excision and has implications for future treatment. This analysis is performed manually by pathologists reviewing histologic slides prepared from formalin-fixed tissue. In this paper, we present a Deep Multi-Magnification Network trained with partial annotation for automated multi-class tissue segmentation using a set of patches from multiple magnifications in digitized whole slide images. Our proposed architecture with multi-encoder, multi-decoder, and multi-concatenation outperforms other single- and multi-magnification-based architectures by achieving the highest mean intersection-over-union, and can be used to facilitate pathologists' assessments of breast cancer.
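The multi-magnification idea above pairs a target patch with progressively wider, downsampled views centered on the same location, which separate encoders then process before their features are concatenated. A simplified sketch of the patch-set extraction is given below; the patch size and magnification factors are illustrative, not the published configuration.

```python
import numpy as np

def multi_magnification_patches(wsi, center, size=256, factors=(1, 2, 4)):
    """Extract concentric patches around one location at several magnifications.

    wsi: full-resolution image array (H, W, 3).
    Returns a list of size x size patches; factor f covers an f-times wider field
    of view, downsampled back to the common patch size (crude strided subsampling).
    """
    cy, cx = center
    patches = []
    for f in factors:
        half = size * f // 2
        region = wsi[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        patches.append(region[::f, ::f][:size, :size])
    return patches

# Toy "whole slide image" and one annotated location.
wsi = np.random.randint(0, 255, size=(4096, 4096, 3), dtype=np.uint8)
views = multi_magnification_patches(wsi, center=(2048, 2048))
print([v.shape for v in views])   # three 256x256x3 views of widening context
```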


Subjects
Breast Neoplasms, Neural Networks (Computer), Breast, Breast Neoplasms/diagnostic imaging, Female, Humans