Results 1 - 15 of 15
1.
J Surg Oncol ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712939

ABSTRACT

BACKGROUND AND OBJECTIVES: Deep learning models (DLMs) are applied across domains of the health sciences to generate meaningful predictions. DLMs use neural networks to generate predictions from discrete data inputs. This study employs a DLM on prechemotherapy cross-sectional imaging to predict patients' response to neoadjuvant chemotherapy. METHODS: Adult patients with colorectal liver metastasis who underwent surgery after neoadjuvant chemotherapy were included. A DLM was trained on computed tomography images using attention-based multiple-instance learning. A logistic regression model incorporating the clinical parameters of the Fong clinical risk score was used for comparison. Both models' performances were benchmarked against the Response Evaluation Criteria in Solid Tumors (RECIST). A receiver operating characteristic curve was created and the resulting area under the curve (AUC) was determined. RESULTS: Ninety-five patients were included, contributing 33,619 images to the study. Ninety-five percent of patients underwent 5-fluorouracil-based chemotherapy with oxaliplatin and/or irinotecan. Sixty percent of the patients were categorized as chemotherapy responders (≥30% reduction in tumor diameter). The DLM had an AUC of 0.77; the AUC for the clinical model was 0.41. CONCLUSIONS: The image-based DLM was superior to the clinical model for predicting response to neoadjuvant chemotherapy in patients with colorectal cancer liver metastases. These results demonstrate the potential to identify nonresponders to chemotherapy and guide selected patients toward earlier curative resection.
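The responder definition used above, a reduction of at least 30% in tumor diameter (the RECIST partial-response cutoff), can be expressed as a simple rule. This sketch is illustrative only; the function name and the single-lesion simplification are assumptions, not from the study.

```python
def recist_response(baseline_mm: float, post_mm: float, threshold: float = 0.30) -> bool:
    """Return True if the lesion shrank by at least `threshold`
    (30% is the RECIST partial-response cutoff). Hypothetical helper."""
    if baseline_mm <= 0:
        raise ValueError("baseline diameter must be positive")
    return (baseline_mm - post_mm) / baseline_mm >= threshold
```

For example, a 50 mm lesion shrinking to 30 mm (a 40% reduction) counts as a response, while shrinking to 40 mm (20%) does not.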

2.
Diagn Pathol ; 19(1): 17, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38243330

ABSTRACT

BACKGROUND: c-MYC and BCL2 positivity are important prognostic factors for diffuse large B-cell lymphoma. However, manual quantification is subject to significant intra- and inter-observer variability. We developed an automated method for quantification in whole-slide images of tissue sections, where manual quantification requires evaluating large areas of tissue with possibly heterogeneous staining. We trained this method using annotations of tumor positivity in smaller tissue microarray cores, where expression and staining are more homogeneous, and then translated the model to whole-slide images. METHODS: Our method applies attention-based multiple-instance learning to regress the proportion of c-MYC-positive and BCL2-positive tumor cells from pathologist-scored tissue microarray cores. This technique does not require annotation of individual cell nuclei and is trained instead on core-level annotations of percent tumor positivity. We translate this model to scoring of whole-slide images by tessellating the slide into smaller, core-sized tissue regions and calculating an aggregate score. Our method was trained on a public tissue microarray dataset from Stanford and applied to whole-slide images from a geographically diverse multi-center cohort produced by the Lymphoma Epidemiology of Outcomes study. RESULTS: In tissue microarrays, the automated method had Pearson correlations of 0.843 and 0.919 with pathologist scores for c-MYC and BCL2, respectively. When utilizing standard clinical thresholds, the sensitivity/specificity of our method was 0.743/0.963 for c-MYC and 0.938/0.951 for BCL2. For double-expressors, sensitivity and specificity were 0.720 and 0.974. When translated to the external whole-slide-image dataset scored by two pathologists, Pearson correlations were 0.753 and 0.883 for c-MYC and 0.749 and 0.765 for BCL2, and sensitivity/specificity was 0.857/0.991 and 0.706/0.930 for c-MYC, 0.856/0.719 and 0.855/0.690 for BCL2, and 0.890/1.00 and 0.598/0.952 for double-expressors. Survival analysis demonstrated that for progression-free survival, model-predicted TMA scores significantly stratified double-expressors and non-double-expressors (p = 0.0345), whereas pathologist scores did not (p = 0.128). CONCLUSIONS: We conclude that the proportion of positive stains can be regressed using attention-based multiple-instance learning, that these models generalize well to whole-slide images, and that our models can provide non-inferior stratification of progression-free survival outcomes.
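Attention-based multiple-instance learning regresses a bag-level (core- or slide-level) value by softmax-weighting per-instance contributions. The following minimal sketch shows only the pooling step, with plain Python lists standing in for learned embeddings; in the actual model, both the instance scores and the attention logits are produced by a neural network.

```python
import math

def attention_pool(instance_scores, attn_logits):
    """Attention-based MIL pooling: softmax the per-instance attention
    logits, then return the attention-weighted mean of instance scores
    (a single bag-level value, e.g. percent tumor positivity)."""
    m = max(attn_logits)                         # subtract max for numerical stability
    exps = [math.exp(a - m) for a in attn_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * s for w, s in zip(weights, instance_scores))
```

With equal attention logits this reduces to a plain mean; as one logit dominates, the bag value approaches that instance's score.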


Subjects
Deep Learning , Lymphoma, Large B-Cell, Diffuse , Humans , Prognosis , Proto-Oncogene Proteins c-myc/metabolism , Proto-Oncogene Proteins c-bcl-2/metabolism , Antineoplastic Combined Chemotherapy Protocols
3.
Semin Cancer Biol ; 97: 70-85, 2023 12.
Article in English | MEDLINE | ID: mdl-37832751

ABSTRACT

Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) in which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and, finally, commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, may surmount these current limitations and realize the many opportunities of AI-driven histopathology for the benefit of oncology.


Subjects
Artificial Intelligence , Chromosomal Instability , Humans , Reproducibility of Results , Eosine Yellowish-(YS) , Medical Oncology
4.
Article in English | MEDLINE | ID: mdl-37538448

ABSTRACT

Obstructive sleep apnea (OSA) is a prevalent disease affecting 10-15% of Americans and nearly one billion people worldwide. It leads to multiple symptoms, including daytime sleepiness; snoring, choking, or gasping during sleep; fatigue; headaches; non-restorative sleep; and insomnia due to frequent arousals. Although polysomnography (PSG) is the gold standard for OSA diagnosis, it is expensive, not universally available, and time-consuming, so many patients go undiagnosed due to lack of access to the test. Given the incomplete access and high cost of PSG, many studies are seeking alternative diagnostic approaches based on different data modalities. Here, we propose a machine learning model to predict OSA severity from 2D frontal-view craniofacial images. In a cross-validation study of 280 patients, our method achieves an average AUC of 0.780. In comparison, the craniofacial analysis model proposed by a recent study achieves an AUC of only 0.638 on our dataset. The proposed model also outperforms the widely used STOP-BANG OSA screening questionnaire, which achieves an AUC of 0.52 on our dataset. Our findings indicate that deep learning has the potential to significantly reduce the cost of OSA diagnosis.
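The model comparisons above rest on the AUC, which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counted as one half). A minimal rank-based sketch, illustrative rather than the study's evaluation code:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs where the positive scores higher."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count half
    return wins / (len(pos) * len(neg))
```

Perfect ranking gives 1.0, reversed ranking gives 0.0, and a chance-level model (like the 0.52 questionnaire result above) hovers near 0.5.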

5.
Radiol Artif Intell ; 5(2): e220253, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37041961
6.
PLoS One ; 18(4): e0283562, 2023.
Article in English | MEDLINE | ID: mdl-37014891

ABSTRACT

Breast cancer is the most common malignancy in women, with over 40,000 deaths annually in the United States alone. Clinicians often rely on the breast cancer recurrence score Oncotype DX (ODX) for risk stratification of breast cancer patients, using ODX as a guide for personalized therapy. However, ODX and similar gene assays are expensive, time-consuming, and tissue-destructive. Therefore, an AI-based ODX prediction model that identifies patients who will benefit from chemotherapy in the same way that ODX does would provide a low-cost alternative to the genomic test. To this end, we developed a deep learning framework, Breast Cancer Recurrence Network (BCR-Net), which automatically predicts ODX recurrence risk from histopathology slides. Our proposed framework has two steps. First, it intelligently samples discriminative features from whole-slide histopathology images of breast cancer patients. Then, it automatically weights all features through a multiple-instance learning model to predict the recurrence score at the slide level. On a dataset of H&E and Ki67 breast cancer resection whole-slide images (WSIs) from 99 anonymized patients, the proposed framework achieved an overall AUC of 0.775 (68.9% and 71.1% accuracies for low and high risk) on H&E WSIs and an overall AUC of 0.811 (80.8% and 79.2% accuracies for low and high risk) on Ki67 WSIs. Our findings provide strong evidence that the framework can automatically risk-stratify patients with a high degree of confidence. Our experiments reveal that BCR-Net outperforms state-of-the-art WSI classification models. Moreover, BCR-Net is highly efficient with low computational needs, making it practical to deploy in limited computational settings.
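BCR-Net's two steps, sampling discriminative tiles and then weighting them into a slide-level score, can be caricatured as a top-k average. This is a deliberately simplified stand-in: the actual framework learns both the sampling and the weighting, whereas here both are fixed rules and the function name is invented.

```python
def slide_risk_score(tile_scores, k=3):
    """Toy two-step aggregation: keep the k highest-scoring
    (most 'discriminative') tiles, then average them to form
    a slide-level recurrence-risk score."""
    if not tile_scores:
        raise ValueError("need at least one tile score")
    top = sorted(tile_scores, reverse=True)[:k]  # step 1: sample
    return sum(top) / len(top)                   # step 2: aggregate
```

A learned attention model would replace both the hard top-k cutoff and the uniform average with data-driven weights.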


Subjects
Breast Neoplasms , Deep Learning , Female , Humans , Breast Neoplasms/pathology , Ki-67 Antigen , Breast/pathology , Risk
7.
J Am Coll Surg ; 236(4): 884-893, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36727981

ABSTRACT

BACKGROUND: Surgical intervention remains the cornerstone of a multidisciplinary approach to the treatment of colorectal liver metastases (CLM). Nevertheless, patient outcomes vary greatly. While predictive tools can assist decision-making and patient counseling, decades of effort have yet to produce a universally adopted tool for clinical practice. STUDY DESIGN: An international collaborative database of CLM patients who underwent surgical therapy between 2000 and 2018 was used to select 1,004 operations for this study. Two different machine learning methods were applied to construct 2 predictive models for recurrence and death, using 128 clinicopathologic variables: gradient-boosted trees (GBTs) and logistic regression with bootstrapping (LRB), in a leave-one-out cross-validation. RESULTS: Median survival after resection was 47.2 months and disease-free survival was 19.0 months, with a median follow-up of 32.0 months in the cohort. Both models had good predictive power, with GBT demonstrating superior performance in predicting overall survival (area under the receiver operating curve [AUC] 0.773, 95% CI 0.743 to 0.801 vs LRB: AUC 0.648, 95% CI 0.614 to 0.682) and recurrence (AUC 0.635, 95% CI 0.599 to 0.669 vs LRB: AUC 0.570, 95% CI 0.535 to 0.601). Similarly, better performance was observed in predicting 3- and 5-year survival, as well as 3- and 5-year recurrence, with GBT methods generating higher AUCs. CONCLUSIONS: Machine learning provides powerful tools for creating predictive models of survival and recurrence after surgery for CLM. The effectiveness of the two machine learning models varies, but on most occasions GBT outperforms LRB. Prospective validation of these models lays the groundwork for adopting them in clinical practice.
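The leave-one-out cross-validation used to evaluate both models fits on all patients except one, predicts the held-out patient, and repeats for every patient. A sketch of the loop with a trivial 1-nearest-neighbour stand-in classifier (the study used GBTs and LRB, not 1-NN, and 128 variables rather than one):

```python
def loo_cv_accuracy(features, labels):
    """Leave-one-out CV: hold out each sample, fit a 1-nearest-neighbour
    classifier (on a single numeric feature) to the rest, predict the
    held-out sample, and report the fraction predicted correctly."""
    correct = 0
    for i in range(len(features)):
        train = [(f, l) for j, (f, l) in enumerate(zip(features, labels)) if j != i]
        # predict the held-out sample with the label of its nearest neighbour
        pred = min(train, key=lambda t: abs(t[0] - features[i]))[1]
        correct += pred == labels[i]
    return correct / len(features)
```

The key property is that every prediction comes from a model that never saw the predicted patient, which is what makes the reported AUCs honest estimates.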


Subjects
Colorectal Neoplasms , Machine Learning , Humans , Logistic Models
8.
Cancers (Basel) ; 14(23)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36497258

ABSTRACT

Recent methods in computational pathology have trended toward semi- and weakly-supervised approaches requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method first trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. The resulting intra-slide and inter-slide embeddings are attracted and repelled, respectively, via a contrastive loss, yielding self-supervised slide-level representations. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, achieving an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner in which meaningful features can be learned from whole-slide images without tissue-level annotations or slide-level labels. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to make use of completely unlabeled whole-slide images.
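The attract/repel objective on slide-level embeddings is a contrastive loss over pairwise similarities. A toy two-pair version on raw vectors (the paper operates on learned embeddings and larger batches; the `temperature` value here is an assumed hyperparameter, not taken from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negative, temperature=0.5):
    """Minimal NT-Xent-style loss with one positive and one negative:
    small when anchor is similar to the positive (attraction) and
    dissimilar from the negative (repulsion)."""
    sp = math.exp(cosine(anchor, positive) / temperature)
    sn = math.exp(cosine(anchor, negative) / temperature)
    return -math.log(sp / (sp + sn))
```

Minimizing this loss pulls intra-slide embeddings together and pushes inter-slide embeddings apart, which is the mechanism described above.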

9.
Med Image Anal ; 79: 102462, 2022 07.
Article in English | MEDLINE | ID: mdl-35512532

ABSTRACT

Deep learning consistently demonstrates high performance in classifying and segmenting medical images such as CT, PET, and MRI. However, compared to these kinds of images, whole slide images (WSIs) of stained tissue sections are huge and thus much less efficient to process, especially for deep learning algorithms. To overcome these challenges, we present attention2majority, a weakly supervised multiple-instance learning model that automatically and efficiently processes WSIs for classification. Our method initially assigns exhaustively sampled, label-free patches the label of the respective WSI and trains a convolutional neural network to perform patch-wise classification. Then, an intelligent sampling step collects patches classified with high confidence to form weak representations of the WSIs. Lastly, we apply a multi-head attention-based multiple-instance learning model to perform slide-level classification based on the high-confidence (intelligently sampled) patches. Attention2majority was trained and tested on classifying the quality of 127 WSIs (of regenerated kidney sections) into three categories. On average, attention2majority achieved 97.4% ± 2.4 AUC in four-fold cross-validation. We demonstrate that the intelligent sampling module within attention2majority is superior to the current state-of-the-art random sampling method: replacing random sampling with intelligent sampling boosts performance from 94.9% ± 3.1 to 97.4% ± 2.4 average AUC in four-fold cross-validation. We also tested a variation of attention2majority on the widely used Camelyon16 dataset, which resulted in 89.1% ± 0.8 AUC. Compared to random sampling, attention2majority demonstrated excellent slide-level interpretability. It also provides an efficient framework for arriving at a multi-class slide-level prediction.
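The intelligent-sampling idea, keeping only patches the patch-level classifier is confident about before slide-level aggregation, can be sketched as a confidence filter followed by a vote. Note that the real attention2majority aggregates the sampled patches with multi-head attention, not a plain majority vote; the vote here is a simplification, and the threshold value is assumed.

```python
def intelligent_sample(patch_preds, confidences, threshold=0.9):
    """Keep only patch predictions whose classifier confidence
    meets the threshold (the 'intelligent sampling' filter)."""
    return [p for p, c in zip(patch_preds, confidences) if c >= threshold]

def slide_label(patch_preds, confidences, threshold=0.9):
    """Slide-level label from the confidently classified patches.
    Sketch only: a plain majority vote stands in for the paper's
    multi-head attention aggregation."""
    kept = intelligent_sample(patch_preds, confidences, threshold)
    return max(set(kept), key=kept.count)
```

Low-confidence patches (background, artifacts, ambiguous tissue) are dropped before they can dilute the slide-level decision.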


Subjects
Algorithms , Neural Networks, Computer , Humans , Kidney/diagnostic imaging
10.
Comput Biol Med ; 136: 104737, 2021 09.
Article in English | MEDLINE | ID: mdl-34391000

ABSTRACT

Failure to identify difficult intubation is the leading cause of anesthesia-related death and morbidity. Despite preoperative airway assessment, 75-93% of difficult intubations are unanticipated, and airway examination methods underperform, with sensitivities of 20-62% and specificities of 82-97%. To overcome these impediments, we aim to develop a deep learning model that identifies difficult-to-intubate patients from frontal face images. We propose an ensemble of convolutional neural networks that leverages a database of celebrity facial images to learn robust features of multiple face regions. This ensemble extracts features from patient images (n = 152), which are subsequently classified by a respective ensemble of attention-based multiple-instance learning models. Through majority voting, a patient is classified as difficult or easy to intubate. Whereas two conventional bedside tests resulted in AUCs of 0.6042 and 0.4661, the proposed method achieved an AUC of 0.7105 on a cohort of 76 difficult-to-intubate and 76 easy-to-intubate patients. Generic features yielded AUCs of 0.4654-0.6278. The proposed model can operate at high sensitivity and low specificity (0.9079 and 0.4474) or low sensitivity and high specificity (0.3684 and 0.9605). The proposed ensemble thus significantly surpasses conventional bedside tests, generic features, and prior deep learning methods; side facial images may further improve its performance. We expect our model to play an important role in developing deep learning methods for applications in which frontal face features are informative.
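The final decision step described above, majority voting across the per-region model outputs, reduces to counting votes. A sketch (tie-breaking toward "easy to intubate" is an assumption of this sketch; the abstract does not specify tie handling):

```python
def majority_vote(predictions):
    """Ensemble decision: return 1 (difficult to intubate) when a strict
    majority of region-specific models predict 1, else 0 (easy).
    Ties fall to 0 here, which is an assumption of this sketch."""
    difficult_votes = sum(predictions)
    return 1 if difficult_votes > len(predictions) / 2 else 0
```

With an odd number of region models (as when each face region contributes one vote), ties cannot occur and the rule is unambiguous.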


Subjects
Deep Learning , Databases, Factual , Face/diagnostic imaging , Humans , Neural Networks, Computer
11.
EBioMedicine ; 67: 103388, 2021 May.
Article in English | MEDLINE | ID: mdl-34000621

ABSTRACT

BACKGROUND: Machine learning has been successfully applied to many diagnostic and prognostic problems in computational histopathology. Yet few efforts have been made to model gene expression from histopathology. This study proposes a methodology that predicts selected gene expression values (microarray) from haematoxylin and eosin whole-slide images, as an intermediate data modality, to identify fulminant-like pulmonary tuberculosis ('supersusceptible') in an experimentally infected cohort of Diversity Outbred mice (n = 77). METHODS: Gradient-boosted trees were utilized as a novel feature selector to identify gene transcripts predictive of fulminant-like pulmonary tuberculosis. A novel attention-based multiple-instance learning model for regression was used to predict the selected genes' expression from whole-slide images. The predicted gene expression values were sufficiently accurate to identify supersusceptible mice using gradient-boosted trees trained on ground-truth gene expression data. FINDINGS: The model was accurate, showing high positive correlations with ground-truth gene expression on both cross-validation (n = 77, 0.63 ≤ ρ ≤ 0.84) and external testing sets (n = 33, 0.65 ≤ ρ ≤ 0.84). The sensitivity and specificity of the gene expression predictions for identifying supersusceptible mice were 0.88 and 0.95, respectively (n = 77), and 0.88 and 0.93, respectively, for an external set of mice (n = 33). IMPLICATIONS: Our methodology maps histopathology to gene expression with sufficient accuracy to predict a clinical outcome. It exemplifies a computational template for gene expression panels, in which relatively inexpensive and widely available tissue histopathology may be mapped to specific genes' expression to serve as a diagnostic or prognostic tool. FUNDING: National Institutes of Health and American Lung Association.
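The correlations reported above (ρ between predicted and measured expression) are computed per transcript across mice. A Pearson version in plain Python, shown for illustration; the abstract's ρ symbol may denote a rank correlation, so this is an assumption about the exact statistic:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between predicted and measured
    expression values for one transcript."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 means predictions rise and fall with the ground truth, which is what the 0.63-0.84 range above indicates.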


Subjects
Genetic Predisposition to Disease , Machine Learning , Transcriptome , Tuberculosis/genetics , Animals , Female , Hybridization, Genetic , Mice , Tuberculosis/metabolism , Tuberculosis/pathology
12.
EBioMedicine ; 62: 103094, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33166789

ABSTRACT

BACKGROUND: Identifying which individuals will develop tuberculosis (TB) remains an unresolved problem, owing to few animal models and computational approaches that effectively address its heterogeneity. To meet these shortcomings, we show that Diversity Outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). METHODS: Following M.tb infection, a "supersusceptible" phenotype develops in approximately one-third of DO mice, characterized by rapid morbidity and mortality within 8 weeks. These supersusceptible DO mice develop lung granuloma patterns akin to those of humans. This led us to use deep learning to identify supersusceptibility from hematoxylin and eosin (H&E) lung tissue sections, with only clinical outcomes (supersusceptible or not-supersusceptible) as labels. FINDINGS: The proposed machine learning model diagnosed supersusceptibility with high accuracy (91.50 ± 4.68%) compared to two expert pathologists using H&E-stained lung sections (94.95% and 94.58%). Two non-experts used the imaging biomarker to diagnose supersusceptibility with high accuracy (88.25% and 87.95%) and agreement (96.00%). A board-certified veterinary pathologist (GB) examined the imaging biomarker and determined that the model was making diagnostic decisions using a form of granuloma necrosis (karyorrhectic and pyknotic nuclear debris). This was corroborated by another board-certified veterinary pathologist. Finally, the imaging biomarker was quantified, providing a novel means to convert visual patterns within granulomas into data suitable for statistical analyses. IMPLICATIONS: Overall, our results have translatable implications for improving our understanding of TB and for the broader field of computational pathology, in which clinical outcomes alone can drive automatic identification of interpretable imaging biomarkers, knowledge discovery, and validation of existing clinical biomarkers.
FUNDING: National Institutes of Health and American Lung Association.


Subjects
Biomarkers , Deep Learning , Molecular Imaging , Mycobacterium tuberculosis , Tuberculosis/diagnosis , Tuberculosis/etiology , Algorithms , Animals , Computational Biology/methods , Disease Models, Animal , Disease Susceptibility , Female , Humans , Image Processing, Computer-Assisted , Immunohistochemistry/methods , Machine Learning , Male , Molecular Imaging/methods , Prognosis , Reproducibility of Results
13.
Diagn Pathol ; 15(1): 87, 2020 Jul 16.
Article in English | MEDLINE | ID: mdl-32677978

ABSTRACT

BACKGROUND: Identification of bladder layers is a necessary prerequisite to bladder cancer diagnosis and prognosis. We present a method of multi-class image segmentation that recognizes the urothelium, lamina propria, muscularis propria, and muscularis mucosa layers, as well as regions of red blood cells, cauterized tissue, and inflamed tissue, in images of hematoxylin and eosin stained slides of bladder biopsies. METHODS: Segmentation is carried out using a U-Net architecture. The number of layers was eight, ten, or twelve, combined with a weight initializer of He uniform, He normal, Glorot uniform, or Glorot normal. The optimal combination of these parameters was found through seven-fold training, validation, and testing on a dataset of 39 whole-slide images of T1 bladder biopsies. RESULTS: The optimal model was a twelve-layer U-Net using the He normal initializer. Initial visual evaluation by an experienced pathologist on an independent set of 15 slides segmented by our method yielded an average score of 8.93 ± 0.6 out of 10 for segmentation accuracy. It took the pathologist only 23 min to review the 15 slides (1.53 min/slide) with the computer annotations. To assess the generalizability of the proposed model, we acquired an additional independent set of 53 whole-slide images and segmented them using our method. Visual examination by a different experienced pathologist yielded an average score of 8.87 ± 0.63 out of 10 for segmentation accuracy. CONCLUSIONS: Our preliminary findings suggest that predictions of our model can minimize the time pathologists need to annotate slides. Moreover, the method has the potential to identify the bladder layers accurately. Further development could assist pathologists with the diagnosis of T1 bladder cancer.
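Segmentation quality in this study was scored visually by pathologists; a common automated complement is the Dice overlap between a predicted mask and a reference mask. A sketch on flattened binary masks (illustrative only, not a metric reported by the study):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks given as flattened
    lists of 0/1: 2|A∩B| / (|A|+|B|). Two empty masks score 1.0."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0
```

For multi-class segmentation like the bladder-layer problem, a per-class Dice would be computed for each of the seven labels and then averaged.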


Subjects
Deep Learning , Image Processing, Computer-Assisted/methods , Urinary Bladder Neoplasms/diagnosis , Urinary Bladder Neoplasms/pathology , Humans , Staining and Labeling
14.
Sci Rep ; 10(1): 2398, 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32024961

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

15.
Sci Rep ; 9(1): 18969, 2019 12 12.
Article in English | MEDLINE | ID: mdl-31831792

ABSTRACT

Automatic identification of tissue structures in the analysis of digital tissue biopsies remains an ongoing problem in digital pathology. Common barriers include a lack of reliable ground truth due to inter- and intra-reader variability, class imbalances, and inflexibility of discriminative models. To overcome these barriers, we are developing a framework that benefits from a reliable immunohistochemistry ground truth during labeling, overcomes class imbalances through single-task learning, and accommodates any number of classes through a minimally supervised, modular model-per-class paradigm. This study explores an initial application of this framework, based on conditional generative adversarial networks, to automatically identify tumor from non-tumor regions in colorectal H&E slides. The average precision, sensitivity, and F1 score were 95.13 ± 4.44%, 93.05 ± 3.46%, and 94.02 ± 3.23% during validation and 98.75 ± 2.43%, 88.53 ± 5.39%, and 93.31 ± 3.07% on an external test dataset, respectively. With accurate identification of tumor regions, we plan to further develop our framework to establish a tumor front, from which tumor buds can be detected in a restricted region. This model will be integrated into a larger system that will quantitatively determine the prognostic significance of tumor budding.
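The reported precision, sensitivity, and F1 score all derive from the binary confusion matrix. A minimal sketch for tumor (1) versus non-tumor (0) labels; the function name is illustrative:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, sensitivity (recall), and F1 for binary labels,
    treating 1 as the positive (tumor) class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why the reported F1 values above sit between the corresponding precision and sensitivity figures.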


Subjects
Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/metabolism , Colorectal Neoplasms/pathology , Image Processing, Computer-Assisted , Neural Networks, Computer , Female , Humans , Immunohistochemistry , Male