Results 1 - 20 of 96
1.
Cardiovasc Pathol ; : 107646, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38677634

ABSTRACT

BACKGROUND: Pathologic antibody-mediated rejection (pAMR) remains a major driver of graft failure in cardiac transplant patients. The endomyocardial biopsy remains the primary diagnostic tool but presents challenges, particularly in distinguishing the histologic component (pAMR-H), defined by 1) intravascular macrophage accumulation in capillaries and 2) activated endothelial cells with expanded cytoplasm that narrows or occludes the vascular lumen. Frequently, pAMR-H is difficult to distinguish from acute cellular rejection (ACR) and healing injury. With the advent of digital slide scanning and advances in deep learning, artificial intelligence technology is widely under investigation in oncologic pathology but remains in its infancy in transplant pathology. For the first time, we determined whether a machine learning algorithm could distinguish pAMR-H from normal myocardium, healing injury, and ACR. MATERIALS AND METHODS: A total of 4,212 annotations (1,053 regions each of normal myocardium, pAMR-H, healing injury, and ACR) were completed from 300 hematoxylin and eosin slides scanned at 40X magnification using a Leica Aperio GT450 digital whole-slide scanner. All regions of pAMR-H were annotated from patients with a confirmed previous diagnosis of pAMR2 (>50% positive C4d immunofluorescence and/or >10% CD68-positive intravascular macrophages). Annotations were imported into a Python 3.7 development environment using the OpenSlide™ package, and a convolutional neural network approach utilizing transfer learning was applied. RESULTS: The machine learning algorithm showed 98% overall validation accuracy, and pAMR-H was correctly distinguished from the specific categories with the following accuracies: normal myocardium (99.2%), healing injury (99.5%), and ACR (99.5%). CONCLUSION: Our deep learning algorithm can reach, and possibly surpass, the performance of current diagnostic standards for identifying pAMR-H.
Such a tool may serve as an adjunct diagnostic aid for improving the pathologist's accuracy and reproducibility, especially in difficult cases with high inter-observer variability. This is one of the first studies that provides evidence that an artificial intelligence machine learning algorithm can be trained and validated to diagnose pAMR-H in cardiac transplant patients. Ongoing studies include multi-institutional verification testing to ensure generalizability.
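The headline numbers above (98% overall validation accuracy, with separate accuracies per category) reduce to a simple per-class tally over the held-out annotations. A minimal bookkeeping sketch, with hypothetical labels standing in for the four annotation categories:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Overall and per-class accuracy for a multi-class patch classifier.

    y_true / y_pred are parallel lists of class labels, e.g. the four
    annotation categories ("normal", "pAMR-H", "healing", "ACR")."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    overall = sum(correct.values()) / len(y_true)
    per_class = {c: correct[c] / total[c] for c in total}
    return overall, per_class

# Toy example with hypothetical predictions:
truth = ["normal", "pAMR-H", "healing", "ACR", "pAMR-H"]
preds = ["normal", "pAMR-H", "healing", "ACR", "normal"]
overall, by_class = per_class_accuracy(truth, preds)
```

The study's validation-accuracy figures correspond to this computation over the annotated regions rather than over the toy lists shown here.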

3.
JAMA Ophthalmol ; 141(11): 1052-1061, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37856139

ABSTRACT

Importance: The identification of patients at risk of progressing from intermediate age-related macular degeneration (iAMD) to geographic atrophy (GA) is essential for clinical trials aimed at preventing disease progression. DeepGAze is a fully automated and accurate convolutional neural network-based deep learning algorithm for predicting progression from iAMD to GA within 1 year from spectral-domain optical coherence tomography (SD-OCT) scans. Objective: To develop a deep-learning algorithm based on volumetric SD-OCT scans to predict the progression from iAMD to GA during the year following the scan. Design, Setting, and Participants: This retrospective cohort study included participants who had iAMD at baseline and either progressed or did not progress to GA within the subsequent 13 months. Participants were included from centers in 4 US states. Data set 1 included patients from the Age-Related Eye Disease Study 2 (AREDS2) Ancillary Spectral-Domain Optical Coherence Tomography (A2A) study (July 2008 to August 2015). Data sets 2 and 3 included patients with imaging taken in routine clinical care at a tertiary referral center and associated satellites between January 2013 and January 2023. The stored imaging data were retrieved for the purpose of this study from July 1, 2022, to February 1, 2023. Data were analyzed from May 2021 to July 2023. Exposure: A position-aware convolutional neural network with proactive pseudointervention was trained and cross-validated on Bioptigen SD-OCT volumes (data set 1) and validated on 2 external data sets comprising Heidelberg Spectralis SD-OCT scans (data sets 2 and 3). Main Outcomes and Measures: Prediction of progression to GA within 13 months was evaluated with the area under the receiver operating characteristic curve (AUROC) as well as the area under the precision-recall curve (AUPRC), sensitivity, specificity, positive predictive value, negative predictive value, and accuracy.
Results: The study included a total of 417 patients: 316 in data set 1 (mean [SD] age, 74 [8] years; 185 [59%] female), 53 in data set 2 (mean [SD] age, 83 [8] years; 32 [60%] female), and 48 in data set 3 (mean [SD] age, 81 [8] years; 32 [67%] female). The AUROC for prediction of progression from iAMD to GA within 1 year was 0.94 (95% CI, 0.92-0.95; AUPRC, 0.90 [95% CI, 0.85-0.95]; sensitivity, 0.88 [95% CI, 0.84-0.92]; specificity, 0.90 [95% CI, 0.87-0.92]) for data set 1. The addition of expert-annotated SD-OCT features to the model resulted in no improvement compared to the fully autonomous model (AUROC, 0.95; 95% CI, 0.92-0.95; P = .19). On an independent validation data set (data set 2), the model predicted progression to GA with an AUROC of 0.94 (95% CI, 0.91-0.96; AUPRC, 0.92 [95% CI, 0.89-0.94]; sensitivity, 0.91 [95% CI, 0.74-0.98]; specificity, 0.80 [95% CI, 0.63-0.91]). At a high-specificity operating point, simulated clinical trial recruitment was enriched for patients progressing to GA within 1 year by 8.3- to 20.7-fold (data sets 2 and 3). Conclusions and Relevance: The fully automated, position-aware deep-learning algorithm assessed in this study successfully predicted progression from iAMD to GA over a clinically meaningful time frame. The ability to predict imminent GA progression could facilitate clinical trials aimed at preventing the condition and could guide clinical decision-making regarding screening frequency or treatment initiation.
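The 8.3- to 20.7-fold trial-enrichment figures come from choosing a high-specificity operating point and comparing the progression rate among recruited patients with the cohort base rate. A toy sketch of that calculation (threshold selection simplified, scores and labels hypothetical):

```python
def enrichment_at_specificity(scores, labels, min_specificity):
    """Fold-enrichment of progressors (label 1) among patients whose
    model score clears a threshold chosen so that specificity on
    non-progressors stays at or above min_specificity."""
    negs = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    # Number of tolerated false positives; round (not int) to avoid
    # float truncation such as 10 * (1 - 0.9) -> 0.999... -> 0.
    allowed_fp = round(len(negs) * (1.0 - min_specificity))
    thresh = negs[allowed_fp]  # scores strictly above this are "recruited"
    recruited = [y for s, y in zip(scores, labels) if s > thresh]
    base_rate = sum(labels) / len(labels)
    return (sum(recruited) / len(recruited)) / base_rate

# Hypothetical cohort: 10 non-progressors, 2 progressors.
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.92, 0.99]
labels = [0,   0,   0,   0,   0,   0,   0,   0,   0,   0,    1,    1]
fold = enrichment_at_specificity(scores, labels, 0.90)
```

With these toy numbers, two of the three recruited patients are progressors versus a 1-in-6 base rate, a 4-fold enrichment; the paper's larger factors reflect its real score distributions.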


Subjects
Deep Learning; Geographic Atrophy; Macular Degeneration; Aged; Aged, 80 and over; Female; Humans; Male; Algorithms; Disease Progression; Geographic Atrophy/diagnostic imaging; Macular Degeneration/diagnostic imaging; Retrospective Studies; Optical Coherence Tomography/methods; Clinical Trials as Topic
4.
Am J Pathol ; 193(9): 1185-1194, 2023 09.
Article in English | MEDLINE | ID: mdl-37611969

ABSTRACT

Thyroid cancer is the most common malignant endocrine tumor. The key test for assessing preoperative risk of malignancy is cytologic evaluation of fine-needle aspiration biopsies (FNABs). The evaluation findings are often indeterminate, leading to unnecessary surgery for lesions that prove benign on postsurgical diagnosis. We have developed a deep-learning algorithm to analyze thyroid FNAB whole-slide images (WSIs). On the largest reported data set of thyroid FNAB WSIs, we show clinical-grade performance in the screening of determinate cases and indications for its use as an ancillary test to disambiguate indeterminate cases. The algorithm screened and definitively classified 45.1% (130/288) of the WSIs as either benign or malignant, with risk-of-malignancy rates of 2.7% and 94.7%, respectively. It reduced the number of indeterminate cases (N = 108) by reclassifying 21.3% (N = 23) as benign, with a resultant risk-of-malignancy rate of 1.8%. Similar results were reproduced using a data set of consecutive FNABs collected during an entire calendar year, achieving clinically acceptable margins of error for thyroid FNAB classification.
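The screening behavior described above amounts to a three-way decision rule: call a case definitively benign or malignant only when the model score is confident, and leave the rest indeterminate for cytopathologist review. A minimal sketch (the thresholds here are illustrative, not the published operating points):

```python
def triage(prob_malignant, benign_max=0.05, malignant_min=0.95):
    """Map a model probability of malignancy to a screening decision.

    Confident scores are classified definitively; everything between
    the two thresholds is left for human review. Thresholds are
    hypothetical placeholders, not the paper's operating points."""
    if prob_malignant <= benign_max:
        return "benign"
    if prob_malignant >= malignant_min:
        return "malignant"
    return "indeterminate"

calls = [triage(p) for p in (0.01, 0.50, 0.97)]
```

In practice the two thresholds would be tuned on validation data to hit target risk-of-malignancy rates for each definitive bucket.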


Subjects
Deep Learning; Thyroid Neoplasms; Humans; Cytology; Thyroid Neoplasms/diagnosis; Algorithms
5.
Mod Pathol ; 36(6): 100129, 2023 06.
Article in English | MEDLINE | ID: mdl-36931041

ABSTRACT

We examined the performance of deep learning models on the classification of thyroid fine-needle aspiration biopsies using microscope images captured in 2 ways: with a high-resolution scanner and with a mobile phone camera. Our training set consisted of images from 964 whole-slide images captured with a high-resolution scanner. Our test set consisted of 100 slides; 20 manually selected regions of interest (ROIs) from each slide were captured in 2 ways as mentioned above. Applying a baseline machine learning algorithm trained on scanner ROIs resulted in performance deterioration when applied to the smartphone ROIs (97.8% area under the receiver operating characteristic curve [AUC], CI = [95.4%, 100.0%] for scanner images vs 89.5% AUC, CI = [82.3%, 96.6%] for mobile images, P = .019). Preliminary analysis via histogram matching showed that the baseline model was overly sensitive to slight color variations in the images (specifically, to color differences between mobile and scanner images). Adding color augmentation during training reduces this sensitivity and narrows the performance gap between mobile and scanner images (97.6% AUC, CI = [95.0%, 100.0%] for scanner images vs 96.0% AUC, CI = [91.8%, 100.0%] for mobile images, P = .309), with both modalities on par with human pathologist performance (95.6% AUC, CI = [91.6%, 99.5%]) for malignancy prediction (P = .398 for pathologist vs scanner and P = .875 for pathologist vs mobile). For indeterminate cases (pathologist-assigned Bethesda category of 3, 4, or 5), color augmentations confer some improvement (88.3% AUC, CI = [73.7%, 100.0%] for the baseline model vs 96.2% AUC, CI = [90.9%, 100.0%] with color augmentations, P = .158). In addition, we found that our model's performance levels off after 15 ROIs, a promising indication that ROI data collection would not be time-consuming for our diagnostic system. 
Finally, we showed that the model makes sensible Bethesda category (TBS) predictions: the risk-of-malignancy rate increases with the predicted TBS category, from 0% malignancy for predicted TBS 2 to 100% malignancy for predicted TBS 6.
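The histogram-matching probe mentioned above can be illustrated on 1-D grayscale values: remap each source intensity to the reference value at the same quantile rank, so the source's intensity distribution comes to match the reference's. This is a deliberately minimal integer-rank sketch, not a full image pipeline:

```python
def match_histogram(source, reference):
    """Remap source values so their empirical distribution matches the
    reference distribution: the k-th smallest source value is replaced
    by the reference value at the same quantile rank."""
    ref_sorted = sorted(reference)
    order = sorted(range(len(source)), key=lambda i: source[i])
    out = [0] * len(source)
    for rank, i in enumerate(order):
        # Reference value at the same fractional rank as source[i].
        j = rank * len(ref_sorted) // len(source)
        out[i] = ref_sorted[j]
    return out

matched = match_histogram([5, 1, 3], [10, 20, 30])
```

Applied per color channel, the same idea lets one test whether a model's predictions change when a mobile image's colors are forced onto a scanner image's distribution, which is how the color sensitivity above was diagnosed.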


Subjects
Cytology; Thyroid Neoplasms; Humans; Smartphone; Thyroid Neoplasms/diagnosis; Thyroid Neoplasms/pathology; Machine Learning
6.
iScience ; 26(1): 105872, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36647383

ABSTRACT

Diagnosis of primary brain tumors relies heavily on histopathology. Although various computational pathology methods have been developed for automated diagnosis of primary brain tumors, they usually require neuropathologists' annotation of regions of interest or selection of image patches on whole-slide images (WSIs). We developed an end-to-end Vision Transformer (ViT)-based deep learning architecture for brain tumor WSI analysis, yielding a highly interpretable deep-learning model, ViT-WSI. Based on the principle of weakly supervised machine learning, ViT-WSI accomplishes the task of major primary brain tumor type and subtype classification. Using a systematic gradient-based attribution analysis procedure, ViT-WSI can discover diagnostic histopathological features for primary brain tumors. Furthermore, we demonstrated that ViT-WSI has high predictive power for inferring the status of three diagnostic glioma molecular markers (IDH1 mutation, p53 mutation, and MGMT methylation) directly from H&E-stained histopathological images, with patient-level AUC scores of 0.960, 0.874, and 0.845, respectively.
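The patient-level AUC scores quoted above have a concrete probabilistic reading: by the Mann-Whitney interpretation, the AUC is the probability that a randomly chosen positive patient receives a higher score than a randomly chosen negative patient (ties counting half). A minimal pure-Python sketch of that computation:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs the model ranks correctly,
    with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P*N) pairwise form is fine for small cohorts; production code would sort once and use rank sums instead.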

7.
Ophthalmol Glaucoma ; 6(3): 228-238, 2023.
Article in English | MEDLINE | ID: mdl-36410708

ABSTRACT

PURPOSE: To develop and validate a deep learning (DL) model for detection of glaucoma progression using spectral-domain (SD)-OCT measurements of retinal nerve fiber layer (RNFL) thickness. DESIGN: Retrospective cohort study. PARTICIPANTS: A total of 14 034 SD-OCT scans from 816 eyes from 462 individuals. METHODS: A DL convolutional neural network was trained to assess SD-OCT RNFL thickness measurements of 2 visits (a baseline and a follow-up visit) along with time between visits to predict the probability of glaucoma progression. The ground truth was defined by consensus from subjective grading by glaucoma specialists. Diagnostic performance was summarized by the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, and was compared with conventional trend-based analyses of change. Interval likelihood ratios were calculated to determine the impact of DL model results in changing the post-test probability of progression. MAIN OUTCOME MEASURES: The AUC, sensitivity, and specificity of the DL model. RESULTS: The DL model had an AUC of 0.938 (95% confidence interval [CI], 0.921-0.955), with sensitivity of 87.3% (95% CI, 83.6%-91.6%) and specificity of 86.4% (95% CI, 79.9%-89.6%). When matched for the same specificity, the DL model significantly outperformed trend-based analyses. Likelihood ratios for the DL model were associated with large changes in the probability of progression in the vast majority of SD-OCT tests. CONCLUSIONS: A DL model was able to assess the probability of glaucomatous structural progression from SD-OCT RNFL thickness measurements. The model agreed well with expert judgments and outperformed conventional trend-based analyses of change, while also providing indication of the likely locations of change. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found after the references.
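The interval likelihood ratios reported above change the post-test probability of progression via Bayes' rule on the odds scale: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. A small sketch (the 20% pre-test probability and the LR of 10 are illustrative values, not figures from the study):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a probability of progression with an interval likelihood
    ratio: odds_post = odds_pre * LR, then back to a probability."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical example: 20% pre-test probability, test result with LR = 10.
p = post_test_probability(0.2, 10.0)
```

An LR near 1 leaves the probability essentially unchanged, which is why the paper emphasizes that its model's LRs produced large shifts in most tests.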


Subjects
Deep Learning; Glaucoma; Optic Disk; Humans; Retrospective Studies; Optical Coherence Tomography/methods; Visual Fields; Retinal Ganglion Cells; Glaucoma/diagnosis
8.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7293-7307, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36383576

ABSTRACT

Traditional multi-view learning methods often rely on two assumptions: (i) the samples in different views are well-aligned, and (ii) their representations obey the same distribution in a latent space. Unfortunately, these two assumptions may be questionable in practice, which limits the application of multi-view learning. In this work, we propose a differentiable hierarchical optimal transport (DHOT) method to mitigate the dependency of multi-view learning on these two assumptions. Given any two views of unaligned multi-view data, the DHOT method calculates the sliced Wasserstein distance between their latent distributions. Based on these sliced Wasserstein distances, the DHOT method further calculates the entropic optimal transport across different views and explicitly indicates the clustering structure of the views. Accordingly, the entropic optimal transport, together with the underlying sliced Wasserstein distances, leads to a hierarchical optimal transport distance defined for unaligned multi-view data, which works as the objective function of multi-view learning and leads to a bi-level optimization task. Moreover, our DHOT method treats the entropic optimal transport as a differentiable operator of model parameters. It considers the gradient of the entropic optimal transport in the backpropagation step and thus helps improve the descent direction for the model in the training phase. We demonstrate the superiority of our bi-level optimization strategy by comparing it to the traditional alternating optimization strategy. The DHOT method is applicable for both unsupervised and semi-supervised learning. Experimental results show that our DHOT method is at least comparable to state-of-the-art multi-view learning methods on both synthetic and real-world tasks, especially for challenging scenarios with unaligned multi-view data.
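The per-view-pair computation above rests on the closed form for 1-D optimal transport: the W1 distance between two equal-size samples is the mean gap between their sorted values, and "slicing" averages this over random projection directions. A pure-Python sketch for 2-D points (toy data only; no entropic coupling, hierarchy, or learning step from the paper):

```python
import math
import random

def wasserstein_1d(xs, ys):
    """W1 between two equal-size 1-D samples: mean absolute difference
    of the sorted values (the closed form used per projection)."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def sliced_wasserstein(X, Y, n_proj=64, seed=0):
    """Monte-Carlo sliced W1: project equal-size sets of 2-D points onto
    random unit directions and average the 1-D distances."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(theta), math.sin(theta)
        total += wasserstein_1d([p[0] * dx + p[1] * dy for p in X],
                                [q[0] * dx + q[1] * dy for q in Y])
    return total / n_proj
```

The sort-based closed form is what makes slicing cheap compared with solving a full transport problem in the ambient dimension.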

9.
IEEE Trans Neural Netw Learn Syst ; 34(4): 1666-1680, 2023 Apr.
Article in English | MEDLINE | ID: mdl-33119513

ABSTRACT

Models for predicting the time of a future event are crucial for risk assessment across a diverse range of applications. Existing time-to-event (survival) models have focused primarily on preserving pairwise ordering of estimated event times (i.e., relative risk). We propose neural time-to-event models that account for calibration and uncertainty while predicting accurate absolute event times. Specifically, an adversarial nonparametric model is introduced for estimating matched time-to-event distributions for probabilistically concentrated and accurate predictions. We also consider replacing the discriminator of the adversarial nonparametric model with a survival-function matching estimator that accounts for model calibration. The proposed estimator can be used as a means of estimating and comparing conditional survival distributions while accounting for the predictive uncertainty of probabilistic models. Extensive experiments show that the distribution matching methods outperform existing approaches in terms of both calibration and concentration of time-to-event distributions.

10.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4273-4285, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34591772

ABSTRACT

Organizing the implicit topology of a document as a graph, and further performing feature extraction via the graph convolutional network (GCN), has proven effective in document analysis. However, existing document graphs are often restricted to expressing single-level relations, which are predefined and independent of downstream learning. In this work, a set of learnable hierarchical graphs is built to explore multilevel sentence relations, assisted by a hierarchical probabilistic topic model. Based on these graphs, multiple parallel GCNs are used to extract multilevel semantic features, which are aggregated by an attention mechanism for different document-comprehension tasks. Equipped with variational inference, the graph construction and GCN are learned jointly, allowing the graphs to evolve dynamically to better match the downstream task. The effectiveness and efficiency of the proposed multilevel sentence relation graph convolutional network (MuserGCN) is demonstrated via experiments on document classification, abstractive summarization, and matching.

11.
IEEE Trans Pattern Anal Mach Intell ; 45(1): 999-1016, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35196227

ABSTRACT

Graph representation is a challenging and significant problem for many real-world applications. In this work, we propose a novel paradigm called "Gromov-Wasserstein Factorization" (GWF) to learn graph representations in a flexible and interpretable way. Given a set of graphs, whose correspondence between nodes is unknown and whose sizes can be different, our GWF model reconstructs each graph by a weighted combination of some "graph factors" under a pseudo-metric called Gromov-Wasserstein (GW) discrepancy. This model leads to a new nonlinear factorization mechanism of the graphs. The graph factors are shared by all the graphs, which represent the typical patterns of the graphs' structures. The weights associated with each graph indicate the graph factors' contributions to the graph's reconstruction, which lead to a permutation-invariant graph representation. We learn the graph factors of the GWF model and the weights of the graphs jointly by minimizing the overall reconstruction error. When learning the model, we reparametrize the graph factors and the weights to unconstrained model parameters and simplify the backpropagation of gradient with the help of the envelope theorem. For the GW discrepancy (the critical training step), we consider two algorithms to compute it, which correspond to the proximal point algorithm (PPA) and Bregman alternating direction method of multipliers (BADMM), respectively. Furthermore, we propose some extensions of the GWF model, including (i) combining with a graph neural network and learning graph representations in an auto-encoding manner, (ii) representing the graphs with node attributes, and (iii) working as a regularizer for semi-supervised graph classification. Experiments on various datasets demonstrate that our GWF model is comparable to the state-of-the-art methods. The graph representations derived by it perform well in graph clustering and classification tasks.

12.
Front Med (Lausanne) ; 9: 946937, 2022.
Article in English | MEDLINE | ID: mdl-36341258

ABSTRACT

Background: Understanding performance of convolutional neural networks (CNNs) for binary (benign vs. malignant) lesion classification based on real world images is important for developing a meaningful clinical decision support (CDS) tool. Methods: We developed a CNN based on real world smartphone images with histopathological ground truth and tested the utility of structured electronic health record (EHR) data on model performance. Model accuracy was compared against three board-certified dermatologists for clinical validity. Results: At a classification threshold of 0.5, the sensitivity was 79 vs. 77 vs. 72%, and specificity was 64 vs. 65 vs. 57% for image-alone vs. combined image and clinical data vs. clinical data-alone models, respectively. The positive predictive value (PPV) was 68 vs. 69 vs. 62%, AUC was 0.79 vs. 0.79 vs. 0.69, and average precision (AP) was 0.78 vs. 0.79 vs. 0.64 for image-alone vs. combined data vs. clinical data-alone models. Older age, male sex, and number of prior dermatology visits were important positive predictors for malignancy in the clinical data-alone model. Conclusion: Additional clinical data did not significantly improve CNN image model performance. Model accuracy for predicting malignant lesions was comparable to dermatologists (model: 71.31% vs. 3 dermatologists: 77.87, 69.88, and 71.93%), validating clinical utility. Prospective validation of the model in the primary care setting will enhance understanding of the model's clinical utility.
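The sensitivity/specificity/PPV triplets reported at the 0.5 threshold follow directly from confusion-matrix counts over the test set. A minimal sketch with hypothetical scores and labels:

```python
def confusion_metrics(probs, labels, threshold=0.5):
    """Sensitivity, specificity, and PPV for a binary classifier
    at a fixed decision threshold (0.5, as in the report above)."""
    pairs = list(zip(probs, labels))
    tp = sum(p >= threshold and y == 1 for p, y in pairs)
    fn = sum(p < threshold and y == 1 for p, y in pairs)
    tn = sum(p < threshold and y == 0 for p, y in pairs)
    fp = sum(p >= threshold and y == 0 for p, y in pairs)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Toy scores for 5 lesions (1 = histopathologically malignant).
m = confusion_metrics([0.9, 0.6, 0.4, 0.2, 0.7], [1, 0, 1, 0, 1])
```

Moving the threshold trades sensitivity against specificity, which is why the paper also reports threshold-free summaries (AUC, AP).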

13.
Artif Intell Med ; 132: 102372, 2022 10.
Article in English | MEDLINE | ID: mdl-36207074

ABSTRACT

Understanding model predictions is critical in healthcare, to facilitate rapid verification of model correctness and to guard against use of models that exploit confounding variables. We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images, in which a model must indicate the regions used to predict each abnormality. To solve this task, we propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality. Next we incorporate HiResCAM, an attention mechanism, to identify sub-slice regions. We prove that for AxialNet, HiResCAM explanations are guaranteed to reflect the locations the model used, unlike Grad-CAM which sometimes highlights irrelevant locations. Armed with a model that produces faithful explanations, we then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions to encourage the model to predict abnormalities based only on the organs in which those abnormalities appear. The 3D allowed regions are obtained automatically through a new approach, PARTITION, that combines location information extracted from radiology reports with organ segmentation maps obtained through morphological image processing. Overall, we propose the first model for explainable multi-abnormality prediction in volumetric medical images, and then use the mask loss to achieve a 33% improvement in organ localization of multiple abnormalities in the RAD-ChestCT dataset of 36,316 scans, representing the state of the art. This work advances the clinical applicability of multiple abnormality modeling in chest CT volumes.


Subjects
Abnormalities, Multiple; Tomography, X-Ray Computed; Humans; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods
14.
Article in English | MEDLINE | ID: mdl-36256717

ABSTRACT

Text generation is a key component of many natural language tasks. Motivated by the success of generative adversarial networks (GANs) for image generation, many text-specific GANs have been proposed. However, due to the discrete nature of text, these text GANs often use reinforcement learning (RL) or continuous relaxations to calculate gradients during learning, leading to high-variance or biased estimation. Furthermore, the existing text GANs often suffer from mode collapse (i.e., they have limited generative diversity). To tackle these problems, we propose a new text GAN model named text feature GAN (TFGAN), where adversarial learning is performed in a continuous text feature space. In the adversarial game, GPT2 provides the "true" features, while the generator of TFGAN learns from them. TFGAN is trained by maximum likelihood estimation on text space and adversarial learning on text feature space, effectively combining them into a single objective, while alleviating mode collapse. TFGAN achieves appealing performance in text generation tasks, and it can also be used as a flexible framework for learning text representations.

15.
Sci Rep ; 12(1): 15836, 2022 09 23.
Article in English | MEDLINE | ID: mdl-36151257

ABSTRACT

We consider machine-learning-based lesion identification and malignancy prediction from clinical dermatological images, which can be acquired interchangeably via smartphone or dermoscopy capture. Additionally, we do not assume that images contain single lesions; thus the framework supports both focal and wide-field images. Specifically, we propose a two-stage approach that first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, generates an image-level likelihood of malignancy that can be used for high-level screening processes. Further, we consider augmenting the proposed approach with clinical covariates (from electronic health records) and publicly available data (the ISIC dataset). Comprehensive experiments validated on an independent test dataset demonstrate that (1) the proposed approach outperforms alternative model architectures; (2) the model based on images outperforms a pure clinical model by a large margin, and the combination of images and clinical data does not significantly improve over the image-only model; and (3) the proposed framework offers comparable performance in terms of malignancy classification relative to three board-certified dermatologists with different levels of experience.
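The aggregation step above turns per-lesion malignancy likelihoods into one image-level score. One simple choice of aggregator is noisy-OR, the probability that at least one lesion is malignant under an independence assumption; this is an illustrative aggregator, not necessarily the one used in the paper:

```python
def image_level_malignancy(lesion_probs):
    """Noisy-OR aggregation of per-lesion malignancy probabilities:
    1 minus the probability that every detected lesion is benign,
    assuming independence across lesions (illustrative choice)."""
    p_all_benign = 1.0
    for p in lesion_probs:
        p_all_benign *= (1.0 - p)
    return 1.0 - p_all_benign

# Hypothetical wide-field image with two detected lesions.
score = image_level_malignancy([0.1, 0.5])
```

Other common aggregators (max, mean of top-k) trade off sensitivity to a single suspicious lesion against robustness to detector noise.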


Subjects
Melanoma; Skin Neoplasms; Algorithms; Dermoscopy/methods; Humans; Machine Learning; Melanoma/pathology; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology
16.
Br J Ophthalmol ; 106(3): 388-395, 2022 03.
Article in English | MEDLINE | ID: mdl-33243829

ABSTRACT

BACKGROUND/AIMS: To develop a convolutional neural network (CNN) to detect symptomatic Alzheimer's disease (AD) using a combination of multimodal retinal images and patient data. METHODS: Colour maps of ganglion cell-inner plexiform layer (GC-IPL) thickness, superficial capillary plexus (SCP) optical coherence tomography angiography (OCTA) images, and ultra-widefield (UWF) colour and fundus autofluorescence (FAF) scanning laser ophthalmoscopy images were captured in individuals with AD or healthy cognition. A CNN to predict AD diagnosis was developed using multimodal retinal images, OCT and OCTA quantitative data, and patient data. RESULTS: 284 eyes of 159 subjects (222 eyes from 123 cognitively healthy subjects and 62 eyes from 36 subjects with AD) were used to develop the model. Area under the receiver operating characteristic curve (AUC) values for predicted probability of AD for the independent test set varied by input used: UWF colour AUC 0.450 (95% CI 0.282, 0.592), OCTA SCP 0.582 (95% CI 0.440, 0.724), UWF FAF 0.618 (95% CI 0.462, 0.773), GC-IPL maps 0.809 (95% CI 0.700, 0.919). A model incorporating all images, quantitative data and patient data (AUC 0.836 (95% CI 0.729, 0.943)) performed similarly to models only incorporating all images (AUC 0.829 (95% CI 0.719, 0.939)). A model using GC-IPL maps, quantitative data and patient data had an AUC of 0.841 (95% CI 0.739, 0.943). CONCLUSION: Our CNN used multimodal retinal images to successfully predict diagnosis of symptomatic AD in an independent test set. GC-IPL maps were the most useful single inputs for prediction. Models including only images performed similarly to models also including quantitative data and patient data.


Subjects
Alzheimer Disease; Alzheimer Disease/diagnostic imaging; Fluorescein Angiography/methods; Humans; Neural Networks, Computer; Retina/diagnostic imaging; Retinal Vessels; Optical Coherence Tomography/methods
17.
Arch Pathol Lab Med ; 146(6): 727-734, 2022 06 01.
Article in English | MEDLINE | ID: mdl-34591085

ABSTRACT

CONTEXT: Prostate cancer is a common malignancy, and accurate diagnosis typically requires histologic review of multiple prostate core biopsies per patient. As pathology volumes and complexity increase, new tools to improve the efficiency of everyday practice are keenly needed. Deep learning has shown promise in pathology diagnostics, but most studies silo the efforts of pathologists from the application of deep learning algorithms. Very few hybrid pathologist-deep learning approaches have been explored, and these typically require complete review of histologic slides by both the pathologist and the deep learning system. OBJECTIVE: To develop a novel and efficient hybrid human-machine learning approach to screen prostate biopsies. DESIGN: We developed an algorithm to determine the 20 regions of interest with the highest probability of malignancy for each prostate biopsy; presenting these regions to a pathologist for manual screening limited the initial review by a pathologist to approximately 2% of the tissue area of each sample. We evaluated this approach by using 100 biopsies (29 malignant, 60 benign, 11 other) that were reviewed by 4 pathologists (3 urologic pathologists, 1 general pathologist) using a custom-designed graphical user interface. RESULTS: Malignant biopsies were correctly identified as needing comprehensive review with high sensitivity (mean, 99.2% among all pathologists); conversely, most benign prostate biopsies (mean, 72.1%) were correctly identified as needing no further review. CONCLUSIONS: This novel hybrid system has the potential to efficiently triage out most benign prostate core biopsies, conserving time for the pathologist to dedicate to detailed evaluation of malignant biopsies.
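Selecting the 20 highest-probability regions per biopsy, as described above, is a top-k ranking step. A minimal sketch using the standard library (region names and scores are hypothetical):

```python
import heapq

def top_regions(region_probs, k=20):
    """Return the k regions with the highest predicted probability of
    malignancy, highest first: the triage shortlist shown to the
    pathologist instead of the whole slide."""
    return heapq.nlargest(k, region_probs, key=lambda r: r[1])

# Hypothetical (region_id, probability) pairs from a tile classifier.
regions = [("r1", 0.10), ("r2", 0.95), ("r3", 0.40), ("r4", 0.80)]
shortlist = top_regions(regions, k=2)
```

`heapq.nlargest` runs in O(n log k), which matters when a whole-slide image yields tens of thousands of candidate tiles.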


Subjects
Prostate; Prostatic Neoplasms; Biopsy; Humans; Machine Learning; Male; Pathologists; Prostate/pathology; Prostatic Neoplasms/diagnosis; Prostatic Neoplasms/pathology
18.
Arch Pathol Lab Med ; 146(7): 872-878, 2022 07 01.
Article in English | MEDLINE | ID: mdl-34669924

ABSTRACT

CONTEXT: The use of whole slide images (WSIs) in diagnostic pathology presents special challenges for the cytopathologist. Informative areas on a direct smear from a thyroid fine-needle aspiration biopsy (FNAB) may be spread across a large area comprising blood and dead space. Manually navigating through these areas makes screening and evaluation of FNAB smears on a digital platform time-consuming and laborious. We designed a machine learning algorithm that can identify regions of interest (ROIs) on thyroid FNAB WSIs. OBJECTIVE: To evaluate the ability of the machine learning algorithm and screening software to identify and screen for a subset of informative ROIs on a thyroid FNAB WSI that can be used for final diagnosis. DESIGN: A representative slide from each of 109 consecutive thyroid FNABs was scanned. A cytopathologist reviewed each WSI and recorded a diagnosis. The machine learning algorithm screened and selected a subset of 100 ROIs from each WSI to present as an image gallery to the same cytopathologist after a washout period of 117 days. RESULTS: Concordance between the diagnoses using WSIs and those using the machine learning algorithm-generated ROI image gallery was evaluated using pairwise weighted κ statistics. Almost perfect concordance was seen between the 2 methods, with a κ score of 0.924. CONCLUSIONS: Our results show the potential of the screening software as an effective screening tool with the potential to reduce cytopathologist workloads.
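The weighted κ used above measures agreement between the two diagnostic readings beyond chance, penalizing larger ordinal disagreements more heavily. A pure-Python sketch with quadratic weights (one common weighting scheme; the paper does not specify which weights were used):

```python
def weighted_kappa(a, b, n_cat):
    """Weighted Cohen's kappa with quadratic weights for two raters'
    ordinal category assignments, coded 0..n_cat-1."""
    n = len(a)
    # Quadratic disagreement weights: 0 on the diagonal, 1 at the corners.
    w = [[(i - j) ** 2 / (n_cat - 1) ** 2 for j in range(n_cat)]
         for i in range(n_cat)]
    obs = [[0.0] * n_cat for _ in range(n_cat)]   # observed joint frequencies
    for x, y in zip(a, b):
        obs[x][y] += 1.0 / n
    pa = [sum(row) for row in obs]                 # rater A marginals
    pb = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    observed = sum(w[i][j] * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    expected = sum(w[i][j] * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - observed / expected

# Toy ratings on a 3-category ordinal scale.
kappa = weighted_kappa([0, 1, 2, 2], [0, 1, 2, 1], 3)
```

A κ of 0.924, as reported, sits in the "almost perfect" band of the conventional Landis-Koch interpretation.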


Subjects
Software; Thyroid Gland; Algorithms; Biopsy, Fine-Needle/methods; Humans; Machine Learning; Thyroid Gland/diagnostic imaging; Thyroid Gland/pathology
19.
Acad Med ; 96(9): 1230, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34432659
20.
Surg Endosc ; 35(9): 4918-4929, 2021 09.
Article in English | MEDLINE | ID: mdl-34231065

ABSTRACT

BACKGROUND: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS: After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.


Subjects
Machine Learning; Consensus; Delphi Technique; Humans; Surveys and Questionnaires