Results 1 - 20 of 21
1.
Nature; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38866050

ABSTRACT

The field of computational pathology[1,2] has witnessed remarkable progress in the development of both task-specific predictive models and task-agnostic self-supervised vision encoders[3,4]. However, despite the explosive growth of generative artificial intelligence (AI), there has been limited study on building general-purpose, multimodal AI assistants and copilots[5] tailored to pathology. Here we present PathChat, a vision-language generalist AI assistant for human pathology. We build PathChat by adapting a foundational vision encoder for pathology, combining it with a pretrained large language model and fine-tuning the whole system on over 456,000 diverse visual-language instructions consisting of 999,202 question-answer turns. We compare PathChat against several multimodal vision-language AI assistants and GPT-4V, which powers the commercially available multimodal general-purpose AI assistant ChatGPT-4[7]. PathChat achieved state-of-the-art performance on multiple-choice diagnostic questions from cases of diverse tissue origins and disease models. Furthermore, using open-ended questions and human expert evaluation, we found that overall PathChat produced more accurate and pathologist-preferable responses to diverse queries related to pathology. As an interactive and general vision-language AI copilot that can flexibly handle both visual and natural language inputs, PathChat can potentially find impactful applications in pathology education, research, and human-in-the-loop clinical decision making.
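
As a rough illustration of the adapter idea described above (not the authors' released code; the dimensions and module names below are assumptions), assistants of this kind typically project features from a frozen pathology vision encoder into the token-embedding space of the language model before instruction fine-tuning:

```python
# Minimal sketch (illustrative only): a projector that maps patch-level features
# from a pathology vision encoder into an LLM's token-embedding space.
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # Two-layer MLP connector, a common design in multimodal assistants.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_tokens):            # (batch, n_patches, vision_dim)
        return self.proj(vision_tokens)          # (batch, n_patches, llm_dim)

# The projected visual tokens would be prepended to the text-token embeddings,
# and the combined system fine-tuned on instruction (question-answer) data.
img_feats = torch.randn(1, 256, 1024)            # stand-in for frozen encoder output
visual_tokens = VisionToLLMProjector()(img_feats)
print(visual_tokens.shape)                        # torch.Size([1, 256, 4096])
```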

2.
BMC Musculoskelet Disord; 15: 425, 2014 Dec 11.
Article in English | MEDLINE | ID: mdl-25496568

ABSTRACT

BACKGROUND: Recent work has shown that the gap junction protein connexin43 (Cx43) is upregulated in cells of the joint during osteoarthritis (OA). Here we examined if the OA-associated increase in Cx43 expression impacts the function of synovial fibroblasts by contributing to the production of catabolic and inflammatory factors that exacerbate joint destruction in arthritic disease. METHODS: Using rabbit and human synovial fibroblast cell lines, we examined the effects of Cx43 overexpression and Cx43 siRNA-mediated knockdown on the gene expression of OA-associated matrix metalloproteinases (MMP1 and MMP13), aggrecanases (ADAMTS4 and ADAMTS5), and inflammatory factors (IL1, IL6 and PTGS2) by quantitative real time RT-PCR. We examined collagenase activity in conditioned media of cultured synovial cells following Cx43 overexpression. Lastly, we assessed the interplay between Cx43 and the NFκB cascade by western blotting and gene expression studies. RESULTS: Increasing Cx43 expression enhanced the gene expression of MMP1, MMP13, ADAMTS4, ADAMTS5, IL1, IL6 and PTGS2 and increased the secretion of collagenases into conditioned media of cultured synovial fibroblasts. Conversely, knockdown of Cx43 decreased expression of many of these catabolic and inflammatory genes. Modulation of Cx43 expression altered the phosphorylation of the NFκB subunit, p65, and inhibition of NFκB with chemical inhibitors blocked the effects of increased Cx43 expression on the mRNA levels of a subset of these catabolic and inflammatory genes. CONCLUSIONS: Increasing or decreasing Cx43 expression alone was sufficient to alter the levels of catabolic and inflammatory genes expressed by synovial cells. The NFκB cascade mediated the effect of Cx43 on the expression of a subset of these OA-associated genes. As such, Cx43 may be involved in joint pathology during OA, and targeting Cx43 expression or function may be a viable therapeutic strategy to attenuate the catabolic and inflammatory environment of the joint during OA.


Subjects
Connexin 43/biosynthesis, Connexin 43/genetics, Fibroblasts/metabolism, Osteoarthritis/genetics, Osteoarthritis/metabolism, Synovial Membrane/metabolism, Animals, Cultured Cells, Gene Expression Regulation, Humans, Inflammation Mediators/metabolism, Rabbits, Rats
3.
Nat Med; 30(3): 863-874, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38504017

ABSTRACT

The accelerated adoption of digital pathology and advances in deep learning have enabled the development of robust models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain, and a model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text and, notably, over 1.17 million image-caption pairs through task-agnostic pretraining. Evaluated on a suite of 14 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving histopathology images and/or text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, and text-to-image and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.
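
For readers unfamiliar with image-caption pretraining, the sketch below shows the standard CLIP-style symmetric contrastive objective that this family of visual-language models builds on; it is a simplified stand-in rather than the CONCH training code, and the embedding size and temperature are placeholder values:

```python
# Minimal sketch of a symmetric image-text contrastive loss (CLIP-style).
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (batch, dim) embeddings of paired images and captions."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature      # (batch, batch) similarities
    targets = torch.arange(len(img_emb))              # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```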


Subjects
Language, Machine Learning, Humans, Workflow
4.
Nat Med; 30(4): 1174-1190, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38641744

ABSTRACT

Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance, potentially leading to biases. This concern is all the more important as computational pathology has leveraged large public datasets that underrepresent certain demographic groups. Using publicly available data from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, as well as internal patient data, we show that whole-slide image classification models display marked performance disparities across different demographic groups when used to subtype breast and lung carcinomas and to predict IDH1 mutations in gliomas. For example, when using common modeling approaches, we observed performance gaps (in area under the receiver operating characteristic curve) between white and Black patients of 3.0% for breast cancer subtyping, 10.9% for lung cancer subtyping and 16.0% for IDH1 mutation prediction in gliomas. We found that richer feature representations obtained from self-supervised vision foundation models reduce performance variations between groups. These representations provide improvements upon weaker models even when those weaker models are combined with state-of-the-art bias mitigation strategies and modeling choices. Nevertheless, self-supervised vision foundation models do not fully eliminate these discrepancies, highlighting the continuing need for bias mitigation efforts in computational pathology. Finally, we demonstrate that our results extend to other demographic factors beyond patient race. Given these findings, we encourage regulatory and policy agencies to integrate demographic-stratified evaluation into their assessment guidelines.
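
The demographic-stratified evaluation advocated here amounts to computing the metric separately per group and reporting the gap. A minimal sketch with synthetic data (group names, labels and scores below are illustrative, not taken from the study):

```python
# Minimal sketch: per-group AUROC and the between-group performance gap.
import numpy as np
from sklearn.metrics import roc_auc_score

def stratified_auroc(y_true, y_score, groups):
    return {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
            for g in np.unique(groups)}

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)                      # toy binary labels
y_score = y_true * 0.4 + rng.random(500) * 0.6        # toy model scores
groups = rng.choice(["group_A", "group_B"], 500)      # toy demographic attribute

per_group = stratified_auroc(y_true, y_score, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, f"AUROC gap: {gap:.3f}")
```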


Subjects
Glioma, Lung Neoplasms, Humans, Bias, Black or African American, Black People, Demography, Diagnostic Errors, Glioma/diagnosis, Glioma/genetics, White People
5.
Nat Med; 30(3): 850-862, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38504018

ABSTRACT

Quantitative evaluation of tissue images is crucial for computational pathology (CPath) tasks, requiring the objective characterization of histopathological entities from whole-slide images (WSIs). The high resolution of WSIs and the variability of morphological features present significant challenges, complicating the large-scale annotation of data for high-performance applications. To address this challenge, current efforts have proposed the use of pretrained image encoders through transfer learning from natural image datasets or self-supervised learning on publicly available histopathology datasets, but have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using more than 100 million images from over 100,000 diagnostic H&E-stained WSIs (>77 TB of data) across 20 major tissue types. The model was evaluated on 34 representative CPath tasks of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient artificial intelligence models that can generalize and transfer to a wide range of diagnostically challenging tasks and clinical workflows in anatomic pathology.
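
Slide classification with few-shot class prototypes, one of the capabilities mentioned above, can be sketched as nearest-prototype matching on frozen embeddings. The snippet below is illustrative only (random vectors stand in for encoder features):

```python
# Minimal sketch: few-shot classification by nearest class prototype.
import numpy as np

def build_prototypes(support_emb, support_labels):
    # each class prototype is the mean embedding of its labelled examples
    return {c: support_emb[support_labels == c].mean(axis=0)
            for c in np.unique(support_labels)}

def predict(query_emb, prototypes):
    classes = list(prototypes)
    protos = np.stack([prototypes[c] for c in classes])           # (n_classes, dim)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return np.array(classes)[(q @ p.T).argmax(axis=1)]            # cosine match

rng = np.random.default_rng(0)
support = rng.standard_normal((8, 1024))                          # 4 shots x 2 classes
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
queries = rng.standard_normal((5, 1024))
print(predict(queries, build_prototypes(support, labels)))
```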


Subjects
Artificial Intelligence, Workflow
6.
Nat Biomed Eng; 7(6): 719-742, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37380750

ABSTRACT

In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.


Subjects
Artificial Intelligence, Medicine, Humans, Software, Machine Learning, Delivery of Health Care
7.
ArXiv; 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37693180

ABSTRACT

Tissue phenotyping is a fundamental computational pathology (CPath) task in learning objective characterizations of histopathologic biomarkers in anatomic pathology. However, whole-slide imaging (WSI) poses a complex computer vision problem in which the large-scale image resolutions of WSIs and the enormous diversity of morphological phenotypes preclude large-scale data annotation. Current efforts have proposed using pretrained image encoders with either transfer learning from natural image datasets or self-supervised pretraining on publicly available histopathology datasets, but these have not been extensively developed and evaluated across diverse tissue types at scale. We introduce UNI, a general-purpose self-supervised model for pathology, pretrained using over 100 million tissue patches from over 100,000 diagnostic haematoxylin and eosin-stained WSIs across 20 major tissue types, and evaluated on 33 representative clinical tasks in CPath of varying diagnostic difficulty. In addition to outperforming previous state-of-the-art models, we demonstrate new modeling capabilities in CPath such as resolution-agnostic tissue classification, slide classification using few-shot class prototypes, and disease subtyping generalization in classifying up to 108 cancer types in the OncoTree code classification system. UNI advances unsupervised representation learning at scale in CPath in terms of both pretraining data and downstream evaluation, enabling data-efficient AI models that can generalize and transfer to a gamut of diagnostically challenging tasks and clinical workflows in anatomic pathology.

8.
Emerg Med Clin North Am; 40(2): 343-364, 2022 May.
Article in English | MEDLINE | ID: mdl-35461627

ABSTRACT

In the 2019 annual report by the American Association of Poison Control Centers, there were more than 180,000 single substance exposures involving household cleaners, making these products the second most common exposure reported to poison control centers. Little controversy exists in the general management following dermal or ocular caustic exposure. However, there still exists controversy concerning management of gastrointestinal caustic exposure. This article provides a thorough review of diagnosis, management and prevention of gastrointestinal caustic exposures and their sequelae. Hydrofluoric acid, which requires special consideration compared to other acids, is also explored.


Subjects
Caustics, Poisoning, Caustics/toxicity, Humans, Poison Control Centers, United States/epidemiology
9.
IEEE Trans Med Imaging; 41(4): 757-770, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32881682

ABSTRACT

Cancer diagnosis, prognosis, and therapeutic response predictions are based on morphological information from histology slides and molecular profiles from genomic data. However, most deep learning-based objective outcome prediction and grading paradigms are based on histology or genomics alone and do not make use of the complementary information in an intuitive manner. In this work, we propose Pathomic Fusion, an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features for survival outcome prediction. Our approach models pairwise feature interactions across modalities by taking the Kronecker product of unimodal feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Following supervised learning, we are able to interpret and saliently localize features across each modality, and understand how feature importance shifts when conditioning on multimodal input. We validate our approach using glioma and clear cell renal cell carcinoma datasets from The Cancer Genome Atlas (TCGA), which contains paired whole-slide image, genotype, and transcriptome data with ground truth survival and histologic grade labels. In a 15-fold cross-validation, our results demonstrate that the proposed multimodal fusion paradigm improves prognostic determinations over ground truth grading and molecular subtyping, as well as over unimodal deep networks trained on histology and genomic data alone. The proposed method establishes insight and theory on how to train deep networks on multimodal biomedical data in an intuitive manner, which will be useful for other problems in medicine that seek to combine heterogeneous data streams for understanding diseases and predicting response and resistance to treatment. Code and trained models are made available at: https://github.com/mahmoodlab/PathomicFusion.
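
A minimal sketch of gated Kronecker-product fusion in the spirit described above (the linked repository contains the actual implementation; the dimensions, module names and gating details here are assumptions): each modality is re-weighted by an attention gate, then pairwise interactions are captured via an outer (Kronecker) product of the gated feature vectors.

```python
# Minimal sketch: attention-gated Kronecker-product fusion of two modalities.
import torch
import torch.nn as nn

class GatedKroneckerFusion(nn.Module):
    def __init__(self, dim_h=32, dim_g=32):
        super().__init__()
        self.gate_h = nn.Sequential(nn.Linear(dim_h + dim_g, dim_h), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.Linear(dim_h + dim_g, dim_g), nn.Sigmoid())

    def forward(self, h, g):                  # h: histology, g: genomics, (batch, dim)
        joint = torch.cat([h, g], dim=-1)
        h = h * self.gate_h(joint)            # gating controls each modality's contribution
        g = g * self.gate_g(joint)
        # append a constant 1 so unimodal terms survive the outer product
        h1 = torch.cat([h, torch.ones(h.size(0), 1, device=h.device)], dim=-1)
        g1 = torch.cat([g, torch.ones(g.size(0), 1, device=g.device)], dim=-1)
        fused = torch.bmm(h1.unsqueeze(2), g1.unsqueeze(1))   # (batch, dim_h+1, dim_g+1)
        return fused.flatten(1)                               # input to a survival head

fused = GatedKroneckerFusion()(torch.randn(4, 32), torch.randn(4, 32))
print(fused.shape)                                            # torch.Size([4, 1089])
```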


Subjects
Genomics, Glioma, Genomics/methods, Glioma/pathology, Histological Techniques, Humans, Prognosis
10.
Med Image Anal; 76: 102298, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34911013

ABSTRACT

Deep learning-based computational pathology algorithms have demonstrated a profound ability to excel in a wide array of tasks that range from characterization of well-known morphological phenotypes to predicting non-human-identifiable features from histology, such as molecular alterations. However, the development of robust, adaptable and accurate deep learning-based models often relies on the collection and time-costly curation of large, high-quality annotated training data that should ideally come from diverse sources and patient populations to reflect the heterogeneity that exists in such datasets. Multi-centric and collaborative integration of medical data across multiple institutions can naturally help overcome this challenge and boost model performance, but is limited by privacy concerns, among other difficulties that may arise in the complex data-sharing process as models scale towards using hundreds of thousands of gigapixel whole-slide images. In this paper, we introduce privacy-preserving federated learning for gigapixel whole-slide images in computational pathology using weakly supervised attention-based multiple instance learning and differential privacy. We evaluated our approach on two different diagnostic problems using thousands of histology whole-slide images with only slide-level labels. Additionally, we present a weakly supervised learning framework for survival prediction and patient stratification from whole-slide images and demonstrate its effectiveness in a federated setting. Our results show that using federated learning, we can effectively develop accurate weakly supervised deep learning models from distributed data silos without direct data sharing and its associated complexities, while also preserving differential privacy using randomized noise generation. We also make available an easy-to-use software package for federated learning in computational pathology: http://github.com/mahmoodlab/HistoFL.
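
The federated averaging with randomized noise described above can be sketched as follows (illustrative only, not the HistoFL package; the noise scale is a placeholder, floating-point parameters are assumed, and a full differential-privacy accounting is omitted):

```python
# Minimal sketch: server-side federated averaging with Gaussian noise added
# to the aggregated weights.
import copy
import torch

def federated_average(site_models, noise_std=0.0):
    """site_models: list of model state_dicts with identical keys."""
    avg = copy.deepcopy(site_models[0])
    for key in avg:
        stacked = torch.stack([m[key].float() for m in site_models])
        avg[key] = stacked.mean(dim=0)            # average across participating sites
        if noise_std > 0:                          # randomized noise for privacy
            avg[key] += torch.randn_like(avg[key]) * noise_std
    return avg

# Usage (illustrative): global_model.load_state_dict(
#     federated_average(local_state_dicts, noise_std=1e-3))
```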


Subjects
Algorithms, Privacy, Histological Techniques, Humans
11.
Cancer Cell; 40(10): 1095-1110, 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36220072

ABSTRACT

In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.


Subjects
Artificial Intelligence, Radiology, Electronic Health Records, Genomics, Humans, Medical Oncology
12.
Cancer Cell; 40(8): 865-878.e6, 2022 Aug 8.
Article in English | MEDLINE | ID: mdl-35944502

ABSTRACT

The rapidly emerging field of computational pathology has demonstrated promise in developing objective prognostic models from histology images. However, most prognostic models are either based on histology or genomics alone and do not address how these data sources can be integrated to develop joint image-omic prognostic models. Additionally, identifying explainable morphological and molecular descriptors from these models that govern such prognosis is of interest. We use multimodal deep learning to jointly examine pathology whole-slide images and molecular profile data from 14 cancer types. Our weakly supervised, multimodal deep-learning algorithm is able to fuse these heterogeneous modalities to predict outcomes and discover prognostic features that correlate with poor and favorable outcomes. We present all analyses for morphological and molecular correlates of patient prognosis across the 14 cancer types at both a disease and a patient level in an interactive open-access database to allow for further exploration, biomarker discovery, and feature assessment.


Subjects
Deep Learning, Neoplasms, Algorithms, Genomics/methods, Humans, Neoplasms/genetics, Neoplasms/pathology, Prognosis
13.
Nat Med; 28(3): 575-582, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35314822

ABSTRACT

Endomyocardial biopsy (EMB) screening represents the standard of care for detecting allograft rejections after heart transplant. Manual interpretation of EMBs is affected by substantial interobserver and intraobserver variability, which often leads to inappropriate treatment with immunosuppressive drugs, unnecessary follow-up biopsies and poor transplant outcomes. Here we present a deep learning-based artificial intelligence (AI) system for automated assessment of gigapixel whole-slide images obtained from EMBs, which simultaneously addresses detection, subtyping and grading of allograft rejection. To assess model performance, we curated a large dataset from the United States, as well as independent test cohorts from Turkey and Switzerland, which include large-scale variability across populations, sample preparations and slide scanning instrumentation. The model detects allograft rejection with an area under the receiver operating characteristic curve (AUC) of 0.962; assesses the cellular and antibody-mediated rejection type with AUCs of 0.958 and 0.874, respectively; detects Quilty B lesions, benign mimics of rejection, with an AUC of 0.939; and differentiates between low-grade and high-grade rejections with an AUC of 0.833. In a human reader study, the AI system showed non-inferior performance to conventional assessment and reduced interobserver variability and assessment time. This robust evaluation of cardiac allograft rejection paves the way for clinical trials to establish the efficacy of AI-assisted EMB assessment and its potential for improving heart transplant outcomes.


Subjects
Deep Learning, Graft Rejection, Allografts, Artificial Intelligence, Biopsy, Graft Rejection/diagnosis, Humans, Myocardium/pathology
14.
Nat Biomed Eng; 5(6): 555-570, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33649564

ABSTRACT

Deep-learning methods for computational pathology require either manual annotation of gigapixel whole-slide images (WSIs) or large datasets of WSIs with slide-level labels and typically suffer from poor domain adaptation and interpretability. Here we report an interpretable weakly supervised deep-learning method for data-efficient WSI processing and learning that only requires slide-level labels. The method, which we named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides, and instance-level clustering over the identified representative regions to constrain and refine the feature space. By applying CLAM to the subtyping of renal cell carcinoma and non-small-cell lung cancer as well as the detection of lymph node metastasis, we show that it can be used to localize well-known morphological features on WSIs without the need for spatial labels, that it outperforms standard weakly supervised classification algorithms and that it is adaptable to independent test cohorts, smartphone microscopy and varying tissue content.
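
Gated attention-based MIL pooling of the kind such slide-level classifiers build on can be sketched as below (an illustrative simplification, not the released CLAM code; the dimensions are placeholders). The attention weights are what allow diagnostic subregions to be highlighted without spatial labels.

```python
# Minimal sketch: gated attention pooling over patch embeddings of one slide.
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    def __init__(self, dim=1024, hidden=256):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, patches):                  # (n_patches, dim), one slide (bag)
        scores = self.w(torch.tanh(self.V(patches)) * torch.sigmoid(self.U(patches)))
        attn = torch.softmax(scores, dim=0)      # (n_patches, 1) attention weights
        slide_feat = (attn * patches).sum(dim=0) # attention-weighted slide representation
        return slide_feat, attn                  # attn can be mapped back to the WSI

slide_feat, attn = GatedAttentionPooling()(torch.randn(500, 1024))
print(slide_feat.shape, attn.shape)
```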


Subjects
Non-Small Cell Lung Carcinoma/diagnostic imaging, Renal Cell Carcinoma/diagnostic imaging, Deep Learning, Computer-Assisted Image Interpretation/statistics & numerical data, Kidney Neoplasms/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Area Under Curve, Non-Small Cell Lung Carcinoma/pathology, Renal Cell Carcinoma/pathology, Histocytochemistry/methods, Histocytochemistry/statistics & numerical data, Humans, Kidney Neoplasms/pathology, Lung Neoplasms/pathology, Lymphatic Metastasis, Microscopy/methods, Microscopy/statistics & numerical data, Smartphone
15.
Med Image Anal; 70: 101990, 2021 May.
Article in English | MEDLINE | ID: mdl-33609920

ABSTRACT

Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet) and varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to independently or jointly develop, optimize, and test medical imaging and analysis software for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights and created 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are made publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy, and a video demonstration can be seen in the supplementary videos (Video-I).


Subjects
Capsule Endoscopy, Robotics, Algorithms, Computer Simulation, Endoscopy, Humans, Neural Networks (Computer)
16.
Case Rep Emerg Med; 2020: 1790310, 2020.
Article in English | MEDLINE | ID: mdl-32257458

ABSTRACT

Carbamazepine is an antiepileptic drug that can cause seizures in overdose. In certain patient populations, this may be misdiagnosed as a seizure disorder. We describe a case of a 20-month-old female who presented with fever and seizure-like activity and was initially thought to have complex febrile seizures. Further historical information prompted a carbamazepine level to be checked, which was found to be 29 mcg/ml (therapeutic range 4-12 mcg/ml). Her carbamazepine levels trended down with multidose activated charcoal. Her condition improved, and she was discharged without evidence of permanent neurologic sequelae. This case illustrates that xenobiotic exposure should be considered even when historical clues are absent, as such exposures can mimic other conditions, leading to misdiagnosis and delayed treatment.

17.
IEEE Trans Med Imaging; 39(11): 3257-3267, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31283474

ABSTRACT

Nuclei segmentation is a fundamental task for various computational pathology applications including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and the quality of labeled histopathology data for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, are used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared to conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
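
The gradient penalty used to regularize the adversarial training can be sketched as follows (a generic WGAN-GP-style term given for illustration; the paper's conditional setup, in which the discriminator also sees the input image, is simplified away here):

```python
# Minimal sketch: gradient penalty on interpolations between real and generated samples.
import torch

def gradient_penalty(discriminator, real, fake):
    """real, fake: (batch, C, H, W) tensors; discriminator returns a scalar score per sample."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = discriminator(interp)
    grads = torch.autograd.grad(outputs=d_out, inputs=interp,
                                grad_outputs=torch.ones_like(d_out),
                                create_graph=True)[0]
    # penalize deviation of the gradient norm from 1
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```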


Subjects
Computer-Assisted Image Processing, Neural Networks (Computer), Humans
18.
IEEE Trans Med Imaging; 39(12): 4297-4309, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32795966

ABSTRACT

Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, the poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve adenoma detection rates for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate a novel framework to learn a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve the resolution by factors of up to 8×, 10× and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN) and Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H.
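
A spatial attention block of the general kind combined with the adversarial generator can be sketched as below (a CBAM-style block given purely as an illustration under assumed dimensions, not the EndoL2H implementation): per-pixel attention weights are derived from channel-pooled statistics and used to re-weight the feature map.

```python
# Minimal sketch: spatial attention that re-weights a convolutional feature map.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                  # (batch, C, H, W)
        avg_pool = x.mean(dim=1, keepdim=True)             # channel-average map
        max_pool = x.amax(dim=1, keepdim=True)             # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                    # spatially re-weighted features

out = SpatialAttention()(torch.randn(1, 64, 32, 32))
print(out.shape)                                           # torch.Size([1, 64, 32, 32])
```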


Subjects
Capsule Endoscopy