ABSTRACT
Background and aim: In traditional medicine, Machilus zuihoensis Hayata bark (MZ) is used in combination with other medicines to treat gastric cancer, gastric ulcer (GU), and liver and cardiovascular diseases. This study aimed to evaluate the gastroprotective effects and possible mechanism(s) of MZ powder against acidic ethanol (AE)-induced GU, as well as its toxicity, in mice. Experimental procedure: The gastroprotective effect of MZ powder was analyzed by orally administering MZ for 14 consecutive days before inducing GU with AE. The ulcer index (UI) and protection percentage were calculated, hematoxylin and eosin staining and periodic acid-Schiff staining were performed, and gastric mucus weights were measured. The antioxidative, anti-inflammatory, and anti-apoptotic mechanisms and possible signaling pathway(s) were studied. Results and conclusion: Pretreatment with MZ (100 and 200 mg/kg) significantly decreased the mucosal hemorrhage, edema, inflammation, and UI induced by 10 µL/g AE, resulting in protection percentages of 88.9% and 93.4%, respectively. MZ pretreatment reduced AE-induced oxidative stress by decreasing the malondialdehyde level and restoring superoxide dismutase activity. MZ pretreatment demonstrated anti-inflammatory effects by reducing both serum and gastric tumor necrosis factor-α, interleukin (IL)-6, and IL-1β levels. Furthermore, MZ pretreatment exhibited an anti-apoptotic effect by decreasing the Bcl-2-associated X protein/B-cell lymphoma 2 ratio. The gastroprotective mechanisms of MZ involved inactivation of the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and mitogen-activated protein kinase (MAPK) signaling pathways. Moreover, 200 mg/kg MZ did not induce liver or kidney toxicity. In conclusion, MZ protects against AE-induced GU through mucus-secreting, antioxidative, anti-inflammatory, and anti-apoptotic mechanisms and through inhibition of the NF-κB and MAPK signaling pathways.
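The abstract reports protection percentages without stating how they were derived; a commonly used definition in gastroprotection studies, computed from the ulcer indices of the AE control and MZ-pretreated groups, is shown below (this is a standard formulation, not necessarily the exact one used by the authors):

```latex
\text{Protection}\,(\%) = \frac{UI_{\text{AE control}} - UI_{\text{MZ pretreated}}}{UI_{\text{AE control}}} \times 100
```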
ABSTRACT
Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which are both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and arXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to the development of medical imaging classification models.
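As an illustration of one strategy family such a review covers, below is a minimal sketch of a contrastive self-supervised objective (a SimCLR-style NT-Xent loss) in PyTorch; it is a generic example, not code from any of the reviewed papers, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) unit vectors
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # exclude self-similarity
    # each view's positive is the other augmented view of the same image
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Pre-training an image encoder with such a loss on unlabeled scans, then fine-tuning on a small labeled set, is the typical usage pattern in this literature.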
ABSTRACT
The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, facilitating more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves improvements of 5.06%, 1.53%, and 4.58% in test accuracy on retinal, dermatology, and chest X-ray classification, respectively, compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuned with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
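To make the pre-training paradigm concrete, here is a hedged sketch of a masked-image-modeling step: random patches are hidden, only visible patches are encoded, and the model is penalized for reconstruction error on the hidden ones. This is a generic MAE-style illustration, not the authors' code (which is at the GitHub link above); the decoder is assumed to re-insert mask tokens and predict all patches.

```python
import torch
import torch.nn as nn

def mim_step(encoder: nn.Module, decoder: nn.Module,
             patches: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """patches: (B, L, D) patchified images; returns the reconstruction loss."""
    B, L, D = patches.shape
    n_keep = int(L * (1 - mask_ratio))
    perm = torch.rand(B, L, device=patches.device).argsort(dim=1)
    keep = perm[:, :n_keep]                                  # visible patch indices
    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
    latent = encoder(visible)        # Transformer encoder sees visible patches only
    pred = decoder(latent)           # assumed to output all L patches: (B, L, D)
    hidden = torch.ones(B, L, dtype=torch.bool, device=patches.device)
    hidden.scatter_(1, keep, False)  # True where the patch was masked out
    return ((pred - patches) ** 2)[hidden].mean()  # MSE on masked patches only
```

In the federated setting described above, each institution runs such steps locally and only model weights are aggregated, so raw images never leave the site.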
Subject(s)
Algorithms; Diagnostic Imaging; Radiography; Retina
ABSTRACT
Prostate cancer is the most frequent cancer in men and a leading cause of cancer death. Determining a patient's optimal therapy is a challenge, where oncologists must select a therapy with the highest likelihood of success and the lowest likelihood of toxicity. International standards for prognostication rely on non-specific and semi-quantitative tools, commonly leading to over- and under-treatment. Tissue-based molecular biomarkers have attempted to address this, but most have limited validation in prospective randomized trials and expensive processing costs, posing substantial barriers to widespread adoption. There remains a significant need for accurate and scalable tools to support therapy personalization. Here we demonstrate prostate cancer therapy personalization by predicting long-term, clinically relevant outcomes using a multimodal deep learning architecture trained on clinical data and digital histopathology from prostate biopsies. We train and validate models using five phase III randomized trials conducted across hundreds of clinical centers. Histopathological data were available for 5654 of 7764 randomized patients (71%) with a median follow-up of 11.4 years. Compared to the most common risk-stratification tool, the risk groups developed by the National Comprehensive Cancer Network (NCCN), our models have superior discriminatory performance across all endpoints, ranging from 9.2% to 14.6% relative improvement in a held-out validation set. This artificial intelligence-based tool improves prognostication over standard tools and allows oncologists to computationally predict the likeliest outcomes of specific patients to determine optimal treatment. Outfitted with digital scanners and internet access, any clinic could offer such capabilities, enabling global access to therapy personalization.
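As a toy illustration of how histopathology and clinical data can feed one outcome model (this is not the authors' published architecture), the sketch below pools patch embeddings from a digitized biopsy with learned attention and concatenates the result with tabular clinical features before a risk head; all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class BiopsyOutcomeModel(nn.Module):
    def __init__(self, patch_dim: int = 512, clin_dim: int = 8):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(patch_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))     # per-patch attention score
        self.head = nn.Linear(patch_dim + clin_dim, 1)   # long-term-outcome logit

    def forward(self, patches: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        # patches: (n_patches, patch_dim) embeddings; clinical: (clin_dim,) features
        w = torch.softmax(self.attn(patches), dim=0)     # (n_patches, 1) weights
        slide = (w * patches).sum(dim=0)                 # attention-pooled slide vector
        return self.head(torch.cat([slide, clinical], dim=0))
```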
ABSTRACT
BACKGROUND: Non-T2 asthma and hypothyroidism share several inflammatory mechanisms. However, large-scale, real-world studies evaluating the association between asthma and hypothyroidism are lacking. The objective of this study was to evaluate the risk of asthma patients developing hypothyroidism. METHODS: In this retrospective cohort study, people with asthma were identified from the Longitudinal Health Insurance Database in Taiwan. After excluding ineligible patients with a previous history of hypothyroidism, 1:1 propensity matching was conducted to select a non-asthma control group. Based on a multivariate Cox regression model, the adjusted hazard ratio for asthma patients developing hypothyroidism was calculated. RESULTS: In total, 95,321 asthma patients were selected as the asthma group, and the same number of people without asthma were selected as the control group. The incidence of new-onset hypothyroidism in the asthma and non-asthma groups was 8.13 and 6.83 per 100,000 people per year, respectively. Compared with the non-asthma group, the adjusted hazard ratio of the asthma group for developing hypothyroidism was 1.217 (95% confidence interval, 1.091-1.357). CONCLUSIONS: We found asthma to be associated with an increased risk of hypothyroidism. Clinicians should be mindful of the endocrinological and inflammatory interactions between the two diseases when caring for people with asthma.
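The adjusted hazard ratio reported above comes from a multivariate Cox model; a sketch of that computation with the lifelines library is below. The table and column names are hypothetical stand-ins, not the study's actual database fields.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical matched cohort: one row per person, with follow-up time in years,
# a 0/1 hypothyroidism event flag, a 0/1 asthma exposure flag, and covariates.
df = pd.read_csv("matched_cohort.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="hypothyroidism",
        formula="asthma + age + sex")        # adjust for measured covariates
print(cph.hazard_ratios_["asthma"])          # adjusted HR (1.217 in the study)
cph.print_summary()                          # coefficients with 95% CIs
```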
ABSTRACT
BACKGROUND: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlate for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. METHODS: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we first quantified the performance of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding by anatomic and phenotypic population features, both by assessing the ability of these hypothesised confounders to detect race in isolation using regression models and by re-evaluating the deep learning models on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race. FINDINGS: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). Finally, we provide evidence to show that the ability of AI deep learning models persisted over all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and demand further study. INTERPRETATION: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging. FUNDING: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
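A hedged sketch of the corruption experiment described in the findings: degrade the images (here with a simple low-pass mean blur), rescore the already-trained model, and recompute the one-vs-rest AUC. The model, loader, and choice of corruption are placeholders, not the study's code.

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

def auc_under_blur(model: torch.nn.Module, loader, kernel: int = 9) -> float:
    """One-vs-rest AUC for multi-class race labels after a mean-blur corruption."""
    blur = torch.nn.AvgPool2d(kernel, stride=1, padding=kernel // 2)  # keeps H, W
    scores, labels = [], []
    model.eval()
    with torch.no_grad():
        for x, y in loader:                       # x: (B, C, H, W) images
            probs = torch.softmax(model(blur(x)), dim=1)
            scores.append(probs.numpy())
            labels.append(y.numpy())
    return roc_auc_score(np.concatenate(labels), np.concatenate(scores),
                         multi_class="ovr")
```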
Subject(s)
Deep Learning; Lung Neoplasms; Artificial Intelligence; Early Detection of Cancer; Humans; Retrospective Studies
ABSTRACT
Automatic segmentation of lung nodules on computed tomography (CT) images is challenging owing to variability in morphology, location, and intensity. In addition, few segmentation methods can capture intra-nodular heterogeneity to assist lung nodule diagnosis. In this study, we propose an end-to-end architecture to perform fully automated segmentation of multiple types of lung nodules and generate intra-nodular heterogeneity images for clinical use. To this end, a hybrid loss is considered by introducing a Faster R-CNN model based on a generalized intersection-over-union (GIoU) loss into a generative adversarial network. The Lung Image Database Consortium image collection dataset, comprising 2,635 lung nodules, was combined with 3,200 lung nodules from five hospitals for this study. Compared with manual segmentation by radiologists, the proposed model obtained an average Dice coefficient (DC) of 82.05% on the test dataset. Compared with U-Net, NoduleNet, nnU-Net, and three other models, the proposed method achieved comparable performance on lung nodule segmentation and generated more vivid and valid intra-nodular heterogeneity images, which are beneficial in radiological diagnosis. In an external test of 91 patients from another hospital, the proposed model achieved an average DC of 81.61%. The proposed method effectively addresses the challenges of unavoidable human interaction and additional pre-processing procedures in existing solutions for lung nodule segmentation. In addition, the results show that the intra-nodular heterogeneity images generated by the proposed model are suitable for facilitating lung nodule diagnosis in radiology.
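The GIoU loss named above has a standard closed form (Rezatofighi et al., 2019); the sketch below implements it for axis-aligned boxes in (x1, y1, x2, y2) form and illustrates only this loss term, not the full Faster R-CNN plus GAN pipeline.

```python
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); returns mean 1 - GIoU."""
    ix1 = torch.max(pred[:, 0], target[:, 0]); iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2]); iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)
    # smallest enclosing box; GIoU penalizes the empty space it contains
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    enclose = ((ex2 - ex1) * (ey2 - ey1)).clamp(min=1e-7)
    giou = iou - (enclose - union) / enclose
    return (1.0 - giou).mean()
```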
Subject(s)
Lung Neoplasms; Databases, Factual; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Thorax; Tomography, X-Ray Computed/methods
ABSTRACT
Background: In the field of autoimmune and inflammatory disorders, various approaches have been applied to provide information on disease activity, comorbidities, epidemiology, and risk factors. However, no previous study had thoroughly analyzed research trends in the field, and no bibliometric analysis focusing on pemphigoid diseases was available. The objective of the current study was to evaluate current research trends in the field. Methods: A search was conducted in the Web of Science database for various subcategories of pemphigoid diseases. Detailed information, including publication type, author information, citations, and publication details, was obtained for further analysis. Results: From the 6,995 retrieved studies, the top 100 most-cited articles were extracted for analysis. Among these, 70% focused on bullous pemphigoid. More than 60% of the top 100 studies presented original data, and 30% were guidelines or narrative reviews. Regarding their primary focus, most of the high-impact studies described the molecular mechanisms of pemphigoid diseases (26%), their management (19%), or risk factors (17%). Other studies provided general reviews or discussed the epidemiology, diagnosis/definition, comorbidities, and clinical characteristics of pemphigoid diseases. Conclusion: This comprehensive bibliometric study of pemphigoid diseases provides an overview of current research focuses in the field. Topics such as disease management, molecular mechanisms of pathogenesis, and drug-induced pemphigoid diseases were frequently addressed in the most-cited studies. For researchers and clinicians, the research trends and study focuses of the top 100 cited studies can serve as a reference for future investigation and patient management.
ABSTRACT
Recent advancements in deep learning have led to a resurgence of medical imaging and Electronic Medical Record (EMR) models for a variety of applications, including clinical decision support, automated workflow triage, clinical prediction, and more. However, very few models have been developed to integrate both clinical and imaging data, even though in routine practice clinicians rely on the EMR for context when interpreting medical imaging. In this study, we developed and compared different multimodal fusion model architectures capable of utilizing both pixel data from volumetric Computed Tomography Pulmonary Angiography scans and clinical patient data from the EMR to automatically classify Pulmonary Embolism (PE) cases. The best-performing multimodal model is a late fusion model that achieves an AUROC of 0.947 [95% CI: 0.946-0.948] on the entire held-out test set, outperforming imaging-only and EMR-only single-modality models.
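A minimal sketch of the late fusion pattern described above: each modality keeps its own model, and only the two output logits are combined by a small learned layer. The branch models and dimensions are placeholders, not the study's architecture.

```python
import torch
import torch.nn as nn

class LateFusionPE(nn.Module):
    def __init__(self, image_model: nn.Module, emr_model: nn.Module):
        super().__init__()
        self.image_model = image_model  # e.g., a 3D CNN over CTPA volumes -> (B, 1)
        self.emr_model = emr_model      # e.g., an MLP over tabular EMR features -> (B, 1)
        self.fuse = nn.Linear(2, 1)     # learned weighting of the two logits

    def forward(self, volume: torch.Tensor, emr: torch.Tensor) -> torch.Tensor:
        img_logit = self.image_model(volume)
        emr_logit = self.emr_model(emr)
        return self.fuse(torch.cat([img_logit, emr_logit], dim=1))  # PE logit
```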
Subject(s)
Electronic Health Records; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pulmonary Embolism/diagnosis; Tomography, X-Ray Computed; Clinical Decision-Making; Disease Management; Humans; Image Interpretation, Computer-Assisted; Machine Learning; Sensitivity and Specificity; Tomography, X-Ray Computed/methods; Workflow
ABSTRACT
Advancements in deep learning techniques carry the potential to make significant contributions to healthcare, particularly in fields that utilize medical imaging for diagnosis, prognosis, and treatment decisions. The current state-of-the-art deep learning models for radiology applications consider only pixel-value information, without data informing clinical context. Yet in practice, pertinent and accurate non-imaging data based on the clinical history and laboratory data enable physicians to interpret imaging findings in the appropriate clinical context, leading to higher diagnostic accuracy, more informed clinical decision-making, and improved patient outcomes. To achieve a similar goal using deep learning, medical imaging pixel-based models must also be able to process contextual data from electronic health records (EHR) in addition to pixel data. In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR data, and we systematically review medical data fusion literature published between 2012 and 2020. We conducted a systematic search on PubMed and Scopus for original research articles leveraging deep learning for fusion of multimodal data. In total, we screened 985 studies and extracted data from 17 papers. By means of this systematic review, we present current knowledge, summarize important results, and provide implementation guidelines to serve as a reference for researchers interested in the application of multimodal fusion in medical imaging.
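For contrast with the late fusion example above, here is a sketch of early (feature-level) fusion, one of the technique families such a review categorizes: learned image features are concatenated with EHR features before a shared classifier. The encoder and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, image_encoder: nn.Module, img_dim: int, ehr_dim: int,
                 n_classes: int = 2):
        super().__init__()
        self.image_encoder = image_encoder   # image -> (B, img_dim) features
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + ehr_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))       # shared head over fused features

    def forward(self, image: torch.Tensor, ehr: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), ehr], dim=1)
        return self.classifier(fused)
```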
ABSTRACT
[This corrects the article DOI: 10.1038/s41746-020-0266-y.].
ABSTRACT
Pulmonary embolism (PE) is a life-threatening clinical problem, and computed tomography pulmonary angiography (CTPA) is the gold standard for diagnosis. Prompt diagnosis and immediate treatment are critical to avoid high morbidity and mortality rates, yet PE remains among the diagnoses most frequently missed or delayed. In this study, we developed a deep learning model, PENet, to automatically detect PE on volumetric CTPA scans as an end-to-end solution for this purpose. PENet is a 77-layer 3D convolutional neural network (CNN) pretrained on the Kinetics-600 dataset and fine-tuned on a retrospective CTPA dataset collected from a single academic institution. PENet's performance in detecting PE was evaluated on data from two different institutions: one a hold-out dataset from the same institution as the training data, and a second collected from an external institution to evaluate model generalizability to an unrelated population. PENet achieved an AUROC of 0.84 [0.82-0.87] for detecting PE on the hold-out internal test set and 0.85 [0.81-0.88] on the external dataset. PENet also outperformed current state-of-the-art 3D CNN models. The results represent a successful application of an end-to-end 3D CNN model to the complex task of PE diagnosis without requiring computationally intensive and time-consuming preprocessing, and they demonstrate sustained performance on data from an external institution. Our model could be applied as a triage tool to automatically identify clinically important PEs, allowing prioritization for diagnostic radiology interpretation and improved care pathways via more efficient diagnosis.
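To show the end-to-end volume-to-probability pattern PENet follows, below is a toy 3D CNN classifier; PENet itself is a 77-layer pretrained network, so this stand-in is only a structural illustration, not the published model.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))  # global pooling over depth, height, width
        self.head = nn.Linear(32, 1)  # PE-present logit

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, 1, D, H, W) CTPA scan
        return self.head(self.features(volume).flatten(1))
```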
Subject(s)
Computational Biology; Guidelines as Topic; Writing; Authorship; Documentation; Humans
ABSTRACT
In this study, indium-tin oxide (ITO)/Al-doped zinc oxide (AZO) composite films were fabricated by pulsed laser deposition and used as transparent contact layers (TCLs) in GaN-based blue light-emitting diodes (LEDs). The ITO/AZO TCLs were composed of thin ITO (50 nm) films and AZO films with thicknesses varying from 200 to 1000 nm. A conventional LED with an ITO (200 nm) TCL prepared by e-beam evaporation was fabricated and characterized for comparison. From the transmittance spectra, the ITO/AZO films exhibited high transparency, above 90% at a wavelength of 465 nm. The sheet resistance of the ITO/AZO TCL decreased as the AZO thickness increased, which could be attributed to an increase in carrier concentration, leading to a decrease in the forward bias of the LED. The LEDs with ITO/AZO composite TCLs showed better light extraction than the LED with the ITO TCL, in agreement with simulation. At an injection current of 20 mA, the output power of LEDs fabricated with ITO/AZO TCLs was enhanced by 45%, 63%, and 71% for AZO thicknesses of 200, 460, and 1000 nm, respectively, compared with the LED fabricated using the ITO (200 nm) TCL.