ABSTRACT
Optical coherence tomography angiography (OCTA) plays a crucial role in quantifying and analyzing retinal vascular diseases. However, the limited field of view (FOV) inherent in most commercial OCTA imaging systems poses a significant challenge for clinicians, restricting their ability to analyze larger retinal regions at high resolution. Automatic stitching of OCTA scans of adjacent regions may provide a promising solution to extend the region of interest. However, commonly used stitching algorithms struggle to achieve effective alignment due to the noise, artifacts and dense vasculature present in OCTA images. To address these challenges, we propose a novel retinal OCTA image stitching network, named MR2-Net, which integrates multi-scale representation learning and dynamic location guidance. In the first stage, an image registration network with progressive multi-resolution feature fusion is proposed to derive deep semantic information effectively. Additionally, we introduce a dynamic guidance strategy to locate the foveal avascular zone (FAZ) and constrain registration errors in overlapping vascular regions. In the second stage, an image fusion network based on multiple mask constraints and adjacent image aggregation (AIA) strategies is developed to further eliminate artifacts in the overlapping areas of stitched images, thereby achieving precise vessel alignment. To validate the effectiveness of our method, we conduct a series of experiments on two carefully constructed datasets, i.e., OPTOVUE-OCTA and SVision-OCTA. Experimental results demonstrate that our method outperforms other image stitching methods and effectively generates high-quality wide-field OCTA images, achieving structural similarity index (SSIM) scores of 0.8264 and 0.8014 on the two datasets, respectively.
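The SSIM scores reported above can be illustrated with a minimal, single-window sketch of the index. Note this is a simplification for intuition only: production evaluations typically use a sliding Gaussian window (e.g. `skimage.metrics.structural_similarity`), and this is not the paper's evaluation code.

```python
def global_ssim(x, y, data_range=1.0):
    """Single-window structural similarity between two equal-length
    grayscale images given as flat lists of pixel intensities.

    Uses the standard SSIM formula with the conventional stabilizing
    constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2.
    """
    n = len(x)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / (n - 1)
    var_y = sum((b - mu_y) ** 2 for b in y) / (n - 1)
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images score exactly 1.0; structurally dissimilar images score lower, which is why SSIM is a natural fit for judging how well stitched overlaps align.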
ABSTRACT
Living-skin detection is an important step for imaging photoplethysmography and biometric anti-spoofing. In this paper, we propose a new approach that exploits spatio-temporal characteristics of structured light patterns projected on the skin surface for living-skin detection. We observed that, due to the interactions between laser photons and tissues inside the multi-layer skin structure, the frequency-domain sharpness feature of laser spots on skin and non-skin surfaces exhibits a clear difference. Additionally, the subtle physiological motion of living skin causes laser interference, leading to brightness fluctuations of laser spots projected on the skin surface. Based on these two observations, we designed a new living-skin detection algorithm to distinguish skin from non-skin using spatio-temporal features of structured laser spots. Experiments in a dark chamber and a Neonatal Intensive Care Unit (NICU) demonstrated that the proposed setup and method performed well, achieving a precision of 85.32%, recall of 83.87%, and F1-score of 83.03% averaged over these two scenes. Compared to the approach that only leverages the multi-layer property of the skin structure, the hybrid approach achieves average improvements of 8.18% in precision, 3.93% in recall, and 8.64% in F1-score. These results validate the efficacy of using frequency-domain sharpness and brightness fluctuations to augment the features of living-skin tissues irradiated by structured light, providing a solid basis for structured-light-based physiological imaging.
Subject(s)
Algorithms; Skin; Spatio-Temporal Analysis; Humans; Skin/diagnostic imaging; Photoplethysmography/methods; Image Processing, Computer-Assisted/methods
ABSTRACT
Efficient medical image segmentation aims to provide accurate pixel-wise predictions with a lightweight implementation framework. However, existing lightweight networks generally overlook the generalizability of cross-domain medical segmentation tasks. In this paper, we propose Generalizable Knowledge Distillation (GKD), a novel framework for enhancing the performance of lightweight networks on cross-domain medical segmentation through generalizable knowledge distillation from powerful teacher networks. Considering the domain gaps between different medical datasets, we propose Model-Specific Alignment Networks (MSAN) to obtain domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote the MSAN training. Based on the domain-invariant vectors in MSAN, we propose two generalizable distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). In DCGD, two implicit contrastive graphs are designed to model the intra-coupling and inter-coupling semantic correlations. Then, in DICD, the domain-invariant semantic vectors are reconstructed from the two networks (i.e., teacher and student) in a crossover manner to hierarchically achieve simultaneous generalization of the lightweight networks. Moreover, a metric named Fréchet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on the Liver, Retinal Vessel and Colonoscopy segmentation datasets demonstrate the superiority of our method in terms of performance and generalization ability on lightweight networks.
Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer; Databases, Factual; Deep Learning
ABSTRACT
BACKGROUND: Early and reliable identification of patients with sepsis who are at high risk of mortality is important to improve clinical outcomes. However, 3 major barriers to artificial intelligence (AI) models, including the lack of interpretability, the difficulty of generalization, and the risk of automation bias, hinder the widespread adoption of AI models for use in clinical practice. OBJECTIVE: This study aimed to develop and validate (internally and externally) a conformal predictor of sepsis mortality risk in patients who are critically ill, leveraging AI-assisted prediction modeling. The proposed approach enables explaining the model output and assessing its confidence level. METHODS: We retrospectively extracted data on adult patients with sepsis from a database collected in a teaching hospital at Beth Israel Deaconess Medical Center for model training and internal validation. A large multicenter critical care database from the Philips eICU Research Institute was used for external validation. A total of 103 clinical features were extracted from the first day after admission. We developed an AI model using gradient-boosting machines to predict the mortality risk of sepsis and used Mondrian conformal prediction to estimate the prediction uncertainty. The Shapley additive explanation method was used to explain the model. RESULTS: A total of 16,746 (80%) patients from Beth Israel Deaconess Medical Center were used to train the model. When tested on the internal validation population of 4187 (20%) patients, the model achieved an area under the receiver operating characteristic curve of 0.858 (95% CI 0.845-0.871), which was reduced to 0.800 (95% CI 0.789-0.811) when externally validated on 10,362 patients from the Philips eICU database. At a specified confidence level of 90% for the internal validation cohort, the percentage of error predictions (n=438) out of all predictions (n=4187) was 10.5%, with 1229 (29.4%) predictions requiring clinician review.
In contrast, the AI model without conformal prediction made 1449 (34.6%) errors. When externally validated, more predictions (n=4004, 38.6%) were flagged for clinician review due to interdatabase heterogeneity. Nevertheless, the model still produced significantly lower error rates compared to the point predictions by AI (n=1221, 11.8% vs n=4540, 43.8%). The most important predictors identified in this predictive model were Acute Physiology Score III, age, urine output, vasopressors, and pulmonary infection. Clinically relevant risk factors contributing to a single patient were also examined to show how the risk arose. CONCLUSIONS: By combining model explanation and conformal prediction, AI-based systems can be better translated into medical practice for clinical decision-making.
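The Mondrian (class-conditional) conformal step described above can be sketched in a few lines: each class keeps its own calibration set of nonconformity scores, and a test point's p-value for a class is the fraction of that class's calibration scores at least as extreme. Predictions whose label set is empty or contains both classes would be flagged for clinician review. The scores and the 90% confidence level below are illustrative assumptions, not the study's calibration data.

```python
def mondrian_p_value(cal_scores, test_score):
    """Class-conditional conformal p-value: the (smoothed) fraction of
    calibration nonconformity scores >= the test score."""
    ge = sum(1 for s in cal_scores if s >= test_score)
    return (ge + 1) / (len(cal_scores) + 1)

def conformal_predict(cal_by_class, test_scores_by_class, significance=0.10):
    """Return the set of labels whose conformal p-value exceeds the
    significance level (here 0.10, i.e. 90% confidence). A set that is
    empty or contains more than one label signals an uncertain prediction."""
    return {c for c, score in test_scores_by_class.items()
            if mondrian_p_value(cal_by_class[c], score) > significance}
```

A singleton prediction set can be acted on directly, while multi-label or empty sets correspond to the "flagged for clinician review" cases in the abstract.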
Subject(s)
Artificial Intelligence; Sepsis; Adult; Humans; Clinical Decision-Making; Hospitals, Teaching; Retrospective Studies; Sepsis/diagnosis; Multicenter Studies as Topic
ABSTRACT
Recent studies have seen significant advancements in the field of long-term person re-identification (LT-reID) through the use of clothing-irrelevant or insensitive features. This work takes the field a step further by addressing a previously unexplored issue, the Clothing Status Distribution Shift (CSDS). CSDS refers to the differing ratios of samples with clothing changes to those without clothing changes between the training and test sets, leading to a decline in LT-reID performance. We establish a connection between the performance of LT-reID and CSDS, and argue that addressing CSDS can improve LT-reID performance. To that end, we propose a novel framework called Meta Clothing Status Calibration (MCSC), which uses meta-learning to optimize the LT-reID model. Specifically, MCSC simulates CSDS between meta-train and meta-test with meta-optimization objectives, optimizing the LT-reID model and making it robust to CSDS. This framework is designed to prevent overfitting and improve the generalization ability of the LT-reID model in the presence of CSDS. Comprehensive evaluations on seven datasets demonstrate that the proposed MCSC framework effectively handles CSDS and improves current state-of-the-art LT-reID methods on several LT-reID benchmarks.
ABSTRACT
AIM: This study aimed to classify quiet sleep, active sleep and wake states in preterm infants by analysing cardiorespiratory signals obtained from routine patient monitors. METHODS: We studied eight preterm infants, with an average postmenstrual age of 32.3 ± 2.4 weeks, in a neonatal intensive care unit in the Netherlands. Electrocardiography and chest impedance respiratory signals were recorded. After filtering and R-peak detection, cardiorespiratory features and motion and cardiorespiratory interaction features were extracted, based on previous research. An extremely randomised trees algorithm was used for classification, and performance was evaluated using leave-one-patient-out cross-validation and Cohen's kappa coefficient. RESULTS: A sleep expert annotated 4731 30-second epochs (39.4 h); active sleep, quiet sleep and wake accounted for 73.3%, 12.6% and 14.1%, respectively. Using all features and the extremely randomised trees algorithm, the binary discrimination between active and quiet sleep was better than between other states. Incorporating motion and cardiorespiratory interaction features improved the classification of all sleep states (kappa 0.38 ± 0.09) compared with analyses without these features (kappa 0.31 ± 0.11). CONCLUSION: Cardiorespiratory interactions contributed to detecting quiet sleep and motion features contributed to detecting wake states. This combination improved the automated classification of sleep states.
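Cohen's kappa, the chance-corrected agreement metric used above, has a compact closed form; a minimal sketch (the sleep-state labels below are hypothetical, not the study's annotations):

```python
from collections import Counter

def cohens_kappa(labels_true, labels_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance from the two
    raters' marginal label frequencies."""
    n = len(labels_true)
    observed = sum(1 for t, p in zip(labels_true, labels_pred) if t == p) / n
    count_t = Counter(labels_true)
    count_p = Counter(labels_pred)
    expected = sum(count_t[c] * count_p.get(c, 0) for c in count_t) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why values around 0.31-0.38 indicate modest but real agreement with the expert annotations.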
Subject(s)
Infant, Premature; Sleep; Humans; Infant, Newborn; Sleep/physiology; Male; Female; Electrocardiography
ABSTRACT
Heart rate variability (HRV) is a crucial metric that quantifies the variation between consecutive heartbeats, serving as a significant indicator of autonomic nervous system (ANS) activity. It has found widespread applications in the clinical diagnosis, treatment, and prevention of cardiovascular diseases. In this study, we proposed an optical model for defocused speckle imaging that simultaneously incorporates out-of-plane translation and rotation-induced motion for highly sensitive non-contact seismocardiogram (SCG) measurement. Using electrocardiogram (ECG) signals as the gold standard, we evaluated the performance of photoplethysmogram (PPG) signals and speckle-based SCG signals in assessing HRV. The results indicated that the HRV parameters measured from SCG signals extracted from laser speckle videos showed higher consistency with the results obtained from ECG signals than PPG signals did. Additionally, we confirmed that even when clothing obstructed the measurement site, SCG signals extracted from the motion of laser speckle patterns remained effective in assessing HRV levels. This demonstrates the robustness of camera-based non-contact SCG in monitoring HRV, highlighting its potential as a reliable, non-contact alternative to traditional contact PPG sensors.
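Time-domain HRV parameters of the kind compared across ECG, PPG and SCG above are computed from the series of inter-beat (RR) intervals; SDNN and RMSSD shown here are the standard metrics, though the abstract does not specify exactly which parameters the study used:

```python
import math

def hrv_time_domain(rr_ms):
    """Time-domain HRV from successive inter-beat (RR) intervals in ms.

    SDNN: sample standard deviation of all intervals (overall variability).
    RMSSD: root mean square of successive differences (beat-to-beat,
    largely vagally mediated variability).
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}
```

Consistency of such parameters between a candidate signal (SCG or PPG) and the ECG reference is what the study used to judge each modality.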
Subject(s)
Electrocardiography; Heart Rate; Photoplethysmography; Signal Processing, Computer-Assisted; Humans; Heart Rate/physiology; Electrocardiography/methods; Adult; Photoplethysmography/methods; Male; Female; Young Adult
ABSTRACT
Living-skin detection has been used to prevent face-spoofing attacks on face recognition systems. In this paper, we propose a new concept that exploits the multi-layer structural property of skin for living-skin detection. We observe a significant difference in the blur of the laser spot created by structured light on skin and non-skin surfaces, due to the characteristic behavior of laser photons during skin penetration and reflection. Based on this observation, we designed a new living-skin detection algorithm that differentiates skin from non-skin based on blur detection of laser spots. The experimental results show that the proposed setup and method achieve promising performance, with an average precision of 96.7%, average recall of 82.2%, and average F1-score of 88.6% on a dataset of 20 adult subjects. This demonstrates the effectiveness of the new concept of using the multi-layer properties of skin tissues for living-skin detection, which may lead to new solutions for face anti-spoofing.
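Blur detection of a laser spot, the core of the algorithm above, can be approximated with the classic variance-of-Laplacian sharpness score: subsurface scattering in skin diffuses the spot, lowering its high-frequency content. This is a generic sketch of the idea, not the paper's actual detector.

```python
def laplacian_variance(img):
    """Variance of the discrete 4-neighbour Laplacian of a 2D grayscale
    image (list of lists): a standard no-reference sharpness score.
    Sharper spots (e.g. on non-skin surfaces) yield higher variance;
    blurred spots (diffused by multi-layer skin) yield lower variance."""
    h, w = len(img), len(img[0])
    responses = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j] + img[i][j - 1]
                   + img[i][j + 1] - 4 * img[i][j])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

Thresholding such a score per detected spot is one simple way to turn the blur difference into a skin/non-skin decision.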
Subject(s)
Face; Skin; Adult; Humans; Algorithms; Fraud
ABSTRACT
Biphasic face photo-sketch synthesis has significant practical value in wide-ranging fields such as digital entertainment and law enforcement. Previous approaches directly generate the photo or sketch in a global view; they often suffer from low-quality sketches and complex photograph variations, leading to unnatural and low-fidelity results. In this article, we propose a novel semantic-driven generative adversarial network, cooperating with graph representation learning, to address these issues. Considering that human faces have distinct spatial structures, we first inject class-wise semantic layouts into the generator to provide style-based spatial information for the synthesized face photographs and sketches. In addition, to enhance the authenticity of details in the generated faces, we construct two types of representational graphs from semantic parsing maps of the input faces, dubbed the intraclass semantic graph (IASG) and the interclass structure graph (IRSG). Specifically, the IASG effectively models the intraclass semantic correlations of each facial semantic component, thus producing realistic facial details. To keep the generated faces structurally coordinated, the IRSG models interclass structural relations among the facial components via graph representation learning. To further enhance the perceptual quality of the synthesized images, we present a biphasic interactive cycle training strategy that fully exploits the multilevel feature consistency between the photograph and the sketch. Extensive experiments demonstrate that our method outperforms state-of-the-art competitors on the CUHK Face Sketch (CUFS) and CUHK Face Sketch FERET (CUFSF) datasets.
ABSTRACT
With the development of deep convolutional neural networks, medical image segmentation has achieved a series of breakthroughs in recent years. However, high-performance convolutional neural networks usually entail numerous parameters and high computation costs, which hinder their application in resource-limited medical scenarios. Meanwhile, the scarcity of large-scale annotated medical image datasets further impedes the application of high-performance networks. To tackle these problems, we propose Graph Flow, a comprehensive knowledge distillation framework, for both network-efficient and annotation-efficient medical image segmentation. Specifically, Graph Flow Distillation transfers the essence of cross-layer variations from a well-trained cumbersome teacher network to an untrained compact student network. In addition, an unsupervised Paraphraser Module is integrated to purify the knowledge of the teacher, which also benefits training stabilization. Furthermore, we build a unified distillation framework by integrating adversarial distillation and vanilla logits distillation, which can further refine the final predictions of the compact network. With different teacher networks (traditional convolutional architectures or prevalent transformer architectures) and student networks, we conduct extensive experiments on four medical image datasets with different modalities (Gastric Cancer, Synapse, BUSI, and CVC-ClinicDB). We demonstrate the prominent ability of our method on these datasets, achieving competitive performances. Moreover, we demonstrate the effectiveness of Graph Flow through a novel semi-supervised paradigm for dual-efficient medical image segmentation. Our code will be available at Graph Flow.
Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer
ABSTRACT
One essential problem in skeleton-based action recognition is how to extract discriminative features over all skeleton joints. However, recent State-Of-The-Art (SOTA) models for this task tend to be exceedingly sophisticated and over-parameterized. The low efficiency in model training and inference has increased the cost of validating model architectures on large-scale datasets. To address this issue, recent advanced separable convolutional layers are embedded into an early-fused Multiple Input Branches (MIB) network, constructing an efficient Graph Convolutional Network (GCN) baseline for skeleton-based action recognition. In addition, based on this baseline, we design a compound scaling strategy to expand the model's width and depth synchronously, eventually obtaining a family of efficient GCN baselines with high accuracy and small numbers of trainable parameters, termed EfficientGCN-Bx, where "x" denotes the scaling coefficient. On two large-scale datasets, i.e., NTU RGB+D 60 and 120, the proposed EfficientGCN-B4 baseline outperforms other SOTA methods, e.g., achieving 92.1% accuracy on the cross-subject benchmark of the NTU 60 dataset, while being 5.82× smaller and 5.85× faster than MS-G3D, one of the SOTA methods. The source code in PyTorch and the pretrained models are available at https://github.com/yfsong0709/EfficientGCNv1.
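Compound scaling of width and depth with a single coefficient x can be sketched in the style of EfficientNet-like scaling rules; the growth factors `alpha` and `beta` below are illustrative assumptions, not the coefficients actually used by EfficientGCN.

```python
def compound_scale(base_width, base_depth, x, alpha=1.2, beta=1.35):
    """Sketch of compound scaling: expand depth and width geometrically
    with a single scaling coefficient x, so a whole family of models
    (B0, B2, B4, ...) is generated from one baseline.

    alpha (depth growth) and beta (width growth) are hypothetical values
    chosen only to illustrate the mechanism.
    """
    depth = round(base_depth * alpha ** x)
    width = round(base_width * beta ** x)
    return width, depth
```

The point of scaling both dimensions together is that accuracy and parameter count grow in a controlled, predictable way along the family, instead of tuning width and depth independently per model.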
ABSTRACT
In this paper, we propose a prior-guided transformer for accurate radiology report generation. In the encoder, a radiograph is first represented by a set of patch features obtained through a convolutional neural network and a traditional transformer encoder. Then an additive Gaussian model is applied to represent the prior knowledge based on unsupervised clustering and sparse attention. In the decoder, prior embeddings are acquired by probabilistically sampling from the radiograph prior. Then the visual features, language embeddings, and prior embeddings are fused by our proposed Prior Guided Attention to generate accurate radiology reports. Experimental results show that our method achieves better performance than state-of-the-art methods on two public radiology datasets, which proves the effectiveness of our prior-guided transformer.
Subject(s)
Neural Networks, Computer; Radiology; Humans; Radiography; Normal Distribution
ABSTRACT
Objective: To investigate the effect of Fufang Huangqi Decoction on the gut microbiota in patients with class I or II myasthenia gravis (MG) and to explore the correlation between gut microbiota and MG (registration number, ChiCTR2100048367; registration website, http://www.chictr.org.cn/listbycreater.aspx; NCBI: SRP338707). Methods: In this study, microbial community composition and diversity analyses were carried out on fecal specimens from MG patients who did not take Fufang Huangqi Decoction (control group, n = 8) and those who took Fufang Huangqi Decoction and achieved remarkable alleviation of symptoms (medicated group, n = 8). The abundance, diversity within and between habitats, taxonomic differences and corresponding discrimination markers of gut microbiota in the control and medicated groups were assessed. Results: Compared with the control group, the medicated group showed a significantly decreased abundance of Bacteroidetes (P < 0.05) and significantly increased abundance of Actinobacteria at the phylum level, a significantly decreased abundance of Bacteroidaceae (P < 0.05) and significantly increased abundance of Bifidobacteriaceae at the family level, and a significantly decreased abundance of Blautia and Bacteroides (P < 0.05) and significantly increased abundance of Bifidobacterium, Lactobacillus and Roseburia at the genus level. Compared to the control group, the medicated group had decreased abundance, diversity, and genetic diversity of the communities and increased coverage, but the differences were not significant (P > 0.05); the markers that differed significantly between communities at the genus level and influenced the differences between groups were Blautia, Bacteroides, Bifidobacterium and Lactobacillus. Conclusions: MG patients have obvious gut microbiota-associated metabolic disorders.
Fufang Huangqi Decoction regulates the gut microbiota in patients with class I or II MG by reducing the abundance of Blautia and Bacteroides and increasing the abundance of Bifidobacterium and Lactobacillus. The correlation between gut microbiota and MG may be related to cell-mediated immunity.
ABSTRACT
Accurate and efficient catheter segmentation in 3D ultrasound (US) is essential for ultrasound-guided cardiac interventions. State-of-the-art segmentation algorithms, based on convolutional neural networks (CNNs), suffer from high computational cost and large 3D data size for GPU implementation, which are far from satisfactory for real-time applications. In this paper, we propose a novel approach for efficient catheter segmentation in 3D US. Instead of using Cartesian US, our approach performs catheter segmentation in Frustum US (i.e., the US data before scan conversion). Compared to Cartesian US, Frustum US has a much smaller volume size, therefore the catheter can be segmented more efficiently in Frustum US. However, annotating the irregular and deformed Frustum images is challenging, and it is laborious to obtain the voxel-level annotation. To address this, we propose a weakly supervised learning framework, which requires only bounding-box annotations. The labels of the voxels are generated by incorporating class activation maps with line filtering, which are iteratively updated during the training cycles. Our experimental results show that, compared to Cartesian US, the catheter can be segmented much more efficiently in Frustum US (i.e., 0.25 s per volume) with better accuracy. Extensive experiments also validate the effectiveness of the proposed weakly supervised learning method.
Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Catheters; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Ultrasonography
ABSTRACT
For clinical medical diagnosis and treatment, image super-resolution (SR) technology helps improve ultrasonic imaging quality and thereby enhances the accuracy of disease diagnosis. However, due to differences in sensing devices or transmission media, the resolution degradation process of ultrasound imaging in real scenes is uncontrollable, especially since the blur kernel is usually unknown. This issue causes current end-to-end SR networks to perform poorly when applied to ultrasonic images. Aiming to achieve effective SR in real ultrasound medical scenes, in this work we propose a blind deep SR method based on progressive residual learning and memory upgrade. Specifically, we estimate an accurate blur kernel from the spatial attention map block of the low-resolution (LR) ultrasound image through a multi-label classification network, and then construct three modules for ultrasound image blind SR: an up-sampling (US) module, a residual learning (RL) module, and a memory upgrading (MU) module. The US module is designed to upscale the input information, and the up-sampled residual result is used for SR reconstruction. The RL module is employed to approximate the original LR image and continuously generate the updated residual, feeding it to the next US module. The MU module stores all progressively learned residuals, which offers increased interaction between the US and RL modules, augmenting detail recovery. Extensive experiments and evaluations on the benchmark CCA-US and US-CASE datasets demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Ultrasonography
ABSTRACT
Medical instrument segmentation in 3D ultrasound (US) is essential for image-guided intervention. However, training a successful deep neural network for instrument segmentation requires a large number of labeled images, which are expensive and time-consuming to obtain. In this article, we propose a semi-supervised learning (SSL) framework for instrument segmentation in 3D US that requires much less annotation effort than existing methods. To achieve SSL, a Dual-UNet is proposed to segment the instrument. The Dual-UNet leverages unlabeled data using a novel hybrid loss function consisting of uncertainty and contextual constraints. Specifically, the uncertainty constraints leverage the uncertainty estimation of the UNet predictions and therefore improve the unlabeled information for SSL training. In addition, the contextual constraints exploit the contextual information of the training images, which is used as complementary information for voxel-wise uncertainty estimation. Extensive experiments on multiple ex-vivo and in-vivo datasets show that our proposed method achieves a Dice score of about 68.6%-69.1% and an inference time of about 1 s per volume. These results are better than state-of-the-art SSL methods, and the inference time is comparable to that of supervised approaches.
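The Dice score used to report segmentation quality above is straightforward to compute from binary voxel masks; a minimal sketch:

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks given as flat lists:
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks by
    convention."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 2.0 * intersection / denom if denom else 1.0
```

A score of 1.0 means perfect voxel-wise overlap with the ground-truth instrument mask, so values around 0.69 reflect partial but substantial overlap.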
Subject(s)
Neural Networks, Computer; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Research Design; Ultrasonography; Uncertainty
ABSTRACT
Automated segmentation of brain glioma plays an active role in diagnosis decision, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising technologies for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information about tumor cells and their surroundings, which has been proven a fundamental cue for dealing with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high-dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context-guided attentive conditional random fields, which can selectively aggregate features. We evaluate our method using the publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm has better or competitive performance against several state-of-the-art approaches under different segmentation metrics on the training and validation sets.
Subject(s)
Glioma; Magnetic Resonance Imaging; Algorithms; Brain/diagnostic imaging; Glioma/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer
ABSTRACT
BACKGROUND: Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require the use of reference markers in the surgical field for both patient and instrument tracking. PURPOSE: To improve reliability and facilitate clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition. METHODS: The Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterward localized by triangulation. The triangulation error is used for assessing the localization quality in 3D. RESULTS: The framework was tested on a cadaver dataset and in eight clinical cases. The detected features across the entire patient dataset were found to have an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as a ground truth. An average accuracy of 0.627 and 0.622 mm was achieved for MSER and SURF, respectively. CONCLUSIONS: This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
Subject(s)
Minimally Invasive Surgical Procedures; Skin; Spine/surgery; Surgery, Computer-Assisted
ABSTRACT
Recent research on single image super-resolution (SISR) has achieved great success due to the development of deep convolutional neural networks. However, most existing SISR methods merely focus on super-resolution with a single fixed integer scale factor. This simplified assumption does not meet the complex conditions of real-world images, which often suffer from various blur kernels or various levels of noise. More importantly, previous methods lack the ability to cope with arbitrary degradation parameters (scale factors, blur kernels, and noise levels) with a single model. Only a few methods can handle multiple degradation factors, e.g., noninteger scale factors, blurring, and noise, simultaneously within a single SISR model. In this work, we propose a simple yet powerful method termed Meta-USR, the first unified super-resolution network for arbitrary degradation parameters based on meta-learning. In Meta-USR, a meta-restoration module (MRM) is proposed to enhance the traditional upscale module with the capability to adaptively predict the weights of the convolution filters for various combinations of degradation parameters. Thus, the MRM can not only upscale the feature maps with arbitrary scale factors but also restore the SR image under different blur kernels and noise levels. Moreover, the lightweight MRM can be placed at the end of the network, which makes it very efficient for iteratively/repeatedly searching over various degradation factors. We evaluate the proposed method through extensive experiments on several widely used SISR benchmark datasets. The qualitative and quantitative experimental results show the superiority of our Meta-USR.