ABSTRACT
Neuro-electrophysiological recordings contain prominent aperiodic activity - meaning irregular activity with no characteristic frequency - which has variously been referred to as 1/f (or 1/f-like) activity, fractal activity, or 'scale-free' activity. Previous work has established that aperiodic features of neural activity are dynamic and variable, relating (between subjects) to healthy aging and to clinical diagnoses, and also (within subjects) tracking conscious states and behavioral performance. There is, however, a wide variety of conceptual frameworks and associated methods for the analysis and interpretation of aperiodic activity - for example, time domain measures such as the autocorrelation, fractal measures, and/or various complexity and entropy measures, as well as measures of the aperiodic exponent in the frequency domain. There is a lack of clear understanding of how these different measures relate to each other and to what extent they reflect the same or different properties of the data. This makes it difficult to synthesize results across approaches and complicates our overall understanding of the properties, biological significance, and demographic, clinical, and behavioral correlates of aperiodic neural activity. To address this problem, we systematically survey the different approaches for measuring aperiodic neural activity, starting with an automated literature analysis to curate a collection of the most common methods. We then evaluate and compare these methods using statistically representative time series simulations. In doing so, we establish consistent relationships between the measures, showing that much of what they capture reflects shared variance - though with some notable idiosyncrasies. Broadly, frequency domain methods are more specific to aperiodic features of the data, whereas time domain measures are more affected by oscillatory activity. We extend this analysis by applying the measures to a series of empirical EEG and iEEG datasets, replicating the simulation results. We conclude by summarizing the relationships between the multiple methods, emphasizing opportunities for reexamining previous findings and for future work.
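To make the comparison concrete, here is a minimal illustrative sketch (not the paper's actual method collection), assuming only numpy and scipy: it simulates 1/f-like activity by spectrally shaping white noise, then contrasts a frequency-domain measure (the aperiodic exponent from a log-log fit to the Welch PSD) with a time-domain measure (lag-1 autocorrelation), with and without an added oscillation.

```python
# Minimal sketch: contrast a frequency-domain aperiodic measure (spectral
# exponent) with a time-domain one (autocorrelation) on simulated data.
# Illustrative only -- not the survey's actual pipeline.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n = 500, 60 * 500                          # 60 s at 500 Hz

def powerlaw_noise(exponent, n, fs):
    """Shape white noise so its PSD follows 1/f**exponent."""
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    scale = np.ones_like(freqs)
    scale[1:] = freqs[1:] ** (-exponent / 2)   # amplitude ~ f^(-exp/2)
    return np.fft.irfft(spectrum * scale, n)

def spectral_exponent(x, fs, fmin=1, fmax=100):
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    m = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[m]), np.log10(pxx[m]), 1)
    return -slope                              # exponent chi of 1/f^chi

x = powerlaw_noise(1.5, n, fs)
osc = 0.5 * np.sin(2 * np.pi * 10 * np.arange(n) / fs)  # 10 Hz alpha-like

for label, sig in [("aperiodic only", x), ("aperiodic + oscillation", x + osc)]:
    chi = spectral_exponent(sig, fs)
    r1 = np.corrcoef(sig[:-1], sig[1:])[0, 1]  # lag-1 autocorrelation
    print(f"{label}: exponent ~ {chi:.2f}, lag-1 autocorr ~ {r1:.2f}")
```

Running both measures on signals with and without the oscillation gives a quick sense of how oscillatory power leaks into each estimate.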
ABSTRACT
Sustained attention, as the basis of general cognitive ability, naturally varies across time scales, spanning from hours, e.g. from wakefulness to drowsiness, to seconds, e.g. trial-by-trial fluctuation within a task session. Whether a unified mechanism underlies such trans-scale variability remains unclear. Here we show that fluctuation of cortical excitation/inhibition (E/I) is a strong modulator of sustained attention in humans across time scales. First, we observed that the ability to attend varied across brain states (wakefulness, postprandial somnolence, sleep-deprived), as well as within any single state, with even larger swings. Second, regardless of the time scale involved, we found that a highly attentive state was always linked to a more balanced cortical E/I, characterized by electroencephalography (EEG) features, while deviations from the balanced state led to a decline in attention, suggesting the fluctuation of cortical E/I as a common mechanism underlying trans-scale attentional variability. Furthermore, we found that the variations of both sustained attention and cortical E/I indices exhibited fractal structure in the temporal domain, with features of self-similarity. Taken together, these results demonstrate that sustained attention naturally varies across time scales in a more complex way than previously appreciated, with cortical E/I as a shared neurophysiological modulator.
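The abstract reports fractal, self-similar temporal structure in both the attention and E/I index series without naming the estimator; the following is a generic aggregated-variance check for self-similar scaling (an assumption, not the study's pipeline), assuming numpy.

```python
# Hedged sketch: test a fluctuating series (e.g., an attention or E/I index)
# for self-similar scaling by the aggregated-variance method. The estimator
# here is generic, not necessarily the one used in the study.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 14)            # stand-in series; white noise

scales, variances = [], []
for m in [2 ** k for k in range(1, 8)]:     # block sizes 2..128
    blocks = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
    scales.append(m)
    variances.append(blocks.var())

# For a self-similar (fractal) series, Var(block mean) ~ m^(2H - 2); white
# noise gives H ~ 0.5, while persistent fluctuations give H > 0.5.
slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
print(f"estimated H ~ {1 + slope / 2:.2f}")
```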
Subjects
Attention, Cerebral Cortex, Electroencephalography, Wakefulness, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Wakefulness/physiology, Cerebral Cortex/physiology, Neural Inhibition/physiology, Time Factors, Cortical Excitability/physiology, Sleep Deprivation/physiopathology

ABSTRACT
This article focuses on characterizing a class of quasi-periodic metamaterials created through the repeated arrangement of an elementary cell in a fixed direction. The elementary cell consists of two building blocks made of elastic materials and arranged according to the generalized Fibonacci sequence, giving rise to a quasi-periodic finite microstructure, also called Fibonacci generation. By exploiting the transfer matrix method, the frequency band structure of selected periodic approximants associated with the Fibonacci superlattice, i.e. the layered quasi-periodic metamaterial, is determined. The self-similarity of the frequency band structure is analysed by means of the invariants of the symplectic transfer matrix as well as the transmission coefficients of the finite clusters of Fibonacci generations. A high-frequency continualization scheme is then proposed to identify integral-type or gradient-type non-local continua. The frequency band structures obtained from the continualization scheme are compared with those derived from the Floquet-Bloch theory to validate the proposed scheme. This article is part of the theme issue 'Current developments in elastic and acoustic metamaterials science (Part 1).'
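As a sketch of the transfer-matrix machinery (with invented material parameters, not the paper's), the following builds a Fibonacci generation by the substitution rule, multiplies the displacement-stress transfer matrices of the layers, and uses the trace criterion |tr M| <= 2 of the unimodular (symplectic) cell matrix to flag pass and stop bands of the periodic approximant.

```python
# Sketch of the transfer-matrix idea for a 1D two-phase Fibonacci superlattice
# (longitudinal elastic waves). Material values are invented for illustration.
import numpy as np

def layer_matrix(omega, rho, E, d):
    """Displacement-stress transfer matrix across one elastic layer."""
    k = omega * np.sqrt(rho / E)               # wavenumber, c = sqrt(E/rho)
    return np.array([[np.cos(k * d),           np.sin(k * d) / (E * k)],
                     [-E * k * np.sin(k * d),  np.cos(k * d)]])

def fibonacci_word(n):
    """Standard substitution S_n = S_{n-1} S_{n-2}; generalized rules
    (S_n = p copies of S_{n-1} followed by q of S_{n-2}) work the same way."""
    a, b = "A", "AB"
    for _ in range(n - 1):
        a, b = b, b + a
    return a

blocks = {"A": dict(rho=2700.0, E=70e9, d=1e-3),   # hypothetical phase A
          "B": dict(rho=1200.0, E=3e9,  d=1e-3)}   # hypothetical phase B

word = fibonacci_word(6)                       # one Fibonacci generation
for omega in np.linspace(1e4, 5e6, 8):
    M = np.eye(2)
    for c in word:
        M = layer_matrix(omega, **blocks[c]) @ M
    # For the periodic approximant, |trace(M)| <= 2 marks a pass band;
    # det(M) = 1 reflects the symplectic structure behind the invariants.
    status = "pass" if abs(np.trace(M)) <= 2 else "stop"
    print(f"omega = {omega:.2e} rad/s: |tr M| = {abs(np.trace(M)):.2f} ({status})")
```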
ABSTRACT
High-resolution (HR) magnetic resonance imaging (MRI) can reveal rich anatomical structures for clinical diagnosis. However, due to hardware and signal-to-noise ratio limitations, MRI images are often collected at low resolution (LR), which is not conducive to diagnosing and analyzing clinical diseases. Recently, deep learning super-resolution (SR) methods have demonstrated great potential in enhancing the resolution of MRI images; however, most do not take the cross-modality and internal priors of MR images seriously, which hinders SR performance. In this paper, we propose a cross-modality reference and feature mutual-projection (CRFM) method to enhance the spatial resolution of brain MRI images. Specifically, we feed the gradients of HR MRI images from a referenced imaging modality into the SR network to transform true clear textures to LR feature maps. Meanwhile, we design a plug-in feature mutual-projection (FMP) method to capture the cross-scale dependency and cross-modality similarity details of MRI images. Finally, we fuse all feature maps with parallel attention to produce and refine the HR features adaptively. Extensive experiments on MRI images in the image domain and k-space show that our CRFM method outperforms existing state-of-the-art MRI SR methods.
ABSTRACT
Magnetic resonance imaging (MRI) is a non-invasive medical imaging technique that provides high-resolution 3D images and valuable insights into human tissue conditions. The refinement of denoising methods for MRI remains a crucial concern for improving image quality. This study aims to improve the prefiltered rotationally invariant non-local principal component analysis (PRI-NL-PCA) algorithm. We relaxed the original restrictions using particle swarm optimization to determine optimal parameters for the PCA part of the algorithm. In addition, we adjusted the prefiltered rotationally invariant non-local means (PRI-NLM) part by traversing the signal intensities of voxels instead of their spatial positions, reducing duplicate calculations and expanding the search volume to the whole image when estimating voxels' signal intensities. The new method demonstrated superior denoising performance compared to the original approach and, in most cases, ran faster. Furthermore, the proposed method can also be applied to Gaussian noise in natural images and has the potential to enhance other NLM-based denoising algorithms.
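As an illustration of the particle-swarm step, here is a generic PSO over a stand-in denoiser parameter (a Gaussian filter's sigma); the actual PRI-NL-PCA objective and parameter set are more involved.

```python
# Generic particle swarm optimization sketch for tuning a denoiser parameter.
# The Gaussian filter stands in for the PCA stage; in this simulation the
# ground truth is known, whereas the real algorithm uses its own criterion.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
clean = gaussian_filter(rng.standard_normal((64, 64)), 3)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

def cost(sigma):
    """Mean squared error after denoising with the candidate parameter."""
    return np.mean((gaussian_filter(noisy, sigma) - clean) ** 2)

n_particles, n_iter = 12, 30
pos = rng.uniform(0.1, 5.0, n_particles)       # candidate sigmas
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 5.0)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)]

print(f"PSO-selected sigma ~ {gbest:.2f}, MSE ~ {cost(gbest):.4f}")
```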
Subjects
Algorithms, Magnetic Resonance Imaging, Signal-to-Noise Ratio, Magnetic Resonance Imaging/methods, Humans, Principal Component Analysis, Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods

ABSTRACT
The first chapter of this book introduces some history, philosophy, and basic concepts of fractal geometry and discusses how the neurosciences can benefit from applying computational fractal-based analysis. Further, it compares fractal with Euclidean approaches to analyzing and quantifying the brain in its entire physiopathological spectrum and presents an overview of the first section of this book as well.
ABSTRACT
The introduction of fractal geometry to the neurosciences has been a major paradigm shift over the last decades, as it has helped overcome the approximations and limitations that arise when Euclidean and reductionist approaches are used to analyze neurons or the entire brain. Fractal geometry allows for quantitative analysis and description of the geometric complexity of the brain, from its single units to its neuronal networks. As illustrated in the second section of this book, fractal analysis provides a quantitative tool for the study of the morphology of brain cells (i.e., neurons and microglia) and their components (e.g., dendritic trees, synapses), as well as the brain structure itself (cortex, functional modules, neuronal networks). The self-similar logic which generates and shapes the different hierarchical systems of the brain, and even some structures related to its "container," that is, the cranial sutures of the skull, is widely discussed in the following chapters, linking the applications of fractal analysis in neuroanatomy and basic neurosciences to the clinical applications discussed in the third section.
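The canonical quantitative tool referred to here is the fractal (box-counting) dimension; a minimal numpy sketch on a toy binary structure shows the idea.

```python
# Box-counting sketch: the basic fractal-dimension estimate that fractal
# analysis of brain morphology builds on, applied to a toy binary image.
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate D from N(s) ~ s^(-D), where N(s) counts occupied boxes."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True                       # filled square, expect D ~ 2
interior = (np.roll(img, 1, 0) & np.roll(img, -1, 0) &
            np.roll(img, 1, 1) & np.roll(img, -1, 1))
outline = img & ~interior                        # its boundary, expect D ~ 1
print(f"filled square: D ~ {box_count_dimension(img):.2f}")
print(f"outline:       D ~ {box_count_dimension(outline):.2f}")
```

A dendritic tree or cortical boundary typically yields a non-integer D between these two extremes, which is what makes the measure informative.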
Subjects
Fractals, Neuroanatomy, Humans, Brain/physiology, Neurons

ABSTRACT
The identification of compound fault components of a planetary gearbox is especially important for keeping mechanical equipment working safely. However, the recognition performance of existing deep learning-based methods is limited by insufficient compound fault samples and single-label classification principles. To solve this issue, a capsule neural network with an improved feature extractor, named LTSS-BoW-CapsNet, is proposed for the intelligent recognition of compound fault components. Firstly, a feature extractor is constructed to extract fault feature vectors from raw signals, based on local temporal self-similarity coupled with bag-of-words models (LTSS-BoW). Then, a multi-label classifier based on a capsule network (CapsNet) is designed, in which a dynamic routing algorithm and average thresholding are adopted. The effectiveness of the proposed LTSS-BoW-CapsNet method is validated on three compound fault diagnosis tasks. The experimental results demonstrate that our method can, via decoupling, effectively identify the multi-fault components of different compound fault patterns. The testing accuracy is more than 97%, which is better than that of four traditional classification models.
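The abstract does not spell out the LTSS-BoW extractor, so the following is only a hedged guess at the general pattern: windowed self-similarity descriptors quantized into a bag-of-words histogram (window sizes, descriptor, and codebook construction are all assumptions, not the paper's).

```python
# Hedged sketch of a local temporal self-similarity + bag-of-words feature:
# each window's correlations with its temporal neighbors form a descriptor,
# and descriptors are quantized into one fixed-length histogram.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, 200, 4096)) + 0.3 * rng.standard_normal(4096)

win, step = 64, 32
windows = np.array([signal[i:i + win]
                    for i in range(0, len(signal) - win, step)])

k = 8                                       # neighbors on each side
descs = []
for i in range(k, len(windows) - k):
    neigh = np.vstack([windows[i - k:i], windows[i + 1:i + 1 + k]])
    descs.append([np.corrcoef(windows[i], w)[0, 1] for w in neigh])
descs = np.array(descs)

# Bag-of-words: quantize descriptors against a learned codebook and
# histogram the code assignments into one feature vector per signal.
codebook, labels = kmeans2(descs, 16, seed=0, minit="++")
feature = np.bincount(labels, minlength=16) / len(labels)
print("BoW feature vector:", np.round(feature, 3))
```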
ABSTRACT
Hyperspectral images (HSIs) contain abundant spectral and spatial structural information, but they are inevitably contaminated by a variety of noises during data reception and transmission, leading to image quality degradation and hindering subsequent applications. Hence, removing mixed noise from hyperspectral images is an important step in improving the performance of subsequent image processing. It is well established that hyperspectral image data can be effectively represented by a global spectral low-rank subspace due to the high redundancy and correlation (RAC) in the spatial and spectral domains. Taking advantage of this property, a new algorithm based on subspace representation and nonlocal low-rank tensor decomposition is proposed to filter the mixed noise of hyperspectral images. The algorithm first obtains the subspace representation of the hyperspectral image by utilizing its spectral low-rank property, yielding an orthogonal basis and a representation coefficient image (RCI). Then, the representation coefficient image is grouped and denoised using tensor decomposition and wavelet decomposition, respectively, according to spatial nonlocal self-similarity. Afterward, the orthogonal basis and denoised representation coefficient image are optimized using the alternating direction method of multipliers (ADMM). Finally, iterative regularization is used to update the image and obtain the final denoised hyperspectral image. Experiments on both simulated and real datasets demonstrate that the proposed algorithm is superior to related mainstream methods in both quantitative metrics and intuitive visual quality. Because denoising is performed in the image subspace, the time complexity is greatly reduced, and the computational cost is lower than that of related denoising algorithms.
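A stripped-down sketch of the subspace step follows; the RCI denoiser below is a simple spatial filter standing in for the paper's nonlocal tensor/wavelet machinery.

```python
# Sketch of the subspace idea: project a hyperspectral cube onto a spectral
# low-rank basis, denoise the representation coefficient images (RCI), and
# reconstruct. Parameters and the RCI denoiser are illustrative stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
h, w, bands, rank = 32, 32, 60, 5
base = gaussian_filter(rng.standard_normal((h, w, rank)), (4, 4, 0))
mix = rng.standard_normal((rank, bands))
clean = base @ mix                              # spectrally low-rank cube
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Orthogonal spectral basis E from the SVD of the unfolded cube.
Y = noisy.reshape(-1, bands)                    # pixels x bands
_, _, vt = np.linalg.svd(Y, full_matrices=False)
E = vt[:rank].T                                 # bands x rank
rci = (Y @ E).reshape(h, w, rank)               # representation coef. images

# Denoise each RCI band (the nonlocal low-rank tensor step would go here)
# and map back to the full spectral space.
rci_dn = np.stack([gaussian_filter(rci[..., i], 1.0) for i in range(rank)], -1)
recon = (rci_dn.reshape(-1, rank) @ E.T).reshape(h, w, bands)

mse = lambda a, b: np.mean((a - b) ** 2)
print(f"MSE noisy {mse(noisy, clean):.4f} -> denoised {mse(recon, clean):.4f}")
```

Working on the rank-5 RCI instead of all 60 bands is also where the claimed reduction in computational cost comes from.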
ABSTRACT
Chemical exchange saturation transfer (CEST) is a versatile technique that enables noninvasive detections of endogenous metabolites present in low concentrations in living tissue. However, CEST imaging suffers from an inherently low signal-to-noise ratio (SNR) due to the decreased water signal caused by the transfer of saturated spins. This limitation challenges the accuracy and reliability of quantification in CEST imaging. In this study, a novel spatial-spectral denoising method, called BOOST (suBspace denoising with nOnlocal lOw-rank constraint and Spectral local-smooThness regularization), was proposed to enhance the SNR of CEST images and boost quantification accuracy. More precisely, our method initially decomposes the noisy CEST images into a low-dimensional subspace by leveraging the global spectral low-rank prior. Subsequently, a spatial nonlocal self-similarity prior is applied to the subspace-based images. Simultaneously, the spectral local-smoothness property of Z-spectra is incorporated by imposing a weighted spectral total variation constraint. The efficiency and robustness of BOOST were validated in various scenarios, including numerical simulations and preclinical and clinical conditions, spanning magnetic field strengths from 3.0 to 11.7 T. The results demonstrated that BOOST outperforms state-of-the-art algorithms in terms of noise elimination. As a cost-effective and widely available post-processing method, BOOST can be easily integrated into existing CEST protocols, consequently promoting accuracy and reliability in detecting subtle CEST effects.
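Of BOOST's three priors, the spectral local-smoothness term is the easiest to isolate; below is a toy smoothed-total-variation descent on a single noisy Z-spectrum (the Z-spectrum model, weights, and step size are illustrative assumptions, not the paper's formulation).

```python
# Sketch of the spectral local-smoothness prior: a small smoothed-TV descent
# on a noisy Z-spectrum. BOOST couples this with subspace and nonlocal priors.
import numpy as np

rng = np.random.default_rng(5)
offsets = np.linspace(-6, 6, 101)                       # ppm
z = 1 - 0.7 * np.exp(-offsets ** 2 / 0.5)               # toy direct saturation
z -= 0.1 * np.exp(-(offsets - 3.5) ** 2 / 0.3)          # small CEST dip
noisy = z + 0.02 * rng.standard_normal(z.shape)

def tv_denoise_1d(y, lam=0.05, eps=1e-3, n_iter=500, step=0.1):
    """Minimize 0.5*||x - y||^2 + lam * sum sqrt((dx)^2 + eps) by descent."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d ** 2 + eps)                   # smoothed TV gradient
        grad = x - y
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        x -= step * grad
    return x

denoised = tv_denoise_1d(noisy)
print(f"mean abs error: noisy {np.abs(noisy - z).mean():.4f}, "
      f"TV-smoothed {np.abs(denoised - z).mean():.4f}")
```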
Subjects
Algorithms, Magnetic Resonance Imaging, Reproducibility of Results, Magnetic Resonance Imaging/methods, Signal-to-Noise Ratio

ABSTRACT
Decreased long-range temporal correlations (LRTC) in brain signals can be used to measure cognitive effort during task execution. Here, we examined how learning a motor sequence affects long-range temporal memory within the resting-state functional magnetic resonance imaging signal. Using the Hurst exponent (HE), we estimated voxel-wise LRTC and assessed changes over 5 consecutive days of training, followed by a retention scan 12 days later. The experimental group learned a complex visuomotor sequence while a complementary control group performed tightly matched movements. An interaction analysis revealed that HE decreases were specific to the complex sequence and occurred in well-known motor sequence learning-associated regions, including left supplementary motor area, left premotor cortex, left M1, left pars opercularis, bilateral thalamus, and right striatum. Five regions exhibited moderate to strong negative correlations with overall behavioral performance improvements. Following learning, HE values returned to pretraining levels in some regions, whereas in others they remained decreased even 2 weeks after training. Our study presents new evidence of HE's possible relevance for functional plasticity during the resting state and suggests that a cortical subset of sequence-specific regions may continue to represent a functional signature of learning, reflected in decreased long-range temporal dependence after a period of inactivity.
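The Hurst exponent is commonly estimated with detrended fluctuation analysis (DFA); a minimal sketch with generic windowing (not the study's exact preprocessing) follows.

```python
# Detrended fluctuation analysis (DFA) sketch for a Hurst-style LRTC
# estimate on a single time series; scales are illustrative choices.
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128, 256)):
    """Estimate H from F(s) ~ s^H on the cumulative-sum profile."""
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n = len(profile) // s
        segs = profile[: n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    H, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return H

rng = np.random.default_rng(6)
white = rng.standard_normal(4096)                 # expect H ~ 0.5
print(f"DFA H (white noise) ~ {dfa_hurst(white):.2f}")
```

Applied voxel-wise, H > 0.5 indicates persistent long-range memory, and the reported learning effect corresponds to H moving toward lower values.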
Subjects
Learning, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain Mapping, Oxygen

ABSTRACT
Complex topographies exhibit universal properties when fluvial erosion dominates landscape evolution over other geomorphological processes. Similarly, we show that the solutions of a minimalist landscape evolution model display invariant behavior as the impact of soil diffusion diminishes compared to fluvial erosion at the landscape scale, yielding complete self-similarity with respect to a dimensionless channelization index. Approaching its zero limit, soil diffusion becomes confined to a region of vanishing area and large concavity or convexity, corresponding to the locus of the ridge and valley network. We demonstrate these results using one-dimensional analytical solutions and two-dimensional numerical simulations, supported by real-world topographic observations. Our findings on the landscape self-similarity and the localized diffusion resemble the self-similarity of turbulent flows and the role of viscous dissipation. Topographic singularities in the vanishing diffusion limit are suggestive of shock waves and singularities observed in nonlinear complex systems.
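A one-dimensional toy version of such a minimalist model (illustrative parameters, drainage area proxied by distance from the divide) makes the fluvial-diffusive competition explicit.

```python
# 1D sketch of a minimalist landscape evolution model:
#   dz/dt = U - K * A^m * |dz/dx|^n + D * d2z/dx2,
# with drainage area A proxied by distance from the divide.
# All parameter values are illustrative, not the paper's.
import numpy as np

nx, dx, dt = 200, 1.0, 0.5
U, K, D, m, n = 1e-3, 1e-3, 0.05, 0.5, 1.0     # uplift, erodibility, diffusion
x = np.arange(nx) * dx
A = np.maximum(x, dx)                          # area ~ upstream distance
z = np.zeros(nx)

for _ in range(100_000):
    slope = np.abs(np.gradient(z, dx))
    erosion = K * A ** m * slope ** n
    diffusion = D * (np.roll(z, 1) - 2 * z + np.roll(z, -1)) / dx ** 2
    z += dt * (U - erosion + diffusion)
    z[0] = z[1]                                # no-flux divide
    z[-1] = 0.0                                # fixed base level

# Shrinking D relative to K (the channelization index) steepens and
# localizes the curvature near the divide, the 1D analogue of the
# ridge/valley locus discussed in the abstract.
print(f"quasi-steady relief: {z.max():.1f}")
```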
ABSTRACT
The Group Sparse Representation (GSR) model shows excellent potential in various image restoration tasks. In this study, we propose a novel Multi-Scale Group Sparse Residual Constraint Model (MS-GSRC) which can be applied to various inverse problems, including denoising, inpainting, and compressed sensing (CS). Our method involves three steps: (1) finding similar patches with an overlapping scheme for the input degraded image using a multi-scale strategy, (2) performing group sparse coding on these patches with low-rank constraints to get an initial representation vector, and (3) reconstructing the target image under the Bayesian maximum a posteriori (MAP) restoration framework, using an alternating minimization scheme to solve the corresponding equation. Simulation experiments demonstrate that our proposed model outperforms several state-of-the-art methods in terms of both objective image quality and subjective visual quality.
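A toy, single-scale version of step (2) groups similar patches and applies a low-rank shrinkage per group; the parameters and shrinkage rule below are illustrative, not the full MS-GSRC.

```python
# Sketch of the group-sparsity idea: group similar patches, soft-threshold
# the group's singular values (low-rank constraint), and aggregate.
import numpy as np

rng = np.random.default_rng(7)
img = np.kron(rng.random((8, 8)), np.ones((8, 8)))    # piecewise-flat image
noisy = img + 0.1 * rng.standard_normal(img.shape)

p, stride, group_size = 8, 4, 16
tau = 1.0        # roughly noise_std * (sqrt(group_size) + sqrt(p * p))
H, W = noisy.shape
coords = [(i, j) for i in range(0, H - p + 1, stride)
                 for j in range(0, W - p + 1, stride)]
patches = np.array([noisy[i:i + p, j:j + p].ravel() for i, j in coords])

out = np.zeros_like(noisy)
weight = np.zeros_like(noisy)
for ref in patches:
    dists = np.sum((patches - ref) ** 2, axis=1)
    members = np.argsort(dists)[:group_size]          # similar-patch group
    G = patches[members]
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    G_hat = U @ np.diag(np.maximum(s - tau, 0)) @ Vt  # low-rank shrinkage
    for row, k in zip(G_hat, members):
        i, j = coords[k]
        out[i:i + p, j:j + p] += row.reshape(p, p)
        weight[i:i + p, j:j + p] += 1
denoised = out / np.maximum(weight, 1)

mse = lambda a: np.mean((a - img) ** 2)
print(f"MSE: noisy {mse(noisy):.4f} -> grouped low-rank {mse(denoised):.4f}")
```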
ABSTRACT
Introduction: Dynamics-on-graph concepts and generalized finite-length Fibonacci sequences have been used to characterize, from a temporal point of view, both human walking & running at a comfortable speed and front-crawl & butterfly swimming strokes at a middle/long-distance pace. Such sequences, in which the golden ratio plays a crucial role in describing self-similar patterns, have been found to be subtly exhibited experimentally by healthy (but not pathological) walking subjects and by elite swimmers, in terms of durations of gait/stroke subphases with a clear physical meaning. Corresponding quantitative indices have been able to unveil the resulting hidden time-harmonic and self-similar structures. Results: In this study, we meaningfully extend these latest findings to the remaining two swimming strokes, namely the breast-stroke and the back-stroke: breast-stroke, just like butterfly swimming, is highly technical and involves complex coordination of the arm and leg actions, while back-stroke is closely similar to front-crawl swimming. An experimental validation with reference to international-level swimmers is included.
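The role of the golden ratio comes from the ratio of consecutive generalized Fibonacci terms, which is the quantity the subphase-duration indices are built around; a small sketch:

```python
# Sketch: ratios of consecutive generalized Fibonacci terms converge to a
# metallic mean (the golden ratio phi when p = q = 1).
import numpy as np

def generalized_fibonacci(p, q, n, f0=1, f1=1):
    """F_k = p * F_{k-1} + q * F_{k-2}."""
    seq = [f0, f1]
    for _ in range(n - 2):
        seq.append(p * seq[-1] + q * seq[-2])
    return np.array(seq, dtype=float)

seq = generalized_fibonacci(1, 1, 20)
ratios = seq[1:] / seq[:-1]
phi = (1 + np.sqrt(5)) / 2
print(f"F_k+1 / F_k -> {ratios[-1]:.6f} (phi = {phi:.6f})")
# A self-similar subphase structure would show successive subphase durations
# standing in (approximately) this same ratio.
```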
ABSTRACT
Introduction: As an important human-computer interaction technology, steady-state visual evoked potential (SSVEP) plays a key role in brain-computer interface (BCI) systems, whose application depends on accurately decoding SSVEP signals. Currently, the majority of SSVEP feature recognition methods use a static classifier. However, electroencephalogram (EEG) signals are non-stationary and time-varying. Hence, an adaptive classification method would be an alternative to a static classifier for tracking changes in EEG feature distribution, as its parameters can be re-estimated and updated with the input of new EEG data. Methods: In this study, an unsupervised adaptive classification algorithm is designed based on the self-similarity of same-frequency signals. The proposed algorithm saves EEG data that has undergone feature recognition as a template signal in accordance with its estimated label, and each new testing signal is superimposed with the template signals at each stimulus frequency to form the new test signals to be analyzed. With the continuous input of EEG data, the template signals are continuously updated. Results: By comparing the classification accuracy of the original testing signal and the testing signal superimposed with the template signals, this study demonstrates the effectiveness of using the self-similarity of same-frequency signals in the adaptive classification algorithm. The experimental results also show that the longer the SSVEP-BCI system is used, the stronger the users' SSVEP responses become, and the more pronounced the advantage of the adaptive classification algorithm in feature recognition. The testing results on two public datasets show that the adaptive classification algorithm outperforms the static classification method in feature recognition. Discussion: The proposed adaptive classification algorithm can update its parameters with the input of new EEG data, which is favorable for the accurate analysis of EEG data with time-varying characteristics.
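A hedged sketch of the adaptive loop follows; the single-channel sinusoidal references and running-average update are simplifying assumptions, since the actual algorithm operates on multichannel EEG with its own recognition backend.

```python
# Hedged sketch of the adaptive idea: classify by correlation with
# per-frequency templates, then fold each newly labeled trial back into its
# template so the classifier tracks nonstationary signals.
import numpy as np

fs, dur, freqs = 250, 1.0, [8.0, 10.0, 12.0]
t = np.arange(int(fs * dur)) / fs
templates = {f: np.sin(2 * np.pi * f * t) for f in freqs}  # initial refs
counts = {f: 1 for f in freqs}

def classify(trial):
    scores = {f: np.corrcoef(trial, templates[f])[0, 1] for f in freqs}
    return max(scores, key=scores.get)

rng = np.random.default_rng(8)
correct = 0
true_labels = rng.choice(freqs, 60)
for f_true in true_labels:
    trial = (np.sin(2 * np.pi * f_true * t + rng.uniform(0, 0.3))
             + 1.0 * rng.standard_normal(len(t)))      # noisy SSVEP stand-in
    f_hat = classify(trial)
    correct += (f_hat == f_true)
    # Superimpose the trial onto the template of its estimated label
    # (unsupervised update; mislabeled trials dilute the wrong template):
    c = counts[f_hat]
    templates[f_hat] = (c * templates[f_hat] + trial) / (c + 1)
    counts[f_hat] += 1

print(f"accuracy with adapting templates: {correct / len(true_labels):.2f}")
```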
ABSTRACT
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model ensures texture similarity between the target and generated images via a texture co-occurrence loss, while preserving content details from source images with a spatial self-similarity loss. Both the generated and original training images are combined to train improved classification or segmentation models robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
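The spatial self-similarity loss can be sketched directly: compare the cosine-similarity structure of source and generated feature maps. The numpy stand-in below omits the feature extractor and loss weighting, which are assumptions here.

```python
# Sketch of a spatial self-similarity loss: penalize differences between the
# pairwise cosine-similarity structure of two feature maps, so spatial
# content layout is preserved while texture statistics are free to change.
import numpy as np

def self_similarity(feat):
    """feat: (C, H, W) -> (HW, HW) matrix of cosine similarities."""
    c, h, w = feat.shape
    v = feat.reshape(c, h * w).T                       # HW x C descriptors
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-8)
    return v @ v.T

def self_similarity_loss(feat_src, feat_gen):
    """L2 distance between the two self-similarity matrices."""
    return np.mean((self_similarity(feat_src) - self_similarity(feat_gen)) ** 2)

rng = np.random.default_rng(9)
src = rng.standard_normal((16, 8, 8))
gen_same_layout = src * 2.0 + 0.1 * rng.standard_normal(src.shape)
gen_shuffled = rng.permutation(src.reshape(16, -1).T).T.reshape(16, 8, 8)

print(f"same layout:     {self_similarity_loss(src, gen_same_layout):.4f}")
print(f"shuffled layout: {self_similarity_loss(src, gen_shuffled):.4f}")
```

Because cosine similarity is invariant to per-location scaling, a texture change that rescales features leaves the loss near zero, while rearranged content is penalized.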
ABSTRACT
Despite widespread claims of power laws across the natural and social sciences, evidence in data is often equivocal. Modern data and statistical methods reject even classic power laws such as Pareto's law of wealth and the Gutenberg-Richter law for earthquake magnitudes. We show that the maximum-likelihood estimators and Kolmogorov-Smirnov (K-S) statistics in widespread use are unexpectedly sensitive to ubiquitous errors in data such as measurement noise, quantization noise, heaping and censorship of small values. This sensitivity causes spurious rejection of power laws and biases parameter estimates even in arbitrarily large samples, which explains inconsistencies between theory and data. We show that logarithmic binning by powers of λ > 1 attenuates these errors in a manner analogous to noise averaging in normal statistics and that λ thereby tunes a trade-off between accuracy and precision in estimation. Binning also removes potentially misleading within-scale information while preserving information about the shape of a distribution over powers of λ, and we show that some amount of binning can improve sensitivity and specificity of K-S tests without any cost, while more extreme binning tunes a trade-off between sensitivity and specificity. We therefore advocate logarithmic binning as a simple essential step in power-law inference.
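A minimal sketch of the advocated procedure bins Pareto samples by powers of λ and fits the exponent to the binned density; the least-squares fit below is a simple illustration, not the paper's full estimator.

```python
# Logarithmic-binning sketch: bin samples by powers of lambda and fit the
# power-law exponent to the binned density, attenuating the quantization
# and measurement errors that destabilize raw MLE / K-S approaches.
import numpy as np

rng = np.random.default_rng(10)
alpha, xmin, n = 2.5, 1.0, 100_000
x = xmin * (1 - rng.random(n)) ** (-1 / (alpha - 1))   # Pareto, pdf ~ x^-alpha
x = np.round(x * 10) / 10                              # quantization error

lam = 1.5                                              # bin ratio, lambda > 1
edges = xmin * lam ** np.arange(0, 25)
counts, _ = np.histogram(x, bins=edges)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])              # geometric midpoints
density = counts / (widths * n)

keep = counts > 10                                     # drop sparse tail bins
slope, _ = np.polyfit(np.log(centers[keep]), np.log(density[keep]), 1)
print(f"binned estimate alpha ~ {-slope:.2f} (true {alpha})")
```

The choice of λ implements the accuracy-precision trade-off described above: larger λ averages away more within-scale error at the cost of fewer fit points.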
ABSTRACT
With the explosive growth of human knowledge, especially in the twentieth century, and ever greater facilitation of access to knowledge, the world of even relatively recent great thinkers becomes daunting as seen from a modern viewpoint. Until recently, humans were unaware of the complex intracellular world of cell organs, giant information molecules such as DNA, societies of specialized worker molecules (proteins), and generally the surprising nanoscale world, visible to humanity for only a few decades. Moreover, computational power and video technology were inaccessible to all scientists from, for example, Aristotle to Freud, so new views and ideas can be expected about phenomena at all scales, including nano and human. Some have arrived very recently. Thus, urgently needed knowledge about the biology of animal and human behavior received its first Nobel Prize as late as 1973, in Physiology or Medicine, shared by Karl von Frisch, Konrad Lorenz, and Niko Tinbergen. Lorenz's Nobel lecture was entitled "Analogy as a Source of Knowledge"; it did not mention self-analogy (self-similarity), as none of the species studied were part of others, and knowledge of the nanoscale phenomena at the heart of this article had barely become available. The views and empirical findings presented in this article depend on such recent intracellular nanoscale insights and on the development of a set of mathematical patterns, called the T-system, of which only two are considered: the self-similar T-pattern (i.e., parts having a structure similar to the whole) and the derived T-string, a T-patterned material string (here, polymer or text). Specially developed algorithms implemented in the THEME™ software for T-pattern detection and analysis (TPA) allowed the detection of interaction T-patterns in humans, animals, and brain neuronal networks, showing self-similarity between animal interaction patterns and neuronal interaction patterns in their brains. TPA of DNA and text also showed unique self-similarity between modern human literate mass societies and the protein societies of their body cells, both with Giant Extra-Individual Purely Informational T-strings (GEIPITs; genomes or textomes) defining the behavioral potentials of their specialized citizens. This kind of society is here called a T-society and only exists in humans and proteins, while the self-similarity between them only exists in human T-societies.
ABSTRACT
Living structures constantly interact with the biotic and abiotic environment by sensing and responding via specialized functional parts. In other words, biological bodies embody highly functional machines and actuators. What are the signatures of engineering mechanisms in biology? In this review, we connect the dots in the literature to seek engineering principles in plant structures. We identify three thematic motifs - bilayer actuator, slender-bodied functional surface, and self-similarity - and provide an overview of their structure-function relationships. Unlike human-engineered machines and actuators, biological counterparts may appear suboptimal in design, loosely complying with physical theories or engineering principles. We postulate what factors may influence the evolution of functional morphology and anatomy, to better dissect and comprehend the why behind biological forms.