Results 1 - 20 of 44
1.
Sensors (Basel) ; 24(16)2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39204867

ABSTRACT

To address the difficulty of separating audio signals collected in pig-farming environments, this study proposes an underdetermined blind source separation (UBSS) method based on sparse representation theory. Audio signals of pigs in different states, mixed with different coefficients, are taken as the observation signals. Following the "two-step" scheme of sparse component analysis (SCA), the mixing matrix is first estimated from the observations using an improved AP clustering method, and the pig audio signals are then reconstructed by L1-norm minimization. Five different types of pig audio are selected for experiments that explore the effects of recording duration and mixing matrix on the blind source separation algorithm by controlling each factor separately. With three source signals and two observed signals, the reconstructed-signal metrics for different durations and different mixing matrices perform well: the similarity coefficient is above 0.8, the average recovered signal-to-noise ratio is above 8 dB, and the normalized mean square error is below 0.02. The experimental results show that audio duration and mixing matrix both affect the UBSS algorithm, so the recording duration and the spatial placement of the recording devices must be considered in practical applications. Compared with the classical UBSS algorithm, the proposed algorithm performs better in estimating the mixing matrix and separating the mixed audio, improving reconstruction quality.
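The second step of the SCA "two-step" pipeline, recovering sparse sources once the mixing matrix is known, can be sketched as a small linear program solved per time sample. This is a generic illustration of L1-norm recovery, not the paper's implementation; the mixing matrix and source values below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, x):
    """Recover s with minimal L1 norm such that A @ s = x.

    Split s = p - q with p, q >= 0 and minimize sum(p + q),
    a standard linear-programming form of basis pursuit."""
    m, n_src = A.shape
    c = np.ones(2 * n_src)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None), method="highs")
    pq = res.x
    return pq[:n_src] - pq[n_src:]

# Underdetermined mix: 2 observations of 3 sources, one source active.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.8]])
s_true = np.array([0.0, 2.0, 0.0])   # sparse: single active source
x = A @ s_true
s_hat = l1_recover(A, x)
```

Because the true source vector is sparse, the minimum-L1 solution coincides with it even though the system has more unknowns than equations.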

2.
Front Hum Neurosci ; 18: 1201574, 2024.
Article in English | MEDLINE | ID: mdl-38487104

ABSTRACT

Introduction: This study broadens the applicability of the metaheuristic L1-norm fitted and penalized (L1L1) optimization method for finding a current pattern for multichannel transcranial electrical stimulation (tES). The L1L1 framework defines the tES montage via linear programming, maximizing or minimizing an objective function with respect to a pair of hyperparameters. Methods: We explore the computational performance and reliability of different optimization packages, algorithms, and search methods in combination with the L1L1 method. Solvers from Matlab R2020b, MOSEK 9.0, Gurobi Optimizer, CVX's SeDuMi 1.3.5, and SDPT3 4.0 were employed, covering different linear programming techniques, including Interior-Point (IP), Primal-Simplex (PS), and Dual-Simplex (DS) methods. To solve the metaheuristic optimization task of L1L1, we implemented exhaustive and recursive searches along with a well-known heuristic direct search as a reference algorithm. Results: For the given optimization task, Gurobi's IP was overall the preferable choice among the Interior-Point solvers, while MOSEK's PS and DS packages were preferable among the Simplex methods. These methods were substantially more time-efficient in solving the L1L1 method regardless of the applied search method. Discussion: While the best-performing solvers show that the L1L1 method is suitable for maximizing either focality or intensity, a few of the solvers could not find a bipolar configuration. Part of the discrepancy between these methods can be explained by differing sensitivity to parameter variation or to the resolution of the lattice provided.
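As a rough sketch of how a tES montage can be posed as a linear program: maximize the stimulation intensity along a target direction subject to current conservation and a total-current (L1) budget. The lead-field values and budget below are made-up toy numbers, and this omits the L1L1 method's focality term and hyperparameter search.

```python
import numpy as np
from scipy.optimize import linprog

# Toy lead field: sensitivity of the target field to each electrode.
# Maximize target intensity e @ i subject to sum(i) = 0 (current
# conservation) and ||i||_1 <= I_max (total injected current budget).
e = np.array([0.9, -0.2, 0.4, -0.6])   # invented target sensitivities
I_max = 2.0
n = e.size

# Split i = p - q with p, q >= 0, so ||i||_1 <= sum(p + q).
c = np.concatenate([-e, e])            # linprog minimizes, so negate
A_ub = np.ones((1, 2 * n))             # sum(p + q) <= I_max
b_ub = [I_max]
A_eq = np.concatenate([np.ones(n), -np.ones(n)])[None, :]  # sum(i) = 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0],
              bounds=(0, None), method="highs")
i_opt = res.x[:n] - res.x[n:]
intensity = e @ i_opt
```

For this toy problem the optimum is bipolar: the full positive current goes through the most sensitive electrode and returns through the least sensitive one, which is the kind of configuration the Discussion says some solvers failed to find.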

3.
Comput Biol Med ; 151(Pt A): 106247, 2022 12.
Article in English | MEDLINE | ID: mdl-36375415

ABSTRACT

Alzheimer's Disease (AD), a decline in the brain's cognitive functioning, is an irreversible progressive brain disorder with no proven disease-modifying treatment. To slow or avoid disease progression, considerable effort has therefore gone into techniques for earlier detection, particularly at pre-symptomatic stages. Several strategies have been developed to predict AD; nevertheless, it remains challenging to classify subjects into AD, Mild Cognitive Impairment (MCI), and Normal Control (NC) groups when the feature space is large. To overcome these issues, an effective AD prediction technique based on the Momentum Golden Eagle Optimizer-centric Transient Multi-Layer Perceptron network (Momentum GEO-Transient MLP) is proposed. First, the input images are post-processed: image resizing and noise removal are performed with a Patch Wise L1 Norm (PWL1N). Then, unwanted brain parts are removed from the post-processed images with a Truncate Intensity Based Operation (TIBO). Next, the skull-stripped images are pre-processed: a Carnot Cycle Entropy-based Global and Local technique (c2EBGAL) normalizes and enhances the images. The pre-processed images are then segmented with Modified Emperor Penguins Colony-centered Sparse Subspace Clustering (MEPC-SSC), features are extracted from the segmented images, and the extracted features are fed to the Momentum GEO-Transient MLP. To transform MRI images into more compact higher-level features, the network fuses features from diverse layers, and the number of parameters is reduced to lower computational complexity.
For AD classification, the proposed technique is compared with prevailing methodologies in terms of accuracy, sensitivity, specificity, and other metrics, and achieves improved outcomes. The proposed system is therefore well suited to AD diagnosis.


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Magnetic Resonance Imaging, Animals, Humans, Alzheimer Disease/diagnostic imaging, Cluster Analysis, Cognitive Dysfunction/diagnostic imaging, Magnetic Resonance Imaging/methods
4.
Article in English | MEDLINE | ID: mdl-35627447

ABSTRACT

Acquired immune deficiency syndrome (AIDS) is a serious public health problem. This study aims to establish a combined model of seasonal autoregressive integrated moving average (SARIMA) and Prophet models, based on an L1-norm, to predict the incidence of AIDS in Henan province, China. The monthly incidences of AIDS in Henan province from 2012 to 2020 were obtained from the Health Commission of Henan Province. A SARIMA model, a Prophet model, and two combined models were fitted to the monthly incidence of AIDS using the data from January 2012 to December 2019; the data from January 2020 to December 2020 were used for validation. The mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) were used to compare the prediction performance of the models. The results showed that the monthly incidence fluctuated from 0.05 to 0.50 per 100,000 individuals and exhibited a certain periodicity in Henan province. In addition, the prediction performance of the Prophet model was better than that of the SARIMA model, the combined models were better than the single models, and the combined model based on the L1-norm had the best values (MSE = 0.0056, MAE = 0.0553, MAPE = 43.5337). This indicates that, compared with the L2-norm, the L1-norm improved the prediction accuracy of the combined model. The combined model of SARIMA and Prophet based on the L1-norm is a suitable method to predict the incidence of AIDS in Henan. Our findings can provide theoretical evidence for the government to formulate policies on AIDS prevention.
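A combination "based on an L1-norm" can be read as choosing the model weights by least absolute deviations, which reduces to a linear program. A minimal sketch with hypothetical forecast and observation values (not the Henan data):

```python
import numpy as np
from scipy.optimize import linprog

def l1_combination_weights(preds, y):
    """Weights w minimizing sum_t |y_t - preds_t @ w| (least absolute
    deviations), solved as an LP with residual bounds r_t."""
    T, k = preds.shape
    # Variables: [w (k, free), r (T, >= 0)]; minimize sum(r).
    c = np.concatenate([np.zeros(k), np.ones(T)])
    # Encode |y - P w| <= r as  P w - r <= y  and  -P w - r <= -y.
    A_ub = np.block([[preds, -np.eye(T)], [-preds, -np.eye(T)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k + [(0, None)] * T
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:k]

# Hypothetical monthly forecasts from two models and the observed series.
f1 = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
f2 = np.array([0.12, 0.18, 0.33, 0.20, 0.16])
y = 0.4 * f1 + 0.6 * f2          # constructed so exact weights exist
w = l1_combination_weights(np.column_stack([f1, f2]), y)
```

Replacing the L1 objective with squared residuals gives the L2 (ordinary least squares) combination the abstract compares against; the L1 version is less sensitive to occasional incidence spikes.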


Subject(s)
Acquired Immunodeficiency Syndrome, Acquired Immunodeficiency Syndrome/epidemiology, China/epidemiology, Forecasting, Humans, Incidence, Models, Statistical
5.
Magn Reson Med ; 88(2): 962-972, 2022 08.
Article in English | MEDLINE | ID: mdl-35435267

ABSTRACT

PURPOSE: Susceptibility maps are usually derived from local magnetic field estimates by minimizing a functional composed of a data-consistency term and a regularization term. The data-consistency term measures the difference between the desired solution and the measured data, typically using the L2-norm. Replacing this L2-norm with the L1-norm has been proposed because of its robustness to outliers and its reduction of streaking artifacts arising from highly noisy or strongly perturbed regions. However, in regions with high SNR, the L1-norm yields suboptimal denoising performance. In this work, we present a hybrid data fidelity approach that uses the L1-norm and subsequently the L2-norm to exploit the strengths of both norms. METHODS: We developed a hybrid data fidelity term approach for QSM (HD-QSM) based on linear susceptibility inversion methods with total variation regularization. Each functional is solved with ADMM. The HD-QSM approach is a two-stage method that first finds a fast solution of the L1-norm functional and then uses this solution to initialize the L2-norm functional. In both norms we included spatially variable weights that improve the quality of the reconstructions. RESULTS: The HD-QSM approach produced good quantitative reconstructions in terms of structural definition, noise reduction, and avoidance of streaking artifacts, comparable with nonlinear methods but with higher computational efficiency. Reconstructions performed with this method achieved first place in the lowest-RMS-error category of stage 1 of the 2019 QSM Reconstruction Challenge. CONCLUSIONS: The proposed method allows robust and accurate QSM reconstructions, obtaining superior performance to state-of-the-art methods.
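The trade-off the hybrid exploits can be seen in one dimension: the L2 data-fidelity minimizer is the mean (good denoising at high SNR, fragile to outliers), while the L1 minimizer is the median (robust to strongly perturbed values). A toy illustration, not the HD-QSM code:

```python
import numpy as np

# Fitting a single unknown to repeated measurements: the minimizer of
# sum_i (x - b_i)^2 is the mean, and the minimizer of sum_i |x - b_i|
# is the median. One corrupted measurement drags the mean (L2 fit)
# but barely moves the median (L1 fit).
b = np.array([1.0, 1.1, 0.9, 1.05, 0.95])
b_outlier = np.append(b, 50.0)        # one strongly perturbed value

l2_fit_clean, l2_fit_noisy = b.mean(), b_outlier.mean()
l1_fit_clean, l1_fit_noisy = np.median(b), np.median(b_outlier)
```

This is why the L1 stage handles dipole-inconsistent voxels well, while the subsequent L2 stage refines the fit in clean, high-SNR regions.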


Subject(s)
Brain Mapping, Image Processing, Computer-Assisted, Algorithms, Brain/diagnostic imaging, Brain Mapping/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
6.
J Neural Eng ; 19(2)2022 03 30.
Article in English | MEDLINE | ID: mdl-35234668

ABSTRACT

Objective. Electroencephalogram (EEG)-based motor imagery (MI) brain-computer interfaces offer a promising way to improve the efficiency of motor rehabilitation and motor skill learning. In recent years, the power of dynamic network analysis for MI classification has been demonstrated. Its usability, however, depends mainly on accurate estimation of brain connectivity. Traditional dynamic network estimation strategies such as the adaptive directed transfer function (ADTF) are designed in the L2-norm and usually estimate a series of pseudo-connections caused by outliers, which results in biased features and limits online application. Accurately inferring dynamic causal relationships under outlier influence is therefore an urgent problem. Approach. In this work, we propose a novel ADTF that solves the dynamic system in the L1-norm space (L1-ADTF), so as to restrict the influence of outliers. To enhance convergence, we designed an iteration strategy based on the alternating direction method of multipliers (ADMM), which can be used to solve the dynamic state-space model restricted to the L1-norm space. Furthermore, we compared the L1-ADTF to the traditional ADTF and its dual extension in both simulation and real EEG experiments. Main results. A quantitative comparison between the L1-ADTF and other ADTFs in simulation studies demonstrates that the L1-ADTF captures fewer bias errors and more desirable dynamic state transformation patterns. Application to real MI EEG datasets heavily contaminated by ocular artifacts also shows the efficiency of the proposed L1-ADTF in extracting time-varying brain network patterns, even when more complex noise is involved. Significance. The L1-ADTF may not only track time-varying brain network state drifts robustly but may also be useful for a wide range of dynamic systems such as trajectory tracking problems and dynamic neural networks.
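The ADMM scheme mentioned here alternates a quadratic solve with an elementwise L1 shrinkage (soft-thresholding). A minimal, generic ADMM sketch on a plain L1-regularized least-squares problem (the paper applies the same splitting to a dynamic state-space model, which this toy does not reproduce):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the L1 'shrinkage' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """min_x 0.5 ||A x - b||^2 + lam ||x||_1 via ADMM (x / z splitting)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic step
        z = soft_threshold(x + u, lam / rho)               # L1 step
        u = u + x - z                                      # dual update
    return z

# Tiny check: with A = I the exact solution is soft_threshold(b, lam).
b = np.array([3.0, -0.5, 1.2])
x_hat = admm_lasso(np.eye(3), b, lam=1.0)
```

The same two-step alternation carries over when the quadratic part comes from a state-space fitting term instead of a simple least-squares residual.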


Subject(s)
Algorithms, Brain-Computer Interfaces, Brain, Electroencephalography/methods, Imagination, Neural Networks, Computer
7.
Magn Reson Med ; 87(1): 457-473, 2022 01.
Article in English | MEDLINE | ID: mdl-34350634

ABSTRACT

PURPOSE: The presence of dipole-inconsistent data due to substantial noise or artifacts causes streaking artifacts in quantitative susceptibility mapping (QSM) reconstructions. Commonly used Bayesian approaches rely on regularizers, which in turn reduce sharpness. To overcome this problem, we present a novel L1-norm data fidelity approach that is robust with respect to outliers and therefore prevents streaking artifacts. METHODS: QSM functionals are solved with linear and nonlinear L1-norm data fidelity terms using functional augmentation and are compared with equivalent L2-norm methods. Algorithms were tested on synthetic data with phase inconsistencies added to mimic lesions, on QSM Challenge 2.0 data, and on in vivo brain images with hemorrhages. RESULTS: The nonlinear L1-norm-based approach achieved the best overall error metric scores and better streaking artifact suppression. Notably, L1-norm methods could reconstruct QSM images without using a brain mask, with similar regularization weights for different data fidelity weighting or masking setups. CONCLUSION: The proposed L1-norm approach provides a robust way to prevent streaking artifacts generated by dipole-inconsistent data, renders brain mask calculation unessential, and opens challenging clinical applications such as assessing brain hemorrhages and cortical layers.


Subject(s)
Artifacts, Brain Mapping, Algorithms, Bayes Theorem, Brain/diagnostic imaging, Image Processing, Computer-Assisted, Magnetic Resonance Imaging
8.
Sensors (Basel) ; 23(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36616748

ABSTRACT

Accurately identifying an unknown time-varying external force from measured structural responses is an important engineering problem that is critical for assessing the safety condition of a structure. Given only a few available accelerometers, this paper proposes a novel time-varying external force identification method using group sparse regularization based on prior knowledge of the redundant dictionary. Firstly, the relationship between the time-varying external force and the acceleration responses is established, and a redundant dictionary is designed to create a sparse expression of the external force. Then, the relevance of atoms in the redundant dictionary is revealed, and this prior knowledge is used to determine the group structures of the atoms. A force identification governing equation is thus formulated, and group sparse regularization is introduced to ensure the accuracy of the identified results. The contribution of this paper is that the group structures of the atoms are determined from prior knowledge, reducing the complexity of identifying external force from measured acceleration responses. Finally, the effectiveness of the proposed method is demonstrated by numerical simulations and an experimental structure. The results show that, compared with force identification based on standard l1-norm regularization, the proposed method further improves the identification accuracy of the unknown external force and greatly enhances computational efficiency.
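Group sparse regularization differs from plain l1-norm regularization in that whole groups of dictionary atoms switch on or off together; its proximal operator shrinks each group as a block. A minimal sketch with made-up numbers (in the paper, the grouping comes from the relevance of atoms in the redundant dictionary):

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 (group-lasso penalty):
    each group of coefficients is shrunk toward zero as a block, so a
    whole group of atoms is either kept (rescaled) or zeroed out."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])   # two groups of two coefficients
groups = [[0, 1], [2, 3]]
y = group_soft_threshold(x, groups, t=1.0)
```

The first group (norm 5) survives and is rescaled; the second group (norm about 0.14) falls below the threshold and is eliminated entirely, which is the mechanism that removes irrelevant atom groups from the force expansion.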


Subject(s)
Algorithms
9.
Sensors (Basel) ; 21(13)2021 Jul 05.
Article in English | MEDLINE | ID: mdl-34283168

ABSTRACT

In this paper, a weighted l1-norm is proposed within an l1-norm-based singular value decomposition (L1-SVD) algorithm, which can suppress spurious peaks and improve the accuracy of direction-of-arrival (DOA) estimation in low signal-to-noise-ratio (SNR) scenarios. The weighting matrix is determined by optimizing the orthogonality of subspaces, and the weighted l1-norm is used as the minimization objective to increase signal sparsity; the weighting thereby makes the l1-norm a closer approximation to the original l0-norm. Simulation results with orthogonal frequency division multiplexing (OFDM) signals demonstrate that the proposed algorithm has a narrower main lobe and lower side lobes, works with few snapshots, and has low sensitivity to misestimated signals, which improves the resolution and accuracy of DOA estimation. In particular, the proposed method outperforms other works in low SNR scenarios. Outdoor experiments with OFDM signals show that the proposed algorithm is superior to other methods, with a narrower main lobe and lower side lobes, and can be used for DOA estimation of UAVs and pseudo base stations.
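The idea that a well-chosen weight makes the l1-norm mimic the l0-norm can be seen directly: with w_i = 1/(|x_i| + eps), each nonzero entry contributes roughly 1 to the weighted norm and each zero entry contributes nothing. A toy check of that mechanism (the paper instead derives its weights from subspace orthogonality):

```python
import numpy as np

def reweighted_l1(x, eps=1e-3):
    """One reweighting pass: w_i = 1 / (|x_i| + eps).  With these
    weights, sum_i w_i |x_i| is approximately the number of nonzero
    entries, so the weighted l1-norm behaves like the l0 'norm'."""
    w = 1.0 / (np.abs(x) + eps)
    return w, np.sum(w * np.abs(x))

x = np.array([0.0, 2.0, 0.0, -0.5])   # two nonzero entries
w, wl1 = reweighted_l1(x)
```

The weighted norm comes out near 2, the l0 count, whereas the plain l1-norm of the same vector is 2.5 and depends on the magnitudes rather than the support.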

10.
Phys Med ; 84: 178-185, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33901862

ABSTRACT

PURPOSE: Conventional x-ray spectrum estimation methods based on transmission measurements often give inaccurate results when extensive x-ray scatter is present in the measured projection. This study applies a weighted L1-norm scatter correction algorithm to spectrum estimation in order to reduce residual differences between the estimated and true spectra. METHOD: The scatter correction algorithm is based on a simple radiographic scattering model in which the intensity of scattered x-rays is estimated directly from a transmission measurement. The scatter-corrected measurement is then used in a spectrum estimation method that determines weights for a set of predefined spectra and represents the spectrum as a linear combination of those predefined spectra. The performance of the estimation method combined with scatter correction is evaluated on both simulated and experimental data. RESULTS: The estimated spectra obtained from the scatter-corrected projections nearly match the true spectra. For the simulated and experimental data, the normalized root mean square error and the mean energy difference between the estimated and true spectra are reduced from 5.8% and 1.33 keV without scatter correction to 3.2% and 0.73 keV with scatter correction. CONCLUSIONS: The proposed method acquires the x-ray spectrum more accurately than estimation without scatter correction, and the spectrum can be estimated successfully even when the materials and thicknesses of the filters are unknown. The method has the potential to be used in several diagnostic x-ray imaging applications.
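Representing the spectrum as a weighted combination of predefined spectra, with physically meaningful nonnegative weights, is a nonnegative least-squares problem. A minimal sketch with synthetic Gaussian curves standing in for real predefined x-ray spectra:

```python
import numpy as np
from scipy.optimize import nnls

# Predefined model spectra (columns) and a 'measured' spectrum that is
# a nonnegative mixture of them; nnls recovers the mixing weights.
# All numbers are synthetic placeholders, not real x-ray spectra.
energies = np.linspace(20, 120, 6)                  # keV bins (toy)
S = np.column_stack([
    np.exp(-(energies - 60) ** 2 / 400.0),          # model spectrum 1
    np.exp(-(energies - 90) ** 2 / 400.0),          # model spectrum 2
])
w_true = np.array([0.7, 0.3])
measured = S @ w_true
w_est, residual = nnls(S, measured)
```

In the paper the same fit is applied to the scatter-corrected transmission measurement, which is what keeps the recovered weights, and hence the estimated spectrum, close to the truth.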


Subject(s)
Algorithms, Computer Simulation, Phantoms, Imaging, Radiography, Scattering, Radiation, X-Rays
11.
Sensors (Basel) ; 21(4)2021 Feb 13.
Article in English | MEDLINE | ID: mdl-33668409

ABSTRACT

Under mixed sparse line-of-sight/non-line-of-sight (LOS/NLOS) conditions, quickly achieving high positioning accuracy remains a challenging task and has been a critical problem for more than a decade. To address it, we propose a constrained L1-norm minimization method that reduces the effect of NLOS bias, improving positioning accuracy while speeding up computation via an iterative method. If NLOS biases are treated as outliers, the TOA-based positioning problem under mixed sparse LOS/NLOS conditions can be transformed into a sparse optimization problem, for which the L1 norm is a natural tool. Compared with existing methods, the proposed method is simple and intuitive in principle and can ignore NLOS status and the corresponding NLOS errors. Experimental results show that our algorithm performs well in terms of both computational time and positioning accuracy.

12.
J Electrocardiol ; 62: 190-199, 2020.
Article in English | MEDLINE | ID: mdl-32977208

ABSTRACT

The inverse problem of electrocardiography (ECG), computing epicardial potentials from body surface potentials, is ill-posed and must be solved with regularization techniques. L2-norm regularization can considerably smooth the solution, while an L1-norm scheme promotes solutions with sharp boundaries/gradients between piecewise smooth regions, so the L1-norm is widely used in the ECG inverse problem. However, the L1-norm scheme requires a large amount of computation and long computation times. In this paper, by combining the iterative reweight norm (IRN) approach with a factorization-free preconditioned LSQR algorithm (MLSQR), a new IRN-MLSQR method is proposed to accelerate the convergence of the L1-norm scheme. We validated the IRN-MLSQR method using experimental data from isolated canine hearts and clinical procedures in the electrophysiology laboratory. The results show that the IRN-MLSQR method significantly reduces the number of iterations and the running time while maintaining calculation accuracy: it needs only about 60%-70% of the iterations of the conventional IRN method, with virtually the same solution accuracy. The proposed IRN-MLSQR method may serve as a new approach to the inverse problem of ECG.
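The iterative reweight norm (IRN) idea is to reach the L1 solution through a sequence of weighted L2 problems, with weights set from the current residuals. A compact sketch using a dense solve in place of the paper's preconditioned LSQR inner solver; the data are a made-up line fit with one gross outlier:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Iteratively reweighted least squares for min_x ||A x - b||_1:
    each pass solves a weighted L2 problem with weights 1/|residual|,
    so large residuals (outliers) are progressively downweighted."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]     # L2 start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)
        W = A * w[:, None]                        # row-weighted A
        x = np.linalg.solve(A.T @ W, A.T @ (w * b))
    return x

# Robust line fit: the L1 fit shrugs off one wild measurement.
t = np.arange(6, dtype=float)
b = 2.0 * t + 1.0                                 # exact line y = 2t + 1
b[3] += 100.0                                     # gross outlier
A = np.column_stack([t, np.ones_like(t)])
slope, intercept = irls_l1(A, b)
```

An ordinary least-squares fit of the same data is pulled far from the true line by the outlier; the reweighted iteration recovers the line through the five clean points, which is the robustness that makes the L1 scheme attractive for noisy body-surface data.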


Subject(s)
Electrocardiography, Models, Cardiovascular, Algorithms, Animals, Dogs, Heart
13.
Neural Netw ; 125: 313-329, 2020 May.
Article in English | MEDLINE | ID: mdl-32172141

ABSTRACT

The Multiview Generalized Eigenvalue Proximal Support Vector Machine (MvGEPSVM) is an effective method for multiview data classification proposed recently. However, it ignores discrimination between different views and agreement within the same view, and its robustness cannot be guaranteed. In this paper, we propose an improved multiview GEPSVM (IMvGEPSVM) method, which adds a multiview regularization term that connects different views of the same class and simultaneously maximizes the separation between samples from different classes across heterogeneous views to promote discrimination. This makes the classification more effective. In addition, the L1-norm rather than the squared L2-norm is employed to calculate the distances from the sample points to the hyperplane, so as to reduce the effect of outliers in the proposed model. An efficient iterative algorithm is presented to solve the resulting objective, and we prove the algorithm's convergence theoretically. Experimental results show the effectiveness of the proposed method.


Subject(s)
Support Vector Machine/standards, Pattern Recognition, Automated/methods
14.
J Environ Manage ; 246: 299-313, 2019 Sep 15.
Article in English | MEDLINE | ID: mdl-31181479

ABSTRACT

Air pollution is very harmful to industrial production and public health. It is therefore necessary to predict air pollution and release air quality levels to guide public production and life. Most previous studies used pollutant data directly for prediction, rarely exploiting the structural characteristics of the data itself. Therefore, a novel combined forecasting structure based on the L1 norm was designed for pollutant monitoring and analysis, comprising analysis, forecasting, and evaluation modules. First, the original data are decomposed into several components. Each component is then expanded into a matrix time series by phase space reconstruction. The forecasting module carries out a weighted combination of the prediction results of three models based on the L1 norm to determine the final prediction, and the process parameters are optimized using a multi-tracker optimization algorithm. Moreover, comprehensive fuzzy evaluation is applied to qualitatively assess air quality. Daily pollution data from three cities in China are taken as examples to verify the effectiveness and efficiency of the combined forecasting structure. The results show that the architecture has great application potential in the field of air quality prediction.
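The phase space reconstruction step expands each decomposed scalar component into a matrix of delay vectors. A minimal time-delay embedding sketch (the embedding dimension and delay below are illustrative choices, not the paper's parameters):

```python
import numpy as np

def time_delay_embedding(x, dim, tau):
    """Takens-style phase space reconstruction: row t of the result is
    the delay vector [x[t], x[t+tau], ..., x[t+(dim-1)*tau]], turning a
    scalar series into a matrix time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

x = np.arange(10.0)          # stand-in for one decomposed component
M = time_delay_embedding(x, dim=3, tau=2)
```

Each row of M is one point of the reconstructed trajectory; the forecasting models are then trained on this matrix representation rather than on the raw scalar series.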


Subject(s)
Air Pollutants, Air Pollution, China, Cities, Environmental Monitoring, Forecasting
15.
Magn Reson Imaging ; 61: 207-223, 2019 09.
Article in English | MEDLINE | ID: mdl-31009687

ABSTRACT

An effective retrospective correction method is introduced in this paper for intensity inhomogeneity, an inherent artifact in MR images. The intensity inhomogeneity problem is formulated as the decomposition of the acquired image into a true image and a bias field, each expected to have a sparse approximation in a suitable transform domain based on its known properties: the piecewise-constant nature of the true image lends itself to a sparse approximation in the framelet domain, while the spatial smoothness of the bias field supports a sparse representation in the Fourier domain. The algorithm attains optimal results by seeking the sparsest solutions for the unknown variables in the search space through L1-norm minimization. The objective function of the problem is convex and is efficiently solved by the linearized alternating direction method. The method thus estimates the optimal true image and bias field simultaneously in an L1-norm minimization framework by promoting sparsity of the solutions in suitable transform domains. Furthermore, the methodology requires no preprocessing, predefined specifications, or parametric models critically controlled by user-defined parameters. Qualitative and quantitative validation on simulated and real human brain MR images demonstrates its efficacy and superior performance compared with several distinguished algorithms for intensity inhomogeneity correction.


Subject(s)
Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging, Algorithms, Artifacts, Computer Simulation, Databases, Factual, Fourier Analysis, Humans, Models, Theoretical, Reproducibility of Results, Retrospective Studies
16.
Neural Netw ; 114: 47-59, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30878915

ABSTRACT

The twin support vector machine (TWSVM) is a classical and effective classifier for binary classification. However, its robustness cannot be guaranteed because the squared L2-norm distance it uses can exaggerate the influence of outliers. In this paper, we propose a new robust capped L1-norm twin support vector machine (CTWSVM), which retains the advantages of TWSVM and improves robustness when solving binary classification problems with outliers. The solution of the proposed method is obtained by optimizing a pair of capped L1-norm related problems using a newly designed, effective iterative algorithm. We also present theoretical analysis on the existence of a local optimum and the convergence of the algorithm. Extensive experiments on an artificial dataset and several UCI datasets demonstrate the robustness and feasibility of the proposed CTWSVM.
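The capped L1-norm truncates the absolute residual at a cap, so even a gross outlier contributes only a bounded amount to the objective. A one-line sketch of the loss (the cap value here is arbitrary):

```python
import numpy as np

def capped_l1(r, cap):
    """Capped L1 loss: grows like |r| near zero but saturates at `cap`,
    so a single extreme outlier adds at most `cap` to the objective."""
    return np.minimum(np.abs(r), cap)

residuals = np.array([0.1, -0.4, 2.0, -50.0])
loss = capped_l1(residuals, cap=1.0)
```

Compare with squared L2 loss, where the last residual alone would contribute 2500 and dominate the fit; under the capped L1 loss it contributes no more than any other large residual.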


Subject(s)
Support Vector Machine/standards, Algorithms, Classification/methods
17.
Int J Mol Sci ; 20(4)2019 Feb 18.
Article in English | MEDLINE | ID: mdl-30781701

ABSTRACT

Feature selection and sample clustering play an important role in bioinformatics. Traditional feature selection methods separate sparse regression from embedding learning; Joint Embedding Learning and Sparse Regression (JELSR) was later proposed to identify the significant features of genomic data effectively. However, since genomic data contain much redundancy and noise, the sparsity of this method is far from sufficient. In this paper, we propose a strengthened version of JELSR, called LJELSR, which adds an L1-norm constraint to the regularization term of the previous model to further improve sparsity. We then provide a new iterative algorithm to obtain a convergent solution. The experimental results show that, compared with previous methods, our method achieves state-of-the-art performance on different genomic data, both in identifying differentially expressed genes and in sample clustering. Additionally, the selected differentially expressed genes may be of great value in medical research.


Subject(s)
Algorithms, Cluster Analysis, Colonic Neoplasms/genetics, Databases as Topic, Esophageal Neoplasms/genetics, Gene Expression Profiling, Humans, Regression Analysis
18.
Cereb Cortex ; 29(8): 3232-3240, 2019 07 22.
Article in English | MEDLINE | ID: mdl-30137249

ABSTRACT

The hierarchical nature of language requires the human brain to internally parse connected speech and incrementally construct abstract linguistic structures. Recent research has revealed multiple neural processing timescales underlying grammar-based configuration of linguistic hierarchies. However, little is known about where in the cerebral cortex such temporally scaled neural processes occur. This study used novel magnetoencephalography source imaging techniques combined with a unique language stimulation paradigm to segregate cortical maps synchronized to three levels of linguistic units (words, phrases, and sentences). Notably, distinct ensembles of cortical loci were identified that feature structures at different levels. The superior temporal gyrus was involved in processing all three linguistic levels, while distinct ensembles of other brain regions were recruited to encode each level. Neural activity in the right motor cortex only followed the rhythm of monosyllabic words, which have clear acoustic boundaries, whereas the left anterior temporal lobe and the left inferior frontal gyrus were selectively recruited in processing phrases or sentences. Our results ground multi-timescale hierarchical neural processing of speech in neuroanatomical reality, with specific sets of cortices responsible for different levels of linguistic units.


Subject(s)
Language, Motor Cortex/physiology, Prefrontal Cortex/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Brain Mapping, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Female, Healthy Volunteers, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Motor Cortex/diagnostic imaging, Prefrontal Cortex/diagnostic imaging, Temporal Lobe/diagnostic imaging, Young Adult
19.
Front Physiol ; 9: 1708, 2018.
Article in English | MEDLINE | ID: mdl-30555347

ABSTRACT

The electrocardiographic imaging inverse problem is ill-posed, and regularization must be applied to stabilize it and obtain a realistic solution. Here, we assess different regularization methods for solving the inverse problem: (i) zero-order Tikhonov regularization (ZOT) in conjunction with the Method of Fundamental Solutions (MFS), (ii) ZOT regularization using the Finite Element Method (FEM), and (iii) L1-norm regularization of the current density on the heart surface combined with FEM. Moreover, we apply different approaches for computing the optimal regularization parameter, all based on the Generalized Singular Value Decomposition (GSVD): Generalized Cross Validation (GCV), Robust Generalized Cross Validation (RGCV), ADPC, the U-Curve, and the Composite REsidual and Smoothing Operator (CRESO) method. Both simulated and experimental data are used for this evaluation. Results show that the RGCV approach determines the optimal regularization parameter best for both FEM-ZOT and FEM-L1-norm; however, for MFS-ZOT, GCV outperformed all other parameter choice methods in terms of relative error and correlation coefficient. Regarding epicardial potential reconstruction, FEM-L1-norm clearly outperforms the other methods on the simulated data, but on the experimental data the FEM-based methods perform only as well as MFS. Finally, FEM-L1-norm combined with RGCV provides robust pacing site localization.
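Zero-order Tikhonov regularization penalizes the solution norm itself, damping the components that an ill-conditioned transfer matrix would otherwise amplify. A toy sketch with an invented 2x2 system (no relation to the study's MFS or FEM matrices):

```python
import numpy as np

def tikhonov_zot(A, b, lam):
    """Zero-order Tikhonov: x = argmin ||A x - b||^2 + lam^2 ||x||^2,
    computed from the normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy system: true x = (1, 1), but the second row is
# nearly silent, so a little noise on b explodes the naive inverse.
A = np.diag([1.0, 1e-3])
b_clean = A @ np.array([1.0, 1.0])
b = b_clean + np.array([0.0, 1e-2])    # small measurement noise
x_naive = np.linalg.solve(A, b)        # second component blows up
x_reg = tikhonov_zot(A, b, lam=0.1)    # damped, stable estimate
```

The regularized solution trades the wildly amplified second component for a damped value near zero; choosing lam well is exactly the regularization-parameter problem that GCV, RGCV, U-Curve, and CRESO address.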

20.
J Mach Learn Res ; 182018 Apr.
Article in English | MEDLINE | ID: mdl-30416396

ABSTRACT

To find an optimal decision rule, Fan et al. (2016) proposed an innovative concordance-assisted learning algorithm based on the maximum rank correlation estimator, which makes better use of the available information through pairwise comparison. However, the objective function is discontinuous and computationally hard to optimize. In this paper, we consider a convex surrogate loss function to solve this problem. In addition, our algorithm ensures sparsity of the decision rule, making it easy to interpret. We derive the L2 error bound of the estimated coefficients under ultra-high dimension. Simulation results in various settings and an application to STAR*D both illustrate that the proposed method can still estimate the optimal treatment regime successfully when the number of covariates is large.
