1.
Article in English | MEDLINE | ID: mdl-38875094

ABSTRACT

Auditability and verifiability are critical elements in establishing trustworthiness in federated learning (FL). These principles promote transparency, accountability, and independent validation of FL processes. Incorporating auditability and verifiability is imperative for building trust and ensuring the robustness of FL methodologies. Typical FL architectures rely on a trustworthy central authority to manage the FL process. However, reliance on a central authority could become a single point of failure, making it an attractive target for cyber-attacks and insider fraud. Moreover, the central entity lacks auditability and verifiability, which undermines the privacy and security that FL aims to ensure. This article proposes an auditable and verifiable decentralized FL (DFL) framework. We first develop a smart-contract-based monitoring system for DFL participants. This monitoring system is then deployed to each DFL participant and executed when the local model training is initiated. The monitoring system records necessary information during the local training process for auditing purposes. Afterward, each DFL participant sends the local model and monitoring system to the respective blockchain node. The blockchain nodes representing each DFL participant exchange the local models and use the monitoring system to validate each local model. To ensure an auditable and verifiable decentralized aggregation procedure, we record the aggregation steps taken by each blockchain node in the aggregation contract. Following the aggregation phase, each blockchain node applies a multisignature scheme to the aggregated model, producing a globally verifiable model. Based on the signed global model and the aggregation contract, each blockchain node implements a consensus protocol to store the validated global model in tamper-proof storage. To evaluate the performance of our proposed model, we conducted a series of experiments with different machine learning architectures and datasets, including CIFAR-10, F-MNIST, and MedMNIST. The experimental results indicate a slight increase in time consumption compared with the state-of-the-art, a tradeoff made to ensure auditability and verifiability. The proposed blockchain-enabled DFL also reduces participant-side communication costs by up to 95%.
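The decentralized aggregation step described above can be illustrated with a minimal sketch: FedAvg-style weighted averaging of parameter vectors, with a hash digest standing in for the recorded "aggregation contract" entry so that every node can replay and verify the same steps. The function names and the digest scheme are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import numpy as np

def aggregate(local_models, weights):
    """FedAvg-style weighted average of flattened parameter vectors.

    Returns the aggregated model plus an auditable digest of the inputs
    and output, standing in for an aggregation-contract record.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize client weights
    stacked = np.stack(local_models)               # (n_clients, n_params)
    global_model = weights @ stacked               # weighted average
    digest = hashlib.sha256(
        stacked.tobytes() + weights.tobytes() + global_model.tobytes()
    ).hexdigest()
    return global_model, digest

# Two nodes replaying the same recorded steps reach the same model and digest.
models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
m1, d1 = aggregate(models, [1, 1])
m2, d2 = aggregate(models, [1, 1])
assert np.allclose(m1, [2.0, 3.0]) and d1 == d2
```

Because aggregation is deterministic given the recorded inputs, any node (or external auditor) can recompute the digest to verify the stored global model.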

3.
IEEE Trans Biomed Eng ; 70(4): 1231-1241, 2023 04.
Article in English | MEDLINE | ID: mdl-36215340

ABSTRACT

OBJECTIVE: Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique used to generate conduction currents in the head and disrupt brain functions. To rapidly evaluate the tDCS-induced current density in near real-time, this paper proposes a deep learning-based emulator, named DeeptDCS. METHODS: The emulator leverages Attention U-net, taking the volume conductor models (VCMs) of head tissues as inputs and outputting the three-dimensional current density distribution across the entire head. The electrode configurations are also incorporated into VCMs without increasing the number of input channels; this enables the straightforward incorporation of the non-parametric features of electrodes (e.g., thickness, shape, size, and position) in the training and testing of the proposed emulator. RESULTS: Attention U-net outperforms standard U-net and its other three variants (Residual U-net, Attention Residual U-net, and Multi-scale Residual U-net) in terms of accuracy. The generalization ability of DeeptDCS to non-trained electrode configurations can be greatly enhanced by fine-tuning the model. The computational time required by one emulation via DeeptDCS is a fraction of a second. CONCLUSION: DeeptDCS is at least two orders of magnitude faster than a physics-based open-source simulator, while providing satisfactorily accurate results. SIGNIFICANCE: The high computational efficiency permits the use of DeeptDCS in applications requiring its repetitive execution, such as uncertainty quantification and optimization studies of tDCS.


Subject(s)
Deep Learning , Transcranial Direct Current Stimulation , Transcranial Direct Current Stimulation/methods , Brain/physiology , Head , Electrodes
4.
Neural Netw ; 147: 175-185, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35042155

ABSTRACT

The tracking-by-segmentation framework is widely used in visual tracking to handle severe appearance changes such as deformation and occlusion. Tracking-by-segmentation methods first segment the target object from the background, then use the segmentation result to estimate the target state. In existing methods, target segmentation is formulated as a superpixel labeling problem constrained by a target likelihood constraint, a spatial smoothness constraint and a temporal consistency constraint. The target likelihood is calculated by a discriminative part model trained independently from the superpixel labeling framework and updated online using historical tracking results as pseudo-labels. Due to the lack of spatial and temporal constraints and inaccurate pseudo-labels, the discriminative model is unreliable and may lead to tracking failure. This paper addresses these problems by integrating the objective function of model training into the target segmentation optimization framework. Thus, during the optimization process, the discriminative model is constrained by spatial and temporal constraints and provides more accurate target likelihoods for part labeling, and the results in turn produce more reliable pseudo-labels for model learning. Moreover, we propose a supervision switch mechanism to detect erroneous pseudo-labels caused by a severe change in data distribution and to switch the classifier to a semi-supervised setting in such cases. Evaluation results on the OTB2013, OTB2015 and TC-128 benchmarks demonstrate the effectiveness of the proposed tracking algorithm.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Learning
5.
Chin J Traumatol ; 24(6): 311-319, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34503907

ABSTRACT

Rib fracture is the most common injury in chest trauma. Most patients with rib fractures are treated conservatively, but up to 50% of patients, especially those with combined injuries such as flail chest, present chronic pain or chest wall deformities, and more than 30% have long-term disabilities and are unable to retain a full-time job. In the past two decades, surgery for rib fractures has achieved good outcomes. However, in clinical practice, some problems remain, including inconsistency in surgical indications and quality control in medical services. Before 2018, three guidelines on the management of regional traumatic rib fractures had been published at home and abroad, focusing on guidance for overall treatment decisions and plans; another clinical guideline on the surgical treatment of rib fractures lacks recent progress in this area. The Chinese Society of Traumatology, Chinese Medical Association, and the Chinese College of Trauma Surgeons, Chinese Medical Doctor Association organized experts from cardiothoracic surgery, trauma surgery, acute care surgery, orthopedics and other disciplines who, following the principles of evidence-based medicine with attention to scientific rigor and practicality, formulated the Chinese consensus for surgical treatment of traumatic rib fractures (STTRF 2021). This expert consensus puts forward clear, applicable, and graded recommendations on seven aspects: preoperative imaging evaluation, surgical indications, timing of surgery, surgical methods, rib fracture sites for surgical fixation, internal fixation method and material selection, and treatment of combined injuries in rib fractures, to provide guidance and reference for the surgical treatment of traumatic rib fractures.


Asunto(s)
Tórax Paradójico , Fracturas de las Costillas , Traumatismos Torácicos , China , Consenso , Fijación Interna de Fracturas , Humanos , Fracturas de las Costillas/diagnóstico por imagen , Fracturas de las Costillas/cirugía
6.
Neural Netw ; 143: 303-313, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34174677

ABSTRACT

In this paper, we propose a novel transductive pseudo-labeling based method for deep semi-supervised image recognition. Inspired by the superiority of pseudo labels inferred by label propagation over those inferred from the network, we argue that the information flow from labeled data to unlabeled data should be kept noiseless and with minimum loss. Previous works use scarce labeled data for feature learning and consider only the relationship between two feature vectors to construct the similarity graph in feature space, which causes two problems that ultimately lead to noisy and incomplete information flow from labeled data to unlabeled data. The first problem is that the learned feature mapping is highly likely to be biased and can easily over-fit noise. The second problem is the loss of local geometry information in feature space during label propagation. Accordingly, we first propose to incorporate self-supervised learning into feature learning for cleaner information flow in feature space during subsequent label propagation. Second, we propose to use the reconstruction concept to measure pairwise similarity in feature space, such that local geometry information can be preserved. An ablation study confirms synergistic effects from features learned with self-supervision and a similarity graph with local geometry preservation. Extensive experiments conducted on benchmark datasets have verified the effectiveness of our proposed method.
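For reference, the label-propagation step this abstract builds on can be sketched in closed form, in the style of Zhou et al.: labels diffuse over a symmetrically normalized similarity graph. The toy graph and the damping value are illustrative, not from the paper.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9):
    """Closed-form graph label propagation:
    F = (I - alpha * S)^-1 @ Y, with S the symmetrically normalized affinity."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D @ W @ D                                   # normalized similarity graph
    F = np.linalg.solve(np.eye(len(W)) - alpha * S, Y)
    return F                                        # soft label scores per node

# Two tight pairs, one labeled node per pair: labels flow along strong edges.
W = np.array([[0, 1, 0.01, 0],
              [1, 0, 0, 0.01],
              [0.01, 0, 0, 1],
              [0, 0.01, 1, 0]], dtype=float)
Y = np.zeros((4, 2))
Y[0, 0] = 1.0                                       # node 0 labeled class 0
Y[2, 1] = 1.0                                       # node 2 labeled class 1
F = label_propagation(W, Y)
assert F[1, 0] > F[1, 1] and F[3, 1] > F[3, 0]
```

The quality of the affinity matrix `W` is exactly what the abstract's reconstruction-based similarity aims to improve.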


Subject(s)
Benchmarking
7.
Neural Netw ; 139: 24-32, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33677376

ABSTRACT

Semi-supervised learning has largely alleviated the strong demand for large amounts of annotation in deep learning. However, most methods adopt a common assumption that there is always labeled data from the same classes as the unlabeled data, which is impractical and restrictive for real-world applications. In this work, our focus is on semi-supervised learning when the categories of unlabeled data and labeled data are disjoint from each other. The main challenge is how to effectively transfer knowledge from labeled data to unlabeled data when they are independent of each other and do not belong to the same categories. Previous state-of-the-art methods have proposed to construct pairwise similarity pseudo labels as supervising signals. However, two issues are commonly inherent in these methods: (1) all previous methods comprise multiple training phases, which makes it difficult to train the model in an end-to-end fashion; (2) strong dependence on the quality of pairwise similarity pseudo labels limits performance, as pseudo labels are vulnerable to noise and bias. Therefore, we propose to exploit self-supervision as an auxiliary task during model training, such that labeled data and unlabeled data share the same set of surrogate labels and the overall supervising signals gain strong regularization. By doing so, all modules in the proposed algorithm can be trained simultaneously, which boosts the learning capability as end-to-end learning is achieved. Moreover, we propose to utilize local structure information in feature space during pairwise pseudo label construction, as local properties are more robust to noise. Extensive experiments have been conducted on three frequently used visual datasets, i.e., CIFAR-10, CIFAR-100 and SVHN. Experimental results indicate the effectiveness of our proposed algorithm, as we achieve new state-of-the-art performance in novel visual category learning on these three datasets.


Asunto(s)
Algoritmos , Reconocimiento de Normas Patrones Automatizadas/clasificación , Aprendizaje Automático Supervisado/clasificación
8.
IEEE Trans Cybern ; 51(10): 5116-5129, 2021 Oct.
Article in English | MEDLINE | ID: mdl-31443059

ABSTRACT

Convolutional dictionary learning (CDL) aims to learn a structured and shift-invariant dictionary to decompose signals into sparse representations. While yielding superior results compared to traditional sparse coding methods on various signal and image processing tasks, most CDL methods have difficulty handling large data, because they must process all images in the dataset in a single pass. Therefore, recent research has focused on online CDL (OCDL), which updates the dictionary with sequentially incoming signals. In this article, a novel OCDL algorithm is proposed based on a local, slice-based representation of sparse codes. Such a representation has been found useful in batch CDL problems, where the convolutional sparse coding and dictionary learning problem can be handled locally, in a way similar to traditional sparse coding problems, but it had never been explored in online scenarios before. We show, in this article, that the proposed algorithm is a natural extension of the traditional patch-based online dictionary learning algorithm, and that the dictionary is updated in a similarly memory-efficient way. On the other hand, it can be viewed as an improvement of existing second-order OCDL algorithms. Theoretical analysis shows that our algorithm converges and has lower time complexity than an existing counterpart that yields exactly the same output. Extensive experiments are performed on various benchmarking datasets, which show that our algorithm outperforms state-of-the-art batch and OCDL algorithms in terms of reconstruction objectives.

9.
Neural Netw ; 130: 49-59, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32623112

ABSTRACT

Principal component analysis network (PCANet), an unsupervised shallow network, demonstrates noticeable effectiveness on datasets of various volumes. It carries out a two-layer convolution with PCA as the filter-learning method, followed by a block-wise histogram post-processing stage. Following the structure of PCANet, extreme learning machine auto-encoder (ELM-AE) variants have been employed to replace PCA's role, namely in the extreme learning machine network (ELMNet) and hierarchical ELMNet. ELMNet emphasizes the importance of orthogonal projection while overlooking non-linearity; the latter introduces complex pre-processing to overcome the drawback of the non-linear ELM-AE. In this paper, we analyze the intrinsic characteristics of ELM-AE variants and accordingly propose a regularized ELM-AE, which combines non-linearity learning capability with approximately orthogonal projection. Experiments on image classification show its effectiveness compared to supervised convolutional neural networks and related shallow networks for unsupervised feature learning.
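The PCA filter-learning stage that PCANet (and the ELM-AE variants above) build on can be sketched compactly: collect mean-removed patches and take the top principal directions as convolution filters. Patch size, filter count, and the toy data are illustrative assumptions.

```python
import numpy as np

def pca_filters(images, k=3, n_filters=4):
    """PCANet-style filter learning: gather k x k patches, remove each
    patch's mean, and use top principal directions as convolution filters."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())        # patch-wise mean removal
    X = np.stack(patches)                           # (n_patches, k*k)
    # Top right-singular vectors = principal directions of patch covariance.
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)

rng = np.random.default_rng(0)
imgs = [rng.standard_normal((8, 8)) for _ in range(5)]
filters = pca_filters(imgs)
assert filters.shape == (4, 3, 3)
```

The learned filters are mutually orthonormal by construction, which is the orthogonal-projection property that ELMNet emphasizes.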


Asunto(s)
Aprendizaje Profundo , Programas Informáticos , Análisis de Componente Principal
10.
Neural Netw ; 123: 331-342, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31901564

ABSTRACT

Dictionary learning is a widely adopted approach for image classification. Existing methods focus either on finding a dictionary that produces discriminative sparse representations, or on enforcing priors that best describe the dataset distribution. In many cases, the dataset size is small, with large intra-class variability and a nondiscriminative feature space. In this work, we propose a simple and effective framework called ELM-DDL to address these issues. Specifically, we represent input features with an Extreme Learning Machine (ELM) with orthogonal output projection, which enables diverse representations in the nonlinear hidden space and task-specific feature learning in the output space. The embeddings are further regularized via a maximum margin criterion (MMC) to maximize the inter-class variance and minimize the intra-class variance. For dictionary learning, we design a novel weighted class-specific ℓ1,2 norm to regularize the sparse coding vectors, which promotes uniformity of the sparse patterns of samples belonging to the same class and suppresses support overlaps between different classes. We show that such regularization is robust, discriminative and easy to optimize. The proposed method is combined with a sparse representation classifier (SRC) for evaluation on benchmark datasets. Results show that our approach achieves state-of-the-art performance compared to other dictionary learning methods.


Asunto(s)
Aprendizaje Automático , Procesamiento de Imagen Asistido por Computador/métodos
11.
IEEE Trans Cybern ; 50(3): 1146-1156, 2020 Mar.
Article in English | MEDLINE | ID: mdl-30629529

ABSTRACT

Noise that afflicts natural images, regardless of the source, generally disturbs the perception of image quality by introducing a high-frequency random element that, when severe, can mask image content. Except at very low levels, where it may serve a purpose, it is annoying. There exist significant statistical differences between distortion-free natural images and noisy images that become evident upon comparing the empirical probability distribution histograms of their discrete wavelet transform (DWT) coefficients. The DWT coefficients of low- or no-noise natural images have leptokurtic, peaky distributions with heavy tails, while noisy images tend to be platykurtic, with less peaky distributions and shallower tails. The sample kurtosis is a natural measure of the peakedness and tail weight of the distributions of random variables. Here, we study the efficacy of the sample kurtosis of image wavelet coefficients as a feature driving an extreme learning machine that learns to map kurtosis values into perceptual quality scores. The model is trained and tested on five types of noisy images, including additive white Gaussian noise, additive Gaussian color noise, impulse noise, masked noise, and high-frequency noise from the LIVE, CSIQ, TID2008, and TID2013 image quality databases. The experimental results show that the trained model has better quality evaluation performance on noisy images than existing blind noise assessment models, while also outperforming general-purpose blind and full-reference image quality assessment methods.
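The statistical effect the abstract describes is easy to demonstrate: the kurtosis of wavelet detail coefficients is high for an image with sparse, edge-like structure and drops toward the Gaussian value of 3 once white noise dominates. This sketch uses a hand-rolled one-level Haar transform and a synthetic edge image; it is a demonstration of the feature, not the paper's trained model.

```python
import numpy as np

def haar_details(img):
    """One-level Haar DWT detail (high-frequency) coefficients."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return np.concatenate([((a - b + c - d) / 2).ravel(),   # horizontal
                           ((a + b - c - d) / 2).ravel(),   # vertical
                           ((a - b - c + d) / 2).ravel()])  # diagonal

def sample_kurtosis(x):
    """Non-excess sample kurtosis: E[(x - mu)^4] / sigma^4 (3 for Gaussian)."""
    x = x - x.mean()
    return (x ** 4).mean() / (x.var() ** 2 + 1e-12)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 33:] = 10.0                          # one sharp edge: sparse, peaky details
clean += 0.05 * rng.standard_normal((64, 64))
noisy = clean + 5.0 * rng.standard_normal((64, 64))
# White noise flattens the coefficient histogram, lowering kurtosis.
assert sample_kurtosis(haar_details(noisy)) < sample_kurtosis(haar_details(clean))
```

In the paper's pipeline, such kurtosis values (per subband) would be the input features to the extreme learning machine regressor.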


Asunto(s)
Procesamiento de Imagen Asistido por Computador/métodos , Aprendizaje Automático , Modelos Estadísticos , Análisis de Ondículas
12.
Neural Netw ; 122: 395-406, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31785540

ABSTRACT

Recently, preserving the geometry information of data while learning representations has attracted increasing attention in intelligent machine fault diagnosis. Existing geometry-preserving methods require predefining the similarities between data points in the original data space. The predefined affinity matrix, also known as the similarity matrix, is then used to preserve geometry information during representation learning. Hence, the data representations are learned under the assumption of fixed and known prior knowledge, i.e., the similarities between data points. However, such assumed prior knowledge can hardly capture the real relationships between data points precisely, especially in high-dimensional space. Also, using two separate steps to learn the affinity matrix and the data representations may not be optimal or universal for data classification. In this paper, based on the extreme learning machine autoencoder (ELM-AE), we propose to learn the data representations and the affinity matrix simultaneously. The affinity matrix is treated as a variable and unified in the objective function of ELM-AE. Instead of predefining and fixing the affinity matrix, the proposed method adjusts the similarities by taking into account its capability of capturing the geometry information in both the original data space and the non-linearly mapped representation space. Meanwhile, the geometry information of the original data can be preserved in the embedded representations with the help of the affinity matrix. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed method, and the empirical study also shows it is an efficient tool for machine fault diagnosis.
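For context, the kind of predefined affinity matrix that the paper argues against can be sketched as a k-nearest-neighbour heat-kernel graph: once built, it is fixed, whereas the proposed method treats it as a variable of the objective. Parameter values and the toy points are illustrative.

```python
import numpy as np

def heat_kernel_affinity(X, sigma=1.0, knn=1):
    """Predefined (fixed) affinity: Gaussian similarity restricted to each
    point's knn strongest links, then symmetrized."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    idx = np.argsort(-W, axis=1)[:, :knn]                 # strongest neighbours
    M = np.zeros_like(W)
    rows = np.arange(len(X))[:, None]
    M[rows, idx] = W[rows, idx]
    return np.maximum(M, M.T)                             # symmetrize

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
W = heat_kernel_affinity(X)
assert W[0, 1] > W[0, 2]    # near points far more similar than distant ones
```

The paper's point is that `sigma`, `knn`, and the distance metric are guesses about the data geometry; learning `W` jointly with the ELM-AE representation removes that fixed prior.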


Asunto(s)
Inteligencia Artificial , Aprendizaje Automático , Algoritmos , Atención
13.
Sensors (Basel) ; 19(24)2019 Dec 14.
Article in English | MEDLINE | ID: mdl-31847409

ABSTRACT

Advanced chemometric analysis is required for rapid and reliable determination of physical and/or chemical components in complex gas mixtures. Building on infrared (IR) spectroscopic/sensing techniques, we propose an advanced regression model based on the extreme learning machine (ELM) algorithm for quantitative chemometric analysis. The proposed model makes two contributions to the field of advanced chemometrics. First, an ELM-based autoencoder (AE) was developed for reducing the dimensionality of spectral signals and learning important features for regression. Second, the fast regression ability of the ELM architecture was directly used for constructing the regression model. In this contribution, nitrogen oxide mixtures (i.e., N2O/NO2/NO) found in vehicle exhaust were selected as a relevant example of a real-world gas mixture. Both simulated data and experimental data acquired using Fourier transform infrared spectroscopy (FTIR) were analyzed with the proposed chemometric model. By comparing the numerical results with those obtained using conventional principal components regression (PCR) and partial least squares regression (PLSR) models, the proposed model was verified to offer superior robustness and performance in quantitative IR spectral analysis.
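The core ELM regression step is simple enough to sketch: a fixed random hidden layer followed by a single regularized least-squares solve, which is what makes ELM training fast compared with iterative methods. The synthetic "spectra" and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def elm_regression(X, y, n_hidden=100, reg=1e-3, seed=0):
    """Extreme learning machine regression: random fixed hidden layer,
    then a ridge-regularized least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = 0.1 * rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                                 # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return lambda Xn: np.tanh(Xn @ W + b) @ beta

# Synthetic "spectra" with a linear concentration relationship.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10)
predict = elm_regression(X, y)
assert np.corrcoef(predict(X), y)[0, 1] > 0.95
```

Only `beta` is learned; the random hidden layer is never updated, so training cost is dominated by one linear solve.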

14.
Chin J Traumatol ; 22(3): 129-133, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31076162

ABSTRACT

PURPOSE: To summarize and analyze the early treatment of multiple injuries combined with severe pelvic fractures, with particular focus on hemostasis methods for severe pelvic fractures, so as to improve the success rate of rescue for fatal hemorrhagic shock caused by pelvic fractures. METHODS: A retrospective analysis was conducted on 68 cases of multiple trauma combined with severe pelvic fractures over a 10-year period (Jan. 2006 to Dec. 2015). There were 57 males and 11 females, aged 19-75 years (mean 42 years). Causes of injury included traffic accidents in 34 cases (2 cases of truck rolling), falls from height in 17 cases, crush injuries in 15 cases, steel cable wound in 1 case, and seat belt traction injury in 1 case. There were 31 cases of head injury, 11 cases of chest injury, 56 cases of abdominal and pelvic injuries, and 37 cases of spinal and limb injuries. Therapeutic methods included early anti-shock measures, surgical hemostasis based on internal iliac artery devascularization for pelvic hemorrhage, and early treatment of combined organ damage and complications, including embolization and repair of the liver, spleen and kidney, splenectomy, nephrectomy, intestinal resection, colostomy, bladder ostomy, and urethral repair. Patients in this series received blood transfusions of 1200-10,000 mL (mean 2850 mL). Postoperative follow-up ranged from 6 months to 1.5 years. RESULTS: The average ISS score in this series was 38.6. Forty-nine cases were successfully treated, for a total survival rate of 72.1%. In total, 19 patients died (average ISS score 42.4), including 6 cases of hemorrhagic shock, 8 cases of brain injury, 1 case of cardiac injury, 2 cases of pulmonary infection, 1 case of pulmonary embolism, and 1 case of multiple organ failure. Postoperative complications included 1 case of urethral stricture (after secondary repair), 1 case of sexual dysfunction (combined with urethral rupture), 1 case of lower limb amputation (femoral artery thrombosis), and 18 cases of consumptive coagulopathy. CONCLUSION: The early treatment of multiple injuries combined with severe pelvic fractures should focus on pelvic hemostasis. Massive bleeding-induced hemorrhagic shock is one of the main causes of poor prognosis. Internal iliac artery devascularization, including ligation and embolization, can be used as an effective measure to stop or reduce bleeding. Consumptive coagulopathy is difficult to manage and should be detected and treated as soon as possible after surgical measures have been performed. The effect of recombinant factor VII in treating consumptive coagulopathy is satisfactory.


Asunto(s)
Fracturas Óseas/terapia , Traumatismo Múltiple/terapia , Huesos Pélvicos/lesiones , Adulto , Embolización Terapéutica/métodos , Factor VII/administración & dosificación , Femenino , Fracturas Óseas/complicaciones , Hemostasis Quirúrgica , Humanos , Arteria Ilíaca/cirugía , Puntaje de Gravedad del Traumatismo , Ligadura , Masculino , Persona de Mediana Edad , Traumatismo Múltiple/complicaciones , Pronóstico , Proteínas Recombinantes/administración & dosificación , Estudios Retrospectivos , Choque Hemorrágico/etiología , Choque Hemorrágico/prevención & control , Adulto Joven
15.
Article in English | MEDLINE | ID: mdl-30932850

ABSTRACT

In many practical transfer learning scenarios, the feature distribution differs across the source and target domains (i.e., it is not independent and identically distributed). Maximum mean discrepancy (MMD), as a domain discrepancy metric, has achieved promising performance in unsupervised domain adaptation (DA). We argue that MMD-based DA methods ignore the data locality structure, which, to some extent, causes a negative transfer effect. Locality plays an important role in minimizing the nonlinear local domain discrepancy underlying the marginal distributions. To better exploit domain locality, a novel local generative discrepancy metric-based intermediate domain generation learning method, called Manifold Criterion guided Transfer Learning (MCTL), is proposed in this paper. The merits of the proposed MCTL are fourfold: 1) the concept of the manifold criterion (MC) is first proposed as a measure validating the distribution matching across domains, and DA is achieved if the MC is satisfied; 2) the proposed MC can well guide the generation of an intermediate domain sharing a similar distribution with the target domain, by minimizing the local domain discrepancy; 3) a global generative discrepancy metric is presented, such that both the global and local discrepancies can be effectively and positively reduced; and 4) a simplified version of MCTL, called MCTL-S, is presented under a perfect domain generation assumption for a more generic learning scenario. Experiments on a number of benchmark visual transfer tasks demonstrate the superiority of the proposed MC guided generative transfer method in comparison with other state-of-the-art methods. The source code is available at https://github.com/wangshanshanCQU/MCTL.
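The global MMD metric that this line of work starts from can be sketched in a few lines: the squared discrepancy between kernel mean embeddings of the source and target samples. The RBF kernel, bandwidth, and toy domains are illustrative assumptions.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF
    kernel: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(A, B):
        D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * D2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.standard_normal((100, 2))
tgt_same = rng.standard_normal((100, 2))          # same distribution
tgt_shift = rng.standard_normal((100, 2)) + 3.0   # shifted domain
assert mmd_rbf(src, tgt_shift) > mmd_rbf(src, tgt_same)
```

MCTL's argument is that this global statistic alone ignores local (manifold) structure; the manifold criterion adds a local discrepancy term on top of it.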

16.
IEEE Trans Cybern ; 49(3): 947-960, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29994190

ABSTRACT

Electronic tongue (E-Tongue), as a novel taste analysis tool, shows promise for taste recognition. In this paper, we constructed a voltammetric E-Tongue system and measured 13 different kinds of liquid samples, such as tea, wine, beverages, and functional materials. Owing to system noise and varying environmental conditions, the acquired E-Tongue data show inseparable patterns. To this end, from an algorithmic viewpoint, we propose a local discriminant preservation projection (LDPP) model, an under-studied subspace learning algorithm that concerns local discrimination and neighborhood structure preservation. In contrast with other conventional subspace projection methods, LDPP has two merits. On one hand, with local discrimination it has a higher tolerance to abnormal data or outliers. On the other hand, it can project the data to a more separable space with local structure preservation. Further, support vector machine, extreme learning machine (ELM), and kernelized ELM (KELM) have been used as classifiers for taste recognition with the E-Tongue. Experimental results demonstrate that the proposed E-Tongue is both efficient and effective for multi-taste recognition. In particular, the proposed LDPP-based KELM classifier achieves the best taste recognition performance of 98%. The benchmark data sets and code will be released at http://www.leizhang.tk/tempcode.html.
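The KELM classifier used here admits a compact closed-form sketch: one regularized linear solve against the kernel matrix, then classification by the largest kernel-expansion score. The RBF kernel, regularization constant, and two-cluster toy data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_kelm(X, y, n_classes, gamma=1.0, C=10.0):
    """Kernelized ELM classifier: solve (K + I/C) alpha = Y (one-hot),
    then predict by the largest kernel-expansion score."""
    def rbf(A, B):
        D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * D2)
    alpha = np.linalg.solve(rbf(X, X) + np.eye(len(X)) / C,
                            np.eye(n_classes)[y])          # one solve, no iterations
    return lambda Xn: (rbf(Xn, X) @ alpha).argmax(axis=1)

# Two separable "taste" clusters in a 2-D feature space.
X = np.array([[0.0, 0.0], [0.2, 0.0], [3.0, 3.0], [3.2, 3.0]])
y = np.array([0, 0, 1, 1])
predict = train_kelm(X, y, n_classes=2)
assert (predict(X) == y).all()
```

In the paper's pipeline, the inputs would first be projected by LDPP before this classifier is applied.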

18.
Sleep ; 40(10)2017 10 01.
Article in English | MEDLINE | ID: mdl-29029305

ABSTRACT

Study Objectives: Automated sleep staging has been previously limited by a combination of clinical and physiological heterogeneity. Both factors are in principle addressable with large data sets that enable robust calibration. However, the impact of sample size remains uncertain. The objectives are to investigate the extent to which machine learning methods can approximate the performance of human scorers when supplied with sufficient training cases and to investigate how staging performance depends on the number of training patients, contextual information, model complexity, and imbalance between sleep stage proportions. Methods: A total of 102 features were extracted from six electroencephalography (EEG) channels in routine polysomnography. Two thousand nights were partitioned into equal (n = 1000) training and testing sets for validation. We used epoch-by-epoch Cohen's kappa statistics to measure the agreement between classifier output and human scorer according to American Academy of Sleep Medicine scoring criteria. Results: Epoch-by-epoch Cohen's kappa improved with increasing training EEG recordings until saturation occurred (n = ~300). The kappa value was further improved by accounting for contextual (temporal) information, increasing model complexity, and adjusting the model training procedure to account for the imbalance of stage proportions. The final kappa on the testing set was 0.68. Testing on more EEG recordings leads to kappa estimates with lower variance. Conclusion: Training with a large data set enables automated sleep staging that compares favorably with human scorers. Because testing was performed on a large and heterogeneous data set, the performance estimate has low variance and is likely to generalize broadly.
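The epoch-by-epoch agreement statistic used throughout this study can be computed directly from the two stage sequences; the toy stage labels below are illustrative, not study data.

```python
import numpy as np

def cohens_kappa(a, b, n_classes):
    """Epoch-by-epoch Cohen's kappa: observed agreement corrected for the
    agreement expected by chance from each scorer's stage proportions."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                                        # observed agreement
    pe = sum((a == c).mean() * (b == c).mean() for c in range(n_classes))
    return (po - pe) / (1 - pe)

scorer = [0, 0, 1, 2, 2, 2, 3, 3]    # human-scored stages per epoch
model  = [0, 0, 1, 2, 2, 1, 3, 3]    # classifier output, one disagreement
kappa = cohens_kappa(scorer, model, n_classes=4)
assert abs(kappa - 0.8333) < 1e-3    # 7/8 raw agreement, chance-corrected
```

The chance-correction term `pe` is exactly why kappa is sensitive to the stage-proportion imbalance the study adjusts for during training.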


Asunto(s)
Electroencefalografía/métodos , Procesamiento Automatizado de Datos/métodos , Polisomnografía/métodos , Fases del Sueño/fisiología , Adulto , Femenino , Humanos , Aprendizaje Automático , Masculino , Persona de Mediana Edad , Variaciones Dependientes del Observador , Reproducibilidad de los Resultados , Sueño/fisiología , Síndromes de la Apnea del Sueño/fisiopatología
19.
IEEE Trans Cybern ; 47(1): 232-243, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26863686

ABSTRACT

Numerous state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage process: distortion description followed by distortion-effects pooling. In the first stage, the distortion descriptors or measurements are expected to be effective representatives of human visual variations, while the second stage should well express the relationship between the quality descriptors and perceptual visual quality. However, most existing quality descriptors (e.g., luminance, contrast, and gradient) do not seem to be consistent with human perception, and effects pooling is often done in ad hoc ways. In this paper, we propose a novel full-reference IQA metric. It applies non-negative matrix factorization (NMF) to measure image degradations by making use of the parts-based representation of NMF. On the other hand, a relatively new machine learning technique, the extreme learning machine (ELM), is employed to address the limitations of the existing pooling techniques. Compared with neural networks and support vector regression, ELM can achieve higher learning accuracy with faster learning speed. Extensive experimental results demonstrate that the proposed metric has better performance and lower computational complexity in comparison with the relevant state-of-the-art approaches.
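The NMF building block can be sketched with the classic Lee-Seung multiplicative updates, which keep both factors non-negative and thereby yield the parts-based representation the abstract refers to. The rank, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with non-negative
    factors (Frobenius-norm objective)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W >= 0
    return W, H

rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 30))  # exactly rank-3, non-negative
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
assert err < 0.1                                # close reconstruction
```

In the metric itself, degradations would be measured by comparing such factorizations of the reference and distorted images before ELM pooling.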

20.
Neural Comput ; 28(10): 2181-212, 2016 10.
Article in English | MEDLINE | ID: mdl-27557107

ABSTRACT

Polychronous neuronal group (PNG), a type of cell assembly, is one of the putative mechanisms for neural information representation. According to the reader-centric definition, some readout neurons can become selective to the information represented by polychronous neuronal groups under ongoing activity. Here, in computational models, we show that the frequently activated polychronous neuronal groups can be learned by readout neurons with joint weight-delay spike-timing-dependent plasticity. The identity of neurons in the group and their expected spike timing at millisecond scale can be recovered from the incoming weights and delays of the readout neurons. The detection performance can be further improved by two layers of readout neurons. In this way, the detection of polychronous neuronal groups becomes an intrinsic part of the network, and the readout neurons become differentiated members in the group to indicate whether subsets of the group have been activated according to their spike timing. The readout spikes representing this information can be used to analyze how PNGs interact with each other or propagate to downstream networks for higher-level processing.
