Results 1 - 9 of 9
1.
J Chem Inf Model ; 64(7): 2331-2344, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-37642660

ABSTRACT

Federated multipartner machine learning has been touted as an appealing and efficient method to increase the effective training data volume, and thereby the predictivity of models, particularly when the generation of training data is resource-intensive. In the landmark MELLODDY project, each of ten pharmaceutical companies realized aggregated improvements on its own classification or regression models through federated learning. To this end, they leveraged a novel implementation extending multitask learning across partners, on a platform audited for privacy and security. The experiments involved an unprecedented cross-pharma data set of more than 2.6 billion confidential experimental activity data points, documenting more than 21 million physical small molecules and more than 40 thousand assays in on-target and secondary pharmacodynamics and pharmacokinetics. Appropriate complementary metrics were developed to evaluate predictive performance in the federated setting. In addition to predictive performance increases in labeled space, the results point toward an extended applicability domain in federated learning. Increases in collective training data volume, including auxiliary data from single-concentration high-throughput and imaging assays, continued to boost predictive performance, albeit with diminishing returns. Markedly higher improvements were observed for the pharmacokinetics and safety-panel assay-based task subsets.
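As a hedged illustration of the federated setup this abstract describes (not the MELLODDY platform itself), the toy sketch below lets two partners fit a shared logistic model on private data: only weights, never data points, reach the averaging server. All features, labels, and hyperparameters are invented for illustration.

```python
import math

def local_update(w, data, lr=0.1, epochs=20):
    """One partner fits a logistic model on its own private data."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))      # predicted probability
            for i in range(len(w)):
                w[i] -= lr * (p - y) * x[i]     # logistic-loss gradient step
    return w

def federated_round(w_global, partners):
    """Server step: average locally trained weights; raw data never moves."""
    local_ws = [local_update(w_global, data) for data in partners]
    return [sum(ws) / len(local_ws) for ws in zip(*local_ws)]

# Two partners holding private labels for the same binary task (all values invented).
partner_a = [([1.0, 0.2], 1), ([0.1, 1.0], 0)]
partner_b = [([0.9, 0.1], 1), ([0.0, 0.8], 0)]

w = [0.0, 0.0]
for _ in range(5):      # five federated rounds
    w = federated_round(w, [partner_a, partner_b])
```

The actual project additionally shares a sparse multitask trunk with per-partner private heads; the sketch keeps only the weight-averaging idea.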


Subject(s)
Benchmarking , Quantitative Structure-Activity Relationship , Biological Assay , Machine Learning
2.
NPJ Digit Med ; 3: 119, 2020.
Article in English | MEDLINE | ID: mdl-33015372

ABSTRACT

Data-driven machine learning (ML) has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how federated learning (FL) may provide a solution for the future of digital health and highlights the challenges and considerations that need to be addressed.
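A minimal sketch of the aggregation step FL relies on, assuming the standard FedAvg rule of weighting each client by its local sample count; the hospitals and parameter values below are hypothetical.

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: each hospital's parameters are weighted by its
    number of local samples, and only parameters, never records, move."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Hypothetical: hospital A holds 3000 records, hospital B holds 1000.
global_model = fedavg([[1.0, 0.0], [0.0, 1.0]], [3000, 1000])
```

The larger silo pulls the global model proportionally harder, which is why contribution weighting and fairness are among the considerations the paper raises.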

3.
IEEE Trans Neural Syst Rehabil Eng ; 26(4): 758-769, 2018 04.
Article in English | MEDLINE | ID: mdl-29641380

ABSTRACT

Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s window of signal, based on visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance, measured with balanced accuracy, is to use 6 EEG channels with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals to deliver state-of-the-art classification performance at a small computational cost.
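The abstract scores models with balanced accuracy, which matters because sleep stages are heavily imbalanced (N3 and REM epochs are far rarer than N2). A minimal sketch of that metric, with invented epoch labels:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: a classifier that always predicts the
    majority stage scores high plain accuracy but low balanced accuracy."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        counts[t] += 1
        if t == p:
            hits[t] += 1
    recalls = [hits[c] / counts[c] for c in counts]
    return sum(recalls) / len(recalls)

# Invented epochs: predicting "W" everywhere misses the lone N3 epoch.
score = balanced_accuracy(["W", "W", "W", "N3"], ["W", "W", "W", "W"])
```

Here plain accuracy would be 0.75, but balanced accuracy is 0.5, exposing the missed minority class.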


Subject(s)
Computer Systems , Deep Learning , Polysomnography/classification , Sleep Stages , Algorithms , Decision Trees , Electroencephalography/classification , Electroencephalography/statistics & numerical data , Electromyography/classification , Electromyography/statistics & numerical data , Electrooculography/classification , Electrooculography/statistics & numerical data , Expert Systems , Humans , Multivariate Analysis , Polysomnography/statistics & numerical data , Signal Processing, Computer-Assisted
4.
Front Hum Neurosci ; 12: 88, 2018.
Article in English | MEDLINE | ID: mdl-29568267

ABSTRACT

Recent research has shown that auditory closed-loop stimulation can enhance sleep slow oscillations (SO) to improve N3 sleep quality and cognition. Previous studies have been conducted in lab environments. The present study aimed to validate and assess the performance of a novel ambulatory wireless dry-EEG device (WDD) for auditory closed-loop stimulation of SO during N3 sleep at home. The ability of the WDD to detect N3 sleep automatically and to deliver auditory closed-loop stimulation on SO was tested on 20 young healthy subjects who slept with both the WDD and a miniaturized polysomnograph (part 1), in both stimulated and sham nights within a double-blind, randomized, crossover design. The effects of auditory closed-loop stimulation on delta power were assessed after 1 and 10 nights of stimulation in an observational pilot study in the home environment including 90 middle-aged subjects (part 2). The first part, aimed at assessing the quality of the WDD compared with a polysomnograph, showed that the sensitivity and specificity for automatically detecting N3 sleep in real time were 0.70 and 0.90, respectively. The stimulation accuracy of the SO ascending-phase targeting was 45 ± 52°. The second part of the study, conducted in the home environment, showed that the stimulation protocol induced a 43.9% increase in delta power in the 4 s window following the first stimulation (including evoked potentials and the SO entrainment effect). The increase in SO response to auditory stimulation remained at the same level after 10 consecutive nights. The WDD thus performs well at automatically detecting N3 sleep in real time and at accurately delivering auditory closed-loop stimulation on SO. The stimulations increased SO amplitude during N3 sleep without any adaptation effect over 10 consecutive nights.
This tool offers new perspectives for identifying novel sleep EEG biomarkers in longitudinal studies and could support broad studies on the effects of auditory stimulation during sleep.
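The reported 0.70 sensitivity and 0.90 specificity for real-time N3 detection follow the standard definitions sketched below; the epoch labels are invented, chosen only to reproduce that operating point.

```python
def sensitivity_specificity(truth, detected):
    """Epoch-level N3 detection scored against polysomnography ground truth:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    tp = sum(1 for t, d in zip(truth, detected) if t and d)
    fn = sum(1 for t, d in zip(truth, detected) if t and not d)
    tn = sum(1 for t, d in zip(truth, detected) if not t and not d)
    fp = sum(1 for t, d in zip(truth, detected) if not t and d)
    return tp / (tp + fn), tn / (tn + fp)

# Invented labels: 10 true N3 epochs (7 detected), 10 non-N3 epochs (1 false alarm).
truth    = [1] * 10 + [0] * 10
detected = [1] * 7 + [0] * 3 + [1] * 1 + [0] * 9
sens, spec = sensitivity_specificity(truth, detected)
```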

5.
Neural Netw ; 76: 39-45, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26849424

ABSTRACT

Echo State Networks (ESNs) are efficient time-series predictors whose performance depends strongly on the spectral radius of the reservoir connectivity matrix. Building on recent results from the mean field theory of driven random recurrent neural networks, which enable the computation of the largest Lyapunov exponent of an ESN, we develop a cheap algorithm to establish a local and operational version of the Echo State Property.
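For context, the conventional practice this paper refines is to rescale the reservoir matrix to a target spectral radius. A dependency-free sketch of that step using norm-based power iteration (not the paper's Lyapunov-exponent algorithm; reservoir size and target radius are arbitrary choices):

```python
import math
import random

def spectral_radius(mat, iters=200):
    """Norm-based power iteration: the per-step growth factor of repeated
    applications of `mat` approaches the spectral radius (it can oscillate
    around it when the dominant eigenvalue is complex, as happens for some
    random reservoirs, so treat the result as an estimate)."""
    v = [1.0] * len(mat)
    growth = 0.0
    for _ in range(iters):
        v = [sum(m_ij * vj for m_ij, vj in zip(row, v)) for row in mat]
        growth = math.sqrt(sum(x * x for x in v))
        v = [x / growth for x in v]
    return growth

# A random reservoir, rescaled to a target spectral radius of 0.9.
random.seed(1)
n = 50
W = [[random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)] for _ in range(n)]
rho = spectral_radius(W)
W_scaled = [[w_ij * (0.9 / rho) for w_ij in row] for row in W]
```

The paper's point is precisely that this global spectral condition is only a proxy; its local, input-driven criterion via the largest Lyapunov exponent is sharper.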


Subject(s)
Neural Networks, Computer , Algorithms , Humans
6.
Neural Netw ; 56: 10-21, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24815743

ABSTRACT

A method is provided for designing and training noise-driven recurrent neural networks as models of stochastic processes. The method unifies and generalizes two known separate modeling approaches, Echo State Networks (ESN) and Linear Inverse Modeling (LIM), under the common principle of relative entropy minimization. The power of the new method is demonstrated on a stochastic approximation of the El Niño phenomenon studied in climate research.
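To make the LIM half of the pairing concrete, here is its simplest scalar form: a least-squares fit of a linear propagator to a noise-driven series. The ESN half and the relative-entropy unification are beyond a sketch; the data below are synthetic with a known coefficient, invented for illustration.

```python
import random

def lim_fit(series):
    """Scalar Linear Inverse Model: least-squares fit of x_{t+1} ~ a * x_t,
    i.e. a_hat = sum(x_{t+1} * x_t) / sum(x_t ** 2)."""
    num = sum(series[t + 1] * series[t] for t in range(len(series) - 1))
    den = sum(series[t] ** 2 for t in range(len(series) - 1))
    return num / den

# Synthetic noise-driven series generated with known coefficient a = 0.9.
random.seed(0)
x, series = 0.0, []
for _ in range(2000):
    x = 0.9 * x + random.gauss(0.0, 0.1)
    series.append(x)

a_hat = lim_fit(series)
```

The fitted propagator recovers the generating coefficient from the noisy series, which is the LIM idea the paper generalizes to noise-driven recurrent networks.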


Subject(s)
Entropy , Neural Networks, Computer , Nonlinear Dynamics , Stochastic Processes , Algorithms , Computer Simulation , El Nino-Southern Oscillation , Linear Models , Time
7.
PLoS One ; 8(11): e78917, 2013.
Article in English | MEDLINE | ID: mdl-24236067

ABSTRACT

Deriving tractable reduced equations of biological neural networks that capture the macroscopic dynamics of sub-populations of neurons has been a longstanding problem in computational neuroscience. In this paper, we propose a reduction of large-scale multi-population stochastic networks based on mean-field theory. We derive, for a wide class of spiking neuron models, a system of differential equations of the usual Wilson-Cowan type describing the macroscopic activity of populations, under the assumption that synaptic integration is linear with random coefficients. Our reduction involves one unknown function, the effective non-linearity of the network of populations, which can be determined analytically in simple cases and computed numerically in general. This function depends on the underlying properties of the cells, in particular the noise level. Appropriate parameters and functions involved in the reduction are given for different neuron models: the McKean, FitzHugh-Nagumo, and Hodgkin-Huxley models. Simulations of the reduced model show precise agreement with the macroscopic dynamics of the networks for the first two models.
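A sketch of the kind of reduced system the paper targets: Euler integration of a two-population Wilson-Cowan model. A plain sigmoid stands in for the paper's numerically computed effective nonlinearity, and the coupling weights are generic textbook-style values, not taken from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(steps=2000, dt=0.1, tau=1.0,
                 w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0,
                 I_e=1.0, I_i=0.0):
    """Euler integration of a two-population (excitatory/inhibitory)
    Wilson-Cowan system; the sigmoid plays the role of the effective
    nonlinearity that the reduction derives from the spiking model."""
    E, I = 0.1, 0.1
    traj = []
    for _ in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + I_e)) / tau
        dI = (-I + sigmoid(w_ie * E - w_ii * I + I_i)) / tau
        E, I = E + dt * dE, I + dt * dI
        traj.append((E, I))
    return traj

traj = wilson_cowan()
```

With this relaxation form, each Euler step is a convex combination of the current rate and a sigmoid output, so population activities stay in (0, 1).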


Subject(s)
Computer Simulation , Models, Neurological , Nerve Net/physiology , Synapses/physiology , Action Potentials , Algorithms , Linear Models , Nerve Net/cytology , Neurons/physiology , Signal-To-Noise Ratio
8.
Neural Comput ; 25(11): 2815-32, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24001342

ABSTRACT

Identifying, formalizing, and combining biological mechanisms that implement known brain functions, such as prediction, is a main aspect of research in theoretical neuroscience. In this letter, the mechanisms of spike-timing-dependent plasticity and homeostatic plasticity, combined in an original mathematical formalism, are shown to shape recurrent neural networks into predictors. Following a rigorous mathematical treatment, we prove that they implement the online gradient descent of a distance between the network activity and its stimuli. The convergence to an equilibrium, where the network can spontaneously reproduce or predict its stimuli, does not suffer from bifurcation issues usually encountered in learning in recurrent neural networks.
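The paper's result concerns spiking networks with combined STDP and homeostatic plasticity; the sketch below shows only the limit object the proof identifies, online gradient descent of a prediction error, in its most stripped-down scalar linear form (all data and rates invented).

```python
def train_predictor(stimulus, n_epochs=200, lr=0.05):
    """Online gradient descent on the squared prediction error
    (x_{t+1} - w * x_t)^2, a scalar analogue of descending the distance
    between network activity and its stimuli."""
    w = 0.0
    for _ in range(n_epochs):
        for t in range(len(stimulus) - 1):
            err = w * stimulus[t] - stimulus[t + 1]   # prediction error
            w -= lr * err * stimulus[t]               # gradient step
    return w

# Stimulus generated by the rule x_{t+1} = 0.8 * x_t, so the ideal
# predictor weight is 0.8.
stim = [0.8 ** t for t in range(10)]
w = train_predictor(stim)
```

As in the paper's equilibrium, the trained weight lets the system reproduce its stimulus: convergence here is monotone, mirroring the claim that the descent avoids the usual bifurcation issues.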


Subject(s)
Brain/physiology , Homeostasis/physiology , Neural Networks, Computer , Neuronal Plasticity/physiology , Humans
9.
Neural Comput ; 24(9): 2346-83, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22594830

ABSTRACT

We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow-fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a Euclidean distance matrix in a high-dimensional space, with the neurons labeled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the nonconvolutional part of the connectivity, which is minimized in the process. Third, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps, which we motivate with an energy efficiency argument based on wire length minimization. Finally, we show how this approach leads to the emergence of ocular dominance and orientation columns in primary visual cortex via the self-organization of recurrent rather than feedforward connections. In addition, we establish that the nonconvolutional (or long-range) connectivity is patchy and, in the case of orientation learning, co-aligned.
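Only the first step of this program, connectivity drifting toward the input correlation structure under slow Hebbian learning, is easy to sketch; the multidimensional scaling projection and map formation are not attempted here. The input patterns below are invented.

```python
def hebbian_connectivity(patterns, lr=0.01, steps=2000):
    """Slow Hebbian averaging: each weight W[i][j] relaxes toward the input
    correlation <x_i * x_j>, a global object built from local updates only."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for s in range(steps):
        x = patterns[s % len(patterns)]
        for i in range(n):
            for j in range(n):
                W[i][j] += lr * (x[i] * x[j] - W[i][j])
    return W

# Invented inputs: units 0 and 1 always agree; unit 2 is uncorrelated with them.
pats = [[1.0, 1.0, 1.0], [1.0, 1.0, -1.0]]
W = hebbian_connectivity(pats)
```

After learning, the weight between the correlated pair approaches 1 while weights to the uncorrelated unit hover near 0, the correlation structure the paper's averaged system converges to.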


Subject(s)
Learning/physiology , Models, Neurological , Neural Networks, Computer , Neural Pathways/physiology , Animals , Cerebral Cortex/cytology , Computer Simulation , Humans , Membrane Potentials , Neuronal Plasticity , Orientation , Synapses