Results 1 - 4 of 4
1.
bioRxiv ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39026817

ABSTRACT

How do we make good decisions in uncertain environments? In psychology and neuroscience, the classic answer is that we calculate the value of each option and then compare the values to choose the most rewarding, modulo some exploratory noise. An ethologist, conversely, would argue that we commit to one option until its value drops below a threshold, at which point we start exploring other options. In order to determine which view better describes human decision-making, we developed a novel, foraging-inspired sequential decision-making model and used it to ask whether humans compare to threshold ("Forage") or compare alternatives ("Reinforcement-Learn" [RL]). We found that the foraging model was a better fit for participant behavior, better predicted the participants' tendency to repeat choices, and predicted the existence of held-out participants with a pattern of choice that was almost impossible under RL. Together, these results suggest that humans use foraging computations, rather than RL, even in classic reinforcement learning tasks.
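The two choice rules being compared can be sketched as follows. This is a minimal illustration, not the authors' fitted model: the threshold value, softmax temperature, and function names are assumptions for demonstration.

```python
import math
import random

def forage_choice(values, current, threshold=0.5):
    """Threshold rule ("Forage"): stay with the current option until its
    estimated value drops below a threshold, then switch to an alternative."""
    if values[current] >= threshold:
        return current
    alternatives = [i for i in range(len(values)) if i != current]
    return random.choice(alternatives)

def rl_choice(values, beta=3.0):
    """Comparison rule ("RL"): softmax over all option values, so the most
    valuable option is chosen most often, modulo exploratory noise."""
    weights = [math.exp(beta * v) for v in values]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1
```

The key behavioral difference: the foraging rule ignores the alternatives' values while the current option is above threshold, which produces long runs of repeated choices; the RL rule compares all options on every trial.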

2.
J Neural Eng ; 21(4)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-38981500

ABSTRACT

Objective. To evaluate inter- and intra-rater reliability in the identification of bad channels among neurologists, EEG technologists, and naïve research personnel, and to compare their performance with the automated bad channel detection (ABCD) algorithm. Approach. Six neurologists, ten EEG technologists, and six naïve research personnel (22 raters in total) were asked to rate 1440 real intracranial EEG channels as good or bad. Intra- and inter-rater kappa statistics were calculated for each group. We then compared each group to the ABCD algorithm, which uses spectral and temporal domain features to classify channels as good or bad. Main results. Analysis of channel ratings revealed variable intra-rater reliability within each group, with no significant differences across groups. Inter-rater reliability was moderate among neurologists and EEG technologists but minimal among naïve participants. Neurologists demonstrated slightly higher consistency in ratings than EEG technologists. Both groups occasionally misclassified flat channels as good, and participants generally focused on low-frequency content for their assessments. The ABCD algorithm, in contrast, relied more on high-frequency content. A logistic regression model showed a linear relationship between the algorithm's ratings and user responses for predominantly good channels, but less so for channels rated as bad. Sensitivity and specificity analyses further highlighted differences in rating patterns among the groups, with neurologists showing higher sensitivity and naïve personnel higher specificity. Significance. Our study reveals bias in human assessments of intracranial electroencephalography (iEEG) data quality and the tendency of even experienced professionals to overlook certain bad channels, highlighting the need for standardized, unbiased methods. The ABCD algorithm, which outperformed human raters, suggests the potential of automated solutions for more reliable, bias-free iEEG interpretation and seizure characterization.


Subject(s)
Algorithms , Humans , Reproducibility of Results , Observer Variation , Electrocorticography/methods , Electrocorticography/standards , Electroencephalography/methods , Electroencephalography/standards , Neurologists/statistics & numerical data , Neurologists/standards
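A bad-channel screen combining a temporal feature (flatness, which human raters missed above) and a spectral feature (high-frequency power, which the algorithm relied on) might look like the sketch below. The specific features, thresholds, and function name are illustrative assumptions, not the published ABCD algorithm.

```python
import numpy as np

def flag_bad_channels(data, fs, flat_tol=1e-8, hf_z=3.0):
    """Toy bad-channel screen in the spirit of the ABCD algorithm described
    above; feature choices here are assumptions for illustration.

    data: (n_channels, n_samples) array; fs: sampling rate in Hz.
    Returns a boolean array, True where a channel looks bad."""
    n_channels, n_samples = data.shape
    bad = np.zeros(n_channels, dtype=bool)

    # Temporal feature: a (near-)flat channel carries no signal.
    bad |= data.std(axis=1) < flat_tol

    # Spectral feature: z-score each channel's power above 80 Hz;
    # extreme high-frequency power often indicates a noisy contact.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    power = np.abs(np.fft.rfft(data, axis=1)) ** 2
    hf_power = power[:, freqs > 80.0].sum(axis=1)
    z = (hf_power - hf_power.mean()) / (hf_power.std() + 1e-12)
    bad |= z > hf_z
    return bad
```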
3.
bioRxiv ; 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-38895240

ABSTRACT

Decision-making in uncertain environments often leads to varied outcomes. Understanding how individuals interpret the causes of unexpected feedback is crucial for adaptive behavior and mental well-being. Uncertainty can be broadly decomposed into two components: volatility, how quickly the underlying conditions of the environment change, and stochasticity, the degree to which outcomes are affected by random chance, or "luck". Distinguishing these components allows individuals to analyze their environment more effectively and to select strategies (explore or exploit) for future decisions. This study investigates how anxiety and apathy, two prevalent affective states, influence perceptions of uncertainty and exploratory behavior. Participants (N = 1001) completed a restless three-armed bandit task that was analyzed using latent state models. Anxious individuals perceived uncertainty as more volatile, leading to increased exploration and learning rates, especially after reward omission. Conversely, apathetic individuals perceived uncertainty as more stochastic, resulting in decreased exploration and learning rates. The perceived volatility-to-stochasticity ratio mediated the anxiety-exploration relationship after adverse outcomes. Dimensionality reduction showed exploration and uncertainty estimation to be distinct but related latent factors shaping a manifold of adaptive behavior that is modulated by anxiety and apathy. These findings reveal distinct computational mechanisms by which anxiety and apathy influence decision-making, providing a framework for understanding cognitive and affective processes in neuropsychiatric disorders.
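The volatility/stochasticity distinction can be made concrete with a generative sketch of a restless bandit: volatility sets how fast each arm's latent mean drifts, and stochasticity sets the outcome noise around that mean. Parameter names and values below are illustrative assumptions, not the task's actual settings.

```python
import numpy as np

def simulate_restless_bandit(n_trials=200, volatility=0.05,
                             stochasticity=0.2, seed=0):
    """Generative sketch of a restless three-armed bandit.
    volatility: random-walk step size of each arm's latent mean.
    stochasticity: trial-to-trial outcome noise around that mean."""
    rng = np.random.default_rng(seed)
    means = np.zeros((n_trials, 3))
    rewards = np.zeros((n_trials, 3))
    mu = rng.uniform(-1, 1, size=3)  # initial latent value per arm
    for t in range(n_trials):
        mu = mu + volatility * rng.standard_normal(3)      # drift (volatility)
        means[t] = mu
        rewards[t] = mu + stochasticity * rng.standard_normal(3)  # noise (stochasticity)
    return means, rewards
```

Under this generative view, an observer who attributes surprising feedback to drift (volatility) should raise learning rates and explore, while one who attributes it to noise (stochasticity) should discount the surprise, matching the anxious and apathetic patterns described above.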

4.
Neuroimage ; 290: 120557, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38423264

ABSTRACT

BACKGROUND: Time series analysis is critical for understanding brain signals and their relationship to behavior and cognition. Cluster-based permutation tests (CBPT) are commonly used to analyze a variety of electrophysiological signals including EEG, MEG, ECoG, and sEEG data without a priori assumptions about specific temporal effects. However, two major limitations of CBPT include the inability to directly analyze experiments with multiple fixed effects and the inability to account for random effects (e.g. variability across subjects). Here, we propose a flexible multi-step hypothesis testing strategy using CBPT with Linear Mixed Effects Models (LMEs) and Generalized Linear Mixed Effects Models (GLMEs) that can be applied to a wide range of experimental designs and data types. METHODS: We first evaluate the statistical robustness of LMEs and GLMEs using simulated data distributions. Second, we apply a multi-step hypothesis testing strategy to analyze ERPs and broadband power signals extracted from human ECoG recordings collected during a simple image viewing experiment with image category and novelty as fixed effects. Third, we assess the statistical power differences between analyzing signals with CBPT using LMEs compared to CBPT using separate t-tests run on each fixed effect through simulations that emulate broadband power signals. Finally, we apply CBPT using GLMEs to high-gamma burst data to demonstrate the extension of the proposed method to the analysis of nonlinear data. RESULTS: First, we found that LMEs and GLMEs are robust statistical models. In simple simulations, LMEs produced highly congruent results with other appropriately applied linear statistical models, but LMEs outperformed many linear statistical models in the analysis of "suboptimal" data and maintained power better than analyzing individual fixed effects with separate t-tests. GLMEs also performed similarly to other nonlinear statistical models. Second, in real-world human ECoG data, LMEs performed at least as well as separate t-tests when applied to predefined time windows or when used in conjunction with CBPT. Additionally, fixed effects time courses extracted with CBPT using LMEs from group-level models of pseudo-populations replicated latency effects found in individual category-selective channels. Third, analysis of simulated broadband power signals demonstrated that CBPT using LMEs was superior to CBPT using separate t-tests in identifying time windows with significant fixed effects, especially for small effect sizes. Lastly, the analysis of high-gamma burst data using CBPT with GLMEs produced results consistent with CBPT using LMEs applied to broadband power data. CONCLUSIONS: We propose a general approach for statistical analysis of electrophysiological data using CBPT in conjunction with LMEs and GLMEs. We demonstrate that this method is robust for experiments with multiple fixed effects and applicable to the analysis of linear and nonlinear data. Our methodology maximizes the statistical power available in a dataset across multiple experimental variables while accounting for hierarchical random effects and controlling the family-wise error rate (FWER) across fixed effects. This approach substantially improves power, leading to better reproducibility. Additionally, CBPT using LMEs and GLMEs can be used to analyze individual channels or pseudo-population data for the comparison of functional or anatomical groups of data.


Subject(s)
Brain , Research Design , Humans , Reproducibility of Results , Brain/physiology , Models, Statistical , Linear Models
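The core CBPT machinery can be sketched in a few steps: compute a per-timepoint statistic, collect contiguous suprathreshold runs into clusters, and compare the largest observed cluster mass against a permutation null. The sketch below uses simple two-sample t-like statistics in place of the LME/GLME fits proposed above; that substitution, plus the threshold and function names, are illustrative assumptions.

```python
import numpy as np

def cbpt(data_a, data_b, n_perm=1000, t_thresh=2.0, seed=0):
    """Minimal cluster-based permutation test (CBPT) sketch.
    data_a, data_b: (n_subjects, n_timepoints) arrays for two conditions.
    Returns (largest observed cluster mass, its permutation p-value)."""
    rng = np.random.default_rng(seed)

    def tstat(a, b):
        # Per-timepoint Welch-style t statistic (stand-in for an LME fit).
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a)
                     + b.var(axis=0, ddof=1) / len(b))
        return (a.mean(axis=0) - b.mean(axis=0)) / (se + 1e-12)

    def max_cluster_mass(t):
        # Sum |t| over contiguous runs above threshold; keep the largest run.
        best = run = 0.0
        for v in np.abs(t):
            run = run + v if v > t_thresh else 0.0
            best = max(best, run)
        return best

    observed = max_cluster_mass(tstat(data_a, data_b))
    pooled = np.vstack([data_a, data_b])
    na = len(data_a)
    null = np.empty(n_perm)
    for i in range(n_perm):  # shuffle condition labels to build the null
        idx = rng.permutation(len(pooled))
        null[i] = max_cluster_mass(tstat(pooled[idx[:na]], pooled[idx[na:]]))
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

Because only the maximum cluster mass per permutation enters the null distribution, the test controls FWER across timepoints; the paper's contribution is replacing the per-timepoint t-test with mixed-model statistics so multiple fixed and random effects can be handled in the same framework.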