Results 1 - 5 of 5
1.
Proc Natl Acad Sci U S A ; 121(39): e2302098121, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39302968

ABSTRACT

A standard practice in statistical hypothesis testing is to mention the P-value alongside the accept/reject decision. We show the advantages of mentioning an e-value instead. With P-values, it is not clear how to use an extreme observation (e.g., P ≪ α) for getting better frequentist decisions. With e-values it is straightforward, since they provide Type-I risk control in a generalized Neyman-Pearson setting with the decision task (a general loss function) determined post hoc, after observation of the data, thereby providing a handle on "roving α's." When Type-II risks are taken into consideration, the only admissible decision rules in the post hoc setting turn out to be e-value-based. Similarly, if the loss incurred when specifying a faulty confidence interval is not fixed in advance, standard confidence intervals and distributions may fail, whereas e-confidence sets and e-posteriors still provide valid risk guarantees. Sufficiently powerful e-values have by now been developed for a range of classical testing problems. We discuss the main challenges for wider development and deployment.
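
As an illustration of the Type-I risk control mentioned in this abstract, the sketch below is a minimal simulation of our own (not code from the article): any nonnegative statistic E with expectation at most 1 under the null satisfies P(E ≥ 1/α) ≤ α by Markov's inequality, and this bound holds for every α at once, which is the basis of the post hoc ("roving α") guarantee the abstract refers to.

# Minimal simulation of e-value Type-I risk control (illustrative sketch,
# not code from the article). Under H0: X ~ N(0, 1), the likelihood ratio
# E = q(X) / p0(X) against any fixed alternative q is an e-variable
# (its expectation under H0 equals 1), so P(E >= 1/alpha) <= alpha by Markov.
import random, math

def e_value(x, mu1=1.0):
    # Likelihood ratio of N(mu1, 1) against N(0, 1) at observation x.
    return math.exp(mu1 * x - 0.5 * mu1 ** 2)

random.seed(0)
n_sim = 100_000
es = [e_value(random.gauss(0.0, 1.0)) for _ in range(n_sim)]  # data under H0

for alpha in (0.05, 0.01, 0.001):
    rate = sum(e >= 1.0 / alpha for e in es) / n_sim
    print(f"alpha={alpha}: rejection rate under H0 = {rate:.4f} (bound {alpha})")

The empirical rejection rates stay below the corresponding α, as the inequality predicts.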

2.
Psychiatry Res ; 326: 115328, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37429173

ABSTRACT

INTRODUCTION: We developed and tested a Bayesian network (BN) model to predict remission of depression after electroconvulsive therapy (ECT), with non-response as a secondary outcome. METHODS: We performed a systematic literature search for clinically available predictors. We combined these predictors with variables from a dataset of clinical ECT trajectories (collected at the University Medical Center Utrecht) to create priors and train the BN. Temporal validation was performed in an independent sample. RESULTS: The systematic literature search yielded three meta-analyses, which provided prior knowledge on outcome predictors. The clinical dataset consisted of 248 treatment trajectories in the training set and 44 trajectories in the test set from the same medical center. The AUC for the primary outcome, remission, estimated on the independent validation set was 0.686 (95% CI 0.513-0.859); AUC values of 0.505-0.763 were observed in 5-fold cross-validation within the training set. After temporal validation in the independent sample, accuracy was 0.73 (balanced accuracy 0.67), sensitivity 0.55, and specificity 0.79. Prior information from the literature marginally reduced confidence interval width. DISCUSSION: A BN model combining prior knowledge and clinical data can predict remission of depression after ECT with reasonable performance. This approach can be used to make outcome predictions in psychiatry and offers a methodological framework for weighing additional information, such as patient characteristics, symptoms, and biomarkers. In time, it may be used to improve shared decision-making in clinical practice.
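
As a toy illustration of the kind of prediction described in this abstract, the sketch below applies Bayes' rule to a single hypothetical predictor of ECT remission. All probabilities are invented placeholders, not figures from the study; the published model is a multi-variable Bayesian network with literature-informed priors.

# Toy sketch of a single-predictor Bayesian update for remission after ECT.
# All probabilities below are hypothetical placeholders, not values from the
# study; the published model is a multi-variable Bayesian network.
def posterior_remission(p_remission, p_pred_given_rem, p_pred_given_no_rem,
                        predictor_present):
    """Return P(remission | predictor status) via Bayes' rule."""
    if predictor_present:
        like_rem, like_no = p_pred_given_rem, p_pred_given_no_rem
    else:
        like_rem, like_no = 1 - p_pred_given_rem, 1 - p_pred_given_no_rem
    num = like_rem * p_remission
    return num / (num + like_no * (1 - p_remission))

# Hypothetical numbers: base remission rate 0.5; a favourable predictor observed.
print(posterior_remission(0.5, 0.7, 0.4, predictor_present=True))  # ~0.636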


Subject(s)
Electroconvulsive Therapy , Humans , Depression/therapy , Bayes Theorem , Prognosis , Biomarkers , Treatment Outcome
3.
Philos Trans A Math Phys Eng Sci ; 381(2247): 20220146, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970821

ABSTRACT

We develop a representation of a decision maker's uncertainty based on e-variables. Like the Bayesian posterior, this e-posterior allows for making predictions against arbitrary loss functions that may not be specified ex ante. Unlike the Bayesian posterior, it provides risk bounds that have frequentist validity irrespective of prior adequacy: if the e-collection (which plays a role analogous to the Bayesian prior) is chosen badly, the bounds get loose rather than wrong, making e-posterior minimax decision rules safer than Bayesian ones. The resulting quasi-conditional paradigm is illustrated by re-interpreting a previous influential partial Bayes-frequentist unification, Kiefer-Berger-Brown-Wolpert conditional frequentist tests, in terms of e-posteriors. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
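
The display below sketches, in generic notation of our own rather than the article's, the basic e-variable facts behind these risk guarantees: an e-variable is nonnegative with expectation at most one under the null, Markov's inequality turns that into a frequentist bound, and an e-confidence set collects the parameter values whose associated e-variable has not grown large.

E_\theta \ge 0, \qquad \mathbb{E}_{P_\theta}[E_\theta] \le 1 \quad \text{(e-variable for } H_0\colon \theta^\ast = \theta\text{)},

\mathbb{P}_{P_\theta}\!\left(E_\theta \ge 1/\alpha\right) \le \alpha \quad \text{(Markov's inequality, for any fixed } \alpha\text{)},

\mathrm{CS}_\alpha = \{\theta : E_\theta < 1/\alpha\}, \qquad \mathbb{P}_{P_{\theta^\ast}}\!\left(\theta^\ast \notin \mathrm{CS}_\alpha\right) \le \alpha.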

4.
BMC Psychiatry ; 22(1): 407, 2022 Jun 17.
Article in English | MEDLINE | ID: mdl-35715745

ABSTRACT

BACKGROUND: Developing predictive models for precision psychiatry is challenging because the necessary data are often unavailable: extracting useful information from existing electronic health record (EHR) data is not straightforward, and available clinical trial datasets are often not representative of heterogeneous patient groups. The aim of this study was to construct a natural language processing (NLP) pipeline that extracts variables for building predictive models from EHRs. We specifically tailor the pipeline to extract information on outcomes of psychiatric treatment trajectories, applicable across the entire spectrum of mental health disorders ("transdiagnostic"). METHODS: A qualitative study of clinical staff's beliefs about measuring treatment outcomes was conducted to construct a candidate list of variables to extract from the EHR. To investigate whether the proposed variables are suitable for measuring treatment effects, the resulting themes were compared, through systematic review, to transdiagnostic outcome measures currently used in psychiatry research and to the Hamilton Depression Rating Scale (HDRS) as a gold standard, resulting in an ideal set of variables. To extract these from EHR data, a semi-rule-based NLP pipeline was constructed and tailored to the candidate variables using Prodigy. Classification accuracy and F1 scores were calculated, and pipeline output was compared to HDRS scores using clinical notes from patients admitted in 2019 and 2020. RESULTS: Analysis of 34 questionnaires answered by clinical staff resulted in four themes defining treatment outcomes: symptom reduction, general well-being, social functioning, and personalization. The systematic review revealed 242 different transdiagnostic outcome measures, with the 36-item Short-Form Survey for quality of life (SF-36) used most consistently and showing substantial overlap with the themes from the qualitative study. Comparing SF-36 to HDRS scores in 26 studies revealed moderate to good correlations (0.62-0.79) and good positive predictive values (0.75-0.88). The NLP pipeline, developed with notes from 22,170 patients, reached an accuracy of 95 to 99 percent (F1 scores: 0.38-0.86) in detecting these themes, evaluated on data from 361 patients. CONCLUSIONS: The NLP pipeline developed in this study extracts outcome measures from the EHR that cater specifically to the needs of clinical staff and align with outcome measures used to detect treatment effects in clinical trials.
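
The sketch below is a minimal, hypothetical illustration of the kind of rule-based theme detection and F1 evaluation this abstract refers to. The keyword lists, notes, and gold labels are invented placeholders; the study's actual pipeline is semi-rule-based and tuned with Prodigy annotations.

# Minimal, hypothetical sketch of rule-based theme detection in clinical notes
# plus precision/recall/F1 evaluation. Keywords, notes and gold labels are
# invented placeholders, not material from the study.
THEME_KEYWORDS = {
    "symptom_reduction": ["less depressed", "fewer symptoms", "improved mood"],
    "general_well_being": ["sleeps better", "more energy"],
    "social_functioning": ["back to work", "seeing friends"],
}

def detect_themes(note: str) -> set[str]:
    note = note.lower()
    return {theme for theme, kws in THEME_KEYWORDS.items()
            if any(kw in note for kw in kws)}

def f1(predicted: list[set[str]], gold: list[set[str]], theme: str) -> float:
    tp = sum(theme in p and theme in g for p, g in zip(predicted, gold))
    fp = sum(theme in p and theme not in g for p, g in zip(predicted, gold))
    fn = sum(theme not in p and theme in g for p, g in zip(predicted, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

notes = ["Patient reports improved mood and is back to work.",
         "Still no energy, symptoms unchanged."]
gold = [{"symptom_reduction", "social_functioning"}, set()]
pred = [detect_themes(n) for n in notes]
print(f1(pred, gold, "symptom_reduction"))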


Subject(s)
Natural Language Processing , Psychiatry , Electronic Health Records , Humans , Information Storage and Retrieval , Quality of Life
5.
Psychon Bull Rev ; 28(3): 795-812, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33210222

ABSTRACT

Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (Psychonomic Bulletin & Review, 21(2), 301-308, 2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers, Wetzels, Borsboom, van der Maas, and Kievit (Perspectives on Psychological Science, 7, 627-633, 2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances, and in which sense, it is and is not. By slightly varying and extending Rouder's (2014) experiments, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors (which is the case in most practical applications of Bayes factor hypothesis testing), resilience to optional stopping can break down. We distinguish between three types of default priors, each with its own specific issues under optional stopping, ranging from no problem at all (type 0 priors) to quite severe (type II priors).
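
As a rough illustration of one frequentist aspect of this debate, the sketch below uses a simple setup of our own (not the article's experiments): a sequential binomial test with a uniform default prior on the success probability, stopping as soon as the Bayes factor against H0: θ = 1/2 exceeds 10. Simulating under H0 shows how often optional stopping lets the Bayes factor cross that threshold within a fixed horizon.

# Sketch: optional stopping with a default (uniform) prior in a binomial
# Bayes factor test. Illustrative setup of our own, not the article's
# experiments. Data are simulated under H0: theta = 0.5; we stop as soon as
# BF10 >= 10 or after n_max observations, and count threshold crossings.
import random
from math import lgamma, log

def log_bf10(k: int, n: int) -> float:
    # H1: theta ~ Uniform(0, 1); marginal likelihood of a specific sequence
    # with k successes in n trials is B(k+1, n-k+1) = k!(n-k)!/(n+1)!.
    log_m1 = lgamma(k + 1) + lgamma(n - k + 1) - lgamma(n + 2)
    log_m0 = n * log(0.5)
    return log_m1 - log_m0

random.seed(1)
n_sim, n_max, threshold = 2_000, 1_000, 10.0
hits = 0
for _ in range(n_sim):
    k = 0
    for n in range(1, n_max + 1):
        k += random.random() < 0.5          # one Bernoulli(0.5) observation
        if log_bf10(k, n) >= log(threshold):
            hits += 1
            break
print(f"P(BF10 ever >= {threshold} within {n_max} trials | H0) ≈ {hits / n_sim:.3f}")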


Subject(s)
Data Interpretation, Statistical , Psychometrics , Research Design , Bayes Theorem , Humans , Psychometrics/methods , Psychometrics/standards