Results 1 - 6 of 6
1.
J Med Imaging (Bellingham) ; 10(3): 036003, 2023 May.
Article in English | MEDLINE | ID: mdl-37323123

ABSTRACT

Purpose: Random matrix theory (RMT) is an increasingly useful tool for understanding large, complex systems. Prior studies have examined functional magnetic resonance imaging (fMRI) scans using tools from RMT, with some success. However, RMT computations are highly sensitive to a number of analytic choices, and the robustness of findings involving RMT remains in question. We systematically investigate the usefulness of RMT on a wide variety of fMRI datasets using a rigorous predictive framework. Approach: We develop open-source software to efficiently compute RMT features from fMRI images and examine the cross-validated predictive potential of eigenvalue- and RMT-based features ("eigenfeatures") with classic machine-learning classifiers. We systematically vary pre-processing extent, normalization procedures, RMT unfolding procedures, and feature selection, and compare the impact of these analytic choices on the distributions of cross-validated prediction performance for each combination of dataset, binary classification task, classifier, and feature. To deal with class imbalance, we use the area under the receiver operating characteristic curve (AUROC) as the main performance metric. Results: Across all classification tasks and analytic choices, we find RMT- and eigenvalue-based "eigenfeatures" to have predictive utility more often than not (82.4% of median AUROCs > 0.5; median AUROC range across classification tasks, 0.47 to 0.64). Simple baseline reductions of the source timeseries, by contrast, were less useful (58.8% of median AUROCs > 0.5; median AUROC range across classification tasks, 0.42 to 0.62). Additionally, eigenfeature AUROC distributions were overall more right-tailed than those of baseline features, suggesting greater predictive potential. However, performance distributions were wide and often significantly affected by analytic choices. Conclusions: Eigenfeatures clearly have potential for understanding fMRI functional connectivity in a wide variety of scenarios.
The utility of these features is strongly dependent on analytic decisions, suggesting caution when interpreting past and future studies applying RMT to fMRI. However, our study demonstrates that the inclusion of RMT statistics in fMRI investigations could improve prediction performances across a wide variety of phenomena.
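The core idea of an "eigenfeature" can be illustrated with a minimal numpy sketch: compute the eigenvalues of a region-by-region correlation matrix and use them as classifier inputs. This is not the authors' pipeline (which involves RMT unfolding and varied pre-processing); the simulated data, region counts, and the use of only the largest eigenvalue are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def eigenfeatures(ts):
    # ts: (n_regions, n_timepoints) fMRI-like timeseries.
    # Correlation-matrix eigenvalues are the simplest "eigenfeature".
    corr = np.corrcoef(ts)
    return np.sort(np.linalg.eigvalsh(corr))

def simulate(shared_strength, n_subjects=20):
    # Hypothetical groups: controls (no shared signal) vs. a group whose
    # regions share a common signal, inflating the top eigenvalue.
    feats = []
    for _ in range(n_subjects):
        shared = shared_strength * rng.standard_normal(200)
        ts = rng.standard_normal((10, 200)) + shared
        feats.append(eigenfeatures(ts)[-1])  # largest eigenvalue only
    return np.array(feats)

controls = simulate(0.0)
patients = simulate(1.0)

# Rank-based AUROC: probability a patient's score exceeds a control's.
auroc = (patients[:, None] > controls[None, :]).mean()
```

On this toy data the largest eigenvalue separates the groups well; the paper's point is that on real fMRI the separation is modest and depends heavily on analytic choices.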

2.
Diagnostics (Basel) ; 13(7)2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37046533

ABSTRACT

Supervised machine learning classification is the most common application of artificial intelligence (AI) in industry and in academic research. These technologies predict whether a series of measurements belongs to one of multiple groups of examples on which the machine was previously trained. Prior to real-world deployment, all implementations need to be carefully evaluated with hold-out validation, in which the algorithm is tested on samples different from those provided for training, in order to ensure the generalizability and reliability of AI models. However, established methods for performing hold-out validation do not assess the consistency of the mistakes that the AI model makes during hold-out validation. Here, we show that, in addition to standard methods, an enhanced hold-out validation technique, one that also assesses the consistency of the sample-wise mistakes made by the learning algorithm, can assist in the evaluation and design of reliable and predictable AI models. The technique can be applied to the validation of any supervised learning classification application, and we demonstrate its use on a variety of example biomedical diagnostic applications, which help illustrate the importance of producing reliable AI models. The validation software created is made publicly available, assisting anyone developing AI models for any supervised classification application in creating more reliable and predictable technologies.
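The idea of tracking sample-wise mistake consistency can be sketched as follows: repeat random train/test splits and record, per sample, how often it is misclassified when it lands in the test set. This is a generic illustration, not the paper's released software; the toy dataset and the nearest-centroid classifier are stand-ins for any supervised learner.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary dataset: two overlapping Gaussian classes.
X = np.r_[rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))]
y = np.r_[np.zeros(50, int), np.ones(50, int)]

n_splits = 200
errors = np.zeros(len(y))  # times each sample was misclassified
tested = np.zeros(len(y))  # times each sample landed in the test set

for _ in range(n_splits):
    idx = rng.permutation(len(y))
    train, test = idx[:70], idx[70:]
    # Nearest-centroid classifier (stand-in for any supervised learner).
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    errors[test] += pred != y[test]
    tested[test] += 1

# Per-sample error rate: samples that are consistently wrong across splits
# reveal systematic weaknesses that a single aggregate accuracy hides.
per_sample_error = errors / np.maximum(tested, 1)
```

A model whose mistakes concentrate on the same samples across splits behaves predictably; one whose mistakes move around between splits is less reliable, even at the same average accuracy.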

3.
Cardiol Young ; 33(3): 388-395, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35373725

ABSTRACT

BACKGROUND: Although serum lactate levels are widely accepted markers of haemodynamic instability, an alternative method to evaluate haemodynamic stability/instability continuously and non-invasively may help improve the standard of patient care. We hypothesise that blood lactate in paediatric ICU patients can be predicted using machine learning applied to arterial waveforms and perioperative characteristics. METHODS: Forty-eight post-operative children were included: median age 4 months (interquartile range 2.9-11.8), mean baseline heart rate 131 beats per minute (range 33-197), mean lactate level at admission 22.3 mg/dL (range 6.3-71.1). Morphological arterial waveform characteristics were acquired and analysed. Lactate levels were predicted using regression-based supervised learning algorithms evaluated with hold-out cross-validation, with predictions based on the currently acquired physiological measurements along with those acquired at admission, and, in extended models, on the most recent lactate measurement and the time since that measurement. Algorithms were assessed with mean absolute error, the average of the absolute differences between actual and predicted lactate concentrations; low values represent superior model performance. RESULTS: The best-performing algorithm was the tuned random forest, which yielded a mean absolute error of 3.38 mg/dL when predicting blood lactate with updated ground truth from the most recent blood draw. CONCLUSIONS: The random forest is capable of predicting serum lactate levels by analysing perioperative variables, including the arterial pressure waveform. Thus, machine learning can predict patient blood lactate levels, a proxy for haemodynamic instability, non-invasively, continuously, and with accuracy that may demonstrate clinical utility.
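The evaluation scheme described (random forest regression scored by mean absolute error on a hold-out set, with the most recent lactate value as a predictor) can be sketched with scikit-learn. The feature names and the simulated relationship below are hypothetical stand-ins for the study's waveform-derived predictors, not its actual data or tuning.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)

# Simulated perioperative features (hypothetical): heart rate, mean
# arterial pressure, and the most recent measured lactate value.
n = 300
hr = rng.normal(130, 25, n)
map_ = rng.normal(60, 10, n)
last_lactate = rng.normal(22, 8, n)
lactate = (0.8 * last_lactate + 0.05 * (hr - 130)
           - 0.1 * (map_ - 60) + rng.normal(0, 2, n))  # mg/dL

X = np.c_[hr, map_, last_lactate]
X_tr, X_te, y_tr, y_te = train_test_split(X, lactate, random_state=0)

# Random forest regression, assessed on the held-out samples.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te))  # lower is better
```

Including the most recent lactate value as a feature mirrors the study's best-performing setup, where predictions are anchored to the latest blood draw.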


Subject(s)
Cardiac Surgical Procedures , Machine Learning , Humans , Child , Infant , Algorithms , Lactic Acid , Intensive Care Units, Pediatric
4.
Front Neurosci ; 16: 926426, 2022.
Article in English | MEDLINE | ID: mdl-36046472

ABSTRACT

We performed a morphological analysis of patients with schizophrenia and compared them with healthy controls. Our analysis uses publicly available automated extraction tools to assess regional cortical thickness (inclusive of within-region cortical thickness variability) from structural magnetic resonance imaging (MRI), characterizing group-wise abnormalities associated with schizophrenia based on a publicly available dataset. We also performed a correlation analysis between the automatically extracted biomarkers and a variety of available patient clinical variables, and we present the results of a machine learning analysis. Results demonstrate regional cortical thickness abnormalities in schizophrenia. We observed a correlation (rho = 0.474) between patients' depression and the average cortical thickness of the right medial orbitofrontal cortex. The best-performing machine learning method evaluated was the support vector machine with stepwise feature selection, yielding a sensitivity of 92% and a specificity of 74% based on regional brain measurements, including from the insula, superior frontal, caudate, calcarine sulcus, gyrus rectus, and rostral middle frontal regions. These results imply that advanced analytic techniques combining MRI with automated biomarker extraction can help characterize patients with schizophrenia.
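A support vector machine with stepwise (forward) feature selection, scored by sensitivity and specificity, can be sketched with scikit-learn's `SequentialFeatureSelector`. The simulated "regional thickness" features, effect sizes, and which regions carry signal are all hypothetical; the study's actual pipeline and data are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)

# Simulated regional cortical-thickness measurements (mm); only the
# first two "regions" carry group signal (hypothetical choice).
n = 200
X = rng.normal(2.5, 0.3, (n, 8))
y = rng.integers(0, 2, n)  # 0 = control, 1 = patient
X[y == 1, 0] -= 0.30  # e.g. reduced thickness in region A for patients
X[y == 1, 1] -= 0.25  # e.g. reduced thickness in region B for patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Forward stepwise selection wrapped around a linear SVM.
svm = SVC(kernel="linear")
sel = SequentialFeatureSelector(svm, n_features_to_select=2).fit(X_tr, y_tr)
svm.fit(sel.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(
    y_te, svm.predict(sel.transform(X_te))).ravel()
sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate
```

Reporting sensitivity and specificity separately, as the abstract does, is more informative than accuracy alone when the clinical costs of the two error types differ.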

5.
Int J Dev Neurosci ; 81(7): 655-662, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34308560

ABSTRACT

Neuroscience studies are often tasked with identifying measurable differences between two groups of subjects, typically a group with a pathological condition and a control group. The measurements acquired for comparing groups are often also affected by additional patient characteristics such as sex, age, and comorbidities. Multivariable regression (MVR) is a statistical technique commonly employed in neuroscience studies to "control for" or "adjust for" such secondary effects, in order to ensure that the main study findings reflect actual differences between the groups of interest associated with the condition under investigation. Although controlling for secondary effects with MVR is common practice in the neuroscience literature, at present it is not typically possible to assess whether the MVR adjustments correct for more error than they introduce: in common practice, MVR models are not validated, and no attempt is made to characterize their deficiencies. In this article, we demonstrate how standard hold-out validation techniques (commonly used in machine learning analyses), which involve repeatedly and randomly dividing datasets into training and testing samples, can be adapted to assess the stability and reliability of MVR models, using a publicly available neurological magnetic resonance imaging (MRI) dataset of patients with schizophrenia. Results demonstrate that MVR can introduce up to 30.06% measurement error and, averaged across all considered measurements, introduces 9.84% error on this dataset. When hold-out-validated MVR does not agree with the results of the standard use of MVR, the use of MVR in the given application is unstable. This paper thus helps evaluate the extent to which the simplistic use of MVR introduces study error in neuroscientific analyses, through an analysis of patients with schizophrenia.
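The core procedure (fit an MVR model repeatedly on random subsamples and examine how stable the adjusted group effect is) can be sketched in numpy. The simulated outcome, covariates, and effect sizes below are hypothetical; the paper's specific error percentages come from its MRI dataset, not from this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated measurement affected by group plus secondary covariates
# (age, sex) of the kind MVR is used to "control for".
n = 120
group = rng.integers(0, 2, n)  # 0 = control, 1 = patient
age = rng.normal(40, 12, n)
sex = rng.integers(0, 2, n)
thickness = (2.5 - 0.1 * group - 0.005 * age + 0.03 * sex
             + rng.normal(0, 0.08, n))

X = np.c_[np.ones(n), group, age, sex]  # design matrix with intercept

def group_coef(rows):
    # Ordinary least squares on a subset; return the adjusted group effect.
    beta, *_ = np.linalg.lstsq(X[rows], thickness[rows], rcond=None)
    return beta[1]

# Standard practice: one MVR fit on the full sample.
full = group_coef(np.arange(n))

# Hold-out-style check: refit on 70% subsamples and watch the estimate vary.
splits = [group_coef(rng.permutation(n)[:84]) for _ in range(200)]
spread = float(np.std(splits))  # instability of the adjusted effect
```

If the subsample estimates scatter widely around the full-sample coefficient, or disagree with it in sign, the single MVR adjustment is unstable for that measurement, which is the paper's diagnostic criterion in miniature.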


Subject(s)
Brain/diagnostic imaging , Schizophrenia/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging , Reproducibility of Results , Retrospective Studies
6.
J Pers Soc Psychol ; 113(2): 254-261, 2017 08.
Article in English | MEDLINE | ID: mdl-28714731

ABSTRACT

Finkel, Eastwick, and Reis (2016; FER2016) argued that the post-2011 methodological reform movement has focused narrowly on replicability, neglecting other essential goals of research. We agree that multiple scientific goals are essential but argue that a more fine-grained language, conceptualization, and approach to replication is needed to accomplish these goals. Replication is the general empirical mechanism for testing and falsifying theory. Sufficiently methodologically similar replications, also known as direct replications, test the basic existence of phenomena and ensure that cumulative progress is possible a priori. In contrast, increasingly methodologically dissimilar replications, also known as conceptual replications, test the relevance of auxiliary hypotheses (e.g., manipulation and measurement issues, contextual factors) required to productively investigate validity and generalizability. Without prioritizing replicability, a field is not empirically falsifiable. We also disagree with FER2016's position that "bigger samples are generally better, but . . . that very large samples could have the downside of commandeering resources that would have been better invested in other studies" (abstract). We identify problematic assumptions in FER2016's modifications of our original research-economic model and present an improved model that quantifies when (and whether) it is reasonable to worry that increasing statistical power will engender potential trade-offs. Sufficiently powering studies (i.e., >80% power) maximizes both research efficiency and confidence in the literature (research quality). Given that we agree with FER2016 on all key open science points, we are eager to see the accelerated rate of cumulative knowledge development of social psychological phenomena that such a sufficiently transparent, powered, and falsifiable approach will generate.
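The ">80% power" target has concrete sample-size consequences that can be computed with only the Python standard library, using the usual normal-approximation formula for a two-sample comparison: n per group ≈ 2·((z₁₋α/₂ + z_power)/d)². This is a generic textbook calculation, not the authors' research-economic model.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison at standardized effect size d."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided alpha
    z_b = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

medium = n_per_group(0.5)  # medium effect (d = 0.5)
small = n_per_group(0.2)   # small effect (d = 0.2)
```

The steep growth of n as d shrinks (roughly 63 vs. 393 per group for d = 0.5 vs. d = 0.2 under this approximation) is exactly why debates about resource trade-offs at high power arise.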


Subject(s)
Anxiety , Research , Humans