Results 1 - 20 of 274
1.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36155620

ABSTRACT

Understanding ncRNA-protein interaction is of critical importance to unveil ncRNAs' functions. Here, we propose an integrated package LION which comprises a new method for predicting ncRNA/lncRNA-protein interaction as well as a comprehensive strategy to meet the requirement of customisable prediction. Experimental results demonstrate that our method outperforms its competitors on multiple benchmark datasets. LION can also improve the performance of some widely used tools and build adaptable models for species- and tissue-specific prediction. We expect that LION will be a powerful and efficient tool for the prediction and analysis of ncRNA/lncRNA-protein interaction. The R Package LION is available on GitHub at https://github.com/HAN-Siyu/LION/.


Subject(s)
RNA, Long Noncoding; RNA, Untranslated/genetics
2.
Brief Bioinform ; 23(2)2022 03 10.
Article in English | MEDLINE | ID: mdl-34981111

ABSTRACT

Large metabolomics datasets inevitably contain unwanted technical variations which can obscure meaningful biological signals and affect how this information is applied to personalized healthcare. Many methods have been developed to handle unwanted variations. However, the underlying assumptions of many existing methods only hold for a few specific scenarios. Some tools remove technical variations with models trained on quality control (QC) samples which may not generalize well on subject samples. Additionally, almost none of the existing methods supports datasets with multiple types of QC samples, which greatly limits their performance and flexibility. To address these issues, a non-parametric method TIGER (Technical variation elImination with ensemble learninG architEctuRe) is developed in this study and released as an R package (https://CRAN.R-project.org/package=TIGERr). TIGER integrates the random forest algorithm into an adaptable ensemble learning architecture. Evaluation results show that TIGER outperforms four popular methods with respect to robustness and reliability on three human cohort datasets constructed with targeted or untargeted metabolomics data. Additionally, a case study aiming to identify age-associated metabolites is performed to illustrate how TIGER can be used for cross-kit adjustment in a longitudinal analysis with experimental data of three time-points generated by different analytical kits. A dynamic website is developed to help evaluate the performance of TIGER and examine the patterns revealed in our longitudinal analysis (https://han-siyu.github.io/TIGER_web/). Overall, TIGER is expected to be a powerful tool for metabolomics data analysis.
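As a rough, hypothetical sketch of the core idea behind QC-based correction (align each batch so its quality-control samples agree with a global reference), a much simpler median-scaling example may help; this is not TIGER's random-forest ensemble, and all names and numbers below are invented:

```python
# Hypothetical sketch: remove between-batch technical variation by
# rescaling each batch so its QC-sample median matches the global QC median.
# This is a simplification for illustration only; TIGER itself integrates
# random forests into an ensemble learning architecture.

def qc_median_correction(values, batches, is_qc):
    """Scale every measurement in a batch by (global QC median / batch QC median)."""
    qc_by_batch = {}
    for v, b, q in zip(values, batches, is_qc):
        if q:
            qc_by_batch.setdefault(b, []).append(v)

    def median(xs):
        xs = sorted(xs)
        n, mid = len(xs), len(xs) // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

    global_med = median([v for v, q in zip(values, is_qc) if q])
    factors = {b: global_med / median(vs) for b, vs in qc_by_batch.items()}
    return [v * factors[b] for v, b in zip(values, batches)]

# Two batches measuring the same QC material; batch "B" runs ~2x high.
values  = [10.0, 12.0, 11.0, 20.0, 24.0, 22.0]
batches = ["A", "A", "A", "B", "B", "B"]
is_qc   = [True, True, False, True, True, False]
corrected = qc_median_correction(values, batches, is_qc)
```

After correction, the subject samples from both batches (11.0 and 22.0) land on the same value, since the batch-level scale difference has been removed.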


Subject(s)
Algorithms; Metabolomics; Humans; Machine Learning; Metabolomics/methods; Reproducibility of Results; Research Design
3.
Brain Behav Immun ; 115: 470-479, 2024 01.
Article in English | MEDLINE | ID: mdl-37972877

ABSTRACT

Artificial intelligence (AI) is often used to describe the automation of complex tasks that we would attribute intelligence to. Machine learning (ML) is commonly understood as a set of methods used to develop an AI. Both have seen a recent boom in usage, in both scientific and commercial fields. For the scientific community, ML can resolve bottlenecks created by complex, multi-dimensional data generated, for example, by functional brain imaging or *omics approaches. Here, ML can identify patterns that could not have been found using traditional statistical approaches. However, ML comes with serious limitations that need to be kept in mind: its tendency to optimise solutions for the input data means it is of crucial importance to externally validate any findings before considering them more than a hypothesis. The black-box nature of ML models implies that their decisions usually cannot be understood, which renders their use in medical decision making problematic and can lead to ethical issues. Here, we present an introduction to the field of ML/AI for the curious. We explain the principles of commonly used methods as well as recent methodological advancements, before discussing risks and what we see as future directions of the field. Finally, we show practical examples from neuroscience to illustrate the use and limitations of ML.


Subject(s)
Artificial Intelligence; Machine Learning
4.
Diabetes Obes Metab ; 26(7): 2722-2731, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38618987

ABSTRACT

AIM: Hypertension and diabetes mellitus (DM) are major causes of morbidity and mortality, with growing burdens in low-income countries where they are underdiagnosed and undertreated. Advances in machine learning may provide opportunities to enhance diagnostics in settings with limited medical infrastructure. MATERIALS AND METHODS: A non-interventional study was conducted to develop and validate a machine learning algorithm to estimate cardiovascular clinical and laboratory parameters. At two sites in Kenya, digital retinal fundus photographs were collected alongside blood pressure (BP), laboratory measures and medical history. The performance of machine learning models, originally trained using data from the UK Biobank, was evaluated for the ability to estimate BP, glycated haemoglobin, estimated glomerular filtration rate and diagnoses from fundus images. RESULTS: In total, 301 participants were enrolled. Compared with the UK Biobank population used for algorithm development, participants from Kenya were younger, more likely to report Black/African ethnicity, and had a higher body mass index and prevalence of DM and hypertension. The mean absolute error was comparable or slightly greater for systolic BP, diastolic BP, glycated haemoglobin and estimated glomerular filtration rate. The model trained to identify DM had an area under the receiver operating curve of 0.762 (0.818 in the UK Biobank) and the hypertension model had an area under the receiver operating curve of 0.765 (0.738 in the UK Biobank). CONCLUSIONS: In a Kenyan population, machine learning models estimated cardiovascular parameters with comparable or slightly lower accuracy than in the population where they were trained, suggesting model recalibration may be appropriate. This study represents an incremental step toward leveraging machine learning to make early cardiovascular screening more accessible, particularly in resource-limited settings.
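For reference, the area under the receiver operating curve reported above is equivalent to the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney formulation); a minimal sketch with invented scores, not the study's data:

```python
def auroc(pos_scores, neg_scores):
    """Fraction of (positive, negative) score pairs ranked correctly,
    counting ties as half a correct pair (Mann-Whitney formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy model outputs: scores for hypothetical DM-positive and DM-negative cases.
positives = [0.9, 0.4]
negatives = [0.5, 0.1]
print(auroc(positives, negatives))  # 3 of 4 pairs ordered correctly -> 0.75
```

An AUROC of 0.762, as reported for the DM model, means roughly three out of four randomly drawn case/non-case pairs are ranked in the right order.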


Subject(s)
Cardiovascular Diseases; Deep Learning; Heart Disease Risk Factors; Humans; Kenya/epidemiology; Male; Female; Middle Aged; Prospective Studies; Adult; Cardiovascular Diseases/epidemiology; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/etiology; Hypertension/epidemiology; Hypertension/complications; Hypertension/diagnosis; Algorithms; Photography; Fundus Oculi; Aged; Diabetes Mellitus/epidemiology; Risk Factors; Diabetic Retinopathy/epidemiology; Diabetic Retinopathy/diagnosis
5.
Pharm Res ; 41(5): 833-837, 2024 May.
Article in English | MEDLINE | ID: mdl-38698195

ABSTRACT

Currently, the lengthy time needed to bring new drugs to market or to implement postapproval changes causes multiple problems, such as delaying patients' access to new lifesaving or life-enhancing medications and slowing the response to emergencies that require new treatments. However, new technologies are available that can help solve these problems. The January 2023 NIPTE pathfinding workshop on accelerating drug product development and approval included a session in which participants considered the current state of product formulation and process development, barriers to acceleration of the development timeline, and opportunities for overcoming these barriers using new technologies. The authors participated in this workshop, and in this article have shared their perspective on some of the ways forward, including advanced manufacturing techniques and adaptive development. In addition, there is a need for paradigm shifts in regulatory processes, increased pre-competitive collaboration, and a shared strategy among regulators, industry, and academia.


Subject(s)
Drug Approval; Humans; Drug Development/methods; Drug Industry/methods; Technology, Pharmaceutical/methods; Pharmaceutical Preparations/chemistry; Chemistry, Pharmaceutical/methods; Drug Compounding/methods
6.
Br J Anaesth ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39322472

ABSTRACT

BACKGROUND: We lack evidence on the cumulative effectiveness of machine learning (ML)-driven interventions in perioperative settings. Therefore, we conducted a systematic review to appraise the evidence on the impact of ML-driven interventions on perioperative outcomes. METHODS: Ovid MEDLINE, CINAHL, Embase, Scopus, PubMed, and ClinicalTrials.gov were searched to identify randomised controlled trials (RCTs) evaluating the effectiveness of ML-driven interventions in surgical inpatient populations. The review was registered with PROSPERO (CRD42023433163) and conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Meta-analysis was conducted for outcomes with two or more studies using a random-effects model, and vote counting was conducted for other outcomes. RESULTS: Among 13 included RCTs, three types of ML-driven interventions were evaluated: Hypotension Prediction Index (HPI) (n=5), Nociception Level Index (NoL) (n=7), and a scheduling system (n=1). Compared with standard care, HPI led to a significant decrease in absolute hypotension (n=421, P=0.003, I2=75%) and relative hypotension (n=208, P<0.0001, I2=0%); NoL led to significantly lower mean pain scores in the post-anaesthesia care unit (PACU) (n=191, P=0.004, I2=19%). NoL showed no significant impact on intraoperative opioid consumption (n=339, P=0.31, I2=92%) or PACU opioid consumption (n=339, P=0.11, I2=0%). No significant difference in hospital length of stay (n=361, P=0.81, I2=0%) or PACU stay (n=267, P=0.44, I2=0%) was found for HPI. CONCLUSIONS: HPI decreased the duration of intraoperative hypotension, and NoL decreased postoperative pain scores, but no significant impact on other clinical outcomes was found. We highlight the need to address both methodological and clinical practice gaps to ensure the successful future implementation of ML-driven interventions. SYSTEMATIC REVIEW PROTOCOL: CRD42023433163 (PROSPERO).
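A random-effects meta-analysis of the kind described above pools study effects with inverse-variance weights after estimating the between-study variance; below is a minimal DerSimonian-Laird sketch with invented effect sizes, not the review's data:

```python
def dersimonian_laird(effects, variances):
    """Pool study effects under a random-effects model (DerSimonian-Laird).
    Returns (pooled effect, between-study variance tau^2)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Homogeneous toy studies: tau^2 collapses to 0 and pooling reduces to a
# weighted mean of the study effects.
pooled, tau2 = dersimonian_laird([0.5, 0.5, 0.5], [0.1, 0.2, 0.1])
```

The I2 values quoted in the results (e.g. I2=75% for absolute hypotension) summarise how much of the observed variation such a model attributes to between-study heterogeneity rather than sampling error.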

7.
Br J Anaesth ; 133(3): 476-478, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38902116

ABSTRACT

The increased availability of large clinical datasets together with increasingly sophisticated computing power has facilitated development of numerous risk prediction models for various adverse perioperative outcomes, including acute kidney injury (AKI). The rationale for developing such models is straightforward. However, despite numerous purported benefits, the uptake of preoperative prediction models into clinical practice has been limited. Barriers to implementation of predictive models, including limitations in their discrimination and accuracy, as well as their ability to meaningfully impact clinical practice and patient outcomes, are increasingly recognised. Some of the purported benefits of predictive modelling, particularly when applied to postoperative AKI, might not fare well under detailed scrutiny. Future research should address existing limitations and seek to demonstrate both benefit to patients and value to healthcare systems from implementation of these models in clinical practice.


Subject(s)
Acute Kidney Injury; Big Data; Postoperative Complications; Humans; Acute Kidney Injury/diagnosis; Acute Kidney Injury/epidemiology; Postoperative Complications/epidemiology; Postoperative Complications/diagnosis; Risk Assessment/methods; Models, Statistical; Predictive Value of Tests
8.
Environ Res ; 245: 117979, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38142727

ABSTRACT

Mycotoxins are toxic fungal metabolites that may occur in crops. Mycotoxins may carry over into bovine milk if bovines ingest mycotoxin-contaminated feed. Due to climate change, there may be a potential increase in the prevalence and concentration of mycotoxins in crops. However, the toxicity to humans and the carry-over rate of mycotoxins from feed to milk vary considerably. This research aimed to rank emerging and existing mycotoxins under different climate change scenarios based on their occurrence in milk and their toxicity to humans. The quantitative risk ranking took a probabilistic approach, using Monte Carlo simulation to take account of input uncertainties and variabilities. Mycotoxins were ranked based on their hazard quotient, calculated using estimated daily intake and tolerable daily intake values. Four climate change scenarios were assessed, including an Irish baseline model in addition to best-case, worst-case and most likely scenarios, corresponding to equivalent Intergovernmental Panel on Climate Change (IPCC) scenarios. This research prioritised aflatoxin B1, zearalenone, and T-2 and HT-2 toxin as potential human health hazards for adults and children compared to other mycotoxins under all scenarios. Relatively lower risks were found to be associated with mycophenolic acid, enniatins and deoxynivalenol. Overall, the carry-over rate of mycotoxins, the milk consumption, and the concentration of mycotoxins in silage, maize and wheat were found to be the most sensitive parameters (positively correlated) of this probabilistic model. Though climate change may impact mycotoxin prevalence and concentration in crops, the carry-over rate notably affects the final concentration of mycotoxin in milk to a greater extent. The results obtained in this study facilitate the identification of risk reduction measures to limit mycotoxin contamination of dairy products, considering potential climate change influences.
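A minimal sketch of the probabilistic hazard-quotient calculation (Monte Carlo sampling of the milk concentration, then HQ = estimated daily intake / tolerable daily intake); all distributions and parameter values below are invented for illustration and are not the study's inputs:

```python
import random

random.seed(42)

def hazard_quotients(conc_mu, conc_sigma, intake_kg_day, body_weight_kg,
                     tdi_ug_per_kg, n=10_000):
    """Monte Carlo sample of hazard quotient = EDI / TDI, where
    EDI = (toxin concentration in milk [ug/kg] * milk intake [kg/day])
          / body weight [kg]."""
    hqs = []
    for _ in range(n):
        conc = random.lognormvariate(conc_mu, conc_sigma)  # ug toxin / kg milk
        edi = conc * intake_kg_day / body_weight_kg        # ug / kg bw / day
        hqs.append(edi / tdi_ug_per_kg)
    return hqs

# Invented illustrative inputs (NOT the paper's parameter values).
hqs = hazard_quotients(conc_mu=-1.0, conc_sigma=0.5,
                       intake_kg_day=0.5, body_weight_kg=70.0,
                       tdi_ug_per_kg=0.02)
exceedance = sum(hq > 1.0 for hq in hqs) / len(hqs)  # fraction with HQ > 1
```

Ranking toxins by the distribution of HQ (rather than a single point estimate) is what lets the approach propagate input uncertainty and variability into the final prioritisation.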


Subject(s)
Mycotoxins; Child; Humans; Animals; Mycotoxins/toxicity; Mycotoxins/analysis; Milk/chemistry; Climate Change; Animal Feed/analysis; Food Contamination/analysis; Crops, Agricultural
9.
Transfus Med ; 34(5): 333-343, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39113629

ABSTRACT

Artificial intelligence (AI) uses sophisticated algorithms to "learn" from large volumes of data. This could be used to optimise recruitment of blood donors through predictive modelling of future blood supply, based on previous donation and transfusion demand. We sought to assess the utilisation of predictive modelling and AI by blood establishments (BE), and conducted predictive modelling to illustrate its use. A BE survey of data modelling and AI was disseminated to International Society of Blood Transfusion (ISBT) members. Additional anonymised data were obtained from Italy, Singapore and the United States (US) to build predictive models for each region, using January 2018 through August 2019 data to determine the likelihood of donation within a prescribed number of months. Donations were from March 2020 to June 2021. Ninety ISBT members responded to the survey. Predictive modelling was used by 33 (36.7%) respondents and 12 (13.3%) reported AI use. Forty-four (48.9%) indicated their institutions utilise neither predictive modelling nor AI to predict transfusion demand or optimise donor recruitment. In the predictive modelling case study involving three sites, the most important variable for predicting donor return was the number of previous donations for Italy and the US, and donation frequency for Singapore. Donation rates declined in each region during COVID-19. Throughout the observation period the predictive model was able to consistently identify those individuals who were most likely to return to donate blood. The majority of BE do not use predictive modelling or AI. The effectiveness of the predictive model in determining the likelihood of donor return was validated; implementation of this method could prove useful for BE operations.


Subject(s)
Blood Donors; COVID-19; Pandemics; SARS-CoV-2; Humans; COVID-19/epidemiology; Italy/epidemiology; Female; Male; Singapore/epidemiology; United States; Artificial Intelligence; Donor Selection; Surveys and Questionnaires
10.
Anaesthesia ; 79(4): 389-398, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38369686

ABSTRACT

Complications are common following major surgery and are associated with increased use of healthcare resources, disability and mortality. Continued reliance on mortality estimates risks harming patients and health systems, but existing tools for predicting complications are unwieldy and inaccurate. We aimed to systematically construct an accurate pre-operative model for predicting major postoperative complications; compare its performance against existing tools; and identify sources of inaccuracy in predictive models more generally. Complete patient records from the UK Peri-operative Quality Improvement Programme dataset were analysed. Major complications were defined as Clavien-Dindo grade ≥ 2 for novel models. In a 75% train:25% test split cohort, we developed a pipeline of increasingly complex models, prioritising pre-operative predictors using the Least Absolute Shrinkage and Selection Operator (LASSO). We defined the best model in the training cohort by the lowest Akaike information criterion, balancing accuracy and simplicity. Of the 24,983 included cases, 6389 (25.6%) patients developed major complications. Potentially modifiable risk factors (pain, reduced mobility and smoking) were retained. The best-performing model was highly complex, specifying individual hospital complication rates and 11 patient covariates. This novel model showed substantially superior performance over generic and specific prediction models and scores. We have developed a novel complications model with good internal accuracy, re-prioritised predictor variables and identified hospital-level variation as an important, but overlooked, source of inaccuracy in existing tools. The complexity of the best-performing model does, however, highlight the need for a step-change in clinical risk prediction to automate the delivery of informative risk estimates in clinical systems.
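The model-selection step described above (choosing the candidate with the lowest Akaike information criterion) can be illustrated with a toy example; this sketch compares an intercept-only model against a one-predictor linear model on invented data and omits the LASSO screening stage entirely:

```python
import math

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k,
    where k counts the model's free parameters."""
    return n * math.log(rss / n) + 2 * k

def rss_intercept_only(y):
    mean = sum(y) / len(y)
    return sum((yi - mean) ** 2 for yi in y)

def rss_simple_regression(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))

# Deterministic toy data: the outcome tracks the predictor plus tiny noise,
# so the one-predictor model should win on AIC despite its extra parameter.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.01, 3.99, 6.01, 7.99, 10.01, 11.99]  # ~ 2x
aic_null = aic(rss_intercept_only(y), len(y), k=2)        # intercept + sigma
aic_slope = aic(rss_simple_regression(x, y), len(y), k=3)  # + slope
```

AIC penalises each added parameter by 2, so a more complex model is preferred only when its fit improves enough to pay for the extra complexity, which is the "balancing accuracy and simplicity" trade-off the abstract refers to.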


Subject(s)
Postoperative Complications; Quality Improvement; Humans; Postoperative Complications/etiology; Risk Factors; Smoking; Pain
11.
Br J Clin Psychol ; 63(2): 137-155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38111213

ABSTRACT

OBJECTIVE: Previous research on psychotherapy treatment response has mainly focused on outpatients or clinical trial data, which may have low ecological validity with regard to naturalistic inpatient samples. To reduce treatment failures by proactively screening for patients at risk of low treatment response, to gain more knowledge about risk factors and to evaluate treatments, accurate insights about predictors of treatment response in naturalistic inpatient samples are needed. METHODS: We compared the performance of different machine learning algorithms in predicting treatment response, operationalized as a substantial reduction in symptom severity as expressed in the Patient Health Questionnaire Anxiety and Depression Scale. To achieve this goal, we used different sets of variables, (a) demographics, (b) physical indicators, (c) psychological indicators and (d) treatment-related variables, in a naturalistic inpatient sample (N = 723) to specify their joint and unique contribution to treatment success. RESULTS: There was a strong link between symptom severity at baseline and post-treatment (R2 = .32). When using all available variables, both machine learning algorithms outperformed the linear regressions and led to an increment in predictive performance of R2 = .12. Treatment-related variables were the most predictive, followed by psychological indicators. Physical indicators and demographics were negligible. CONCLUSIONS: Treatment response in naturalistic inpatient settings can be predicted to a considerable degree by using baseline indicators. Regularization via machine learning algorithms leads to higher predictive performance, as opposed to including nonlinear and interaction effects. Heterogeneous aspects of mental health have incremental predictive value and should be considered as prognostic markers when modelling treatment processes.


Subject(s)
Machine Learning; Humans; Male; Female; Adult; Middle Aged; Psychotherapy/methods; Treatment Outcome; Outcome Assessment, Health Care/statistics & numerical data; Aged; Inpatients/psychology; Severity of Illness Index; Young Adult; Preregistration Publication
12.
J Oral Rehabil ; 51(9): 1770-1777, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38840513

ABSTRACT

BACKGROUND: A quantitative approach to predict expected muscle activity and mandibular movement from non-invasive hard tissue assessments remains unexplored. OBJECTIVES: This study investigated the predictive potential of normalised muscle activity during various jaw movements, combined with temporomandibular joint (TMJ) vibration analyses, to predict expected maximum lateral deviation during mouth opening. METHOD: Sixty-six participants underwent electrognathography (EGN), surface electromyography (EMG) and joint vibration analysis (JVA). They performed maximum mouth opening, lateral excursion and anterior protrusion as jaw movement activities in a single session. Multiple predictive models were trained on synthetic observations generated from the 66 human observations. Muscle function intensity and activity duration were normalised, and a decision support system with branching logic was developed to predict lateral deviation. Performance of the models in predicting temporalis, masseter and digastric muscle activity from hard tissue data was evaluated through root mean squared error (RMSE) and mean absolute error (MAE). RESULTS: Temporalis muscle intensity ranged from 0.135 ± 0.056, masseter from 0.111 ± 0.053 and digastric from 0.120 ± 0.051. Muscle activity duration varied, with temporalis at 112.23 ± 126.81 ms, masseter at 101.02 ± 121.34 ms and digastric at 168.13 ± 222.82 ms. XGBoost predicted muscle intensity and activity duration with an RMSE of 0.03-0.05. Jaw deviations were successfully predicted with an MAE of 0.9 mm. CONCLUSION: Applying deep learning to EGN, EMG and JVA data can establish a quantifiable relationship between muscles and hard tissue movement within the TMJ complex and can predict jaw deviations.
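The two error metrics used above are straightforward to compute; a minimal sketch with invented predictions (the numbers bear no relation to the study's measurements):

```python
import math

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error: like MAE, but penalises large errors more."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Toy jaw-deviation predictions in mm (invented numbers).
actual    = [1.0, 2.0, 4.0, 3.0]
predicted = [1.0, 2.0, 2.0, 3.0]
print(mae(actual, predicted))   # errors [0, 0, 2, 0] -> MAE = 0.5
print(rmse(actual, predicted))  # sqrt(4/4) -> 1.0
```

Because RMSE squares the residuals before averaging, it is always at least as large as MAE, and a wide gap between the two indicates a few large errors rather than many small ones.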


Subject(s)
Electromyography; Masticatory Muscles; Range of Motion, Articular; Temporomandibular Joint; Humans; Temporomandibular Joint/physiology; Female; Male; Adult; Masticatory Muscles/physiology; Range of Motion, Articular/physiology; Young Adult; Movement/physiology; Vibration
13.
BMC Oral Health ; 24(1): 122, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263027

ABSTRACT

BACKGROUND: Since AI algorithms can analyze patient data, medical records, and imaging results to suggest treatment plans and predict outcomes, they have the potential to support pathologists and clinicians in the diagnosis and treatment of oral and maxillofacial pathologies, as they do in many other areas in which they are being used. The goal of the current study was to examine the trends being investigated in the area of oral and maxillofacial pathology where AI has possibly been involved in helping practitioners. METHODS: We started by defining the important terms in our investigation's subject matter. Relevant databases such as PubMed, Scopus, and Web of Science were then searched using keywords and synonyms for each concept, such as "machine learning," "diagnosis," "treatment planning," "image analysis," "predictive modelling," and "patient monitoring." Google Scholar was also used to find further papers and sources. RESULTS: The majority of the 9 selected studies concerned how AI can be utilized to diagnose malignant tumors of the oral cavity. AI was especially helpful in creating prediction models that aided pathologists and clinicians in foreseeing the development of oral and maxillofacial pathology in specific patients. Additionally, predictive models accurately identified patients at high risk of developing oral cancer, as well as the likelihood of the disease returning after treatment. CONCLUSIONS: In the field of oral and maxillofacial pathology, AI has the potential to enhance diagnostic precision, personalize care, and ultimately improve patient outcomes. The development and application of AI in healthcare, however, necessitates careful consideration of ethical, legal, and regulatory challenges. Additionally, because AI is still a relatively new technology, caution must be taken when applying it in this field.


Subject(s)
Algorithms; Artificial Intelligence; Humans; Image Processing, Computer-Assisted; Medical Records; Mouth/pathology; Face/pathology
14.
Trop Anim Health Prod ; 56(8): 262, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298007

ABSTRACT

The purpose of this study was to evaluate the performance of various prediction models in estimating the growth and morphological traits of pure Hair, Alpine × Hair F1 (AHF1), and Saanen × Hair F1 (SHF1) hybrid offspring at yearling age, by employing early body measurement records from birth to the 9th month combined with meteorological data, in an extensive natural pasture-based system. The study also included other factors such as sex, farm, doe and buck IDs, birth type, gestation length, and age of the doe at birth. For this purpose, seven different machine learning algorithms (linear regression, artificial neural network (ANN), support vector machines (SVM), decision tree, random forest, extreme gradient boosting (XGB) and ExtraTree) were applied to data from 1530 goat offspring in Türkiye. Early predictions of growth and morphological traits at yearling age, such as live weight (LW), body length (BL), wither height (WH), rump height (RH), rump width (RW), leg circumference (LC), shinbone girth (SG), chest width (CW), chest girth (CG) and chest depth (CD), were performed using birth date measurements only, and records up to month-3, month-6 and month-9. Satisfactory predictive performances were achieved once records through the 6th month were used. In extensive natural pasture-based systems, this approach may serve as an effective indirect selection method for breeders. Using month-9 records, the predictions improved further, with LW and BL predicted with the highest performance in terms of coefficient of determination (R2 score of 0.81 ± 0.00) by ExtraTree. As one of the machine learning models rarely applied in animal studies, we have shown the capacity of this algorithm. Overall, the current study offers the use of meteorological data combined with animal records in machine learning models as an alternative decision-making tool for goat farming.


Subject(s)
Goats; Machine Learning; Animals; Goats/growth & development; Goats/anatomy & histology; Female; Male; Neural Networks, Computer; Breeding
15.
Neuroimage ; 276: 120213, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37268097

ABSTRACT

Predictions of task-based functional magnetic resonance imaging (fMRI) from task-free resting-state (rs) fMRI have gained popularity over the past decade. This method holds great promise for studying individual variability in brain function without the need to perform highly demanding tasks. However, in order to be broadly used, prediction models must prove to generalize beyond the dataset they were trained on. In this work, we test the generalizability of prediction of task-fMRI from rs-fMRI across sites, MRI vendors and age groups. Moreover, we investigate the data requirements for successful prediction. We use the Human Connectome Project (HCP) dataset to explore how different combinations of training sample sizes and numbers of fMRI datapoints affect prediction success in various cognitive tasks. We then apply models trained on HCP data to predict brain activations in data from a different site, a different MRI vendor (Philips vs. Siemens scanners) and a different age group (children from the HCP-Development project). We demonstrate that, depending on the task, a training set of approximately 20 participants with 100 fMRI timepoints each yields the largest gain in model performance. Nevertheless, further increasing sample size and number of timepoints results in significantly improved predictions, until reaching approximately 450-600 training participants and 800-1000 timepoints. Overall, the number of fMRI timepoints influences prediction success more than the sample size. We further show that models trained on adequate amounts of data successfully generalize across sites, vendors and age groups and provide predictions that are both accurate and individual-specific. These findings suggest that large-scale publicly available datasets may be utilized to study brain function in smaller, unique samples.


Subject(s)
Connectome; Nervous System Physiological Phenomena; Child; Humans; Brain/diagnostic imaging; Brain/physiology; Connectome/methods; Magnetic Resonance Imaging/methods; Sample Size
16.
Eur J Neurosci ; 57(3): 490-510, 2023 02.
Article in English | MEDLINE | ID: mdl-36512321

ABSTRACT

Cognitive reserve supports cognitive function in the presence of pathology or atrophy. Functional neuroimaging may enable direct and accurate measurement of cognitive reserve which could have considerable clinical potential. The present study aimed to develop and validate a measure of cognitive reserve using task-based fMRI data that could then be applied to independent resting-state data. Connectome-based predictive modelling with leave-one-out cross-validation was applied to predict a residual measure of cognitive reserve using task-based functional connectivity from the Cognitive Reserve/Reference Ability Neural Network studies (n = 220, mean age = 51.91 years, SD = 17.04 years). This model generated summary measures of connectivity strength that accurately predicted a residual measure of cognitive reserve in unseen participants. The theoretical validity of these measures was established via a positive correlation with a socio-behavioural proxy of cognitive reserve (verbal intelligence) and a positive correlation with global cognition, independent of brain structure. This fitted model was then applied to external test data: resting-state functional connectivity data from The Irish Longitudinal Study on Ageing (TILDA, n = 294, mean age = 68.3 years, SD = 7.18 years). The network-strength predicted measures were not positively associated with a residual measure of cognitive reserve nor with measures of verbal intelligence and global cognition. The present study demonstrated that task-based functional connectivity data can be used to generate theoretically valid measures of cognitive reserve. Further work is needed to establish if, and how, measures of cognitive reserve derived from task-based functional connectivity can be applied to independent resting-state data.


Subject(s)
Cognitive Reserve; Connectome; Humans; Middle Aged; Aged; Connectome/methods; Longitudinal Studies; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Nerve Net/diagnostic imaging
17.
J Viral Hepat ; 30(9): 746-755, 2023 09.
Artículo en Inglés | MEDLINE | ID: mdl-37415492

ABSTRACT

Chronic hepatitis C (HCV) is a primary cause of hepatocellular carcinoma (HCC). Although antiviral treatment reduces risk of HCC, few studies quantify the impact of treatment on long-term risk in the era of direct-acting antivirals (DAA). Using data from the Chronic Hepatitis Cohort Study, we evaluated the impact of treatment type (DAA, interferon-based [IFN], or none) and outcome (sustained virological response [SVR] or treatment failure [TF]) on risk of HCC. We then developed and validated a predictive risk model. 17,186 HCV patients were followed until HCC, death or last follow-up. We used extended landmark modelling, with time-varying covariates, propensity score adjustment and generalized estimating equations with a link function for discrete time-to-event data. Death was considered a competing risk. We observed 586 HCC cases across 104,000 interval-years of follow-up. SVR from DAA or IFN-based treatment reduced risk of HCC (aHR 0.13, 95% CI 0.08-0.20, and aHR 0.45, 95% CI 0.31-0.65, respectively); DAA SVR reduced risk more than IFN SVR (aHR 0.29, 95% CI 0.17-0.48). Independent of treatment, cirrhosis was the strongest risk factor for HCC (aHR 3.94, 95% CI 3.17-4.89 vs. no cirrhosis). Other risk factors included male sex, White race and genotype 3. Our six-variable predictive model had 'excellent' accuracy (AUROC 0.94) in independent validation. Our novel landmark interval-based model identified HCC risk factors across antiviral treatment status and interactions with cirrhosis. This model demonstrated excellent predictive accuracy in a large, racially diverse cohort of patients and could be adapted for 'real world' HCC monitoring.


Subject(s)
Hepatocellular Carcinoma, Chronic Hepatitis C, Hepatitis C, Liver Neoplasms, Humans, Male, Hepatocellular Carcinoma/epidemiology, Hepatocellular Carcinoma/etiology, Hepatocellular Carcinoma/prevention & control, Antivirals/therapeutic use, Chronic Hepatitis C/complications, Chronic Hepatitis C/drug therapy, Liver Neoplasms/etiology, Liver Neoplasms/complications, Cohort Studies, Risk Assessment, Sustained Virologic Response, Liver Cirrhosis/complications, Hepatitis C/drug therapy
18.
Hum Reprod ; 38(10): 1918-1926, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37581894

ABSTRACT

STUDY QUESTION: Can machine learning predict the number of oocytes retrieved from controlled ovarian hyperstimulation (COH)? SUMMARY ANSWER: Three machine-learning models were successfully trained to predict the number of oocytes retrieved from COH. WHAT IS KNOWN ALREADY: A number of previous studies have identified and built predictive models on factors that influence the number of oocytes retrieved during COH. Many of these studies are, however, limited in that they consider only a small number of variables in isolation. STUDY DESIGN, SIZE, DURATION: This study was a retrospective analysis of a dataset of 11,286 cycles performed at a single centre in France between 2009 and 2020 with the aim of building a predictive model for the number of oocytes retrieved from ovarian stimulation. The analysis was carried out by a data analysis team external to the centre using the Substra framework. The Substra framework enabled the data analysis team to send computer code to run securely on the centre's on-premises server. In this way, a high level of data security was achieved as the data analysis team did not have direct access to the data, nor did the data leave the centre at any point during the study. PARTICIPANTS/MATERIALS, SETTING, METHODS: The Light Gradient Boosting Machine algorithm was used to produce three predictive models: one that directly predicted the number of oocytes retrieved and two that predicted which of a set of bins provided by two clinicians the number of oocytes retrieved fell into. The resulting models were evaluated on a held-out test set and compared to linear and logistic regression baselines. In addition, the models themselves were analysed to identify the parameters that had the biggest impact on their predictions. MAIN RESULTS AND THE ROLE OF CHANCE: On average, the model that directly predicted the number of oocytes retrieved deviated from the ground truth by 4.21 oocytes.
The model that predicted the first clinician's bins deviated by 0.73 bins whereas the model for the second clinician deviated by 0.62 bins. For all models, performance was best within the first and third quartiles of the target variable, with the model underpredicting extreme values of the target variable (no oocytes and large numbers of oocytes retrieved). Nevertheless, the erroneous predictions made for these extreme cases were still within the vicinity of the true value. Overall, all three models agreed on the importance of each feature, which was estimated using Shapley Additive Explanation (SHAP) values. The feature with the highest mean absolute SHAP value (and thus the highest importance) was the antral follicle count, followed by basal AMH and FSH. Of the other hormonal features, basal TSH, LH, and testosterone levels were similarly important and baseline LH was the least important. The treatment characteristic with the highest SHAP value was the initial dose of gonadotropins. LIMITATIONS, REASONS FOR CAUTION: The models produced in this study were trained on a cohort from a single centre. They should thus not be used in clinical practice until trained and evaluated on a larger cohort more representative of the general population. WIDER IMPLICATIONS OF FINDINGS: These predictive models for the number of oocytes retrieved from COH may be useful in clinical practice, assisting clinicians in optimizing COH protocols for individual patients. Our work also demonstrates the promise of using the Substra framework for allowing external researchers to provide clinically relevant insights on sensitive fertility data in a fully secure, trustworthy manner and opens a number of exciting avenues for accelerating future research. STUDY FUNDING/COMPETING INTEREST(S): This study was funded by the French Public Bank of Investment as part of the Healthchain Consortium. T.Fe., C.He., J.C., C.J., C.-A.P., and C.Hi. are employed by Apricity. C.Hi. has received consulting fees and honoraria from Vitrolife, Merck Serono, Ferring, Cooper Surgical, Dibimed, Apricity, and Fairtility and travel support from Fairtility and Vitrolife, participates on an advisory board for Merck Serono, was the founder and organizer of the AI Fertility conference, has stock in Aria Fertility, TMRW, Fairtility, Apricity, and IVF Professionals, and received free equipment from Planar in exchange for first user feedback. C.J. has received a grant from BPI. J.C. has also received a grant from BPI, is a member of the Merck AI advisory board, and is a board member of Labelia Labs. C.He. has a contract for medical writing of this manuscript by CHU Nantes and has received travel support from Apricity. A.R. has received honoraria from Ferring and Organon. T.Fe. has received a grant from BPI. TRIAL REGISTRATION NUMBER: N/A.


Subject(s)
Birth Rate, Ovarian Hyperstimulation Syndrome, Male, Female, Humans, Retrospective Studies, Treatment Outcome, Ovulation Induction/methods, Oocytes, In Vitro Fertilization/methods
19.
Exp Physiol ; 108(3): 465-479, 2023 03.
Article in English | MEDLINE | ID: mdl-36763088

ABSTRACT

NEW FINDINGS: What is the central question of this study? What is the predictive relationship between self-reported scales used to quantify perceptions of fatigue during exercise and gold-standard measures used to quantify the development of neuromuscular fatigue? What is the main finding and its importance? No scale was determined to be substantively more effective than another. However, the number of contractions performed was shown to be a better predictor of fatigue in the motor system than any of the subjective scales. ABSTRACT: The purpose of this study was to determine the relationship between transcranial magnetic stimulation (TMS) measures of performance fatigability and commonly used scales that quantify perceptions of fatigue during exercise. Twenty healthy participants (age 23 ± 3 years, 10 female) performed 10 submaximal isometric elbow flexions at 20% maximal voluntary contraction (MVC) for 2 min, separated by 45 s of rest. Biceps brachii muscle electromyography and elbow flexion torque responses to single-pulse TMS were obtained at the end of each contraction to assess central factors of performance fatigability. A rating of perceived exertion (RPE) scale, Omnibus Resistance scale, Likert scale, Rating of Fatigue scale and a visual analogue scale (VAS) were used to assess perceptions of fatigue at the end of each contraction. The RPE (root mean square error [RMSE] = 0.144) and Rating of Fatigue (RMSE = 0.145) scales were the best predictors of decline in MVC torque, whereas the Likert (RMSE = 0.266) and RPE (RMSE = 0.268) scales were the best predictors of electromyographic amplitude. Although the Likert (RMSE = 7.6) and Rating of Fatigue (RMSE = 7.6) scales were the best predictors of voluntary muscle activation of any scale, the number of contractions performed during the protocol was a better predictor (RMSE = 7.3). The ability of the scales to predict TMS measures of performance fatigability was, in general, similar. Interestingly, the number of contractions performed was a better predictor of TMS measures than the scales themselves.
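The comparison reported above reduces to fitting each candidate predictor (a perception scale, or simply the contraction count) against a fatigability outcome and comparing the resulting RMSEs. A toy sketch with invented group-mean values; the noise levels and slopes are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
contraction = np.arange(1, 11)                  # 10 contractions

# invented group means: normalized MVC torque falls with each contraction
mvc = 1.0 - 0.04 * contraction + rng.normal(0.0, 0.02, 10)
# invented Borg 6-20 RPE ratings rising with each contraction
rpe = 6.0 + 1.2 * contraction + rng.normal(0.0, 1.0, 10)

def rmse_of_linear_fit(x, y):
    """RMSE of predicting y from a straight-line fit on x."""
    coeffs = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

rmse_rpe = rmse_of_linear_fit(rpe, mvc)            # scale as predictor
rmse_count = rmse_of_linear_fit(contraction, mvc)  # contraction count as predictor
```

The predictor with the lower RMSE explains the outcome better; the study applied this logic across five scales and three TMS-derived outcomes.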


Subject(s)
Elbow Joint, Elbow, Humans, Female, Young Adult, Adult, Elbow/physiology, Muscle Fatigue/physiology, Isometric Contraction/physiology, Skeletal Muscle/physiology, Electromyography/methods, Muscle Contraction/physiology, Electric Stimulation/methods
20.
Malar J ; 22(1): 356, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37990242

ABSTRACT

BACKGROUND: Geostatistical analysis of health data is increasingly used to model spatial variation in malaria prevalence, burden, and other metrics. Traditional inference methods for geostatistical modelling are notoriously computationally intensive, motivating the development of newer, approximate methods for geostatistical analysis or, more broadly, computational modelling of spatial processes. The appeal of faster methods is particularly great as the size of the region and number of spatial locations being modelled increases. METHODS: This work presents an applied comparison of four proposed 'fast' computational methods for spatial modelling and the software provided to implement them-Integrated Nested Laplace Approximation (INLA), tree boosting with Gaussian processes and mixed effect models (GPBoost), Fixed Rank Kriging (FRK) and Spatial Random Forests (SpRF). The four methods are illustrated by estimating malaria prevalence on two different spatial scales-country and continent. The performance of the four methods is compared on these data in terms of accuracy, computation time, and ease of implementation. RESULTS: Two of these methods-SpRF and GPBoost-do not scale well as the data size increases, and so are likely to be infeasible for larger-scale analysis problems. The two remaining methods-INLA and FRK-do scale well computationally, however the resulting model fits are very sensitive to the user's modelling assumptions and parameter choices. The binomial observation distribution commonly used for disease prevalence mapping with INLA fails to account for small-scale overdispersion present in the malaria prevalence data, which can lead to poor predictions. Selection of an appropriate alternative such as the Beta-binomial distribution is required to produce a reliable model fit. 
The small-scale random effect term in FRK overcomes this pitfall, but FRK model estimates depend heavily on the user providing a sufficient number and appropriate configuration of basis functions. Unfortunately, the computation time for FRK increases rapidly with increasing basis resolution. CONCLUSIONS: INLA and FRK both enable scalable geostatistical modelling of malaria prevalence data. However, care must be taken with both methods to assess the fit of the model to the data and the plausibility of its predictions, in order to select appropriate model assumptions and parameters.
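The overdispersion pitfall the authors describe is easy to demonstrate: site-to-site variation in true prevalence inflates the variance of observed counts well beyond what a binomial model allows, which a beta-binomial captures. A sketch with hypothetical survey numbers (not the study's data):

```python
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(4)

# hypothetical survey: 200 sites, 100 people tested per site, ~20% prevalence,
# with site-level variation in the true prevalence (small-scale overdispersion)
n_sites, n_tested = 200, 100
site_p = rng.beta(4.0, 16.0, n_sites)        # mean 0.2, substantial spread
positives = rng.binomial(n_tested, site_p)

# a plain binomial model fixes the count variance at n*p*(1-p)...
p_hat = positives.mean() / n_tested
var_binom = n_tested * p_hat * (1 - p_hat)
# ...but the observed counts are far more variable
var_obs = positives.var()
# a beta-binomial with the matching shape parameters captures the excess
var_bb = betabinom(n_tested, 4.0, 16.0).var()
```

A binomial likelihood applied to such data understates uncertainty and distorts predictions, which is why the authors recommend checking for overdispersion and switching to a beta-binomial observation model when it is present.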


Subject(s)
Malaria, Statistical Models, Humans, Computer Simulation, Software, Spatial Analysis, Malaria/epidemiology, Bayes Theorem