Results 1 - 20 of 281
1.
Brief Bioinform ; 25(6)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39397426

ABSTRACT

The assessment of the allergenic potential of chemicals, crucial for ensuring public health safety, faces challenges in accuracy and raises ethical concerns due to reliance on animal testing. This paper presents a novel bioinformatic protocol designed to address the critical challenge of predicting immune responses to chemical sensitizers without the use of animal testing. The core innovation lies in the integration of advanced bioinformatics tools, including the Universal Immune System Simulator (UISS), an agent-based model of detailed immune system dynamics. By leveraging data from structural predictions and docking simulations, our approach provides a more accurate and ethical method for chemical safety evaluations, especially in distinguishing between skin and respiratory sensitizers. The approach follows a comprehensive eight-step process, beginning with the collection of chemical and protein data from databases such as PubChem and the Protein Data Bank. Following data acquisition, structural predictions are performed using tools such as AlphaFold to model proteins whose structures have not been previously elucidated. This structural information is then used in docking simulations, covering both ligand-protein and protein-protein interactions, to predict how chemical compounds may trigger immune responses. Taking the results from these earlier stages as input, including docking scores and potential epitope identifications, UISS forecasts the type and severity of immune responses, distinguishing between Th1-mediated skin and Th2-mediated respiratory allergic reactions. This ability to predict distinct immune pathways is a crucial advance over current methods, which often cannot differentiate between sensitization mechanisms.
To validate the accuracy and robustness of our approach, we applied the protocol to well-known sensitizers: 2,4-dinitrochlorobenzene for skin allergies and trimellitic anhydride for respiratory allergies. The results clearly demonstrate the protocol's ability to differentiate between these distinct immune responses, supporting its potential to replace traditional animal-based testing methods and highlighting its role in enhancing the understanding of chemical-induced immune reactions. Through this integration of computational biology and immunological modelling, our protocol offers a transformative approach to toxicological evaluations, increasing the reliability of safety assessments.
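The eight-step workflow described above can be sketched as a pipeline of stages. This is a minimal illustrative scaffold only: every function body below is a stub standing in for a real service (PubChem/PDB retrieval, AlphaFold, docking software, the UISS simulator), and the threshold mapping docking scores to Th1 versus Th2 outcomes is an invented placeholder, not the protocol's actual logic.

```python
def fetch_chemical_and_protein_data(chemical_id):
    # Steps 1-2 stub: would query PubChem / Protein Data Bank.
    return {"chemical": chemical_id, "proteins": ["carrier_protein"]}

def predict_structures(data):
    # Step 3 stub: would run AlphaFold for unresolved structures.
    data["structures"] = {p: "predicted_model" for p in data["proteins"]}
    return data

def run_docking(data):
    # Steps 4-5 stub: would run ligand-protein / protein-protein docking.
    data["docking_scores"] = {p: -7.5 for p in data["proteins"]}  # invented score
    return data

def simulate_immune_response(data):
    # Final stub standing in for UISS: maps docking output to a
    # Th1 (skin) vs Th2 (respiratory) call via an invented threshold.
    best = min(data["docking_scores"].values())
    data["predicted_pathway"] = "Th1_skin" if best < -7.0 else "Th2_respiratory"
    return data

def sensitiser_pipeline(chemical_id):
    data = fetch_chemical_and_protein_data(chemical_id)
    return simulate_immune_response(run_docking(predict_structures(data)))
```

The value of such a scaffold is that each stage can be swapped for a real tool without changing the pipeline's shape.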


Subjects
Allergens, Computational Biology, Computational Biology/methods, Humans, Allergens/chemistry, Allergens/immunology, Molecular Docking Simulation, Respiratory Hypersensitivity/chemically induced, Respiratory Hypersensitivity/immunology, Skin/drug effects, Skin/immunology, Hypersensitivity, Animals
2.
Brief Bioinform ; 23(2)2022 03 10.
Article in English | MEDLINE | ID: mdl-34981111

ABSTRACT

Large metabolomics datasets inevitably contain unwanted technical variation which can obscure meaningful biological signals and affect how this information is applied to personalized healthcare. Many methods have been developed to handle unwanted variation, but the underlying assumptions of many existing methods only hold for a few specific scenarios. Some tools remove technical variation with models trained on quality control (QC) samples, which may not generalize well to subject samples. Additionally, almost none of the existing methods support datasets with multiple types of QC samples, which greatly limits their performance and flexibility. To address these issues, a non-parametric method, TIGER (Technical variation elImination with ensemble learninG architEctuRe), is developed in this study and released as an R package (https://CRAN.R-project.org/package=TIGERr). TIGER integrates the random forest algorithm into an adaptable ensemble learning architecture. Evaluation results show that TIGER outperforms four popular methods with respect to robustness and reliability on three human cohort datasets constructed with targeted or untargeted metabolomics data. Additionally, a case study aiming to identify age-associated metabolites illustrates how TIGER can be used for cross-kit adjustment in a longitudinal analysis with experimental data from three time points generated by different analytical kits. A dynamic website is available to help evaluate the performance of TIGER and examine the patterns revealed in our longitudinal analysis (https://han-siyu.github.io/TIGER_web/). Overall, TIGER is expected to be a powerful tool for metabolomics data analysis.
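The core idea of QC-based removal of technical variation can be illustrated with a deliberately simple sketch: learn a per-batch correction from QC samples and apply it to all samples in that batch. This naive scaling is only a stand-in for the general approach; TIGER itself uses a random-forest ensemble, not this rule.

```python
def qc_batch_correct(values, batches, is_qc):
    """Scale each batch so its QC-sample mean matches the global QC mean."""
    qc_all = [v for v, q in zip(values, is_qc) if q]
    target = sum(qc_all) / len(qc_all)
    corrected = list(values)
    for batch in set(batches):
        idx = [i for i, b in enumerate(batches) if b == batch]
        qc_batch = [values[i] for i in idx if is_qc[i]]
        factor = target / (sum(qc_batch) / len(qc_batch))
        for i in idx:
            corrected[i] = values[i] * factor
    return corrected

# Two batches with a two-fold intensity drift; QC samples are the anchors.
print(qc_batch_correct([10, 12, 20, 24], [1, 1, 2, 2],
                       [True, False, True, False]))  # → [15.0, 18.0, 15.0, 18.0]
```

After correction the two batches' QC samples agree, and subject samples are adjusted by the same factor, which is the generalization risk the abstract warns about.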


Subjects
Algorithms, Metabolomics, Humans, Machine Learning, Metabolomics/methods, Reproducibility of Results, Research Design
3.
Brief Bioinform ; 23(6)2022 11 19.
Article in English | MEDLINE | ID: mdl-36155620

ABSTRACT

Understanding ncRNA-protein interaction is of critical importance to unveil ncRNAs' functions. Here, we propose an integrated package LION which comprises a new method for predicting ncRNA/lncRNA-protein interaction as well as a comprehensive strategy to meet the requirement of customisable prediction. Experimental results demonstrate that our method outperforms its competitors on multiple benchmark datasets. LION can also improve the performance of some widely used tools and build adaptable models for species- and tissue-specific prediction. We expect that LION will be a powerful and efficient tool for the prediction and analysis of ncRNA/lncRNA-protein interaction. The R Package LION is available on GitHub at https://github.com/HAN-Siyu/LION/.


Subjects
RNA, Long Noncoding, RNA, Untranslated/genetics
4.
Brain Behav Immun ; 115: 470-479, 2024 01.
Article in English | MEDLINE | ID: mdl-37972877

ABSTRACT

Artificial intelligence (AI) is often used to describe the automation of complex tasks to which we would attribute intelligence. Machine learning (ML) is commonly understood as the set of methods used to develop an AI. Both have seen a recent boom in usage, in both scientific and commercial fields. For the scientific community, ML can resolve bottlenecks created by complex, multi-dimensional data generated, for example, by functional brain imaging or *omics approaches, identifying patterns that could not have been found using traditional statistical approaches. However, ML comes with serious limitations that need to be kept in mind: the tendency of ML models to optimise solutions for the input data means it is crucially important to externally validate any findings before considering them more than a hypothesis. Their black-box nature implies that their decisions usually cannot be understood, which renders their use in medical decision making problematic and can lead to ethical issues. Here, we present an introduction to the field of ML/AI for the curious. We explain the principles of commonly used methods as well as recent methodological advancements, before discussing risks and what we see as future directions of the field. Finally, we show practical examples from neuroscience to illustrate the use and limitations of ML.
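The warning about optimising for the input data can be made concrete with a toy experiment: a model that simply memorises its training set (1-nearest-neighbour) looks perfect in-sample yet degrades on held-out data. All data here are synthetic.

```python
import random

def one_nn_predict(train, x):
    # "Model" that memorises the training set: label of the closest point.
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(train, data):
    return sum(one_nn_predict(train, x) == y for x, y in data) / len(data)

def make_noisy_data(n, rng):
    # Feature > 0.5 usually means class 1, but 30% of labels are flipped.
    data = []
    for _ in range(n):
        x = rng.random()
        flipped = rng.random() < 0.3
        data.append((x, int(x > 0.5) != flipped))
    return data

rng = random.Random(0)
train = make_noisy_data(100, rng)
test = make_noisy_data(100, rng)
# In-sample accuracy is perfect by construction; held-out accuracy is not.
```

The gap between the two accuracies is exactly what external validation is meant to expose.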


Subjects
Artificial Intelligence, Machine Learning
5.
Diabetes Obes Metab ; 26(7): 2722-2731, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38618987

ABSTRACT

AIM: Hypertension and diabetes mellitus (DM) are major causes of morbidity and mortality, with growing burdens in low-income countries where they are underdiagnosed and undertreated. Advances in machine learning may provide opportunities to enhance diagnostics in settings with limited medical infrastructure. MATERIALS AND METHODS: A non-interventional study was conducted to develop and validate a machine learning algorithm to estimate cardiovascular clinical and laboratory parameters. At two sites in Kenya, digital retinal fundus photographs were collected alongside blood pressure (BP), laboratory measures and medical history. The performance of machine learning models, originally trained using data from the UK Biobank, was evaluated for the ability to estimate BP, glycated haemoglobin, estimated glomerular filtration rate and diagnoses from fundus images. RESULTS: In total, 301 participants were enrolled. Compared with the UK Biobank population used for algorithm development, participants from Kenya were younger, were more likely to report Black/African ethnicity, and had a higher body mass index and prevalence of DM and hypertension. The mean absolute error was comparable or slightly greater for systolic BP, diastolic BP, glycated haemoglobin and estimated glomerular filtration rate. The model trained to identify DM had an area under the receiver operating curve of 0.762 (0.818 in the UK Biobank) and the hypertension model had an area under the receiver operating curve of 0.765 (0.738 in the UK Biobank). CONCLUSIONS: In a Kenyan population, machine learning models estimated cardiovascular parameters with comparable or slightly lower accuracy than in the population where they were trained, suggesting model recalibration may be appropriate. This study represents an incremental step toward leveraging machine learning to make early cardiovascular screening more accessible, particularly in resource-limited settings.
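The area under the receiver operating curve reported above can be computed directly from its rank-based (Mann-Whitney) definition; a minimal sketch, with made-up labels and scores:

```python
def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUROC of 0.762 therefore means the model ranks a randomly chosen case above a randomly chosen control about 76% of the time.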


Subjects
Cardiovascular Diseases, Deep Learning, Heart Disease Risk Factors, Humans, Kenya/epidemiology, Male, Female, Middle Aged, Prospective Studies, Adult, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/diagnosis, Cardiovascular Diseases/etiology, Hypertension/epidemiology, Hypertension/complications, Hypertension/diagnosis, Algorithms, Photography, Fundus Oculi, Aged, Diabetes Mellitus/epidemiology, Risk Factors, Diabetic Retinopathy/epidemiology, Diabetic Retinopathy/diagnosis
6.
Pharm Res ; 41(5): 833-837, 2024 May.
Article in English | MEDLINE | ID: mdl-38698195

ABSTRACT

Currently, the lengthy time needed to bring new drugs to market or to implement postapproval changes causes multiple problems, such as delaying patients' access to new lifesaving or life-enhancing medications and slowing the response to emergencies that require new treatments. However, new technologies are available that can help solve these problems. The January 2023 NIPTE pathfinding workshop on accelerating drug product development and approval included a session in which participants considered the current state of product formulation and process development, barriers to accelerating the development timeline, and opportunities for overcoming these barriers using new technologies. The authors participated in this workshop, and in this article share their perspective on some of the ways forward, including advanced manufacturing techniques and adaptive development. In addition, there is a need for paradigm shifts in regulatory processes, increased pre-competitive collaboration, and a shared strategy among regulators, industry, and academia.


Subjects
Drug Approval, Humans, Drug Development/methods, Drug Industry/methods, Technology, Pharmaceutical/methods, Pharmaceutical Preparations/chemistry, Chemistry, Pharmaceutical/methods, Drug Compounding/methods
7.
Article in English | MEDLINE | ID: mdl-39428098

ABSTRACT

Artificial Intelligence (AI) is reshaping allergy and immunology by integrating cutting-edge technology to enhance patient outcomes and redefine clinical practices and research. This review examines AI's evolving role, emphasizing its impact on diagnostic accuracy, personalized treatments, and innovative research methodologies. AI has advanced diagnostic tools, such as models predicting allergen sensitivity, and enhanced immunotherapy strategies. Its ability to process extensive datasets has enabled a deeper understanding of allergic diseases and immune system responses, leading to more accurate, effective and tailored treatments. Furthermore, AI is facilitating personalized care through AI-driven allergen mapping, automated patient monitoring, and targeted immunotherapy. The integration of AI into clinical practice promises a future where allergy and immunology are characterized by precisely customized healthcare solutions. This review adheres to the PRISMA flowchart, with a comprehensive analysis of databases including Scopus, Web of Science, PubMed, and preprint platforms, using keywords related to AI and allergy and immunology. From an initial pool of 192 studies, 20 documents were selected based on inclusion criteria. Our findings highlight how AI is transforming allergy and immunology by enhancing patient care, research methodologies, and clinical innovation, offering a glimpse into the near future of technology-driven healthcare in these fields.

8.
Br J Anaesth ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39322472

ABSTRACT

BACKGROUND: We lack evidence on the cumulative effectiveness of machine learning (ML)-driven interventions in perioperative settings. Therefore, we conducted a systematic review to appraise the evidence on the impact of ML-driven interventions on perioperative outcomes. METHODS: Ovid MEDLINE, CINAHL, Embase, Scopus, PubMed, and ClinicalTrials.gov were searched to identify randomised controlled trials (RCTs) evaluating the effectiveness of ML-driven interventions in surgical inpatient populations. The review was registered with PROSPERO (CRD42023433163) and conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Meta-analysis was conducted for outcomes with two or more studies using a random-effects model, and vote counting was conducted for other outcomes. RESULTS: Among 13 included RCTs, three types of ML-driven interventions were evaluated: Hypotension Prediction Index (HPI) (n=5), Nociception Level Index (NoL) (n=7), and a scheduling system (n=1). Compared with standard care, HPI led to a significant decrease in absolute hypotension (n=421, P=0.003, I2=75%) and relative hypotension (n=208, P<0.0001, I2=0%); NoL led to significantly lower mean pain scores in the post-anaesthesia care unit (PACU) (n=191, P=0.004, I2=19%). NoL showed no significant impact on intraoperative opioid consumption (n=339, P=0.31, I2=92%) or PACU opioid consumption (n=339, P=0.11, I2=0%). No significant difference in hospital length of stay (n=361, P=0.81, I2=0%) or PACU stay (n=267, P=0.44, I2=0%) was found between HPI and NoL. CONCLUSIONS: HPI decreased the duration of intraoperative hypotension, and NoL decreased postoperative pain scores, but no significant impact on other clinical outcomes was found. We highlight the need to address both methodological and clinical practice gaps to ensure the successful future implementation of ML-driven interventions. SYSTEMATIC REVIEW PROTOCOL: CRD42023433163 (PROSPERO).
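The random-effects pooling used for outcomes with two or more studies can be sketched with the DerSimonian-Laird estimator; the effect sizes and variances below are illustrative, not the review's data:

```python
def random_effects_pool(effects, variances):
    # DerSimonian-Laird random-effects meta-analysis.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0  # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0  # heterogeneity, %
    return pooled, i2
```

The returned I2 is the same heterogeneity statistic quoted throughout the abstract's results.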

9.
Br J Anaesth ; 133(3): 476-478, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38902116

ABSTRACT

The increased availability of large clinical datasets together with increasingly sophisticated computing power has facilitated development of numerous risk prediction models for various adverse perioperative outcomes, including acute kidney injury (AKI). The rationale for developing such models is straightforward. However, despite numerous purported benefits, the uptake of preoperative prediction models into clinical practice has been limited. Barriers to implementation of predictive models, including limitations in their discrimination and accuracy, as well as their ability to meaningfully impact clinical practice and patient outcomes, are increasingly recognised. Some of the purported benefits of predictive modelling, particularly when applied to postoperative AKI, might not fare well under detailed scrutiny. Future research should address existing limitations and seek to demonstrate both benefit to patients and value to healthcare systems from implementation of these models in clinical practice.


Subjects
Acute Kidney Injury, Big Data, Postoperative Complications, Humans, Acute Kidney Injury/diagnosis, Acute Kidney Injury/epidemiology, Postoperative Complications/epidemiology, Postoperative Complications/diagnosis, Risk Assessment/methods, Models, Statistical, Predictive Value of Tests
10.
Environ Res ; 245: 117979, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38142727

ABSTRACT

Mycotoxins are toxic fungal metabolites that may occur in crops. Mycotoxins may carry over into bovine milk if bovines ingest mycotoxin-contaminated feed. Due to climate change, there may be a potential increase in the prevalence and concentration of mycotoxins in crops. However, the toxicity to humans and the carry-over rate of mycotoxins from feed to bovine milk vary considerably. This research aimed to rank emerging and existing mycotoxins under different climate change scenarios based on their occurrence in milk and their toxicity to humans. The quantitative risk ranking took a probabilistic approach, using Monte-Carlo simulation to take account of input uncertainties and variabilities. Mycotoxins were ranked based on their hazard quotient, calculated using estimated daily intake and tolerable daily intake values. Four climate change scenarios were assessed, including an Irish baseline model in addition to best-case, worst-case, and most-likely scenarios, corresponding to equivalent Intergovernmental Panel on Climate Change (IPCC) scenarios. This research prioritised aflatoxin B1, zearalenone, and T-2 and HT-2 toxin as potential human health hazards for adults and children, compared with other mycotoxins, under all scenarios. Relatively lower risks were found to be associated with mycophenolic acid, enniatins, and deoxynivalenol. Overall, the carry-over rate of mycotoxins, milk consumption, and the concentration of mycotoxins in silage, maize, and wheat were found to be the most sensitive parameters (positively correlated) of this probabilistic model. Though climate change may impact mycotoxin prevalence and concentration in crops, the carry-over rate affects the final concentration of mycotoxin in milk to a greater extent. The results obtained in this study facilitate the identification of risk reduction measures to limit mycotoxin contamination of dairy products, considering potential climate change influences.
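The probabilistic hazard-quotient ranking can be sketched with a small Monte-Carlo simulation: HQ = estimated daily intake / tolerable daily intake, with uncertain inputs drawn from ranges. Every parameter value below is invented for illustration and does not come from the study.

```python
import random

def mean_hazard_quotient(conc_range, intake_range, body_weight_kg,
                         tdi_ug_per_kg, n_iter=5000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        conc = rng.uniform(*conc_range)        # ug toxin per L of milk
        intake = rng.uniform(*intake_range)    # L of milk per day
        edi = conc * intake / body_weight_kg   # estimated daily intake, ug/kg bw/day
        total += edi / tdi_ug_per_kg           # hazard quotient for this draw
    return total / n_iter

# Two hypothetical toxins: A has a higher concentration and a lower TDI.
hq_a = mean_hazard_quotient((1.0, 2.0), (0.2, 0.5), 70.0, 0.1)
hq_b = mean_hazard_quotient((0.1, 0.2), (0.2, 0.5), 70.0, 1.0)
```

Ranking toxins by these mean HQ values reproduces the shape of the study's prioritisation step; a full analysis would also propagate carry-over rates and scenario-specific distributions.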


Subjects
Mycotoxins, Child, Humans, Animals, Mycotoxins/toxicity, Mycotoxins/analysis, Milk/chemistry, Climate Change, Animal Feed/analysis, Food Contamination/analysis, Crops, Agricultural
11.
Transfus Med ; 34(5): 333-343, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39113629

ABSTRACT

Artificial intelligence (AI) uses sophisticated algorithms to "learn" from large volumes of data. This could be used to optimise recruitment of blood donors through predictive modelling of future blood supply, based on previous donations and transfusion demand. We sought to assess the utilisation of predictive modelling and AI by blood establishments (BEs), and conducted predictive modelling to illustrate its use. A survey of BE data modelling and AI was disseminated to International Society of Blood Transfusion (ISBT) members. Additional anonymized data were obtained from Italy, Singapore and the United States (US) to build predictive models for each region, using January 2018 through August 2019 data to determine the likelihood of donation within a prescribed number of months. Donations were from March 2020 to June 2021. Ninety ISBT members responded to the survey. Predictive modelling was used by 33 (36.7%) respondents and 12 (13.3%) reported AI use. Forty-four (48.9%) indicated their institutions utilise neither predictive modelling nor AI to predict transfusion demand or optimise donor recruitment. In the predictive modelling case study involving three sites, the most important variable for predicting donor return was the number of previous donations for Italy and the US, and donation frequency for Singapore. Donation rates declined in each region during COVID-19. Throughout the observation period the predictive model was able to consistently identify those individuals who were most likely to return to donate blood. The majority of BEs do not use predictive modelling or AI. The effectiveness of a predictive model in determining the likelihood of donor return was validated; implementation of this method could prove useful for BE operations.
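The kind of donor-return model described, with number of previous donations as the key predictor, can be sketched as a one-variable logistic regression fitted by plain gradient ascent. The training data below are fabricated for illustration, and real donor models would use many more predictors.

```python
import math

def fit_logistic(xs, ys, lr=0.05, epochs=3000):
    # Stochastic gradient ascent on the log-likelihood of p = sigmoid(w*x + b).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def prob_return(w, b, n_previous):
    return 1.0 / (1.0 + math.exp(-(w * n_previous + b)))

# Toy data: donors with more previous donations tended to return (1).
xs = [0, 1, 1, 2, 3, 4, 5, 6, 8, 10]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
w, b = fit_logistic(xs, ys)
```

Ranking donors by `prob_return` is the operational use the abstract describes: contacting those most likely to come back.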


Subjects
Blood Donation, Blood Donors, COVID-19, Pandemics, Female, Humans, Male, Artificial Intelligence, COVID-19/epidemiology, COVID-19/prevention & control, Donor Selection, Italy/epidemiology, SARS-CoV-2, Singapore/epidemiology, Surveys and Questionnaires, United States, Blood Donation/statistics & numerical data
12.
Anaesthesia ; 79(4): 389-398, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38369686

ABSTRACT

Complications are common following major surgery and are associated with increased use of healthcare resources, disability and mortality. Continued reliance on mortality estimates risks harming patients and health systems, but existing tools for predicting complications are unwieldy and inaccurate. We aimed to systematically construct an accurate pre-operative model for predicting major postoperative complications; compare its performance against existing tools; and identify sources of inaccuracy in predictive models more generally. Complete patient records from the UK Peri-operative Quality Improvement Programme dataset were analysed. Major complications were defined as Clavien-Dindo grade ≥ 2 for novel models. In a 75% train:25% test split cohort, we developed a pipeline of increasingly complex models, prioritising pre-operative predictors using the Least Absolute Shrinkage and Selection Operator (LASSO). We defined the best model in the training cohort by the lowest Akaike information criterion, balancing accuracy and simplicity. Of the 24,983 included cases, 6389 (25.6%) patients developed major complications. Potentially modifiable risk factors (pain, reduced mobility and smoking) were retained. The best-performing model was highly complex, specifying individual hospital complication rates and 11 patient covariates. This novel model showed substantially superior performance over generic and specific prediction models and scores. We have developed a novel complications model with good internal accuracy, re-prioritised predictor variables and identified hospital-level variation as an important, but overlooked, source of inaccuracy in existing tools. The complexity of the best-performing model does, however, highlight the need for a step-change in clinical risk prediction to automate the delivery of informative risk estimates in clinical systems.
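The "lowest AIC wins" selection step can be sketched for Gaussian models, where AIC = n ln(RSS/n) + 2k penalises fit by parameter count. The two candidate models (intercept-only versus intercept-plus-slope) and the data are illustrative, not the study's pipeline.

```python
import math

def aic(rss, n, k):
    # Gaussian log-likelihood up to a constant, plus the complexity penalty 2k.
    return n * math.log(rss / n) + 2 * k

def slope_model_rss(xs, ys):
    # Ordinary least squares for y = a + b*x; returns residual sum of squares.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]     # roughly y = 2x
null_rss = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
aic_null = aic(null_rss, len(ys), k=1)                   # intercept only
aic_slope = aic(slope_model_rss(xs, ys), len(ys), k=2)   # intercept + slope
```

Here the slope model earns its extra parameter, so its AIC is lower; in the study the same criterion arbitrates between far larger candidate models.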


Subjects
Postoperative Complications, Quality Improvement, Humans, Postoperative Complications/etiology, Risk Factors, Smoking, Pain
13.
Br J Clin Psychol ; 63(2): 137-155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38111213

ABSTRACT

OBJECTIVE: Previous research on psychotherapy treatment response has mainly focused on outpatients or clinical trial data, which may have low ecological validity for naturalistic inpatient samples. To reduce treatment failures by proactively screening for patients at risk of low treatment response, to gain more knowledge about risk factors, and to evaluate treatments, accurate insights about predictors of treatment response in naturalistic inpatient samples are needed. METHODS: We compared the performance of different machine learning algorithms in predicting treatment response, operationalized as a substantial reduction in symptom severity as expressed in the Patient Health Questionnaire Anxiety and Depression Scale. To achieve this goal, we used different sets of variables, (a) demographics, (b) physical indicators, (c) psychological indicators and (d) treatment-related variables, in a naturalistic inpatient sample (N = 723) to specify their joint and unique contribution to treatment success. RESULTS: There was a strong link between symptom severity at baseline and post-treatment (R2 = .32). When using all available variables, both machine learning algorithms outperformed the linear regressions and led to an increment in predictive performance of R2 = .12. Treatment-related variables were the most predictive, followed by psychological indicators. Physical indicators and demographics were negligible. CONCLUSIONS: Treatment response in naturalistic inpatient settings can be predicted to a considerable degree by using baseline indicators. Regularization via machine learning algorithms, rather than the inclusion of nonlinear and interaction effects, leads to higher predictive performance. Heterogeneous aspects of mental health have incremental predictive value and should be considered as prognostic markers when modelling treatment processes.


Subjects
Machine Learning, Humans, Male, Female, Adult, Middle Aged, Psychotherapy/methods, Treatment Outcome, Outcome Assessment, Health Care/statistics & numerical data, Aged, Inpatients/psychology, Severity of Illness Index, Young Adult, Pre-Registration Publication
14.
Article in English | MEDLINE | ID: mdl-39462894

ABSTRACT

Artificial intelligence (AI) applications are complex and rapidly evolving, and thus often poorly understood, but have potentially profound implications for public health. We offer a primer for public health professionals that explains some of the key concepts involved and examines how these applications might be used in the response to a future pandemic. They include early outbreak detection, predictive modelling, healthcare management, risk communication, and health surveillance. Artificial intelligence applications, especially predictive algorithms, have the ability to anticipate outbreaks by integrating diverse datasets such as social media, meteorological data, and mobile phone movement data. Artificial intelligence-powered tools can also optimise healthcare delivery by managing the allocation of resources and reducing healthcare workers' exposure to risks. In resource distribution, they can anticipate demand and optimise logistics, while AI-driven robots can minimise physical contact in healthcare settings. Artificial intelligence also shows promise in supporting public health decision-making by simulating the social and economic impacts of different policy interventions. These simulations help policymakers evaluate complex scenarios such as lockdowns and resource allocation. Additionally, it can enhance public health messaging, with AI-generated health communications shown to be more effective than human-generated messages in some cases. However, there are risks, such as privacy concerns, biases in models, and the potential for 'false confirmations', where AI reinforces incorrect decisions. Despite these challenges, we argue that AI will become increasingly important in public health crises, but only if integrated thoughtfully into existing systems and processes.

15.
J Oral Rehabil ; 51(9): 1770-1777, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38840513

ABSTRACT

BACKGROUND: A quantitative approach to predict expected muscle activity and mandibular movement from non-invasive hard tissue assessments remains unexplored. OBJECTIVES: This study investigated the predictive potential of normalised muscle activity during various jaw movements combined with temporomandibular joint (TMJ) vibration analyses to predict expected maximum lateral deviation during mouth opening. METHOD: Sixty-six participants underwent electrognathography (EGN), surface electromyography (EMG) and joint vibration analyses (JVA). They performed maximum mouth opening, lateral excursion and anterior protrusion as jaw movement activities in a single session. Multiple predictive models were trained from synthetic observations generated from the 66 human observations. Muscle function intensity and activity duration were normalised and a decision support system with branching logic was developed to predict lateral deviation. Performance of the models in predicting temporalis, masseter and digastric muscle activity from hard tissue data was evaluated through root mean squared error (RMSE) and mean absolute error (MAE). RESULTS: Temporalis muscle intensity ranged from 0.135 ± 0.056, masseter from 0.111 ± 0.053 and digastric from 0.120 ± 0.051. Muscle activity duration varied with temporalis at 112.23 ± 126.81 ms, masseter at 101.02 ± 121.34 ms and digastric at 168.13 ± 222.82 ms. XGBoost predicted muscle intensity and activity duration and scored an RMSE of 0.03-0.05. Jaw deviations were successfully predicted with an MAE of 0.9 mm. CONCLUSION: Applying deep learning to EGN, EMG and JVA data can establish a quantifiable relationship between muscles and hard tissue movement within the TMJ complex and can predict jaw deviations.
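The two error metrics used in the evaluation, root mean squared error (RMSE) and mean absolute error (MAE), spelled out from their definitions; the values in the examples are arbitrary, not the study's measurements:

```python
import math

def rmse(y_true, y_pred):
    # Square root of the mean squared residual: penalises large errors more.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute residual: in the same units as the measurement (e.g. mm).
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

An MAE of 0.9 mm therefore reads directly as "predictions were off by about 0.9 mm on average".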


Subjects
Electromyography, Masticatory Muscles, Range of Motion, Articular, Temporomandibular Joint, Humans, Temporomandibular Joint/physiology, Female, Male, Adult, Masticatory Muscles/physiology, Range of Motion, Articular/physiology, Young Adult, Movement/physiology, Vibration
16.
BMC Oral Health ; 24(1): 122, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263027

ABSTRACT

BACKGROUND: Since AI algorithms can analyze patient data, medical records, and imaging results to suggest treatment plans and predict outcomes, they have the potential to support pathologists and clinicians in the diagnosis and treatment of oral and maxillofacial pathologies, as in many other areas in which they are being used. The goal of the current study was to examine the trends being investigated in the area of oral and maxillofacial pathology in which AI may help practitioners. METHODS: We started by defining the important terms in our investigation's subject matter. Following that, relevant databases such as PubMed, Scopus, and Web of Science were searched using keywords and synonyms for each concept, such as "machine learning," "diagnosis," "treatment planning," "image analysis," "predictive modelling," and "patient monitoring." Google Scholar was also used to find additional papers and sources. RESULTS: The majority of the 9 selected studies concerned how AI can be utilized to diagnose malignant tumors of the oral cavity. AI was especially helpful in creating prediction models that aided pathologists and clinicians in foreseeing the development of oral and maxillofacial pathology in specific patients. Additionally, predictive models accurately identified patients at high risk of developing oral cancer, as well as the likelihood of the disease returning after treatment. CONCLUSIONS: In the field of oral and maxillofacial pathology, AI has the potential to enhance diagnostic precision, personalize care, and ultimately improve patient outcomes. The development and application of AI in healthcare, however, necessitates careful consideration of ethical, legal, and regulatory challenges. Additionally, because AI is still a relatively new technology, caution must be taken when applying it to this field.


Subjects
Algorithms, Artificial Intelligence, Humans, Image Processing, Computer-Assisted, Medical Records, Mouth/pathology, Face/pathology
17.
Trop Anim Health Prod ; 56(8): 262, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298007

ABSTRACT

The purpose of this study was to evaluate the performance of various prediction models in estimating the growth and morphological traits of pure Hair, Alpine × Hair F1 (AHF1), and Saanen × Hair F1 (SHF1) hybrid offspring at yearling age, employing early body measurement records from birth to month 9 combined with meteorological data, in an extensive natural pasture-based system. The study also included other factors such as sex, farm, doe and buck IDs, birth type, gestation length, and age of the doe at birth. For this purpose, seven different machine learning algorithms (linear regression, artificial neural network (ANN), support vector machines (SVM), decision tree, random forest, extreme gradient boosting (XGB) and ExtraTree) were applied to data from 1530 goat offspring in Türkiye. Early predictions of growth and morphological traits at yearling age, namely live weight (LW), body length (BL), wither height (WH), rump height (RH), rump width (RW), leg circumference (LC), shinbone girth (SG), chest width (CW), chest girth (CG) and chest depth (CD), were performed using birth measurements only and using records up to month 3, month 6 and month 9. Satisfactory predictive performance was achieved once records through month 6 were included. In extensive natural pasture-based systems, this approach may serve as an effective indirect selection method for breeders. With month-9 records the predictions improved further, and LW and BL were predicted with the highest performance in terms of coefficient of determination (R2 score of 0.81 ± 0.00) by ExtraTree, demonstrating the capacity of an algorithm rarely applied in animal studies. Overall, the current study shows that meteorological data combined with animal records and machine learning models can serve as an alternative decision-making tool for goat farming.
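A pipeline of this kind can be sketched in a few lines with scikit-learn's ExtraTrees ensemble. The sketch below uses purely synthetic data, not the study's dataset: the feature matrix stands in for early body measurements plus meteorological covariates, the target for yearling live weight, and all dimensions and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1530  # cohort size reported in the study
# Hypothetical predictors: early body measurements and weather covariates
X = rng.normal(size=(n, 12))
# Synthetic yearling live weight, driven by a few of the early records plus noise
y = 30 + X[:, :4] @ np.array([3.0, 2.0, 1.5, 1.0]) + rng.normal(scale=1.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))  # coefficient of determination on held-out animals
print(round(r2, 2))
```

In practice the reported R2 would come from the real records; here the score only confirms that the ensemble recovers the synthetic signal.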


Subjects
Goats , Machine Learning , Animals , Goats/growth & development , Goats/anatomy & histology , Female , Male , Neural Networks, Computer , Breeding
18.
Neuroimage ; 276: 120213, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37268097

ABSTRACT

Predictions of task-based functional magnetic resonance imaging (fMRI) from task-free resting-state (rs) fMRI have gained popularity over the past decade. This method holds great promise for studying individual variability in brain function without the need to perform highly demanding tasks. However, in order to be broadly used, prediction models must prove to generalize beyond the dataset they were trained on. In this work, we test the generalizability of prediction of task-fMRI from rs-fMRI across sites, MRI vendors and age groups. Moreover, we investigate the data requirements for successful prediction. We use the Human Connectome Project (HCP) dataset to explore how different combinations of training sample sizes and number of fMRI datapoints affect prediction success in various cognitive tasks. We then apply models trained on HCP data to predict brain activations in data from a different site, a different MRI vendor (Philips vs. Siemens scanners) and a different age group (children from the HCP-development project). We demonstrate that, depending on the task, a training set of approximately 20 participants with 100 fMRI timepoints each yields the largest gain in model performance. Nevertheless, further increasing sample size and number of timepoints results in significantly improved predictions, until reaching approximately 450-600 training participants and 800-1000 timepoints. Overall, the number of fMRI timepoints influences prediction success more than the sample size. We further show that models trained on adequate amounts of data successfully generalize across sites, vendors and age groups and provide predictions that are both accurate and individual-specific. These findings suggest that large-scale publicly available datasets may be utilized to study brain function in smaller, unique samples.
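The kind of prediction described here, mapping resting-state connectivity features to task activation at each brain location, is often implemented as a regularized linear model. The sketch below is a minimal synthetic illustration, not the authors' pipeline: the array dimensions, the use of Ridge regression, and the per-subject spatial-correlation score are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_subj, n_vertices, n_feats = 60, 500, 20
# Hypothetical rs-fMRI connectivity features for each vertex of each subject
X = rng.normal(size=(n_subj, n_vertices, n_feats))
w = rng.normal(size=n_feats)
# Synthetic task-activation maps: a linear function of the rs features plus noise
Y = X @ w + rng.normal(scale=0.5, size=(n_subj, n_vertices))

train, test = train_test_split(np.arange(n_subj), random_state=0)
model = Ridge(alpha=1.0).fit(X[train].reshape(-1, n_feats), Y[train].ravel())
pred = model.predict(X[test].reshape(-1, n_feats)).reshape(len(test), n_vertices)
# Evaluate by the spatial correlation between predicted and actual maps per subject
r = np.mean([np.corrcoef(pred[i], Y[test][i])[0, 1] for i in range(len(test))])
print(round(r, 2))
```

The same train/apply split is what makes cross-site generalization testable: the fitted model is simply applied to data the training never saw.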


Subjects
Connectome , Nervous System Physiological Phenomena , Child , Humans , Brain/diagnostic imaging , Brain/physiology , Connectome/methods , Magnetic Resonance Imaging/methods , Sample Size
19.
Eur J Neurosci ; 57(3): 490-510, 2023 02.
Article in English | MEDLINE | ID: mdl-36512321

ABSTRACT

Cognitive reserve supports cognitive function in the presence of pathology or atrophy. Functional neuroimaging may enable direct and accurate measurement of cognitive reserve, which could have considerable clinical potential. The present study aimed to develop and validate a measure of cognitive reserve using task-based fMRI data that could then be applied to independent resting-state data. Connectome-based predictive modelling with leave-one-out cross-validation was applied to predict a residual measure of cognitive reserve using task-based functional connectivity from the Cognitive Reserve/Reference Ability Neural Network studies (n = 220, mean age = 51.91 years, SD = 17.04 years). This model generated summary measures of connectivity strength that accurately predicted a residual measure of cognitive reserve in unseen participants. The theoretical validity of these measures was established via a positive correlation with a socio-behavioural proxy of cognitive reserve (verbal intelligence) and a positive correlation with global cognition, independent of brain structure. The fitted model was then applied to external test data: resting-state functional connectivity data from The Irish Longitudinal Study on Ageing (TILDA, n = 294, mean age = 68.3 years, SD = 7.18 years). The network-strength predicted measures were not positively associated with a residual measure of cognitive reserve, nor with measures of verbal intelligence and global cognition. The present study demonstrated that task-based functional connectivity data can be used to generate theoretically valid measures of cognitive reserve. Further work is needed to establish if, and how, measures of cognitive reserve derived from task-based functional connectivity can be applied to independent resting-state data.
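Connectome-based predictive modelling with leave-one-out cross-validation follows a standard recipe: within each training fold, select the connectivity edges correlated with the behavioural measure, collapse them into a summary network-strength score, fit a linear model on that score, and predict the held-out participant. A minimal synthetic sketch, with invented dimensions and an illustrative selection threshold (not the study's exact parameters):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subj, n_edges = 100, 300
fc = rng.normal(size=(n_subj, n_edges))  # functional-connectivity edges per subject
# Synthetic behavioural target (stand-in for a residual cognitive-reserve measure)
behav = fc[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_subj)

preds = np.empty(n_subj)
for i in range(n_subj):  # leave-one-out cross-validation
    tr = np.delete(np.arange(n_subj), i)
    # Edge selection: keep edges whose training-fold correlation with behaviour is strong
    r = np.array([pearsonr(fc[tr, e], behav[tr])[0] for e in range(n_edges)])
    mask = np.abs(r) > 0.25
    # Summary network-strength score: sign-weighted sum over the selected edges
    strength = fc[:, mask] @ np.sign(r[mask])
    b1, b0 = np.polyfit(strength[tr], behav[tr], 1)
    preds[i] = b1 * strength[i] + b0

print(round(pearsonr(preds, behav)[0], 2))  # cross-validated prediction accuracy
```

Because selection and fitting happen inside each fold, the held-out prediction is unbiased by the edge-selection step, which is the point of the leave-one-out design.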


Subjects
Cognitive Reserve , Connectome , Humans , Middle Aged , Aged , Connectome/methods , Longitudinal Studies , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Nerve Net/diagnostic imaging
20.
J Viral Hepat ; 30(9): 746-755, 2023 09.
Article in English | MEDLINE | ID: mdl-37415492

ABSTRACT

Chronic hepatitis C virus (HCV) infection is a primary cause of hepatocellular carcinoma (HCC). Although antiviral treatment reduces the risk of HCC, few studies quantify the impact of treatment on long-term risk in the era of direct-acting antivirals (DAA). Using data from the Chronic Hepatitis Cohort Study, we evaluated the impact of treatment type (DAA, interferon-based [IFN], or none) and outcome (sustained virological response [SVR] or treatment failure [TF]) on risk of HCC. We then developed and validated a predictive risk model. A total of 17,186 HCV patients were followed until HCC, death or last follow-up. We used extended landmark modelling, with time-varying covariates and propensity score justification, and generalized estimating equations with a link function for discrete time-to-event data. Death was considered a competing risk. We observed 586 HCC cases across 104,000 interval-years of follow-up. SVR from DAA or IFN-based treatment reduced the risk of HCC (aHR 0.13, 95% CI 0.08-0.20; and aHR 0.45, 95% CI 0.31-0.65, respectively); DAA SVR reduced risk more than IFN SVR (aHR 0.29, 95% CI 0.17-0.48). Independent of treatment, cirrhosis was the strongest risk factor for HCC (aHR 3.94, 95% CI 3.17-4.89 vs. no cirrhosis). Other risk factors included male sex, White race and genotype 3. Our six-variable predictive model had 'excellent' accuracy (AUROC 0.94) in independent validation. Our novel landmark interval-based model identified HCC risk factors across antiviral treatment status and their interactions with cirrhosis. The model demonstrated excellent predictive accuracy in a large, racially diverse cohort of patients and could be adapted for 'real world' HCC monitoring.


Subjects
Carcinoma, Hepatocellular , Hepatitis C, Chronic , Hepatitis C , Liver Neoplasms , Humans , Male , Carcinoma, Hepatocellular/epidemiology , Carcinoma, Hepatocellular/etiology , Carcinoma, Hepatocellular/prevention & control , Antiviral Agents/therapeutic use , Hepatitis C, Chronic/complications , Hepatitis C, Chronic/drug therapy , Liver Neoplasms/etiology , Liver Neoplasms/complications , Cohort Studies , Risk Assessment , Sustained Virologic Response , Liver Cirrhosis/complications , Hepatitis C/drug therapy