Results 1 - 20 of 1,573
1.
Med Decis Making ; : 272989X241263368, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092556

ABSTRACT

BACKGROUND: Noninvasive prenatal testing (NIPT) was developed to improve the accuracy of prenatal screening to detect chromosomal abnormalities. Published economic analyses have yielded different incremental cost-effectiveness ratios (ICERs), leading to conclusions of NIPT being dominant, cost-effective, and cost-ineffective. These analyses have used different model structures, and the extent to which these structural variations have contributed to differences in ICERs is unclear. AIM: To assess the impact of different model structures on the cost-effectiveness of NIPT for the detection of trisomy 21 (T21; Down syndrome). METHODS: A systematic review identified economic models comparing NIPT to conventional screening. The key variations in identified model structures were the number of health states and the modeling approach. New models with different structures were developed in TreeAge and populated with consistent parameters to enable a comparison of the impact of selected structural variations on results. RESULTS: The review identified 34 economic models. Based on these findings, demonstration models were developed: 1) a decision tree with 3 health states, 2) a decision tree with 5 health states, 3) a microsimulation with 3 health states, and 4) a microsimulation with 5 health states. The base-case ICER from each model was 1) USD$34,474 (2023)/quality-adjusted life-year (QALY), 2) USD$14,990 (2023)/QALY, 3) USD$54,983 (2023)/QALY, and 4) NIPT was dominated. CONCLUSION: Model-structuring choices can have a large impact on the ICER and conclusions regarding cost-effectiveness, which may inadvertently affect policy decisions to support or not support funding for NIPT. The use of reference models could improve international consistency in health policy decision making for prenatal screening. HIGHLIGHTS: NIPT is a clinical area in which a variety of modeling approaches have been published, with wide variation in reported cost-effectiveness. This study shows that when broader contextual factors are held constant, varying the model structure yields results that range from NIPT being less effective and more expensive than conventional screening (i.e., NIPT was dominated) through to NIPT being more effective and more expensive than conventional screening with an ICER of USD$54,983 (2023)/QALY. Model-structuring choices may inadvertently affect policy decisions to support or not support funding of NIPT. Reference models could improve international consistency in health policy decision making for prenatal screening.
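A minimal Python sketch (not from the study) of how an ICER is computed and how dominance is identified when comparing a new strategy such as NIPT with conventional screening; the cost and QALY figures are hypothetical placeholders.

    # Hedged illustration of ICER calculation and dominance checking.
    # All numbers below are placeholders, not values from the study.

    def icer(cost_new, qaly_new, cost_old, qaly_old):
        """Return the incremental cost-effectiveness ratio, or a dominance label."""
        d_cost = cost_new - cost_old
        d_qaly = qaly_new - qaly_old
        if d_qaly <= 0 and d_cost >= 0:
            return "dominated"   # less effective and more costly
        if d_qaly >= 0 and d_cost <= 0:
            return "dominant"    # more effective and cheaper
        return d_cost / d_qaly   # cost per QALY gained

    # Hypothetical per-cohort results from two different model structures
    print(icer(cost_new=1_250_000, qaly_new=30.00, cost_old=900_000, qaly_old=20.00))
    print(icer(cost_new=1_250_000, qaly_new=19.00, cost_old=900_000, qaly_old=20.00))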

2.
Trop Med Int Health ; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095942

ABSTRACT

Female genital schistosomiasis is a chronic gynaecological disease caused by the waterborne parasite Schistosoma (S.) haematobium. It affects an estimated 30-56 million girls and women globally, mostly in sub-Saharan Africa, where it is endemic, and negatively impacts their sexual and reproductive life. Recent studies found evidence of an association between female genital schistosomiasis and increased prevalence of HIV and cervical precancer lesions. Despite the large population at risk, the burden and impact of female genital schistosomiasis are scarcely documented, resulting in neglect and insufficient resource allocation. There is currently no standardised method for individual or population-based female genital schistosomiasis screening and diagnosis, which hinders accurate assessment of disease burden in endemic countries. To optimise financial allocations for female genital schistosomiasis screening, it is necessary to explore the cost-effectiveness of different strategies by combining cost and impact estimates. Yet, no economic evaluation has explored the value for money of alternative screening methods. This paper describes a novel application of health decision analytical modelling to evaluate the cost-effectiveness of different female genital schistosomiasis screening strategies across endemic settings. The model combines a decision tree for female genital schistosomiasis screening strategies with a Markov model for the natural history of cervical cancer to estimate the cost per disability-adjusted life-year averted for different screening strategies, stratified by HIV status. It is a starting point for discussion and for supporting priority setting in a data-sparse environment.
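For readers unfamiliar with the model structure described above, the following minimal Python sketch runs a cohort Markov model for a hypothetical natural history of cervical disease; the states, transition probabilities, disability weights, and time horizon are invented placeholders, not the paper's calibrated inputs.

    import numpy as np

    states = ["healthy", "precancer", "cancer", "dead"]
    P = np.array([                       # annual transition probabilities (hypothetical)
        [0.97, 0.02, 0.00, 0.01],
        [0.10, 0.85, 0.04, 0.01],
        [0.00, 0.00, 0.90, 0.10],
        [0.00, 0.00, 0.00, 1.00],
    ])
    disability_weight = np.array([0.00, 0.05, 0.45, 0.00])   # placeholder weights per state

    cohort = np.array([1.0, 0.0, 0.0, 0.0])                  # everyone starts healthy
    ylds = 0.0                                               # years lived with disability
    for year in range(30):                                   # 30 annual cycles
        cohort = cohort @ P
        ylds += float(cohort @ disability_weight)
    print(f"YLDs per person over 30 years (toy inputs): {ylds:.2f}")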

3.
Health Sci Rep ; 7(7): e2266, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39055612

ABSTRACT

Introduction: Death due to COVID-19 is one of the biggest health challenges in the world. Many models can predict death due to COVID-19. This study aimed to fit and compare Decision Tree (DT), Support Vector Machine (SVM), and AdaBoost models for predicting death due to COVID-19. Methods: To describe the variables, means (SD) and frequencies (%) were reported. To determine the relationship between the variables and death caused by COVID-19, the chi-square test was performed with a significance level of 0.05. The DT, SVM, and AdaBoost models were compared for predicting death due to COVID-19 in terms of sensitivity, specificity, accuracy, and the area under the receiver operating characteristic (ROC) curve, using R software with the psych, caTools, ROSE (Random Over-Sampling Examples), rpart, and rpart.plot packages. Results: Out of the total of 23,054 patients studied, 10,935 (46.5%) were women and 12,569 (53.5%) were men. The mean age of the patients was 54.9 ± 21.0 years. Gender, fever, cough, muscle pain, smell and taste disturbances, abdominal pain, nausea and vomiting, diarrhea, anorexia, dizziness, chest pain, intubation, cancer, diabetes, chronic blood disease, immunodeficiency, pregnancy, dialysis, and chronic lung disease showed statistically significant relationships with death in COVID-19 patients (p < 0.05). The sensitivity, specificity, accuracy, and area under the ROC curve were 0.60, 0.68, 0.71, and 0.75 in the DT model; 0.54, 0.62, 0.63, and 0.71 in the SVM model; and 0.59, 0.65, 0.69, and 0.74 in the AdaBoost model, respectively. Conclusion: The results showed that the DT had higher predictive power than the other data mining models. Therefore, researchers in different fields are encouraged to use DT to predict the studied variables. Other approaches, such as random forest or XGBoost, are also suggested to improve accuracy in future studies.
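The model comparison described above can be outlined as follows. This hedged scikit-learn sketch uses synthetic data in place of the COVID-19 registry (which is not available here) and reports the same metrics: sensitivity, specificity, accuracy, and AUC.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic, imbalanced stand-in for the patient registry
    X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

    models = {
        "DT": DecisionTreeClassifier(max_depth=5, random_state=0),
        "SVM": SVC(probability=True, random_state=0),
        "AdaBoost": AdaBoostClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
        sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} "
              f"acc={accuracy_score(y_te, pred):.2f} AUC={auc:.2f}")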

4.
J Food Sci ; 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042466

ABSTRACT

Salt intake reduction is a global concern. In particular, Japanese people consume more salt than people of other ethnicities. The sodium content is stated on the labels of industrially prepared dishes with the intention of reducing salt intake. This study aimed to evaluate the difference between the actual sodium content and the labeled salt value of industrially prepared Japanese single dishes. Samples labeled "estimated" were collected and classified as Japanese, Western, and Chinese cuisines. The sodium content ranged from 180 to 1011 mg/100 g and was higher than values reported in other countries. Specifically, Chinese dishes contained high amounts of sodium, although the chloride content was similar across cuisine styles. Further, the molar ratio (i.e., sodium/chloride) had no significant effect on the difference between the actual content and the labeled value. The measured salt contents were 20% higher than the labeled values. The results of decision tree analysis indicated that if the labeled salt value of stir-fried foods is determined by calculation, the actual sodium content is much higher than the labeled salt value. These findings are important for consumers, dietitians, and researchers who rely on the labeled salt value to determine the sodium content of industrially prepared foods.

5.
Sci Rep ; 14(1): 17028, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39043798

ABSTRACT

Parkinson's disease (PD) and inflammatory bowel disease (IBD) are chronic diseases affecting the central nervous system and gastrointestinal tract, respectively. Recent research suggests a bidirectional relationship between neurodegeneration in PD and intestinal inflammation in IBD. PD patients may experience gastrointestinal dysfunction over a decade before motor symptom onset, and IBD may increase the risk of developing PD. Despite the "gut-brain axis" concept, the underlying pathophysiological mechanisms of this potential association remain unclear. This study aimed to investigate the biological mechanisms of differentially expressed genes in PD and IBD using bioinformatics tools, providing novel insights into the co-diagnosis and treatment of these diseases. We constructed a gene marker for disease diagnosis and identified five important genes (BTK, NCF2, CRH, FCGR3A, and SERPINA3). Through nomogram and decision tree analyses, we found that both IBD and PD required only the expression levels of BTK and NCF2 for accurate discrimination. Additionally, the small-molecule drugs RO-90-7501 and MST-312 may be useful for the treatment of both IBD and PD. These findings offer new perspectives on the co-diagnosis and treatment of PD and IBD, and suggest that targeting BTK may be a promising therapeutic strategy for both diseases.


Subjects
Inflammatory Bowel Diseases; Parkinson Disease; Parkinson Disease/genetics; Parkinson Disease/metabolism; Humans; Inflammatory Bowel Diseases/genetics; Inflammatory Bowel Diseases/metabolism; Inflammatory Bowel Diseases/complications; Computational Biology/methods; Male; Agammaglobulinaemia Tyrosine Kinase/genetics; Agammaglobulinaemia Tyrosine Kinase/metabolism; Female; Gene Expression Profiling; Biomarkers; Receptors, IgG/genetics; Receptors, IgG/metabolism
6.
Comput Biol Med ; 179: 108919, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39047502

ABSTRACT

Research on disease detection using machine learning techniques has received significant attention. The use of machine learning techniques is important to detect critical diseases promptly and provide appropriate treatment. Disease detection is a vital and sensitive task, and while machine learning models may provide a robust solution, they can come across as complex and unintuitive. Therefore, it is important to gain a better understanding of the predictions and trust the results. This paper takes up the crucial task of skin disease detection and introduces a hybrid machine learning model combining SVM and XGBoost for the detection task. The proposed model outperformed the existing machine learning models - Support Vector Machine (SVM), decision tree, and XGBoost - with an accuracy of 99.26%. The increased accuracy is essential for detecting skin disease because of the similarity in symptoms, which makes it challenging to differentiate between the different conditions. In order to foster trust and gain insights into the results, we turn to the promising field of Explainable Artificial Intelligence (XAI). We explore two such frameworks for local as well as global explanations of these machine learning models, namely SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME).
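A hedged sketch of attaching SHAP (global) and LIME (local) explanations to a boosted-tree classifier; synthetic data stands in for the skin-disease dataset, and the hybrid SVM+XGBoost model is simplified here to a plain XGBoost classifier.

    import numpy as np
    import shap
    import xgboost as xgb
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    feature_names = [f"f{i}" for i in range(X.shape[1])]

    model = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X, y)

    # Global view: mean |SHAP value| per feature across the whole dataset
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    global_importance = np.abs(shap_values).mean(axis=0)
    print(dict(zip(feature_names, global_importance.round(3))))

    # Local view: LIME explanation for a single prediction
    lime_explainer = LimeTabularExplainer(
        X, feature_names=feature_names, class_names=["benign", "disease"], mode="classification")
    explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(explanation.as_list())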

7.
Fish Shellfish Immunol ; 152: 109788, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39053586

ABSTRACT

In the process of screening for probiotic strains, there are no clearly established bacterial phenotypic markers that could be used for the prediction of their in vivo mechanism of action. In this work, we demonstrate for the first time that Machine Learning (ML) methods can be used for accurately predicting the in vivo immunomodulatory activity of probiotic strains based on their cell surface phenotypic features using a snail host-microbe interaction model. A broad range of snail gut presumptive probiotics, including 240 new lactic acid bacterial strains (Lactobacillus, Leuconostoc, Lactococcus, and Enterococcus), were isolated and characterized based on their capacity to withstand snails' gastrointestinal defense barriers, such as the pedal mucus, gastric mucus, gastric juices, and acidic pH, in association with their cell surface hydrophobicity, autoaggregation, and biofilm formation ability. The implemented ML pipeline predicted with high accuracy (88%) strains with a strong capacity to enhance chemotaxis and phagocytic activity of snails' hemolymph cells, while also revealing bacterial autoaggregation and cell surface hydrophobicity as the most important parameters that significantly affect host immune responses. The results show that ML approaches may be useful to derive a predictive understanding of host-probiotic interactions, while also highlighting the use of snails as an efficient animal model for screening presumptive probiotic strains in the light of their interaction with cellular innate immune responses.

8.
Entropy (Basel) ; 26(7), 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39056955

ABSTRACT

We introduce NodeFlow, a flexible framework for probabilistic regression on tabular data that combines Neural Oblivious Decision Ensembles (NODEs) and Conditional Continuous Normalizing Flows (CNFs). It offers improved modeling capabilities for arbitrary probabilistic distributions, addressing the limitations of traditional parametric approaches. In NodeFlow, the NODE captures complex relationships in tabular data through a tree-like structure, while the conditional CNF utilizes the NODE's output space as a conditioning factor. The training process of NodeFlow employs standard gradient-based learning, facilitating the end-to-end optimization of the NODEs and CNF-based density estimation. This approach ensures outstanding performance, ease of implementation, and scalability, making NodeFlow an appealing choice for practitioners and researchers. Comprehensive assessments on benchmark datasets underscore NodeFlow's efficacy, revealing its achievement of state-of-the-art outcomes in multivariate probabilistic regression setup and its strong performance in univariate regression tasks. Furthermore, ablation studies are conducted to justify the design choices of NodeFlow. In conclusion, NodeFlow's end-to-end training process and strong performance make it a compelling solution for practitioners and researchers. Additionally, it opens new avenues for research and application in the field of probabilistic regression on tabular data.

9.
BMC Med Res Methodol ; 24(1): 158, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39044195

ABSTRACT

BACKGROUND: In randomized clinical trials, treatment effects may vary, and this possibility is referred to as heterogeneity of treatment effect (HTE). One way to quantify HTE is to partition participants into subgroups based on individuals' risk of experiencing an outcome and then measure the treatment effect by subgroup. Given the limited availability of externally validated outcome risk prediction models, internal models (created using the same dataset in which heterogeneity of treatment effect analyses will also be performed) are commonly developed for subgroup identification. We aim to compare different methods for generating internally developed outcome risk prediction models for subject partitioning in HTE analysis. METHODS: Three approaches were selected for generating subgroups for the 2,441 participants from the United States enrolled in the ASPirin in Reducing Events in the Elderly (ASPREE) randomized controlled trial. An extant proportional hazards-based outcome risk prediction model developed on the overall ASPREE cohort of 19,114 participants was identified and was used to partition the United States participants by risk of experiencing a composite outcome of death, dementia, or persistent physical disability. Next, two supervised non-parametric machine learning outcome classifiers, decision trees and random forests, were used to develop multivariable risk prediction models and partition participants into subgroups with varied risks of experiencing the composite outcome. Then, we assessed how the partitioning from the proportional hazards model compared with those generated by the machine learning models in an HTE analysis of the 5-year absolute risk reduction (ARR) and hazard ratio for aspirin vs. placebo in each subgroup. Cochran's Q test was used to detect whether ARR varied significantly by subgroup. RESULTS: The proportional hazards model was used to generate 5 subgroups using the quintiles of the estimated risk scores; the decision tree model was used to generate 6 subgroups (6 automatically determined tree leaves); and the random forest model was used to generate 5 subgroups using the quintiles of the prediction probability as risk scores. Using the semi-parametric proportional hazards model, the ARR at 5 years was 15.1% (95% CI 4.0% to 26.3%) for participants with the highest 20% of predicted risk. Using the random forest model, the ARR at 5 years was 13.7% (95% CI 3.1% to 24.4%) for participants with the highest 20% of predicted risk. The highest outcome risk group in the decision tree model also exhibited a risk reduction, but the confidence interval was wider (5-year ARR = 17.0%, 95% CI -5.4% to 39.4%). Cochran's Q test indicated that ARR varied significantly only by subgroups created using the proportional hazards model. The hazard ratio for aspirin vs. placebo therapy did not significantly vary by subgroup in any of the models. The highest risk groups for the proportional hazards model and random forest model contained 230 participants each, while the highest risk group in the decision tree model contained 41 participants. CONCLUSIONS: The choice of technique for internally developed models for outcome risk subgroups influences HTE analyses. The rationale for the use of a particular subgroup determination model in HTE analyses needs to be explicitly defined based on desired levels of explainability (with feature importance), uncertainty of prediction, chances of overfitting, and assumptions regarding the underlying data structure.
Replication of these analyses using data from other mid-size clinical trials may help to establish guidance for selecting an outcomes risk prediction modelling technique for HTE analyses.
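The risk-based subgrouping step described above can be sketched as follows: an internal risk model is fit on simulated trial data, participants are split into quintiles of predicted risk, and the absolute risk reduction (ARR) is computed within each quintile. None of the data or effect sizes correspond to ASPREE.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 5))                              # baseline covariates (synthetic)
    treat = rng.integers(0, 2, n)                            # 1 = treatment, 0 = placebo
    base_risk = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 1.0)))
    outcome = rng.binomial(1, base_risk * (1.0 - 0.15 * treat))   # modest relative risk reduction

    # Internally developed risk model (the paper also compares proportional hazards and CART)
    risk_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, outcome)
    risk_score = risk_model.predict_proba(X)[:, 1]

    df = pd.DataFrame({"treat": treat, "outcome": outcome, "risk": risk_score})
    # rank() guarantees unique bin edges even when predicted risks tie
    df["quintile"] = pd.qcut(df["risk"].rank(method="first"), 5, labels=False)

    for q, grp in df.groupby("quintile"):
        p_ctrl = grp.loc[grp.treat == 0, "outcome"].mean()
        p_trt = grp.loc[grp.treat == 1, "outcome"].mean()
        print(f"risk quintile {q}: ARR = {p_ctrl - p_trt:+.3f}")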


Subjects
Aspirin; Machine Learning; Proportional Hazards Models; Humans; Aspirin/therapeutic use; Aged; Female; Male; Treatment Outcome; United States; Risk Assessment/methods; Risk Assessment/statistics & numerical data; Models, Statistical; Randomized Controlled Trials as Topic/methods; Randomized Controlled Trials as Topic/statistics & numerical data; Decision Trees; Outcome Assessment, Health Care/methods; Outcome Assessment, Health Care/statistics & numerical data
10.
Sensors (Basel) ; 24(13), 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39001095

ABSTRACT

Traffic accidents due to fatigue account for a large proportion of road fatalities. Based on simulated driving experiments with drivers recruited from college students, this paper investigates the use of heart rate variability (HRV) features to detect driver fatigue while considering sex differences. Sex-independent and sex-specific differences in HRV features between alert and fatigued states derived from 2 min electrocardiogram (ECG) signals were determined. Then, decision trees were used for driver fatigue detection using the HRV features of either all subjects or those of only males or females. Nineteen, eighteen, and thirteen HRV features were significantly different (Mann-Whitney U test, p < 0.01) between the two mental states for all subjects, males, and females, respectively. The fatigue detection models for all subjects, males, and females achieved classification accuracies of 86.3%, 94.8%, and 92.0%, respectively. In conclusion, sex differences in HRV features between drivers' mental states were found according to both the statistical analysis and classification results. By considering sex differences, precise HRV feature-based driver fatigue detection systems can be developed. Moreover, in contrast to conventional methods using HRV features from 5 min ECG signals, our method uses HRV features from 2 min ECG signals, thus enabling more rapid driver fatigue detection.
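A minimal sketch of the two-step approach described above: HRV features are screened with a Mann-Whitney U test at p < 0.01, and a decision tree is then trained on the significant ones. Random numbers stand in for the ECG-derived HRV features.

    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    n, n_features = 400, 25
    X = rng.normal(size=(n, n_features))          # synthetic HRV features from 2 min segments
    y = rng.integers(0, 2, n)                     # 0 = alert, 1 = fatigued (synthetic labels)
    X[y == 1, :5] += 0.8                          # make a few features informative

    # Keep features whose distributions differ between states at p < 0.01
    keep = [j for j in range(n_features)
            if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < 0.01]

    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    acc = cross_val_score(clf, X[:, keep], y, cv=5, scoring="accuracy")
    print(f"{len(keep)} features kept; CV accuracy = {acc.mean():.2f}")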


Subjects
Automobile Driving; Electrocardiography; Fatigue; Heart Rate; Humans; Male; Heart Rate/physiology; Electrocardiography/methods; Female; Fatigue/physiopathology; Fatigue/diagnosis; Young Adult; Adult; Accidents, Traffic; Sex Factors; Signal Processing, Computer-Assisted; Sex Characteristics
11.
J Arthroplasty ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39004384

ABSTRACT

BACKGROUND: In total joint arthroplasty patients, intraoperative hypothermia (IOH) is associated with perioperative complications and an increased economic burden. Previous models have some limitations and mainly focus on regression modeling. Random forest (RF) algorithms and decision tree modeling are effective for eliminating irrelevant features and making predictions that aid in accelerating modeling and reducing application difficulty. METHODS: We conducted this prospective observational study using convenience sampling and collected data from 327 total joint arthroplasty patients in a tertiary hospital from March 4, 2023 to September 11, 2023. Of those, 229 patients were assigned to the training set and 98 to the testing set. The Chi-square, Mann-Whitney U, and t-tests were used for baseline analyses. Feature variable selection used the RF algorithm, and the decision tree model was trained on 229 examples and validated on 98. The sensitivity, specificity, recall, F1 score, and area under the curve (AUC) were used to test the model's performance. RESULTS: The RF algorithm identified the preheating time, the volume of flushing fluids, the intraoperative infusion volume, the anesthesia time, the surgical time, and the core temperature after intubation as risk factors for IOH. The decision tree was grown to five levels with nine terminal nodes. The overall incidence of IOH was 42.13%. The sensitivity, specificity, recall, F1 score, and AUC were 0.651, 0.907, 0.916, 0.761, and 0.810, respectively. The model indicated strong internal consistency and predictive ability. CONCLUSIONS: The preheating time, the volume of flushing fluids, the intraoperative infusion volume, the anesthesia time, the surgical time, and the core temperature after intubation could accurately predict IOH in total joint arthroplasty patients. By monitoring these factors, clinical staff could achieve early detection of and intervention for IOH in total joint arthroplasty patients.
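A hedged sketch of this two-stage workflow: a random forest ranks candidate predictors, and a shallow decision tree is then trained on the top-ranked ones and evaluated on a held-out split. The feature names echo the reported risk factors, but all values are simulated.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score, recall_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    names = ["preheat_min", "flush_ml", "infusion_ml", "anesth_min", "surg_min",
             "core_temp_postintub", "age", "bmi"]            # hypothetical predictor names
    X = rng.normal(size=(327, len(names)))
    y = (X[:, 5] + 0.5 * X[:, 3] + rng.normal(size=327) < -0.3).astype(int)  # synthetic IOH label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Stage 1: random forest ranks candidate predictors
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    top = np.argsort(rf.feature_importances_)[::-1][:6]
    print("selected features:", [names[i] for i in top])

    # Stage 2: shallow decision tree trained only on the top-ranked predictors
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr[:, top], y_tr)
    pred = tree.predict(X_te[:, top])
    proba = tree.predict_proba(X_te[:, top])[:, 1]
    print(f"recall (sensitivity) = {recall_score(y_te, pred):.2f}, "
          f"F1 = {f1_score(y_te, pred):.2f}, AUC = {roc_auc_score(y_te, proba):.2f}")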

12.
World J Oncol ; 15(4): 550-561, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38993243

ABSTRACT

Background: Domestic and foreign studies on lung cancer have focused on the medical efficacy of low-dose computed tomography (LDCT), but there is a lack of studies on the costs, value, and cost-effectiveness of the intervention. There is a scarcity of conclusive evidence regarding the cost-effectiveness of LDCT within the specific context of Taiwan. This study is designed to address this gap by conducting a comprehensive analysis of the cost-effectiveness of LDCT and chest X-ray (CXR) as screening methods for lung cancer. Methods: Markov decision model simulation was used to estimate the cost-effectiveness of biennial screening with LDCT and CXR from a health provider perspective. Inputs are based on probabilities, health status utilities (quality-adjusted life years, QALYs), and costs of lung cancer screening, diagnosis, and treatment drawn from the literature and expert opinion. A total of 1,000 simulations and five cycles of Markov bootstrapping simulations were performed to compare the incremental cost-utility ratio (ICUR) of these two screening strategies. Probabilistic and one-way sensitivity analyses were also performed. Results: The ICUR of early lung cancer screening comparing LDCT with CXR is -$24,757.65/QALY, and the probability of LDCT being cost-effective is 100% under a willingness-to-pay (WTP) threshold of the Taiwan gross domestic product (GDP) per capita ($35,513). The one-way sensitivity analysis also showed that the ICUR depends heavily on the recall rate. Based on the prevalence rate of 39.7 lung cancer cases per 100,000 people in 2020, it could be estimated that LDCT screening for high-risk populations could save $17,154,115. Conclusion: LDCT detects more early lung cancers, reduces mortality, and is cost-saving compared with CXR in a long-term simulation of Taiwan's healthcare system. This study provides valuable insights for healthcare decision-makers and suggests analyzing cost-effectiveness for additional variables in future research.

13.
Environ Sci Pollut Res Int ; 31(32): 45074-45104, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38958857

ABSTRACT

Water plays a pivotal role in socio-economic development in Algeria. However, the overexploitation of groundwater resources, water scarcity, and the proliferation of pollution sources (including industrial and urban effluents, untreated landfills, and chemical fertilizers) have resulted in substantial groundwater contamination. Preserving irrigation water quality has thus become a primary priority, capturing the attention of both scientists and local authorities. The current study introduces an innovative method for mapping contamination risks, integrating vulnerability assessments, land use patterns (as sources of pollution), and groundwater overexploitation (represented by waterhole density) through the implementation of a decision tree model. The resulting risk map illustrates the probability of contamination occurrence in the substantial aquifer on the plateau of Mostaganem, an agricultural region characterized by intensive nutrient and pesticide use, a significant presence of septic tanks, widespread illegal dumping, and a technical landfill that does not comply with environmental standards. The critical situation in the region is exacerbated by excessive groundwater pumping surpassing the aquifer's natural replenishment capacity (with 115 boreholes and 6,345 operational wells), especially in a semi-arid climate featuring limited water resources and frequent drought. Vulnerability was evaluated using the DRFTID method, a derivative of the DRASTIC model, considering parameters such as depth to groundwater, recharge, fracture density, slope, nature of the unsaturated zone, and drainage density. All these parameters are combined with analyses of inter-parameter relationship effects. The results show a spatial distribution into three risk levels (low, medium, and high), with 31.5% designated as high risk and 56% as medium risk. The validation of this mapping relies on the assessment of physicochemical analyses of samples collected between 2010 and 2020. The results indicate elevated groundwater contamination levels in the samples: chloride exceeded acceptable levels by 100%, nitrate by 71%, calcium by 50%, and sodium by 42%. These elevated concentrations affect electrical conductivity, resulting in highly mineralized water attributed to anthropogenic agricultural pollution and septic tank discharges. High-risk zones align with areas exhibiting elevated nitrate and chloride concentrations. This model, deemed satisfactory, significantly enhances the sustainable management of water resources and irrigated land across various areas. In the long term, it would be beneficial to refine "vulnerability and risk" models by integrating detailed data on land use, groundwater exploitation, and hydrogeological and hydrochemical characteristics. This approach could improve the accuracy of vulnerability and pollution risk maps, particularly where detailed local data are available. It is also crucial that public authorities support these initiatives by adapting them to local geographical and climatic specificities at regional and national scales. Finally, these studies have the potential to foster sustainable development at different geographical levels.


Subjects
Decision Trees; Environmental Monitoring; Groundwater; Groundwater/chemistry; Algeria; Water Pollution/analysis; Water Pollutants, Chemical/analysis; Risk Assessment
14.
Article in English | MEDLINE | ID: mdl-39037154

ABSTRACT

Few studies have included objective blood pressure (BP) in constructing predictive models of severe obstructive sleep apnea (OSA). This study used a binary logistic regression model (BLRM) and the decision tree method (DTM) to construct predictive models for identifying severe OSA and to compare the prediction capability of the two methods. A total of 499 adult patients with severe OSA and 1,421 non-severe OSA controls examined at the Sleep Medicine Center of a tertiary hospital in southern Taiwan between October 2016 and April 2019 were enrolled. OSA was diagnosed through polysomnography. Data on BP, demographic characteristics, anthropometric measurements, comorbidity histories, and sleep questionnaires were collected. BLRM and DTM were separately applied to identify predictors of severe OSA. The performance of risk scores was assessed by the area under the receiver operating characteristic curve (AUC). In the BLRM, body mass index (BMI) ≥27 kg/m2 and Snore Outcomes Survey score ≤55 were significant predictors of severe OSA (AUC 0.623). In the DTM, mean SpO2 <96%, average systolic BP ≥135 mmHg, and BMI ≥39 kg/m2 were observed to effectively differentiate cases of severe OSA (AUC 0.718). The AUC for the predictive models produced by the DTM was higher in older adults than in younger adults (0.807 vs. 0.723), mainly due to differences in clinical predictive features. In conclusion, the DTM, using a different set of predictors, seems more effective in identifying severe OSA than the BLRM. Differences in the predictors ascertained demonstrate the necessity of separately constructing predictive models for younger and older adults.

15.
Transl Stroke Res ; 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037513

ABSTRACT

The Chinese population has a high prevalence of unruptured intracranial aneurysm (UIA). Clinical and imaging risk factors predicting UIA growth or rupture are poorly understood in the Chinese population due to the lack of large-scale longitudinal studies, and the treatment decision for UIA patients is challenging. We aimed to develop a decision tree (DT) model for UIA instability and validate its performance in multicenter studies. Single-UIA patients from two prospective, longitudinal multicenter cohort studies were analyzed and set as the development cohort and validation cohort. The primary endpoint was UIA instability (rupture, growth, or morphological change). A DT was established within the development cohort and validated within the validation cohort. The performance of clinicians in identifying unstable UIAs before and after the help of the DT was compared using the area under the curve (AUC). The development cohort included 1270 patients with 1270 UIAs and a follow-up duration of 47.2 ± 15.5 months. Aneurysm instability occurred in 187 (14.7%) patients. Multivariate Cox analysis revealed hypertension (hazard ratio [HR], 1.54; 95% CI, 1.14-2.09), aspect ratio (HR, 1.22; 95% CI, 1.17-1.28), size ratio (HR, 1.31; 95% CI, 1.23-1.41), bifurcation configuration (HR, 2.05; 95% CI, 1.52-2.78), and irregular shape (HR, 4.30; 95% CI, 3.19-5.80) as factors of instability. In the validation cohort (n = 106, of which 12 were unstable), the DT model incorporating these factors was highly predictive of UIA instability (AUC, 0.88 [95% CI, 0.79-0.97]) and superior to existing UIA risk scales such as PHASES and ELAPSS (AUC, 0.77 [95% CI, 0.67-0.86] and 0.76 [95% CI, 0.66-0.86]; P < 0.001). Among all 1376 single-UIA patients, the use of the DT significantly improved the accuracy of junior neurosurgical clinicians in identifying unstable UIAs (AUC from 0.63 to 0.82, P < 0.001). The DT incorporating hypertension, aspect ratio, size ratio, bifurcation configuration, and irregular shape was able to predict UIA instability better than existing clinical scales in Chinese cohorts. CLINICAL TRIAL REGISTRATION: The IARP-CP cohort (unique identifier: ChiCTR1900024547; published July 15, 2019; completed December 30, 2020) together with the 100-Project phase-I cohort (unique identifier: NCT04872842; published May 5, 2021; completed November 8, 2022) served as the development cohort. The 100-Project phase-II cohort (unique identifier: NCT05608122; published November 8, 2022) served as the validation cohort.
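A minimal sketch of the kind of multivariate Cox regression reported above, using the lifelines package on simulated follow-up data; the covariate names mirror the reported risk factors, but every value (and therefore every hazard ratio) is synthetic.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "hypertension": rng.integers(0, 2, n),
        "aspect_ratio": rng.normal(1.2, 0.4, n),
        "size_ratio": rng.normal(2.0, 0.8, n),
        "bifurcation": rng.integers(0, 2, n),
        "irregular_shape": rng.integers(0, 2, n),
    })
    # Simulated time-to-instability, with higher hazard for hypertension and irregular shape
    hazard = 0.02 * np.exp(0.4 * df["hypertension"] + 1.2 * df["irregular_shape"])
    time_to_event = rng.exponential(1.0 / hazard)
    df["months"] = np.minimum(time_to_event, 60.0)        # administrative censoring at 60 months
    df["unstable"] = (time_to_event <= 60.0).astype(int)  # 1 = instability observed

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="unstable")
    print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])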

16.
PeerJ ; 12: e17711, 2024.
Article in English | MEDLINE | ID: mdl-39035151

ABSTRACT

Background and Objectives: Postpartum depression (PPD) is prevalent among women after childbirth, but accessing mental healthcare for PPD is challenging. This study aimed to assess the treatment gap and barriers to mental healthcare access for women with PPD symptoms living in Punjab, Pakistan. Methods: A multicenter cross-sectional study was conducted in five populous cities of Punjab from January to June 2023 by administering a questionnaire to women selected using stratified random sampling. A total of 3,220 women in the first 6 months postpartum were screened using the Edinburgh Postnatal Depression Scale. Of them, 1,503 women scored thirteen or above, indicating potential depressive disorder. Interviews were conducted to explore help-seeking behavior and barriers to accessing mental healthcare. Descriptive statistics along with nonparametric tests (e.g., Kruskal-Wallis, Mann-Whitney U) were used, and group differences were examined. Scatter plot matrices with fitted lines were used to explore associations between variables. Classification and regression tree methods were used to classify the importance and contribution of different variables to the intensity of PPD. Results: Only 2% of women (n = 33) with high PPD symptoms sought mental healthcare, and merely 5% of women (n = 75) had been in contact with a health service since the onset of their symptoms. 92.80% of women with PPD symptoms did not seek any medical attention. The majority of women, 1,215 (81%), perceived the need for mental health treatment; however, 91.23% of them did not seek treatment from healthcare services. Women who recently gave birth to a female child had higher mean depression scores than those who gave birth to a male child. Age, education, and birth location of the newborn were significantly associated (p < 0.005) with mean barrier scores, mean social support scores, mean depression scores, and the treatment gap. The classification and regression decision tree model showed that instrumental barrier scores are the most important predictor of mean PPD scores. Conclusion: Women with PPD symptoms encountered a considerable treatment gap and barriers to accessing mental health care. Integration of mental health services into obstetric care, as well as PPD screening in public and private hospitals of Punjab, Pakistan, is critically needed to overcome the treatment gap and barriers.


Subjects
Depression, Postpartum; Health Services Accessibility; Mental Health Services; Humans; Depression, Postpartum/therapy; Depression, Postpartum/epidemiology; Depression, Postpartum/diagnosis; Female; Pakistan/epidemiology; Adult; Health Services Accessibility/statistics & numerical data; Cross-Sectional Studies; Mental Health Services/statistics & numerical data; Surveys and Questionnaires; Patient Acceptance of Health Care/statistics & numerical data; Patient Acceptance of Health Care/psychology; Young Adult; Help-Seeking Behavior; Psychiatric Status Rating Scales
17.
Biomed Phys Eng Express ; 10(5), 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38955139

ABSTRACT

The prevalence of vision impairment is increasing at an alarming rate. The goal of the study was to create an automated method that uses optical coherence tomography (OCT) to classify retinal disorders into four categories: choroidal neovascularization, diabetic macular edema, drusen, and normal cases. This study proposed a new framework that combines machine learning and deep learning-based techniques. The utilized classifiers were support vector machine (SVM), K-nearest neighbor (K-NN), decision tree (DT), and ensemble model (EM). A feature extractor, the InceptionV3 convolutional neural network, was also employed. The performance of the models was evaluated against nine criteria using a dataset of 18000 OCT images. For the SVM, K-NN, DT, and EM classifiers, the analysis exhibited state-of-the-art performance, with classification accuracies of 99.43%, 99.54%, 97.98%, and 99.31%, respectively. A promising methodology has been introduced for the automatic identification and classification of retinal disorders, leading to reduced human error and saved time.
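The hybrid pipeline described above (a CNN feature extractor feeding classical classifiers) can be sketched as follows; random arrays stand in for the OCT images, the ensemble model is omitted, and the InceptionV3 weights are downloaded by Keras on first use.

    import numpy as np
    import tensorflow as tf
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # InceptionV3 without its classification head acts as a fixed feature extractor
    extractor = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", pooling="avg", input_shape=(299, 299, 3))

    # Placeholder batch: 80 fake "OCT" images across 4 classes (CNV, DME, drusen, normal)
    images = np.random.rand(80, 299, 299, 3).astype("float32") * 255.0
    labels = np.random.randint(0, 4, 80)
    features = extractor.predict(
        tf.keras.applications.inception_v3.preprocess_input(images), verbose=0)

    for name, clf in [("SVM", SVC()), ("K-NN", KNeighborsClassifier(n_neighbors=5)),
                      ("DT", DecisionTreeClassifier(max_depth=8, random_state=0))]:
        scores = cross_val_score(clf, features, labels, cv=3)
        print(f"{name}: mean CV accuracy = {scores.mean():.2f}")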


Subjects
Algorithms; Artificial Intelligence; Neural Networks, Computer; Retinal Diseases; Support Vector Machine; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retinal Diseases/diagnosis; Retinal Diseases/diagnostic imaging; Deep Learning; Retina/diagnostic imaging; Retina/pathology; Decision Trees; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/diagnostic imaging; Machine Learning; Choroidal Neovascularization/diagnostic imaging; Choroidal Neovascularization/diagnosis; Macular Edema/diagnostic imaging; Macular Edema/diagnosis
18.
Sisli Etfal Hastan Tip Bul ; 58(2): 216-225, 2024.
Article in English | MEDLINE | ID: mdl-39021695

ABSTRACT

Objectives: Predictive risk scores have a significant impact on patient selection and on assessing the likelihood of complications following interventions in patients with severe aortic stenosis (AS). This study aims to explore the utility of machine learning (ML) techniques in predicting 30-day major adverse cardiac events (MACE) by analyzing parameters including the Global Registry of Acute Coronary Events (GRACE) score. Methods: This retrospective, multi-center, observational study enrolled 453 consecutive patients diagnosed with severe AS who underwent transcatheter aortic valve implantation (TAVI) from April 2020 to January 2023. The primary outcome was defined as a composite of MACE comprising periprocedural myocardial infarction (MI), cerebrovascular events (CVE), and all-cause mortality during the 1-month follow-up period after the procedure. Conventional binomial logistic regression and ML models were utilized and compared for prediction purposes. Results: The study population had a mean age of 76.1 years, with 40.8% being male. The primary endpoint was observed in 7.5% of cases. Among the individual components of the primary endpoint, the rates of all-cause mortality, MI, and CVE were 4.2%, 2.4%, and 1.9%, respectively. The ML-based Extreme Gradient Boosting (XGBoost) model with the GRACE score demonstrated superior discriminative performance in predicting the primary endpoint compared with both the ML model without the GRACE score and the conventional regression model [Area Under the Curve (AUC) = 0.98 (0.91-0.99), AUC = 0.87 (0.80-0.98), and AUC = 0.84 (0.79-0.96), respectively]. Conclusion: ML techniques hold the potential to enhance outcomes in clinical practice, especially when utilized alongside established clinical tools such as the GRACE score.
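The core comparison above (the same boosted model trained with and without the GRACE score) can be sketched as follows; the data and the GRACE-like column are simulated stand-ins, so the printed AUCs carry no clinical meaning.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    n = 453
    clinical = rng.normal(size=(n, 8))                 # routine clinical features (synthetic)
    grace = rng.normal(size=(n, 1))                    # GRACE-like score (synthetic stand-in)
    logit = 1.5 * grace[:, 0] + clinical[:, 0] - 2.5
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # roughly 10-15% event rate

    for label, X in [("without GRACE", clinical),
                     ("with GRACE", np.hstack([clinical, grace]))]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=0, stratify=y)
        model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{label}: test AUC = {auc:.2f}")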

19.
Support Care Cancer ; 32(7): 483, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958751

ABSTRACT

OBJECTIVES: Post-traumatic growth can improve the quality of life of cancer survivors. The objective of this study was to investigate the heterogeneity of post-traumatic growth trajectories in perioperative gastric cancer survivors and to identify characteristics that predict membership in each trajectory. METHODS: Gastric cancer survivors (n = 403) were recruited before surgery, their baseline assessment (including post-traumatic growth and related characteristics) was completed, and post-traumatic growth levels were followed up on the day they left the intensive care unit, at discharge, and 1 month after discharge. A latent growth mixture model was used to identify heterogeneous trajectories of post-traumatic growth, and the core predictors of trajectory subtypes were explored using a decision tree model. RESULTS: Three post-traumatic growth trajectories were identified among gastric cancer survivors: a stable-high PTG group (20.6%), a fluctuating PTG group (44.4%), and a persistently low PTG group (35.0%). The decision tree model showed that anxiety, coping style, and psychological resilience - the last being the primary predictor - might be used to predict the PTG trajectory subtypes of gastric cancer survivors. CONCLUSIONS: There was considerable variability in the experience of post-traumatic growth among gastric cancer survivors. Recognition of high-risk gastric cancer survivors who fall into the fluctuating or persistently low PTG groups and provision of psychological resilience-centered support might allow medical professionals to improve patients' post-traumatic growth and mitigate the impact of negative outcomes.


Subjects
Cancer Survivors; Posttraumatic Growth, Psychological; Stomach Neoplasms; Humans; Stomach Neoplasms/psychology; Male; Female; Cancer Survivors/psychology; Middle Aged; Longitudinal Studies; Aged; Adult; Quality of Life; Adaptation, Psychological; Resilience, Psychological; Anxiety/etiology; Decision Trees
20.
Sci Rep ; 14(1): 15072, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956083

ABSTRACT

With the increasing prevalence of obesity in India, body mass index (BMI) has garnered importance as a disease predictor. The current World Health Organization (WHO) BMI cut-offs may not accurately portray these health risks in older adults aged 60 years and above. This study aims to define age-appropriate cut-offs for older adults (60-74 years and 75 years and above) and compare the performance of these cut-offs with the WHO BMI cut-offs using cardio-metabolic conditions as outcomes. Using baseline data from the Longitudinal Ageing Study in India (LASI), classification and regression tree (CART) cross-sectional analysis was conducted to obtain age-appropriate BMI cut-offs based on cardio-metabolic conditions as outcomes. Logistic regression models were estimated to compare the association of the two sets of cut-offs with cardio-metabolic outcomes. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were estimated. Agreement with waist circumference, an alternative measure of adiposity, was assessed. For older adults aged 60-74 years and 75 years and above, the cut-off for underweight decreased from <18.5 to <17.4 and <13.3, respectively. The thresholds for overweight and obese increased for older adults aged 60-74 years from >=25 to >28.8 and from >=30 to >33.7, respectively. For older adults aged 75 years and above, the thresholds decreased for both categories. The largest improvement in AUC was observed in older adults aged 75 years and above. The newly derived cut-offs also demonstrated higher sensitivity and specificity across all age-sex stratifications. There is a need to adopt greater rigidity in defining overweight/obesity among older adults aged 75 years and above, as opposed to older adults aged 60-74 years, among whom the thresholds need to be less conservative. Further stratification in the low-risk category could also improve BMI classification among older adults. These age-specific thresholds may act as improved alternatives to the current WHO BMI thresholds and improve classification among older adults in India.
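An illustrative sketch of deriving data-driven BMI cut-offs with CART: a shallow decision tree with BMI as the only predictor is fitted to a binary cardio-metabolic outcome, and its split points become candidate thresholds that are then compared with WHO-style bins. The simulated data do not reproduce the LASI cohort.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 4000
    bmi = rng.normal(24, 5, n).clip(13, 45)
    p = 1 / (1 + np.exp(-(0.15 * (bmi - 27))))         # risk rises with BMI (toy model)
    outcome = rng.binomial(1, p)

    tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=200, random_state=0)
    tree.fit(bmi.reshape(-1, 1), outcome)

    # The internal split thresholds are the candidate data-driven BMI cut-offs
    cutoffs = sorted(t for t in tree.tree_.threshold if t > 0)
    print("CART-derived BMI cut-offs:", [round(c, 1) for c in cutoffs])

    # Compare discrimination of derived vs. WHO-style categories (toy comparison)
    who_cat = np.digitize(bmi, [18.5, 25, 30])
    cart_cat = np.digitize(bmi, cutoffs)
    print("AUC, WHO bins :", round(roc_auc_score(outcome, who_cat), 3))
    print("AUC, CART bins:", round(roc_auc_score(outcome, cart_cat), 3))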


Subjects
Body Mass Index; Malnutrition; Humans; Aged; India/epidemiology; Male; Female; Middle Aged; Malnutrition/epidemiology; Malnutrition/diagnosis; Cross-Sectional Studies; Obesity/epidemiology; Age Factors; ROC Curve; Aged, 80 and over; Longitudinal Studies; Overweight/epidemiology; Waist Circumference; Thinness/epidemiology