Results 1 - 3 of 3
1.
Br J Clin Pharmacol; 90(3): 675-683, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37921554

ABSTRACT

AIM: When administering tacrolimus, therapeutic drug monitoring is recommended because nephrotoxicity, an adverse event, occurs at supra-therapeutic whole-blood concentrations of tacrolimus. However, some patients exhibit nephrotoxicity even at the recommended concentrations; therefore, establishing a therapeutic range of tacrolimus concentration for the individual patient is necessary to avoid nephrotoxicity. This study aimed to develop a model for individualized prediction of nephrotoxicity in patients administered tacrolimus. METHODS: We collected data, such as laboratory test results at tacrolimus initiation, concomitant drugs and tacrolimus whole-blood concentration, from the medical records of patients who received oral tacrolimus. Nephrotoxicity was defined as an increase in serum creatinine levels within 60 days of tacrolimus initiation. We built 13 prediction models based on different machine learning algorithms: logistic regression, support vector machine, gradient-boosting trees, random forest and neural networks. The best-performing model was compared with the conventional model, which classifies patients according to the tacrolimus concentration alone. RESULTS: Data from 163 and 41 patients were used to construct the models and to evaluate the best-performing one, respectively. Most of the patients were diagnosed with inflammatory or autoimmune diseases. The best-performing model was built using a support vector machine; it showed a high F2 score of 0.750 and outperformed the conventional model (0.500). CONCLUSIONS: A machine learning model to predict nephrotoxicity in patients during tacrolimus treatment was developed using tacrolimus whole-blood concentration and other patient data. This model could potentially assist in identifying, prior to treatment initiation, high-risk patients who require individualized target therapeutic concentrations of tacrolimus to prevent nephrotoxicity.


Subjects
Algorithms, Tacrolimus, Humans, Logistic Models, Machine Learning
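
A minimal sketch, assuming scikit-learn, of the kind of model-selection step record 1 describes: cross-validating a support vector classifier with the F2 score (which weights recall twice as heavily as precision) as the selection metric. The feature matrix, labels and hyperparameters below are synthetic stand-ins; the study's actual predictors and preprocessing are not reproduced here.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score
    from sklearn.metrics import fbeta_score, make_scorer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(163, 10))    # stand-in for lab values, co-medications, trough concentration
    y = rng.integers(0, 2, size=163)  # stand-in nephrotoxicity labels (creatinine rise within 60 days)

    # F2 weights recall twice as heavily as precision, the metric reported in the abstract.
    f2 = make_scorer(fbeta_score, beta=2)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    scores = cross_val_score(model, X, y, scoring=f2, cv=5)
    print(f"cross-validated F2: {scores.mean():.3f} +/- {scores.std():.3f}")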
2.
J Clin Med; 13(8), 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38673635

ABSTRACT

Background: This investigation sought to cross-validate the predictors of tongue pressure recovery in elderly patients after treatment for head and neck tumors, leveraging advanced machine learning techniques. Methods: By employing logistic regression, support vector regression, random forest, and extreme gradient boosting, the study analyzed an array of variables including patient demographics, surgery types, dental health status, and age, drawn from comprehensive medical records and direct tongue pressure assessments. Results: Among the models, logistic regression emerged as the most effective, demonstrating an accuracy of 0.630 [95% confidence interval (CI): 0.370-0.778], an F1 score of 0.688 (95% CI: 0.435-0.853), a precision of 0.611 (95% CI: 0.313-0.801), a recall of 0.786 (95% CI: 0.413-0.938) and an area under the receiver operating characteristic curve of 0.626 (95% CI: 0.409-0.806). This model distinctly highlighted the significance of glossectomy (p = 0.039), the presence of functional teeth (p = 0.043), and the patient's age (p = 0.044) as pivotal factors influencing tongue pressure, with the threshold for statistical significance set at p < 0.05. Conclusions: The analysis underscored the critical role of glossectomy, the presence of functional natural teeth, and age as determinants of tongue pressure in the logistic regression model, with the presence of natural teeth and a tumor site located in the tongue consistently emerging as the key predictors across all computational models employed in this study.
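
A short sketch of how point estimates and 95% confidence intervals like those quoted in record 2 could be computed for a fitted classifier on a held-out set, assuming scikit-learn and a percentile bootstrap. The abstract does not state how its intervals were derived, so the resampling scheme and the synthetic predictions below are illustrative assumptions.

    import numpy as np
    from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=50)                         # stand-in held-out labels
    y_prob = np.clip(0.3 * y_true + 0.7 * rng.random(50), 0, 1)  # stand-in predicted probabilities
    y_pred = (y_prob >= 0.5).astype(int)

    def bootstrap_ci(metric, n_boot=2000, alpha=0.05):
        """Percentile bootstrap CI for a metric evaluated on resampled indices."""
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y_true), len(y_true))
            if len(np.unique(y_true[idx])) < 2:  # skip resamples containing a single class
                continue
            stats.append(metric(idx))
        return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]).round(3))

    print("accuracy", accuracy_score(y_true, y_pred), bootstrap_ci(lambda i: accuracy_score(y_true[i], y_pred[i])))
    print("F1      ", f1_score(y_true, y_pred),       bootstrap_ci(lambda i: f1_score(y_true[i], y_pred[i])))
    print("ROC AUC ", roc_auc_score(y_true, y_prob),  bootstrap_ci(lambda i: roc_auc_score(y_true[i], y_prob[i])))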

3.
PLoS One; 17(3): e0264541, 2022.
Article in English | MEDLINE | ID: mdl-35275928

ABSTRACT

The degradation of SARS-CoV-2-specific ribonucleic acid (RNA) was investigated by a numerical modeling approach based on nucleic acid amplification test (NAAT) results obtained with the SmartAmp technique. The precision of the measurement was verified by the relative standard deviation (RSD) of repeated measurements at each calibration point. The precision and detection limit were found to be 6% RSD (seven repeated measurements) and 94 copies/tube, respectively, at the lowest calibration point. RNA degradation curves obtained from NAAT data at four different temperatures were in good agreement with a first-order reaction model. Using rate constants derived from these results, the Arrhenius model was applied to predict RNA degradation behavior. If the initial RNA concentration was high enough, such as in samples taken from infected individuals, the NAAT results were expected to remain positive during testing. On the other hand, if initial RNA concentrations were relatively low, such as RNA from residual viruses on environmental surfaces, special attention should be paid to avoid false-negative results. The results obtained in this study provide a practical guide for RNA sample management in the NAAT of non-human samples.


Subjects
COVID-19 Nucleic Acid Testing, COVID-19, Nucleic Acid Amplification Techniques, RNA Stability, RNA, Viral/genetics, SARS-CoV-2/genetics, COVID-19/diagnosis, COVID-19/genetics, Humans
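
A minimal sketch of the decay model record 3 describes: first-order kinetics C(t) = C0 * exp(-k*t) with an Arrhenius temperature dependence k(T) = A * exp(-Ea/(R*T)). The activation energy and pre-exponential factor below are placeholder values chosen for illustration, not the constants fitted in the study; only the 94 copies/tube detection limit is taken from the abstract.

    import numpy as np

    R = 8.314             # gas constant, J/(mol*K)
    EA = 80e3             # assumed activation energy, J/mol (placeholder)
    A = 2.0e13            # assumed pre-exponential factor, 1/h (placeholder)
    DETECTION_LIMIT = 94  # copies/tube, from the abstract

    def rate_constant(temp_c):
        """Arrhenius rate constant k = A * exp(-Ea / (R * T)), temperature in degrees Celsius."""
        return A * np.exp(-EA / (R * (temp_c + 273.15)))

    def remaining_copies(c0, temp_c, hours):
        """First-order decay: C(t) = C0 * exp(-k * t)."""
        return c0 * np.exp(-rate_constant(temp_c) * hours)

    # With these placeholder constants, a low-titre sample of 1000 copies stays above the
    # detection limit after 24 h when refrigerated but falls below it at warmer temperatures.
    for temp in (4, 25, 37):
        c = remaining_copies(1000, temp, hours=24)
        print(f"{temp} degC: {c:.1f} copies after 24 h (detectable: {c >= DETECTION_LIMIT})")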