Results 1 - 20 of 29
1.
BMC Med Res Methodol ; 23(1): 209, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37726680

ABSTRACT

Random Forests are a powerful and frequently applied machine learning tool. The permutation variable importance (VIMP) has been proposed to improve the explainability of such a pure prediction model. It describes the expected increase in prediction error after randomly permuting a variable and thereby disturbing its association with the outcome. However, VIMPs measure only a variable's marginal influence, which can make their interpretation difficult or even misleading. In the present work we address the general need for improving the explainability of prediction models by exploring VIMPs in the presence of correlated variables. In particular, we propose to use a variable's residual information to investigate whether its permutation importance partially or totally originates from correlated predictors. Hypothesis tests are derived using a resampling algorithm that can further support results by providing test decisions and p-values. In simulation studies we show that the proposed test controls type I error rates. When applying the methods to a Random Forest analysis of post-transplant survival after kidney transplantation, the importance of kidney donor quality for predicting post-transplant survival is shown to be high. However, the transplant allocation policy introduces correlations with other well-known predictors, which raises the concern that the importance of kidney donor quality may simply originate from these predictors. Using the proposed method, this concern is addressed and it is demonstrated that kidney donor quality plays an important role in post-transplant survival, regardless of correlations with other predictors.
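The permutation scheme underlying the VIMP is easy to sketch outside the Random Forest setting. A minimal toy illustration with a hand-written predictor (our own example, not the paper's method or data): permuting a feature that the model actually uses inflates the prediction error, while permuting an irrelevant feature does not.

```python
import random

def mse(model, X, y):
    """Mean squared prediction error of `model` on (X, y)."""
    return sum((model(x) - yi) ** 2 for x, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, j, n_repeats=20, rng=None):
    """Expected increase in MSE after randomly permuting feature j,
    which breaks that feature's association with the outcome."""
    rng = rng or random.Random(0)
    base = mse(model, X, y)
    increases = []
    for _ in range(n_repeats):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [x[:j] + (v,) + x[j + 1:] for x, v in zip(X, col)]
        increases.append(mse(model, X_perm, y) - base)
    return sum(increases) / n_repeats

# Toy data: the outcome depends on feature 0 only; feature 1 is pure noise.
rng = random.Random(1)
X = [(rng.random(), rng.random()) for _ in range(200)]
y = [2.0 * x0 for x0, _ in X]
model = lambda x: 2.0 * x[0]  # a perfect predictor that uses feature 0 only

imp_signal = permutation_importance(model, X, y, j=0)
imp_noise = permutation_importance(model, X, y, j=1)
```

Here `imp_signal` is clearly positive while `imp_noise` is exactly zero; the paper's contribution concerns the harder case where correlated predictors blur this distinction.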


Subjects
Kidney Transplantation, Random Forest Algorithm, Humans, Algorithms, Computer Simulation, Machine Learning
2.
Reprod Sci ; 30(9): 2805-2812, 2023 09.
Article in English | MEDLINE | ID: mdl-36988903

ABSTRACT

The purpose of this paper is to explore whether the cardiovascular profile score (CVPS) correlates with fetal outcome in patients with non-immune hydrops fetalis (NIHF) and cardiac anomalies. In this retrospective study, we included fetuses with NIHF and a suspected cardiac anomaly on prenatal ultrasound. The CVPS was calculated using information obtained by fetal echocardiographic examination. Feto-neonatal mortality (FNM) was defined as intrauterine fetal demise or death in the first 6 months of life. We reviewed 98 patients who were referred to the Department of Obstetrics and Gynecology of the Johannes Gutenberg University in Mainz with the diagnosis of NIHF between January 2007 and March 2021. In eighteen of them, a cardiac anomaly was suspected. After exclusion of six pregnancies (one because of termination of pregnancy and five because of incomplete data), 12 cases remained for analysis. Mean gestational age at which the CVPS was calculated was 29 + 2 weeks. Two fetuses died in utero. Of the remaining ten hydropic fetuses, three newborns died in the neonatal period and seven survived the 6-month surveillance period. Median CVPS of all fetuses was 6 points. Surviving fetuses showed statistically significantly higher CVPS values (median 8 points) than fetuses with FNM (median 5 points, p value = 0.009). Our results point towards a positive association between CVPS and fetal outcome in fetuses with NIHF and cardiac anomalies. The CVPS appears to be a useful marker in the assessment of heart failure in utero.


Subjects
Cardiovascular System, Congenital Heart Defects, Pregnancy, Female, Humans, Newborn, Infant, Hydrops Fetalis/diagnosis, Hydrops Fetalis/etiology, Pilot Projects, Retrospective Studies, Prenatal Ultrasonography/methods
3.
J Perinat Med ; 50(7): 985-992, 2022 Sep 27.
Article in English | MEDLINE | ID: mdl-35405041

ABSTRACT

OBJECTIVES: The prognosis of nonimmune hydrops fetalis (NIHF) is still poor, with high mortality and morbidity rates despite progress in perinatal care. This study was designed to investigate the etiology and outcome of NIHF. METHODS: A retrospective review of 90 NIHF cases from 2007 to 2019 was conducted at the University Medical Center of the Johannes Gutenberg University, Mainz, Germany. Demographics, genetic results, prenatal and postnatal outcomes including one-year survival, as well as autopsy data were extracted. Etiology of hydrops was classified using 13 previously established categories. In 4 patients observed between 2016 and 2019, we used a next-generation sequencing (NGS) panel for genetic evaluation. RESULTS: Ninety NIHF cases were identified, with a median gestational age (GA) at diagnosis of 14 weeks. There were 25 live-born infants with a median GA of 34 weeks at birth; 15 patients survived to one year. There was aneuploidy in more than one third of the cases. All 90 cases were subclassified into etiologic categories: chromosomal 35, idiopathic 15, syndromic 11, cardiovascular 9, inborn errors of metabolism 6, lymphatic dysplasia 3, thoracic 3, infections 3, gastrointestinal 3 and hematologic 2. The NGS panel was used in 4 cases and 4 diagnoses were made. CONCLUSIONS: In 90 cases with NIHF we identified an aneuploidy in more than one third of the cases. Improved techniques, possibly including specific genetic analyses, could reduce the high rate of unexplained cases of NIHF.


Subjects
Aneuploidy, Hydrops Fetalis, Autopsy, Female, Gestational Age, Humans, Hydrops Fetalis/diagnosis, Hydrops Fetalis/epidemiology, Hydrops Fetalis/etiology, Infant, Newborn, Pregnancy, Retrospective Studies
4.
Stat Med ; 40(26): 5702-5724, 2021 11 20.
Article in English | MEDLINE | ID: mdl-34327735

ABSTRACT

In heart failure (HF) trials, efficacy is usually assessed by a composite endpoint including cardiovascular death (CVD) and heart failure hospitalizations (HFHs), which has traditionally been evaluated with a time-to-first-event analysis based on a Cox model. As a considerable fraction of events is ignored that way, methods for recurrent events were suggested, among others the semiparametric proportional rates models by Lin, Wei, Yang, and Ying (LWYY model) and Mao and Lin (Mao-Lin model). In our work we apply least false parameter theory to explain the behavior of the composite treatment effect estimates resulting from the Cox model, the LWYY model, and the Mao-Lin model in clinically relevant scenarios parameterized through joint frailty models. These account both for different treatment effects on the two outcomes (CVD, HFHs) and for the positive correlation between their risk rates. For the important setting of beneficial outcome-specific treatment effects we show that the correlation results in composite treatment effect estimates that decrease with trial duration. The estimate from the Cox model is affected more by this attenuation than the estimates from the recurrent event models, which both demonstrate very similar behavior. Since the Mao-Lin model turns out to be less sensitive to harmful effects on mortality, we conclude that, among the three investigated approaches, the LWYY model is the most appropriate one for the composite endpoint in HF trials. Our investigations are motivated by, and compared with, empirical results from the PARADIGM-HF trial (ClinicalTrials.gov identifier: NCT01035255), a large multicenter trial including 8399 chronic HF patients.


Subjects
Heart Failure, Heart Failure/therapy, Humans, Proportional Hazards Models, Treatment Outcome
5.
Pharm Stat ; 20(4): 864-878, 2021 07.
Article in English | MEDLINE | ID: mdl-33783071

ABSTRACT

Progression-free survival (PFS) is a frequently used endpoint in oncological clinical studies. For PFS, the potential events are progression and death. Progressions are usually observed with delay, as they cannot be diagnosed before the next study visit. For this reason, potential bias of treatment effect estimates for progression-free survival is a concern. In randomized trials and for relative treatment effect measures like hazard ratios, bias-correcting methods are either not necessarily required or have been proposed before. However, less is known about cross-trial comparisons of absolute outcome measures like median survival times. This paper proposes a new method for correcting the assessment-time bias of progression-free survival estimates to allow a fair cross-trial comparison of median PFS. Using median PFS as an example, the presented method approximates the unknown posterior distribution by a Bayesian approach based on simulations. It is shown that the proposed method leads to a substantial reduction of bias compared to estimates derived from maximum likelihood or Kaplan-Meier estimates. Bias could be reduced by more than 90% over a broad range of considered situations differing in assessment times and underlying distributions. With coverage probabilities of at least 94%, credibility intervals based on the posterior distribution attain common confidence levels. In summary, the proposed approach is shown to be useful for a cross-trial comparison of median PFS.
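The assessment-time bias itself can be illustrated with a small simulation (a toy sketch of the bias mechanism only, not the paper's Bayesian correction; all parameter values are invented): true progression times are exponential, but a progression is only recorded at the first scheduled visit after it occurs, so every observed time is rounded up to the visit grid.

```python
import math
import random

def median_pfs(median_true=5.0, visit_interval=2.0, n=100_000, seed=0):
    """Sample medians of true vs. visit-recorded progression times.
    True times are exponential with the given median; the observed time
    is the first scheduled visit after the true progression."""
    rng = random.Random(seed)
    rate = math.log(2) / median_true
    true_times, observed_times = [], []
    for _ in range(n):
        t = rng.expovariate(rate)
        true_times.append(t)
        observed_times.append(math.ceil(t / visit_interval) * visit_interval)
    true_times.sort()
    observed_times.sort()
    return true_times[n // 2], observed_times[n // 2]

med_true, med_observed = median_pfs()
```

With 2-unit visit intervals and a true median of 5, the observed median lands on the visit grid above the true value, which is exactly why a cross-trial comparison of median PFS between trials with different assessment schedules needs a correction.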


Subjects
Progression-Free Survival, Bayes Theorem, Bias, Disease-Free Survival, Humans, Kaplan-Meier Estimate
6.
Comput Methods Programs Biomed ; 188: 105259, 2020 May.
Article in English | MEDLINE | ID: mdl-31862679

ABSTRACT

BACKGROUND AND OBJECTIVE: Joint frailty regression models are intended for the analysis of recurrent event times in the presence of informative drop-outs. They have been proposed for clinical trials to estimate the effect of some treatment on the rate of recurrent heart failure hospitalisations in the presence of drop-outs due to cardiovascular death. Whereas an R software package for fitting joint frailty models is available, some technical issues have to be solved in order to use SAS® software, which is required in the regulatory environment of clinical trials. METHODS: First, we demonstrate how to solve these issues by deriving proper likelihood decompositions, in particular for the case of non-normally distributed random terms. Second, we perform a simulation study to evaluate the accuracy of different software implementations (in SAS and R) in terms of convergence behavior, bias of model parameter estimates and coverage probabilities of confidence intervals. To this end, we developed SAS macros that facilitate the analysis and simulation of joint frailty data. These are provided as supplementary material along with comprehensive manuals. RESULTS: Whereas estimates for regression coefficients are unbiased irrespective of the software, the bias of the remaining (nuisance) parameter estimates strongly depends on the software: SAS is shown to be much more efficient in avoiding bias than R. However, even in SAS a careful choice of the implementation is required to get reliable results, in particular for the joint gamma frailty model. By far the best performance is reached with a SAS implementation that makes use of the probability integral transformation method. CONCLUSIONS: We have shown that getting reliable results from joint frailty models is not straightforward and that users should be aware of the computational options between and within software packages. Based on our simulation study, we elaborate recommendations on these options. In addition, the provided SAS macros may encourage statistical practitioners to apply these models in clinical trials with recurrent event data and potentially informative drop-outs.
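The data-generating mechanism behind joint gamma frailty models can be sketched in a few lines (an illustrative simulation under assumed parameter values, not the paper's SAS macros or estimation procedure): one unit-mean gamma frailty per patient multiplies both the hospitalisation rate and the death rate, so frequent hospitalisations and early death become positively correlated, i.e. drop-out is informative.

```python
import random

def simulate_joint_frailty(n_patients=5000, hosp_rate=1.0, death_rate=0.2,
                           followup=2.0, frailty_shape=2.0, seed=0):
    """Recurrent hospitalisations and death linked by a shared gamma
    frailty Z with E[Z] = 1: both conditional rates are proportional to Z."""
    rng = random.Random(seed)
    records = []  # (hospitalisations, follow-up time, died during follow-up?)
    for _ in range(n_patients):
        z = rng.gammavariate(frailty_shape, 1.0 / frailty_shape)
        death_time = rng.expovariate(z * death_rate)
        fu = min(death_time, followup)
        t, hosps = 0.0, 0
        while True:
            t += rng.expovariate(z * hosp_rate)  # constant conditional hazard
            if t >= fu:
                break
            hosps += 1
        records.append((hosps, fu, death_time < followup))
    return records

records = simulate_joint_frailty()
rate_died = (sum(h for h, fu, d in records if d)
             / sum(fu for h, fu, d in records if d))
rate_alive = (sum(h for h, fu, d in records if not d)
              / sum(fu for h, fu, d in records if not d))
```

Patients who die during follow-up show a visibly higher hospitalisation rate per unit time than survivors, which is exactly the dependence a joint frailty model captures and a naive analysis ignores.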


Subjects
Frailty/mortality, Frailty/physiopathology, Algorithms, Computer Simulation, Statistical Data Interpretation, Heart Failure/mortality, Heart Failure/physiopathology, Hospitalization, Humans, Likelihood Functions, Cardiovascular Models, Multivariate Analysis, Probability, Proportional Hazards Models, Reproducibility of Results, Software
7.
Biom J ; 61(6): 1385-1401, 2019 11.
Article in English | MEDLINE | ID: mdl-31206775

ABSTRACT

This work is motivated by clinical trials in chronic heart failure, where treatment has effects both on morbidity (assessed as recurrent non-fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, clinical trial results are more often presented as treatment effect estimates derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen-Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes when these are actually linked by shared or dependent frailty terms. First, we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated by the results of the CHARM-Preserved trial (where CHARM is the 'Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity' programme).


Subjects
Biometry/methods, Clinical Trials as Topic, Heart Failure/drug therapy, Proportional Hazards Models, Asymptomatic Diseases, Humans, Risk Assessment
8.
Mult Scler ; 25(5): 661-668, 2019 04.
Article in English | MEDLINE | ID: mdl-29532745

ABSTRACT

BACKGROUND: The course of multiple sclerosis (MS) shows substantial inter-individual variability. The underlying determinants of disease severity likely involve genetic and environmental factors. OBJECTIVE: The aim of this study was to assess the impact of APOE and HLA polymorphisms as well as smoking and body mass index (BMI) in the very early MS course. METHODS: Untreated patients (n = 263) with a recent diagnosis of relapsing-remitting (RR) MS or clinically isolated syndrome underwent standardized magnetic resonance imaging (MRI). Genotyping was performed for single-nucleotide polymorphisms (SNPs) rs3135388 tagging the HLA-DRB1*15:01 haplotype and rs7412 (ε2) and rs429358 (ε4) in APOE. Linear regression analyses were applied based on the three SNPs, smoking and BMI as exposures and MRI surrogate markers for disease severity as outcomes. RESULTS: Current smoking was associated with reduced gray matter fraction, lower brain parenchymal fraction and increased cerebrospinal fluid fraction in comparison to non-smoking, whereas no effect was observed on white matter fraction. BMI and the SNPs in HLA and APOE were not associated with structural MRI parameters. CONCLUSIONS: Smoking may have an unfavorable effect on the gray matter fraction as a potential measure of MS severity already in early MS. These findings may impact patients' counseling upon initial diagnosis of MS.


Subjects
Apolipoproteins E/genetics, Brain/pathology, HLA-DRB1 Chains/genetics, Multiple Sclerosis/etiology, Smoking/adverse effects, Adolescent, Adult, Aged, Atrophy/genetics, Body Mass Index, Female, Genetic Predisposition to Disease/genetics, Humans, Male, Middle Aged, Multiple Sclerosis/genetics, Multiple Sclerosis/pathology, Single Nucleotide Polymorphism/genetics, Young Adult
9.
Neurology ; 92(2): e115-e124, 2019 01 08.
Article in English | MEDLINE | ID: mdl-30530796

ABSTRACT

OBJECTIVE: Prolonged monitoring times (72 hours) are recommended to detect paroxysmal atrial fibrillation (pAF) after ischemic stroke, but this is not yet clinical practice; individual patient selection for prolonged ECG monitoring might therefore increase the diagnostic yield of pAF in a resource-saving manner. METHODS: We used individual patient data from 3 prospective studies (n_total = 1,556) performing prolonged Holter-ECG monitoring (at least 72 hours) and centralized data evaluation after TIA or stroke in patients with sinus rhythm. Based on the TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) guideline, a clinical score was developed on one cohort, internally validated by bootstrapping, and externally validated on the 2 other studies. RESULTS: pAF was detected in 77 of 1,556 patients (4.9%) during 72 hours of Holter monitoring. After logistic regression analysis with variable selection, age and the qualifying stroke event (categorized as stroke with NIH Stroke Scale [NIHSS] score ≤5 [odds ratio 2.4 vs TIA; 95% confidence interval 0.8-6.9, p = 0.112] or stroke with NIHSS score >5 [odds ratio 7.2 vs TIA; 95% confidence interval 2.4-21.8, p < 0.001]) were found to be predictive for the detection of pAF within 72 hours of Holter monitoring and were included in the final score (Age: 0.76 points/year; Stroke Severity: NIHSS ≤5 = 9 points, NIHSS >5 = 21 points; to Find AF [AS5F]). The high-risk group defined by AS5F is characterized by a predicted risk between 5.2% and 40.8% for detection of pAF, with a number needed to screen of 3 for the highest observed AS5F points within the study population. Given the low number of outcomes, the results need replication before AS5F is generalized. CONCLUSION: The AS5F score can select patients for prolonged ECG monitoring after ischemic stroke to detect pAF.
CLASSIFICATION OF EVIDENCE: This study provides Class I evidence that the AS5F score accurately identifies patients with ischemic stroke at a higher risk of pAF.
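The point assignment reported in the abstract can be written down directly. The mapping from points to predicted risk is not reproduced here, since the abstract only reports the high-risk range; the event category labels are ours.

```python
def as5f_points(age_years, qualifying_event):
    """AS5F points as reported in the abstract: 0.76 points per year of
    age, plus 9 points for stroke with NIHSS <= 5 or 21 points for stroke
    with NIHSS > 5; TIA is the reference category (0 points)."""
    event_points = {
        "TIA": 0.0,
        "stroke_nihss_le_5": 9.0,
        "stroke_nihss_gt_5": 21.0,
    }
    return 0.76 * age_years + event_points[qualifying_event]

# Example: a 75-year-old with a severe stroke (NIHSS > 5)
points = as5f_points(75, "stroke_nihss_gt_5")  # 0.76 * 75 + 21 = 78 points
```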


Subjects
Atrial Fibrillation/diagnosis, Atrial Fibrillation/etiology, Ambulatory Electrocardiography/methods, Electrocardiography/methods, Stroke/complications, Aged, Aged 80 and over, Female, Humans, Logistic Models, Male, Middle Aged, Outcome Assessment in Health Care, Predictive Value of Tests, Prospective Studies, Reproducibility of Results, Time Factors
10.
Clin Res Cardiol ; 107(5): 437-443, 2018 May.
Article in English | MEDLINE | ID: mdl-29453594

ABSTRACT

BACKGROUND: Composite endpoints combining several event types of clinical interest often define the primary efficacy outcome in cardiologic trials. They are commonly evaluated as time-to-first-event, thereby following the recommendations of regulatory agencies. However, to assess the patient's full disease burden and to identify preventive factors or interventions, subsequent events following the first one should be considered as well. This is especially important in cohort studies and RCTs with a long follow-up, which leads to a higher number of observed events per patient. So far, no recommendations exist as to which approach should be preferred. DESIGN: Recently, the Cardiovascular Round Table of the European Society of Cardiology indicated the need to investigate "how to interpret results if recurrent-event analysis results differ […] from time-to-first-event analysis" (Anker et al., Eur J Heart Fail 18:482-489, 2016). This work addresses this topic by means of a systematic simulation study. METHODS: This paper compares two common analysis strategies for composite endpoints, differing with respect to the incorporation of recurrent events, for typical data scenarios motivated by a clinical trial. RESULTS: We show that the treatment effects estimated from a time-to-first-event analysis (Cox model) and a recurrent-event analysis (Andersen-Gill model) can systematically differ, particularly in cardiovascular trials. Moreover, we provide guidance on how to interpret these results and recommend points to consider for the choice of a meaningful analysis strategy. CONCLUSIONS: When planning trials with a composite endpoint, researchers and regulatory agencies should be aware that the model choice affects the estimated treatment effect and its interpretation.
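The systematic difference between the two strategies can be reproduced in a toy simulation: with patient-level frailty, the ratio of mean event counts (an Andersen-Gill/rate-ratio-type quantity) recovers the true effect, while a first-event-only contrast (here a ratio of cumulative hazards of any event at end of follow-up, standing in for a Cox fit) is attenuated toward 1. All parameter values are illustrative, not taken from the paper.

```python
import math
import random

def event_counts(n, rate, followup, rng):
    """Per-patient recurrent-event counts: each patient's conditional
    event rate is `rate` times a unit-mean exponential frailty."""
    counts = []
    for _ in range(n):
        z = rng.expovariate(1.0)  # frailty with E[Z] = 1, Var[Z] = 1
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(z * rate)
            if t > followup:
                break
            k += 1
        counts.append(k)
    return counts

rng = random.Random(42)
control = event_counts(2000, rate=1.0, followup=3.0, rng=rng)
treated = event_counts(2000, rate=0.6, followup=3.0, rng=rng)

# Recurrent-event contrast: ratio of mean counts, unbiased for the
# true rate ratio of 0.6 in this setting.
rate_ratio = (sum(treated) / len(treated)) / (sum(control) / len(control))

def cum_hazard_first(counts):
    """Cumulative hazard of *any* event by end of follow-up,
    -log P(no event), i.e. what a first-event analysis sees."""
    p_no_event = sum(1 for k in counts if k == 0) / len(counts)
    return -math.log(p_no_event)

first_event_ratio = cum_hazard_first(treated) / cum_hazard_first(control)
```

In this sketch `rate_ratio` sits near the true 0.6 while `first_event_ratio` is noticeably closer to 1, mirroring the paper's finding that Cox and Andersen-Gill estimates can systematically differ.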


Subjects
Clinical Trials as Topic/methods, Endpoint Determination, Heart Failure/therapy, Research Design, Computer Simulation, Heart Failure/diagnosis, Heart Failure/mortality, Heart Failure/physiopathology, Humans, Proportional Hazards Models, Recurrence, Time Factors, Treatment Outcome
11.
BMC Med Res Methodol ; 17(1): 92, 2017 Jul 04.
Article in English | MEDLINE | ID: mdl-28676086

ABSTRACT

BACKGROUND: Composite endpoints comprising hospital admissions and death are the primary outcome in many cardiovascular clinical trials. For statistical analysis, a Cox proportional hazards model for the time to first event is commonly applied. There is an ongoing debate on whether multiple episodes per individual should be incorporated into the primary analysis. While the advantages in terms of power are readily apparent, potential biases have been largely overlooked so far. METHODS: Motivated by a randomized controlled clinical trial in heart failure patients, we use directed acyclic graphs (DAGs) to investigate potential sources of bias in treatment effect estimates, depending on whether only the first or multiple episodes are considered. The biases are first explained in simplified examples and then investigated more thoroughly in simulation studies that mimic realistic patterns. RESULTS: The Cox model in particular is prone to potentially severe selection bias and direct effect bias, resulting in underestimation when the analysis is restricted to first events. We find that both kinds of bias can simultaneously be reduced by adequately incorporating recurrent events into the analysis model. Correspondingly, we point out appropriate proportional hazards-based multi-state models for decreasing bias and increasing power when analyzing multiple-episode composite endpoints in randomized clinical trials. CONCLUSIONS: Incorporating multiple episodes per individual into the primary analysis can reduce the bias of a treatment's total effect estimate. Our findings will help to move beyond the paradigm of considering first events only, towards approaches that use more information from the trial and augment interpretability, as has been called for in cardiovascular research.


Subjects
Cardiovascular Diseases/therapy, Outcome Assessment in Health Care/methods, Outcome Assessment in Health Care/statistics & numerical data, Proportional Hazards Models, Algorithms, Humans, Statistical Models, Randomized Controlled Trials as Topic, Reproducibility of Results, Research Design
12.
Respir Res ; 18(1): 101, 2017 05 23.
Article in English | MEDLINE | ID: mdl-28535788

ABSTRACT

BACKGROUND: In acute respiratory distress syndrome (ARDS), a sustained mismatch of alveolar ventilation and perfusion (VA/Q) impairs pulmonary gas exchange. Measurement of end-expiratory lung volume (EELV) by multiple-breath nitrogen washout/washin is a non-invasive bedside technology to assess pulmonary function in mechanically ventilated patients. The present study examines the association between EELV changes and VA/Q distribution, and the possibility of predicting VA/Q normalization by means of EELV, in a porcine model. METHODS: After approval by the state and institutional animal care committee, 12 anesthetized pigs were randomized to ARDS induced either by bronchoalveolar lavage (n = 6) or oleic acid injection (n = 6). EELV, VA/Q ratios by multiple inert gas elimination and ventilation distribution by electrical impedance tomography were assessed in the healthy state and at five different positive end-expiratory pressure (PEEP) steps in ARDS (0, 20, 15, 10, 5 cmH2O; each maintained for 30 min). RESULTS: VA/Q, EELV and tidal volume distribution all displayed the PEEP-induced recruitment in ARDS. We found a close correlation between VA/Q < 0.1 (representing shunt and low VA/Q units) and changes in EELV (Spearman correlation coefficient -0.79). Logistic regression reveals the potential to predict VA/Q normalization (VA/Q < 0.1 less than 5%) from changes in EELV, with an area under the receiver operating characteristic curve of 0.89 (95% CI 0.81-0.96). Different lung injury models and recruitment characteristics did not influence these findings. CONCLUSION: In a porcine ARDS model, EELV measurement depicts PEEP-induced lung recruitment and is strongly associated with normalization of the VA/Q distribution in a model-independent fashion. Determination of EELV could be an intriguing addition in the context of lung protection strategies.
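The reported AUC is the Mann-Whitney probability that a randomly chosen case (here, an animal reaching VA/Q normalization) scores higher than a randomly chosen non-case. A minimal rank-based computation with made-up scores, not the study's data:

```python
def auc(case_scores, control_scores):
    """Rank-based AUC: probability that a randomly drawn case scores
    higher than a randomly drawn control, with ties counted as 1/2."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Illustrative predicted probabilities for cases vs. non-cases
example_auc = auc([0.9, 0.8, 0.4], [0.3, 0.8, 0.2])  # 7.5 / 9 ≈ 0.833
```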


Subjects
Lung Injury/physiopathology, Pulmonary Ventilation/physiology, Respiratory Distress Syndrome/physiopathology, Animals, Forced Expiratory Volume/physiology, Lung Volume Measurements/methods, Male, Peak Expiratory Flow/physiology, Swine
13.
J Neurosci Methods ; 276: 73-78, 2017 01 30.
Article in English | MEDLINE | ID: mdl-27894783

ABSTRACT

BACKGROUND: A reliable measurement of brain water content (wet-to-dry ratio) is an important prerequisite for research on mechanisms of brain edema formation. The conventionally used oven-drying method suffers from several limitations, especially with small samples. A technically demanding and time-consuming alternative is freeze-drying. NEW METHOD: Centrifugal vacuum concentrators (e.g. SpeedVac/speed-vacuum drying) combine vacuum-drying and centrifugation, which is used to reduce the boiling temperature. These concentrators have the key advantages of improving the freeze-drying speed and maintaining the integrity of dried samples, thus allowing, e.g., DNA analyses. In the present study, we compared the heat-oven technique with the speed-vacuum technique with regard to their efficacy in removing moisture from water and brain samples and their effectiveness in distinguishing treatment paradigms after experimental traumatic brain injury (TBI) caused by controlled cortical impact (CCI). RESULTS: Both techniques effectively removed water, the oven technique taking 24 h and vacuum-drying taking 48 h. Vacuum-drying showed lower variation in small samples (30-45 mg) and was suitable for genomic analysis, as exemplified by sex genotyping. The effect of sodium bicarbonate (NaBic 8.4%) on brain edema formation after CCI was investigated in small samples (2 × 1 mm). Only vacuum-drying showed low variation and a significant improvement under NaBic 8.4% treatment. COMPARISON WITH AN EXISTING METHOD: Receiver operating characteristic (ROC) analysis demonstrated that vacuum-drying (area under the curve (AUC): 0.867-0.967) was superior to the conventional heat-drying method (AUC: 0.367-0.567). CONCLUSIONS: The vacuum method is superior in terms of quantifying water content in small samples. In addition, vacuum-dried samples can be used for subsequent analyses, e.g., PCR analysis.


Subjects
Brain Chemistry, Desiccation/methods, Hot Temperature, Vacuum, Water/analysis, Animals, Area Under Curve, Brain Edema/drug therapy, Brain Edema/metabolism, Traumatic Brain Injuries/drug therapy, Traumatic Brain Injuries/metabolism, Centrifugation, Desiccation/instrumentation, Animal Disease Models, Feasibility Studies, Genotyping Techniques, Male, Inbred C57BL Mice, Neuroprotective Agents/pharmacology, ROC Curve, Sodium Bicarbonate/pharmacology, Time Factors
14.
BMC Res Notes ; 9: 127, 2016 Feb 27.
Article in English | MEDLINE | ID: mdl-26920895

ABSTRACT

BACKGROUND: A variety of instruments are used to perform airway management by tracheal intubation. In this study, we compared the Macintosh blade (MB) laryngoscope with the Bonfils intubation fibrescope as intubation techniques. The aim of this study was to identify the technique (MB or Bonfils) that would allow students in their last year of medical school to perform tracheal intubation faster and with a higher success probability. Data were collected from 150 participants using an airway simulator ['Laerdal Airway Management Trainer' (Laerdal Medical AS, Stavanger, Norway)]. The participants were randomly assigned to a sequence of techniques to use. Four consecutive intubation 'trials' were performed with each technique. These trials were evaluated for differences in the following categories: the 'time to successful ventilation', the 'success probability' within 90 s, the 'time to visualisation' of the vocal cords (glottis), and the 'quality of visualisation' according to the Cormack and Lehane score (C&L, grades 1-4). The primary endpoint was the 'time to successful ventilation' in the fourth and final trial. RESULTS: There was no statistically significant difference in the 'time to successful ventilation' between the two techniques in trial 4 (median: MB: 16 s, Bonfils: 14 s, p = 0.244). However, the 'success probability' within 90 s was higher when using a Macintosh blade than when using a Bonfils (95 vs. 87%). The glottis could be visualised better when using a Bonfils (C&L score of 1 (best view): MB: 41%, Bonfils: 93%), but visualisation was achieved more rapidly when using a Macintosh blade (median 'time to visualisation': MB: 6 s, Bonfils: 8 s, p = 0.003). CONCLUSIONS: The time to ventilation did not differ significantly between the Macintosh blade and the Bonfils; however, success probability and time to visualisation favoured the Macintosh blade as an intubation technique, although the Bonfils seems to have a steeper learning curve.
The Bonfils is still a promising intubation technique and might be easier to learn than the MB, at least in a manikin.


Subjects
Educational Measurement/statistics & numerical data, Fiber Optic Technology/instrumentation, Intratracheal Intubation/methods, Laryngoscopes, Simulation Training/statistics & numerical data, Medical Students, Adult, Female, Humans, Intratracheal Intubation/instrumentation, Male, Manikins, Preceptorship, Artificial Respiration, Time Factors
15.
Int J Oral Maxillofac Implants ; 30(5): 1143-8, 2015.
Article in English | MEDLINE | ID: mdl-26394352

ABSTRACT

PURPOSE: To compare oral health-related quality of life (OHRQoL) in a prospective, randomized crossover trial in patients with mandibular overdentures retained with two or four locators. MATERIALS AND METHODS: In 30 patients with edentulous mandibles, four implants (ICX-plus implants [Medentis Medical]) were placed in the intraforaminal area. Eight weeks after transgingival healing, patients were randomly assigned to have two or four implants incorporated in the prosthesis. After 3 months, the retention concepts were switched: patients with a two-implant-supported overdenture had four implants incorporated, whereas patients with a four-implant-supported overdenture had two retention locators taken out. After 3 more months, all four implants were retained in the implant-supported overdenture in every patient. To measure the patients' OHRQoL, the Oral Health Impact Profile 14, German version (OHIP-14 G), was used. RESULTS: A considerable increase in OHRQoL could be seen in all patients after the prosthesis was placed on the implants. Also, a statistically significant difference in OHRQoL could be seen in the OHIP-14 G scores between two-implant and four-implant overdentures. Patients had a higher OHRQoL after incorporation of four implants in the overdenture compared with only two implants. CONCLUSION: Patients with implant-retained overdentures had better OHRQoL compared with those with conventional dentures. The number of incorporated implants in the locator-retained overdenture also influenced the increase in OHRQoL, with four implants having a statistically significant advantage over two implants.


Subjects
Implant-Supported Dental Prosthesis/psychology , Complete Lower Denture/psychology , Denture Liners/psychology , Oral Health , Quality of Life , Aged , Aged, 80 and over , Cross-Over Studies , Dental Implants , Denture Retention/psychology , Female , Follow-Up Studies , Humans , Edentulous Jaw/psychology , Edentulous Jaw/rehabilitation , Male , Middle Aged , Prospective Studies , Survival Analysis , Visual Analog Scale
16.
BMC Med Res Methodol ; 15: 16, 2015 Mar 08.
Article in English | MEDLINE | ID: mdl-25886022

ABSTRACT

BACKGROUND: In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of a disease, in contrast to a gap time scale, where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. METHODS: We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as for the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times, conditional on the total time of the preceding event or study start. Closed-form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. RESULTS: The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that allows for complex study designs for which no analytical sample size formulas exist. CONCLUSIONS: The derived simulation algorithm is useful for simulating recurrent event data that follow an Andersen-Gill model. In addition to the total time scale, it allows for intra-patient correlation and risk-free intervals, as are often observed in clinical trial data. It therefore allows the simulation of data that closely resemble real settings and can thus improve the use of simulation studies for designing and analyzing studies.
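The paper's implementation is an R script not reproduced here. As a rough illustration of the conditional-inversion idea, the following Python sketch draws recurrent event times on a total time scale under an assumed Weibull cumulative hazard with a gamma frailty (intra-patient correlation) and a fixed risk-free interval after each event; the Weibull form and all parameter values are illustrative assumptions, not the paper's parameterization.

```python
import math
import random

def simulate_subject(shape=1.5, scale=10.0, frailty_var=0.5,
                     risk_free=1.0, censor=24.0, rng=random):
    """Simulate recurrent event times for one subject on a total time scale.

    Assumed cumulative hazard: Lambda(t) = z * (t/scale)**shape, with a
    gamma frailty z (mean 1, variance frailty_var). After each event the
    subject is insusceptible for `risk_free` time units; the hazard then
    resumes on the *total* time scale (no renewal after events). Each
    inter-event time is drawn from its conditional distribution given the
    total time of the preceding event, by inverting the conditional
    survivor function S(t | s) = exp(-(Lambda(t) - Lambda(s))).
    """
    z = rng.gammavariate(1.0 / frailty_var, frailty_var) if frailty_var > 0 else 1.0
    Lam = lambda t: z * (t / scale) ** shape           # cumulative hazard
    Lam_inv = lambda h: scale * (h / z) ** (1.0 / shape)
    events, t = [], 0.0
    while True:
        start = t + (risk_free if events else 0.0)     # risk-free gap after each event
        u = rng.random()
        t = Lam_inv(Lam(start) - math.log(u))          # conditional inversion step
        if t > censor:                                 # administrative censoring
            return events
        events.append(round(t, 3))
```

Because the clock never restarts, every simulated event time is expressed on the total time scale, and the risk-free interval simply shifts the lower integration limit of the hazard.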


Subjects
Algorithms , Statistical Models , Research Design/statistics & numerical data , Risk Assessment/statistics & numerical data , Computer Simulation , Epidemiologic Research Design , Humans , Incidence , Proportional Hazards Models , Recurrence , Reproducibility of Results , Risk Assessment/methods , Risk Factors , Time Factors
17.
J Neurosci Methods ; 240: 67-76, 2015 Jan 30.
Article in English | MEDLINE | ID: mdl-25445060

ABSTRACT

BACKGROUND: Experimental stroke studies use multiple techniques to evaluate histopathological damage. Unfortunately, the sensitivity and reproducibility of these techniques are poorly characterized despite their pivotal influence on results. METHOD: The present study compared several quantification methods for differentiating between two severities of global cerebral ischemia and reperfusion. Male Sprague-Dawley rats were randomized to moderate (10 min) or severe (14 min) ischemia by bilateral carotid occlusion (BCAO) with hemorrhagic hypotension. Neuronal cell count was determined in the hippocampus at bregma -3.14 mm and -3.8 mm on days 3 and 28 post insult by counting neurons in the whole CA1 or in one to three defined regions of interest (ROIs) placed in NeuN- and Fluoro-Jade B-stained sections. RESULTS: In healthy rats, hippocampal neurons were arranged uniformly, while their distribution became inhomogeneous after ischemia. The number of NeuN- and Fluoro-Jade B-positive cells depended on localization. Differences between ischemia severities became more prominent at 28 days than at 3 days. The Fluoro-Jade B-positive cell count increased at 28 days, suggesting that the stain marks injured rather than dying neurons. COMPARISON WITH EXISTING METHODS: Placement of counting windows has a major influence on the extent of differences between degrees of neuronal injury and on variation within groups. CONCLUSIONS: The investigated quantification methods yield inconsistent information on the degree of damage. To obtain consistent and reliable results, the observation period should be extended beyond 3 days. Because of the inhomogeneous distribution of viable neurons in CA1 after ischemia, neuronal counting should not be performed in a single ROI window but in multiple ROIs or across the whole CA1 band.


Subjects
Brain Ischemia/pathology , CA1 Hippocampal Region/pathology , Histocytochemistry/methods , Animals , Nuclear Antigens/metabolism , Benzoxazines , Cell Count , Coloring Agents , Animal Disease Models , Fluoresceins , Fluorescent Dyes , Intracranial Hemorrhages/pathology , Intracranial Hypotension/pathology , Male , Nerve Tissue Proteins/metabolism , Neurons/pathology , Random Allocation , Sprague-Dawley Rats , Reproducibility of Results , Severity of Illness Index , Time Factors
18.
Digestion ; 89(4): 310-8, 2014.
Article in English | MEDLINE | ID: mdl-25074257

ABSTRACT

BACKGROUND AND AIMS: Despite increasingly sensitive and accurate blood tests to detect liver disease, liver biopsy remains very useful in patients with atypical clinical features and abnormal liver tests of unknown etiology. The aim was to determine which elevated laboratory liver parameters prompt the clinician to order a biopsy, and whether laboratory tests are associated with pathological findings on histology. METHODS: 504 patients with unclear hepatopathy, admitted to the outpatient clinic of a university hospital between 2007 and 2010, were analyzed with respect to laboratory results, clinical data, and the results of liver biopsies. RESULTS: Aspartate aminotransferase (AST) and glutamate dehydrogenase (GLDH) levels above the normal range significantly increased the likelihood of a liver biopsy being recommended, by 81% [OR 1.81, 95% CI 1.21-2.71, p = 0.004] and 159% [OR 2.59, 95% CI 1.70-3.93, p < 0.001], respectively. AST values above normal were associated with fibrosis (63 vs. 40% for normal AST, p = 0.010). Elevated ferritin levels pointed to a higher incidence of steatosis (48 vs. 10% for normal ferritin, p < 0.001) and inflammation (87 vs. 62% for normal ferritin, p = 0.004). CONCLUSIONS: Our results indicate that elevated AST and GLDH were associated with a greater likelihood of a liver biopsy being recommended. Elevated AST and ferritin levels were associated with steatosis, inflammation, and fibrosis on liver biopsies. Thus, AST and ferritin may be useful non-invasive predictors of liver pathology in patients with unclear hepatopathy.
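The reported effects are odds ratios with confidence intervals from the study's own regression analysis, which is not reproduced here. As a generic illustration of how an unadjusted odds ratio and a Wald-type 95% CI are computed from a 2x2 table (the cell counts in the usage example are invented, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table.

    a = cases with elevated marker,  b = non-cases with elevated marker,
    c = cases with normal marker,    d = non-cases with normal marker.
    The standard error of log(OR) is sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```

An OR of 1.81, as for AST above, corresponds to an 81% increase in the odds of a biopsy recommendation, which is how the percentage figures in the abstract are derived.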


Subjects
Liver Diseases/diagnosis , Liver/pathology , Adolescent , Adult , Aged , Aged, 80 and over , Biopsy , Germany , Humans , Liver/enzymology , Liver Diseases/enzymology , Middle Aged , Retrospective Studies , Young Adult
19.
Biom J ; 56(4): 631-48, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24817598

ABSTRACT

In some clinical trials, the repeated occurrence of the same type of event is of primary interest, and the Andersen-Gill model has been proposed to analyze such recurrent event data. Existing methods to determine the required sample size for an Andersen-Gill analysis rely on the strong assumption that all heterogeneity in the individuals' risk of experiencing events can be explained by known covariates. In practice, however, this assumption might be violated due to unknown or unmeasured covariates affecting the time to events. In these situations, the use of a robust variance estimate in calculating the test statistic is highly recommended to maintain the type I error rate, but this will in turn decrease the actual power of the trial. In this article, we derive a new sample-size formula that reaches the desired power even in the presence of unexplained heterogeneity. The formula is based on an inflation factor that accounts for the degree of heterogeneity and the characteristics of the robust variance estimate. Nevertheless, in the planning phase of a trial there will usually be some uncertainty about the size of the inflation factor. We therefore propose an internal pilot study design in which the inflation factor is re-estimated during the study and the sample size is adjusted accordingly. In a simulation study, the performance and validity of this design with respect to type I error rate and power are demonstrated. Our method is applied to the HepaTel trial evaluating a new intervention for patients with cirrhosis of the liver.
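The abstract does not give the paper's actual formula. As a rough sketch of the idea only, under the simplifying assumption that the inflation factor can be taken as the ratio of the robust to the model-based variance estimate, the correction and the internal pilot adjustment might look like:

```python
import math

def inflated_sample_size(n_planned, var_robust, var_model):
    """Inflate a planned sample size by the ratio of the robust to the
    model-based variance estimate (the 'inflation factor'), as a
    first-order correction for power lost to unexplained heterogeneity."""
    inflation = var_robust / var_model
    return math.ceil(n_planned * inflation)

def internal_pilot_adjust(n_planned, pilot_var_robust, pilot_var_model, n_max):
    """Internal pilot step: re-estimate the inflation factor from interim
    data and adjust the final sample size. Here the size is never reduced
    below the original plan and is capped at a feasibility bound n_max."""
    n_new = inflated_sample_size(n_planned, pilot_var_robust, pilot_var_model)
    return min(max(n_new, n_planned), n_max)
```

The never-reduce and capping rules are illustrative design choices; the paper's simulation study is what establishes which adjustment rules preserve the type I error rate.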


Subjects
Biometry/methods , Clinical Trials as Topic/methods , Humans , Liver Cirrhosis/therapy , Statistical Models , Pilot Projects , Recurrence , Sample Size , Uncertainty
20.
Stat Med ; 33(7): 1104-20, 2014 Mar 30.
Article in English | MEDLINE | ID: mdl-24122841

ABSTRACT

Many authors have proposed different approaches to combining multiple endpoints into a univariate outcome measure. In the case of binary or time-to-event variables, composite endpoints, which combine several event types within a single event or time-to-first-event analysis, are often used to assess the overall treatment effect. A main drawback of this approach is that the composite effect can be difficult to interpret, as a negative effect in one component can be masked by a positive effect in another. Recently, some authors have proposed more general approaches based on a priority ranking of outcomes, which moreover allow outcome variables of different scale levels to be combined. These new combined effect measures assign a higher impact to more important endpoints, which is meant to simplify the interpretation of results. Whereas statistical tests and models for binary and time-to-event variables are well understood, the latter methods have not been investigated in detail so far. In this paper, we will investigate the statistical properties of prioritized combined outcome measures. We will perform a systematic comparison with standard composite measures, such as the all-cause hazard ratio in the case of time-to-event variables or the absolute rate difference in the case of binary variables, to derive recommendations for different clinical trial scenarios. We will discuss extensions and modifications of the new effect measures that simplify the clinical interpretation. Moreover, we propose a new method for combining the classical composite approach with a priority ranking of outcomes, using a multiple testing strategy based on the closed test procedure.
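The abstract does not spell out the prioritized effect measure it studies. As a generic illustration of the idea only, the following sketch performs win-ratio-style pairwise comparisons with a priority ranking of endpoints; the data layout (each patient a tuple of endpoint values ordered by priority, higher = better) is a hypothetical convention chosen for the example.

```python
def win_ratio(treat, control, margin=0.0):
    """Prioritized pairwise comparison in the win-ratio style.

    Every treatment patient is compared with every control patient on the
    first-priority endpoint; if that comparison is tied (within `margin`),
    the pair falls through to the next endpoint. Pairs tied on all
    endpoints count as neither win nor loss. Returns wins / losses."""
    wins = losses = 0
    for t in treat:
        for c in control:
            for tv, cv in zip(t, c):
                if tv > cv + margin:
                    wins += 1
                    break
                if cv > tv + margin:
                    losses += 1
                    break
    return wins / losses if losses else float("inf")
```

A ratio above 1 favors treatment; because higher-priority endpoints settle a pair before lower-priority ones are consulted, important outcomes dominate the measure, which is the interpretability argument the abstract refers to.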


Subjects
Clinical Trials as Topic/methods , Outcome Assessment in Health Care/methods , Proportional Hazards Models , Treatment Outcome , Computer Simulation , Humans