Results 1 - 20 of 53

1.
J Biopharm Stat ; 33(1): 77-89, 2023 01 02.
Article in English | MEDLINE | ID: mdl-35649152

ABSTRACT

Clinical studies are generally required to characterize the accuracy of new diagnostic tests. In some cases, historical data are available from a predicate device, which is directly relevant to the new test. If these data can be appropriately incorporated into the new test study design, there is an opportunity to reduce the sample size and trial duration for the new test. One approach to achieve this is the Bayesian power prior method, which allows the historical information to be down-weighted via a power parameter. We propose a dynamic method to calculate the power parameter based on first comparing the data between the historical and new data sources using a one-sided comparison, and second mapping the comparison probability through a scaled-Weibull discount function to tune the effective sample size borrowed. This pragmatic and conservative approach is embedded in an adaptive trial framework that allows the trial to stop early for success. An example is presented for a new test developed to detect Methicillin-resistant Staphylococcus aureus in nasal carriage.
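For readers unfamiliar with the discount-function idea, the following R sketch shows one way such a dynamic power parameter could be computed for a binomial endpoint (e.g., diagnostic sensitivity). The variable names, the use of min(p, 1 - p) as the comparison measure, and the Weibull shape/scale values are illustrative assumptions, not the authors' implementation.

```r
# Illustrative sketch of a discount-function power prior for a binomial
# endpoint. The stochastic comparison and the Weibull tuning values are
# assumptions for illustration only.
set.seed(1)

x_hist <- 180; n_hist <- 200   # historical (predicate) successes / trials
x_new  <- 85;  n_new  <- 100   # new-test successes / trials

# Posterior draws under vague Beta(1, 1) priors
theta_hist <- rbeta(1e5, 1 + x_hist, 1 + n_hist - x_hist)
theta_new  <- rbeta(1e5, 1 + x_new,  1 + n_new  - x_new)

# One-sided comparison probability between the two data sources
p_comp <- mean(theta_hist > theta_new)

# Map the comparison probability through a scaled-Weibull discount function
# to obtain the power parameter alpha0 in [0, 1]
weibull_shape <- 3; weibull_scale <- 0.135          # illustrative tuning values
alpha0 <- pweibull(min(p_comp, 1 - p_comp), shape = weibull_shape,
                   scale = weibull_scale)

# Effective historical sample size borrowed, and the resulting power prior
ess_borrowed <- alpha0 * n_hist
posterior <- c(shape1 = 1 + x_new + alpha0 * x_hist,
               shape2 = 1 + (n_new - x_new) + alpha0 * (n_hist - x_hist))
c(p_comp = p_comp, alpha0 = alpha0, ess = ess_borrowed)
```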


Subject(s)
Methicillin-Resistant Staphylococcus aureus , Humans , Bayes Theorem , Prospective Studies , Research Design , Diagnostic Tests, Routine
2.
BMC Med Res Methodol ; 18(1): 50, 2018 06 07.
Article in English | MEDLINE | ID: mdl-29879902

ABSTRACT

BACKGROUND: Joint modelling of longitudinal and time-to-event outcomes has received considerable attention over recent years. Commensurate with this has been a rise in statistical software options for fitting these models. However, these tools have generally been limited to a single longitudinal outcome. Here, we extend the classical joint model to the case of multiple longitudinal outcomes, propose a practical algorithm for fitting the models, and demonstrate how to fit the models using a new package for the statistical software platform R, joineRML. RESULTS: A multivariate linear mixed sub-model is specified for the longitudinal outcomes, and a Cox proportional hazards regression model with time-varying covariates is specified for the event time sub-model. The association between models is captured through a zero-mean multivariate latent Gaussian process. The models are fitted using a Monte Carlo Expectation-Maximisation algorithm, and inferences are based on approximate standard errors from the empirical profile information matrix, which are contrasted with an alternative bootstrap estimation approach. We illustrate the model and software on a real data example for patients with primary biliary cirrhosis with three repeatedly measured biomarkers. CONCLUSIONS: An open-source software package capable of fitting multivariate joint models is available. The underlying algorithm and source code make use of several methods to increase computational speed.
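A minimal fitting sketch with joineRML::mjoint() is shown below. The simulated data, column names, and tuning constants are placeholders, and the call follows the package interface as documented; check the joineRML help pages before relying on it.

```r
# Sketch of fitting a bivariate joint model with joineRML::mjoint() on a
# small simulated data set (hypothetical variable names, not a dataset
# shipped with the package).
library(joineRML)
library(survival)

set.seed(2)
n  <- 150
id <- rep(seq_len(n), each = 4)
tm <- rep(0:3, times = n)
b  <- rnorm(n)                                   # shared random intercept
dat <- data.frame(id = id, time = tm,
                  y1  = 1 + 0.5 * tm + b[id] + rnorm(length(id), 0, 0.5),
                  y2  = 2 - 0.3 * tm + b[id] + rnorm(length(id), 0, 0.5),
                  age = rep(rnorm(n, 60, 8), each = 4))
surv <- data.frame(id = seq_len(n),
                   futime = rexp(n, 0.1 * exp(0.3 * b)),
                   status = rbinom(n, 1, 0.7))
dat <- merge(dat, surv, by = "id")
dat <- dat[dat$time <= dat$futime, ]             # drop measurements after event

fit <- mjoint(
  formLongFixed  = list("y1" = y1 ~ time + age,  # fixed effects, outcome 1
                        "y2" = y2 ~ time),       # fixed effects, outcome 2
  formLongRandom = list("y1" = ~ time | id,      # random intercept + slope
                        "y2" = ~ 1 | id),        # random intercept only
  formSurv       = Surv(futime, status) ~ age,   # Cox sub-model
  data           = dat,
  timeVar        = "time")
summary(fit)
# bootSE(fit, nboot = 50)  # bootstrap standard errors (slower alternative)
```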


Subject(s)
Algorithms , Biometry/methods , Linear Models , Software , Biomarkers/analysis , Humans , Longitudinal Studies , Monte Carlo Method , Multivariate Analysis , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Reproducibility of Results
3.
BMC Med Res Methodol ; 16(1): 117, 2016 09 07.
Article in English | MEDLINE | ID: mdl-27604810

ABSTRACT

BACKGROUND: Available methods for the joint modelling of longitudinal and time-to-event outcomes have typically only allowed for a single longitudinal outcome and a solitary event time. In practice, clinical studies are likely to record multiple longitudinal outcomes. Incorporating all sources of data will improve the predictive capability of any model and lead to more informative inferences for the purpose of medical decision-making. METHODS: We reviewed current methodologies of joint modelling for time-to-event data and multivariate longitudinal data including the distributional and modelling assumptions, the association structures, estimation approaches, software tools for implementation and clinical applications of the methodologies. RESULTS: We found that a large number of different models have recently been proposed. Most considered jointly modelling linear mixed models with proportional hazard models, with correlation between multiple longitudinal outcomes accounted for through multivariate normally distributed random effects. So-called current value and random effects parameterisations are commonly used to link the models. Despite developments, software is still lacking, which has translated into limited uptake by medical researchers. CONCLUSION: Although, in an era of personalized medicine, the value of multivariate joint modelling has been established, researchers are currently limited in their ability to fit these models routinely. We make a series of recommendations for future research needs.


Subject(s)
Algorithms , Models, Theoretical , Multivariate Analysis , Outcome Assessment, Health Care/statistics & numerical data , Bayes Theorem , Clinical Decision-Making , Humans , Longitudinal Studies , Outcome Assessment, Health Care/methods , Reproducibility of Results , Time Factors
4.
Immunol Cell Biol ; 93(5): 508-13, 2015.
Article in English | MEDLINE | ID: mdl-25533287

ABSTRACT

Toxoplasma gondii is a highly prevalent intracellular protozoan parasite that causes severe disease in congenitally infected or immunocompromised hosts. T. gondii is capable of invading immune cells and it has been suggested that the parasite harnesses the migratory pathways of these cells to spread through the body. Although in vitro evidence suggests that the parasite further enhances its spread by inducing a hypermotility phenotype in parasitized immune cells, in vivo evidence for this phenomenon is scarce. Here we use a physiologically relevant oral model of T. gondii infection, in conjunction with two-photon laser scanning microscopy, to address this issue. We found that a small proportion of natural killer (NK) cells in mesenteric lymph nodes contained parasites. Compared with uninfected 'bystander' NK cells, these infected NK cells showed faster, more directed and more persistent migratory behavior. Consistent with this, infected NK cells showed impaired spreading and clustering of the integrin, LFA-1, when exposed to plated ligands. Our results provide the first evidence for a hypermigratory phenotype in T. gondii-infected NK cells in vivo, providing an anatomical context for understanding how the parasite manipulates immune cell motility to spread through the host.


Subject(s)
Cell Movement , Killer Cells, Natural/immunology , Lymph Nodes/pathology , Toxoplasma/immunology , Toxoplasmosis/immunology , Administration, Oral , Animals , Humans , Killer Cells, Natural/parasitology , Lymphocyte Function-Associated Antigen-1/metabolism , Mice , Mice, Inbred CBA , Models, Animal , Phenotype , Toxoplasmosis/transmission
5.
Risk Anal ; 32(7): 1232-43, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22050459

ABSTRACT

A species sensitivity distribution (SSD) models data on toxicity of a specific toxicant to species in a defined assemblage. SSDs are typically assumed to be parametric, despite noteworthy criticism, with a standard proposal being the log-normal distribution. Recently, and confusingly, there have emerged different statistical methods in the ecotoxicological risk assessment literature, independent of the distributional assumption, for fitting SSDs to toxicity data with the overall aim of estimating the concentration of the toxicant that is hazardous to p% of the biological assemblage (usually with p small). We analyze two such estimators derived from simple linear regression applied to the ordered log-transformed toxicity data values and probit-transformed rank-based plotting positions. These are compared to the more intuitive and statistically defensible confidence limit-based estimator. We conclude based on a large-scale simulation study that the latter estimator should be used in typical assessments where a pointwise value of the hazardous concentration is required.
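The two classes of estimator can be sketched in a few lines of R; the simulated toxicity values and the choice of plotting position below are illustrative assumptions, not the paper's simulation design.

```r
# Sketch comparing a regression-based HC5 estimator with the one-sided
# lower-confidence-limit estimator for a log-normal SSD.
set.seed(42)
tox  <- 10^rnorm(12, mean = 1, sd = 0.6)     # toxicity endpoints (e.g., EC50s)
y    <- sort(log10(tox))
n    <- length(y)
p_hc <- 0.05                                  # hazardous to 5% of species

# (a) Simple linear regression of ordered log-toxicity on probit-transformed
#     rank-based plotting positions
pp      <- (seq_len(n) - 0.5) / n             # one common plotting position
fit     <- lm(y ~ qnorm(pp))
hc5_reg <- 10^unname(coef(fit)[1] + coef(fit)[2] * qnorm(p_hc))

# (b) Point estimate and one-sided 95% lower confidence limit based on the
#     normal model for log10 toxicity (Aldenberg & Jaworska-type limit)
xbar <- mean(y); s <- sd(y)
hc5_point <- 10^(xbar + qnorm(p_hc) * s)
k <- qt(0.95, df = n - 1, ncp = -qnorm(p_hc) * sqrt(n)) / sqrt(n)
hc5_lcl <- 10^(xbar - k * s)

c(regression = hc5_reg, point = hc5_point, lower_95 = hc5_lcl)
```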


Subject(s)
Ecotoxicology/methods , Hazardous Substances/toxicity , Models, Statistical , Risk Assessment/methods , Animals , Linear Models
6.
Contemp Clin Trials Commun ; 23: 100818, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34258470

ABSTRACT

BACKGROUND: The SPYRAL HTN-OFF MED Pivotal trial demonstrated that renal denervation (RDN) was efficacious compared to a sham control. The underlying model was an extension of the analysis of covariance (ANCOVA) model, adjusted for baseline blood pressure (BP), and allowed borrowing of information from the previously reported feasibility study using a novel Bayesian method. Fundamental to the estimation of a treatment effect for efficacy are a multitude of statistical modelling assumptions, including the role of outliers, linearity of the association between baseline BP and outcome, and parallelism of the treatment effect difference over the baseline BP range. In this report, we examine the validity of these assumptions to verify the robustness of the treatment effect measured. METHODS: We examined the requisite modelling assumptions of the ANCOVA model fitted to the SPYRAL HTN-OFF MED Pivotal trial using Bayesian methods. To address outliers, we fit a robust regression model (with heavy tailed errors) to the data with diffuse weakly informative prior distributions on the parameters. To address linearity, we replaced the linear baseline term by a natural spline term with 4 degrees of freedom. To address parallelism, we refit the ANCOVA model with an interaction term for treatment arm and baseline BP. RESULTS: ANCOVA models were fitted to the trial data (pooled across the feasibility and pivotal cohorts) using Bayesian methodology with diffuse (non-informative) prior distributions. The modelling assumptions inherent to the ANCOVA models were shown to be broadly satisfied. A robust ANCOVA model yielded a posterior treatment effect of -4.1 mmHg (95% credible interval: -6.3 to -1.9) indicating the influence of outlier values was small. There was moderate evidence of an interaction term effect between baseline BP and treatment, but no evidence of gross violation of linearity in baseline BP. CONCLUSION: The posterior treatment effect estimate is shown to be robust to underlying model assumptions, thus further supporting the evidence of RDN to be an efficacious treatment for resistant hypertension.
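The three robustness checks could be approximated with the brms package as sketched below. The simulated data, variable names, priors, and the use of a generic smooth term in place of the trial's natural spline are assumptions; this is not the trial's analysis code.

```r
# Sketch of three ANCOVA robustness checks (robust errors, smooth baseline
# term, baseline-by-treatment interaction) using brms on simulated data.
library(brms)

set.seed(4)
dat <- data.frame(arm = rbinom(251, 1, 0.5), bp_base = rnorm(251, 163, 8))
dat$bp_change <- -1 - 4 * dat$arm + 0.1 * (dat$bp_base - 163) +
  rt(251, df = 4) * 8                                 # heavy-tailed noise

# (1) Robust ANCOVA: Student-t errors downweight outlying BP changes
fit_robust <- brm(bp_change ~ bp_base + arm, data = dat,
                  family = student(),
                  prior = set_prior("normal(0, 100)", class = "b"))

# (2) Linearity check: a smooth term stands in for the natural spline
fit_spline <- brm(bp_change ~ s(bp_base, k = 5) + arm, data = dat,
                  family = gaussian())

# (3) Parallelism check: interaction between baseline BP and treatment arm
fit_inter <- brm(bp_change ~ bp_base * arm, data = dat, family = gaussian())

summary(fit_robust)   # posterior treatment effect (coefficient on 'arm')
```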

7.
EuroIntervention ; 16(18): e1496-e1502, 2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33226002

ABSTRACT

AIMS: Multiple endpoints with varying clinical relevance are available to establish the efficacy of device-based treatments. Given the variance among blood pressure measures and medication changes in hypertension trials, we performed a win ratio analysis of outcomes in a sham-controlled, randomised trial of renal denervation (RDN) in patients with uncontrolled hypertension despite commonly prescribed antihypertensive medications. We propose a novel prioritised endpoint framework for determining the treatment benefit of RDN compared with sham control. METHODS AND RESULTS: We analysed the SPYRAL HTN-ON MED pilot study data using a prioritised hierarchical endpoint comprised of 24-hour mean ambulatory systolic blood pressure (SBP), office SBP, and medication burden. A generalised pairwise comparisons methodology (win ratio) was extended to examine this endpoint. Clinically relevant thresholds of 5 and 10 mmHg were used for comparisons of ambulatory and office SBP, respectively, and thereby define treatment "winners" and "losers". For a total number of 1,596 unmatched pairs, the RDN subject was the winner in 1,050 pairs, the RDN subject was the loser in 378 pairs, and 168 pairs were tied. The win ratio in favour of RDN was 2.78 (95% confidence interval [CI]: 1.58 to 5.48; p<0.001) and corresponding net benefit statistic was 0.42 (95% CI: 0.20 to 0.63). Sensitivity analyses performed with differing blood pressure thresholds and according to drug adherence testing demonstrated consistent results. CONCLUSIONS: The win ratio method addresses prior limitations by enabling inclusion of more patient-oriented results while prioritising those endpoints considered most clinically important. Applying these methods to the SPYRAL HTN-ON MED pilot study (ClinicalTrials.gov Identifier: NCT02439775), RDN was determined to be superior regarding a hierarchical endpoint and a "winner" compared with sham control patients.
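A win ratio calculation of this general form can be sketched directly in base R. The simulated data, the one-sided application of the thresholds, and the tie-handling below are illustrative simplifications rather than the study's exact rules.

```r
# Sketch of a generalised pairwise (win ratio) comparison with a
# three-level hierarchy: 24-h ambulatory SBP (5 mmHg threshold), office SBP
# (10 mmHg threshold), then medication burden. Data are simulated.
set.seed(7)
rdn  <- data.frame(abpm = rnorm(38, -9, 8), office = rnorm(38, -9, 10),
                   meds = rpois(38, 2))
sham <- data.frame(abpm = rnorm(42, -2, 8), office = rnorm(42, -2, 10),
                   meds = rpois(42, 2))

compare_pair <- function(a, b) {
  # Larger BP reduction is better; fewer medications is better.
  if (a$abpm   <= b$abpm   - 5)  return( 1)   # RDN wins on ambulatory SBP
  if (a$abpm   >= b$abpm   + 5)  return(-1)
  if (a$office <= b$office - 10) return( 1)   # then office SBP
  if (a$office >= b$office + 10) return(-1)
  if (a$meds   <  b$meds)        return( 1)   # then medication burden
  if (a$meds   >  b$meds)        return(-1)
  0                                            # tie
}

res <- outer(seq_len(nrow(rdn)), seq_len(nrow(sham)),
             Vectorize(function(i, j) compare_pair(rdn[i, ], sham[j, ])))
wins <- sum(res == 1); losses <- sum(res == -1); ties <- sum(res == 0)
c(win_ratio = wins / losses,
  net_benefit = (wins - losses) / length(res), ties = ties)
```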


Subject(s)
Hypertension , Sympathectomy , Antihypertensive Agents/therapeutic use , Blood Pressure , Humans , Hypertension/drug therapy , Hypertension/surgery , Kidney , Pilot Projects , Treatment Outcome
9.
EuroIntervention ; 16(1): 89-96, 2020 05 20.
Article in English | MEDLINE | ID: mdl-32038027

ABSTRACT

AIMS: We aimed to estimate the rate of renal artery adverse events following renal denervation with the most commonly applied radiofrequency catheter system based on a comprehensive review of published reports. METHODS AND RESULTS: We reviewed 50 published renal denervation (RDN) trials reporting on procedural safety including 5,769 subjects with 10,249 patient-years of follow-up. Twenty-six patients with renal artery stenosis or dissection (0.45%) were identified, of whom 24 (0.41%) required renal artery stenting. The primary meta-analysis of all reports indicated a 0.20% pooled annual incidence rate of stent implantation (95% CI: 0.12 to 0.29% per year). Additional sensitivity analyses yielded consistent pooled estimates (range: 0.17 to 0.42% per year). Median time from RDN procedure to all renal intervention was 5.5 months (range: 0 to 33 months); 79% of all events occurred within one year of the procedure. A separate review of 14 clinical trials reporting on prospective follow-up imaging using either magnetic resonance imaging, computed tomography or angiography following RDN in 511 total subjects identified just 1 new significant stenosis (0.20%) after a median of 11 months post procedure (range: 1 to 36 months). CONCLUSIONS: Renal artery reintervention following renal denervation with the most commonly applied RF renal denervation system (Symplicity) is rare. Most events were identified within one year.
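The crude pooled rate implied by the totals quoted above can be reproduced with an exact Poisson interval, as in the sketch below; this is only the simple pooled calculation, not the study's full meta-analysis or its sensitivity analyses.

```r
# Crude pooled incidence rate of renal artery stenting with an exact Poisson
# confidence interval, using the event and follow-up totals from the abstract.
events       <- 24      # renal artery stent implantations
person_years <- 10249   # total patient-years of follow-up

pt <- poisson.test(events, T = person_years)
rate_per_100py <- 100 * events / person_years
ci_per_100py   <- 100 * pt$conf.int

round(c(rate = rate_per_100py, lower = ci_per_100py[1],
        upper = ci_per_100py[2]), 3)   # events per 100 patient-years
```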


Subject(s)
Catheter Ablation/adverse effects , Denervation/adverse effects , Renal Artery/injuries , Renal Artery/radiation effects , Sympathectomy/methods , Antihypertensive Agents , Blood Pressure , Humans , Hypertension/surgery , Kidney/physiopathology , Renal Artery/innervation , Sympathectomy/adverse effects , Treatment Outcome
10.
Clin Res Cardiol ; 109(3): 289-302, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32034481

ABSTRACT

BACKGROUND: The SPYRAL HTN clinical trial program was initiated with two 80-patient pilot studies, SPYRAL HTN-OFF MED and SPYRAL HTN-ON MED, which provided biological proof of principle that renal denervation has a blood pressure-lowering effect versus sham controls for subjects with uncontrolled hypertension in the absence or presence of antihypertensive medications, respectively. TRIAL DESIGN: Two multicenter, prospective, randomized, sham-controlled trials have been designed to evaluate the safety and efficacy of catheter-based renal denervation for the reduction of blood pressure in subjects with hypertension in the absence (SPYRAL HTN-OFF MED Pivotal) or presence (SPYRAL HTN-ON MED Expansion) of antihypertensive medications. The primary efficacy endpoint is baseline-adjusted change from baseline in 24-h ambulatory systolic blood pressure. The primary safety endpoint is incidence of major adverse events at 1 month after randomization (or 6 months in cases of new renal artery stenosis). Both trials utilize a Bayesian design to allow for prespecified interim analyses to take place, and thus, the final sample sizes are dependent on whether enrollment is stopped at the first or second interim analysis. SPYRAL HTN-OFF MED Pivotal will enroll up to 300 subjects and SPYRAL HTN-ON MED Expansion will enroll up to 221 subjects. A novel Bayesian power prior approach will leverage historical information from the pilot studies, with a degree of discounting determined by the level of agreement with data from the prospectively powered studies. CONCLUSIONS: The Bayesian paradigm represents a novel and promising approach in device-based hypertension trials. CLINICAL TRIAL REGISTRATION: URL: https://www.clinicaltrials.gov. Unique identifier: NCT02439749 (SPYRAL HTN-OFF MED Pivotal) and NCT02439775 (SPYRAL HTN-ON MED Expansion).
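For a normal endpoint with known variance, the power-prior posterior reduces to simple conjugate algebra. The sketch below uses a fixed borrowing weight and illustrative numbers, not the trials' prespecified analysis, and complements the dynamic discount-function sketch shown under item 1.

```r
# Fixed-weight normal power prior: the pilot-study estimate is down-weighted
# by alpha0 before being combined with the pivotal-cohort likelihood
# (known-variance conjugate algebra; all numbers are illustrative).
power_prior_posterior <- function(ybar_new, n_new, ybar_hist, n_hist,
                                  sigma, alpha0) {
  w_new  <- n_new / sigma^2
  w_hist <- alpha0 * n_hist / sigma^2          # discounted historical weight
  post_mean <- (w_new * ybar_new + w_hist * ybar_hist) / (w_new + w_hist)
  post_sd   <- sqrt(1 / (w_new + w_hist))
  c(mean = post_mean, sd = post_sd,
    ess = n_new + alpha0 * n_hist)             # effective sample size
}

# e.g., pilot mean change -5.0 mmHg (n = 80), pivotal mean change -4.0 mmHg
# (n = 166), SD 12 mmHg, 30% borrowing
power_prior_posterior(-4.0, 166, -5.0, 80, sigma = 12, alpha0 = 0.3)
```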


Subject(s)
Antihypertensive Agents/administration & dosage , Catheter Ablation/methods , Hypertension/therapy , Sympathectomy/methods , Bayes Theorem , Blood Pressure , Denervation/methods , Humans , Hypertension/physiopathology , Prospective Studies , Single-Blind Method
11.
Eur Heart J Cardiovasc Imaging ; 21(10): 1116-1122, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32243493

ABSTRACT

AIMS: Indexed effective orifice area (EOAi) charts are used to determine the likelihood of prosthesis-patient mismatch (PPM) after aortic valve replacement (AVR). The aim of this study is to validate whether these EOAi charts, based on echocardiographic normal reference values, can accurately predict PPM. METHODS AND RESULTS: In the PERIcardial SurGical AOrtic Valve ReplacemeNt (PERIGON) Pivotal Trial, 986 patients with aortic valve stenosis/regurgitation underwent AVR with an Avalus valve. Patients were randomly split (50:50) into training and test sets. The mean measured EOAs for each valve size from the training set were used to create an Avalus EOAi chart. This chart was subsequently used to predict PPM in the test set and measures of diagnostic accuracy (sensitivity, specificity, and negative and positive predictive value) were assessed. PPM was defined by an EOAi ≤0.85 cm2/m2, and severe PPM was defined as EOAi ≤0.65 cm2/m2. The reference values obtained from the training set ranged from 1.27 cm2 for size 19 mm up to 1.81 cm2 for size 27 mm. In the test set, the incidence of PPM was 66% and the incidence of severe PPM was 24%. The EOAi chart misclassified PPM in 30% of patients and severe PPM in 22% of patients. For the prediction of PPM, the sensitivity was 87% and the specificity 37%. For the prediction of severe PPM, the sensitivity was 13% and the specificity 98%. CONCLUSION: The use of echocardiographic normal reference values for EOAi charts to predict PPM is unreliable due to the large proportion of misclassifications.
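The reported accuracy measures are simple functions of the 2x2 table of chart-predicted versus echo-measured PPM. The sketch below uses simulated placeholder classifications and hypothetical column names rather than the trial data.

```r
# Diagnostic-accuracy summaries from a 2x2 table of predicted vs measured
# PPM status (simulated placeholder data; hypothetical column names).
set.seed(3)
test_set <- data.frame(ppm_measured  = runif(493) < 0.66,
                       ppm_predicted = runif(493) < 0.70)

tab <- table(predicted = test_set$ppm_predicted,
             measured  = test_set$ppm_measured)

tp <- tab["TRUE",  "TRUE"];  fp <- tab["TRUE",  "FALSE"]
fn <- tab["FALSE", "TRUE"];  tn <- tab["FALSE", "FALSE"]

c(sensitivity   = tp / (tp + fn),
  specificity   = tn / (tn + fp),
  ppv           = tp / (tp + fp),
  npv           = tn / (tn + fn),
  misclassified = (fp + fn) / sum(tab))
```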


Subject(s)
Aortic Valve Stenosis , Heart Valve Prosthesis Implantation , Heart Valve Prosthesis , Aortic Valve/diagnostic imaging , Aortic Valve/surgery , Aortic Valve Stenosis/diagnostic imaging , Aortic Valve Stenosis/surgery , Heart Valve Prosthesis Implantation/adverse effects , Humans , Prosthesis Design , Prosthesis Fitting , Treatment Outcome
12.
Ecotoxicol Environ Saf ; 72(2): 293-300, 2009 Feb.
Article in English | MEDLINE | ID: mdl-18691758

ABSTRACT

Assessment factors have been proposed as a means to extrapolate from data on the concentrations hazardous to a small sample of species to the concentration hazardous to p% of the species in a given community (HCp). Aldenberg and Jaworska [2000. Uncertainty of the hazardous concentration and fraction affected for normal species sensitivity distributions. Ecotoxicol. Environ. Saf. 46, 1-18] proposed estimators that prescribed universal assessment factors which made use of distributional assumptions associated with species sensitivity distributions. In this paper we maintain those assumptions but introduce loss functions which punish over- and under-estimation. Furthermore, the final loss function is parameterised such that conservatism can be asymmetrically and non-linearly controlled, which enables one to better represent the reality of risk assessment scenarios. We describe the loss functions and derive Bayes rules for each. We demonstrate the method by producing a table of universal factors that are independent of the substance being assessed and which can be combined with the toxicity data in order to estimate the HC5. Finally, through an example we illustrate the potential strength of the newly proposed estimators, which rationally account for the costs of under- and over-estimation when choosing an estimator, as opposed to arbitrarily choosing a one-sided lower confidence limit.
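The Bayes-rule idea can be illustrated with a generic asymmetric loss. In the sketch below, the LINEX loss and the simulated posterior draws stand in for the paper's loss functions and model, which differ in detail.

```r
# Choosing a point estimate for log10(HC5) by minimising posterior expected
# loss under an asymmetric (LINEX) loss. LINEX is a generic stand-in for the
# paper's loss functions; the posterior draws are simulated.
set.seed(11)
log_hc5_draws <- rnorm(5e4, mean = -0.3, sd = 0.25)   # pretend posterior

# LINEX loss: with a > 0, over-estimating the HC5 (an under-protective error)
# is punished exponentially, under-estimating only linearly
linex <- function(est, truth, a = 2)
  exp(a * (est - truth)) - a * (est - truth) - 1

expected_loss <- function(est) mean(linex(est, log_hc5_draws))
bayes_est <- optimize(expected_loss, interval = range(log_hc5_draws))$minimum

c(posterior_mean  = mean(log_hc5_draws),
  bayes_rule_linex = bayes_est,
  hc5_estimate     = 10^bayes_est)
```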


Subject(s)
Ecotoxicology/methods , Environmental Pollutants/toxicity , Hazardous Substances/toxicity , Risk Assessment/methods , Toxicity Tests/methods , Bayes Theorem , Uncertainty
13.
Eur J Cardiothorac Surg ; 55(2): 179-185, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30596979

ABSTRACT

Multivariable regression models are used to establish the relationship between a dependent variable (i.e. an outcome of interest) and more than 1 independent variable. Multivariable regression can be used for a variety of different purposes in research studies. The 3 most common types of multivariable regression are linear regression, logistic regression and Cox proportional hazards regression. A detailed understanding of multivariable regression is essential for correct interpretation of studies that utilize these statistical tools. This statistical primer discusses some common considerations and pitfalls for researchers to be aware of when undertaking multivariable regression.
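The three model types can be fitted in R as sketched below, using a simulated data set with hypothetical variable names.

```r
# The three common multivariable regression models: linear, logistic and
# Cox proportional hazards (simulated data, hypothetical variable names).
library(survival)
set.seed(5)
dat <- data.frame(age = rnorm(200, 65, 10), female = rbinom(200, 1, 0.4))
dat$los    <- 5 + 0.1 * dat$age + rnorm(200)                  # continuous outcome
dat$death  <- rbinom(200, 1, plogis(-6 + 0.07 * dat$age))     # binary outcome
dat$time   <- rexp(200, rate = 0.01 * exp(0.03 * dat$age))    # event time
dat$status <- rbinom(200, 1, 0.7)                             # 1 = event observed

fit_linear   <- lm(los ~ age + female, data = dat)                       # linear
fit_logistic <- glm(death ~ age + female, data = dat, family = binomial) # logistic
fit_cox      <- coxph(Surv(time, status) ~ age + female, data = dat)     # Cox PH

summary(fit_linear); summary(fit_logistic); summary(fit_cox)
```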


Subject(s)
Models, Statistical , Multivariate Analysis , Cardiac Surgical Procedures/mortality , Cardiac Surgical Procedures/statistics & numerical data , Female , Humans , Male , Risk Assessment , Risk Factors
14.
Interact Cardiovasc Thorac Surg ; 28(1): 1-8, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30010875

ABSTRACT

Regression modelling is an important statistical tool frequently utilized by cardiothoracic surgeons. However, these models-including linear, logistic and Cox proportional hazards regression-rely on certain assumptions. If these assumptions are violated, the fitted model should be interpreted with considerable caution. Here, we discuss several assumptions and report diagnostics that can be used to detect departures from these assumptions. Most of the diagnostics discussed are based on residuals: a measure of the difference between the observed and model fitted values. Reliable and generalizable results depend on correctly developed statistical models, and proper diagnostics should play an integral part in the model development.
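Typical residual-based checks for the three model types are sketched below using standard base-R and survival tools on simulated data; these are generic diagnostics, not the article's specific workflow.

```r
# Residual-based diagnostics for linear, logistic and Cox models
# (simulated data, hypothetical variable names).
library(survival)
set.seed(5)
dat <- data.frame(age = rnorm(200, 65, 10),
                  y = rnorm(200), time = rexp(200, 0.05),
                  status = rbinom(200, 1, 0.7))
fit_lm  <- lm(y ~ age, data = dat)
fit_glm <- glm(status ~ age, data = dat, family = binomial)
fit_cox <- coxph(Surv(time, status) ~ age, data = dat)

plot(fit_lm, which = 1:2)                 # residuals vs fitted; normal Q-Q plot
plot(dat$age, residuals(fit_glm, type = "deviance"),
     xlab = "Age", ylab = "Deviance residual")   # logistic: deviance residuals
ph <- cox.zph(fit_cox); ph; plot(ph)      # Schoenfeld test of proportional hazards
```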


Subject(s)
Cardiology/statistics & numerical data , Cardiovascular Diseases/diagnosis , Models, Statistical , Regression Analysis , Humans
15.
Heart ; 105(10): 783-789, 2019 05.
Article in English | MEDLINE | ID: mdl-30541760

ABSTRACT

OBJECTIVE: The objective of this multicentre study was to compare short-term and midterm outcomes between sternotomy and minimally invasive approaches for mitral valve surgery. METHODS: Data for all mitral valve procedures with or without concomitant tricuspid or atrial fibrillation surgery were analysed from three UK hospitals between January 2008 and December 2016. To account for selection bias between the minimally invasive approach and sternotomy, one-to-one propensity score calliper matching without replacement was performed. The main outcome measure was midterm reintervention free survival that was summarised by the Kaplan-Meier estimator and compared between treatment arms using the stratified log-rank test. RESULTS: A total of 2404 procedures (1757 sternotomy and 647 minimally invasive) were performed during the study period. Propensity score matching resulted in 639 matched pairs with improved balance postmatching in all 31 covariates (absolute standardised mean differences <10%). Despite longer procedural times patients who underwent minimally invasive surgery had a lower need for transfusion (20.5% vs 14.4%, p=0.005) and reduced median postoperative length of stay (7 vs 6 days, p<0.001). There were no statistically significant differences in the rates of in-hospital mortality or postoperative stroke. Reintervention-free survival at 8 years was estimated as 86.1% in the minimally invasive group and 84.1% in the sternotomy group (p=0.40). CONCLUSIONS: Minimally invasive surgery is associated with excellent short-term outcomes and comparable midterm outcomes for patients undergoing mitral valve surgery. A minimally invasive approach should be considered for all patients who require mitral valve intervention and should be the standard against which transcatheter mitral techniques are compared.
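A propensity-matched comparison of this general form could be set up as sketched below, assuming the MatchIt and survival packages, simulated data, and hypothetical variable names; it is not the study's analysis code.

```r
# 1:1 nearest-neighbour caliper propensity matching followed by a log-rank
# test stratified on matched pair (simulated data, hypothetical names).
library(MatchIt)
library(survival)

set.seed(9)
dat <- data.frame(age = rnorm(500, 68, 9), ef = rnorm(500, 55, 8),
                  creat = rnorm(500, 90, 20))
dat$mini_invasive <- rbinom(500, 1, plogis(-0.05 * (dat$age - 68)))
dat$time <- rexp(500, 0.05); dat$reint_death <- rbinom(500, 1, 0.3)

# Matching without replacement, 0.2 SD caliper on the propensity score
m <- matchit(mini_invasive ~ age + ef + creat, data = dat,
             method = "nearest", replace = FALSE, caliper = 0.2)
summary(m)                     # standardised mean differences pre/post match
md <- match.data(m)            # matched data incl. 'subclass' pair identifier

# Reintervention-free survival, compared with a pair-stratified log-rank test
km <- survfit(Surv(time, reint_death) ~ mini_invasive, data = md)
survdiff(Surv(time, reint_death) ~ mini_invasive + strata(subclass), data = md)
```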


Subject(s)
Cardiac Surgical Procedures/methods , Minimally Invasive Surgical Procedures/methods , Mitral Valve Insufficiency/surgery , Mitral Valve/surgery , Propensity Score , Sternotomy/methods , Aged , Female , Follow-Up Studies , Hospital Mortality/trends , Humans , Male , Middle Aged , Mitral Valve Insufficiency/mortality , Operative Time , Retrospective Studies , Survival Rate/trends , Treatment Outcome , United Kingdom/epidemiology
16.
J Am Heart Assoc ; 8(21): e014020, 2019 11 05.
Article in English | MEDLINE | ID: mdl-31665959

ABSTRACT

Background Blood pressure (BP) guidelines for patients with aortic stenosis or a history of aortic stenosis treated with aortic valve replacement (AVR) match those in the general population, but this extrapolation may not be warranted. Methods and Results Among patients enrolled in the Medtronic intermediate, high, and extreme risk trials, we included those with a transcatheter AVR (n=1794) or surgical AVR (n=1103) who were alive at 30 days. The associations between early (average of discharge and 30 day post-AVR) systolic BP (SBP) and diastolic BP (DBP) measurements and clinical outcomes between 30 days and 1 year were evaluated. Among 2897 patients, after adjustment, spline curves demonstrated an association between lower SBP (<120 mm Hg, representing 21% of patients) and DBP (<60 mm Hg, representing 30% of patients) and increased all-cause and cardiovascular mortality and repeat hospitalization. These relationships were unchanged when patients with moderate-to-severe aortic regurgitation post-AVR were excluded. After adjustment, compared with DBP 60 to <80 mm Hg, DBP 30 to <60 mm Hg was associated with increased all-cause (adjusted hazard ratio 1.62, 95% CI 1.23-2.14) and cardiovascular mortality (adjusted hazard ratio 2.13, 95% CI 1.52-3.00), but DBP 80 to <100 mm Hg was not. Similarly, after adjustment, compared with SBP 120 to <150 mm Hg, SBP 90 to <120 mm Hg was associated with increased all-cause (adjusted hazard ratio 1.63, 95% CI 1.21-2.21) and cardiovascular mortality (adjusted hazard ratio 1.81, 95% CI 1.25-2.61), but SBP 150 to <180 mm Hg was not. Conclusions Lower BP in the first month after transcatheter AVR or surgical AVR is common and associated with increased mortality and repeat hospitalization. Clarifying optimal BP targets in these patients ought to be a priority and may improve patient outcomes. Clinical Trial Registration Information URL: http://www.clinicaltrials.gov. Unique identifiers: NCT01586910, NCT01240902.
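The spline and banded analyses of blood pressure could be set up as in the sketch below, using simulated data and hypothetical variable names rather than the trial datasets, and without the full covariate adjustment described in the abstract.

```r
# Flexible (natural spline) and categorical-band Cox models for early SBP
# and subsequent mortality (simulated data, hypothetical variable names).
library(survival)
library(splines)
set.seed(13)
dat <- data.frame(sbp = rnorm(2897, 130, 20), age = rnorm(2897, 82, 6))
dat$time  <- rexp(2897, 0.08 * exp(0.01 * pmax(110 - dat$sbp, 0)))
dat$death <- rbinom(2897, 1, 0.15)

# Spline association between early SBP and death between 30 days and 1 year
fit_spline <- coxph(Surv(time, death) ~ ns(sbp, df = 4) + age, data = dat)

# Categorical SBP bands with 120 to <150 mmHg as the reference group
dat$sbp_band <- cut(dat$sbp, c(-Inf, 120, 150, 180, Inf), right = FALSE,
                    labels = c("<120", "120-149", "150-179", ">=180"))
dat$sbp_band <- relevel(dat$sbp_band, ref = "120-149")
fit_bands <- coxph(Surv(time, death) ~ sbp_band + age, data = dat)
summary(fit_bands)   # hazard ratios vs the 120 to <150 mmHg reference band
```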


Subject(s)
Heart Valve Prosthesis Implantation , Hypotension/mortality , Postoperative Complications/mortality , Aged , Aged, 80 and over , Female , Heart Valve Prosthesis Implantation/methods , Humans , Male , Retrospective Studies , Transcatheter Aortic Valve Replacement
18.
Environ Toxicol Chem ; 27(11): 2403-11, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18522453

ABSTRACT

Species sensitivity distributions (SSDs) may accurately predict the proportion of species in a community that are at hazard from environmental contaminants only if they contain sensitivity data from a large sample of species representative of the mix of species present in the locality or habitat of interest. With current widely accepted ecotoxicological methods, however, this rarely occurs. Two recent suggestions address this problem. First, use rapid toxicity tests, which are less rigorous than conventional tests, to approximate experimentally the sensitivity of many species quickly and in approximate proportion to naturally occurring communities. Second, use expert judgements regarding the sensitivity of higher taxonomic groups (e.g., orders) and Bayesian statistical methods to construct SSDs that reflect the richness (or perceived importance) of these groups. Here, we describe and analyze several models from a Bayesian perspective to construct SSDs from data derived using rapid toxicity testing, combining both rapid test data and expert opinion. We compare these new models with two frequentist approaches, Kaplan-Meier and a log-normal distribution, using a large data set on the salinity sensitivity of freshwater macroinvertebrates from Victoria (Australia). The frequentist log-normal analysis produced a SSD that overestimated the hazard to species relative to the Kaplan-Meier and Bayesian analyses. Of the Bayesian analyses investigated, the introduction of a weighting factor to account for the richness (or importance) of taxonomic groups influenced the calculated hazard to species. Furthermore, Bayesian methods allowed us to determine credible intervals representing SSD uncertainty. We recommend that rapid tests, expert judgements, and novel Bayesian statistical methods be used so that SSDs reflect communities of organisms found in nature.
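The frequentist comparison referred to above can be sketched for uncensored data, where a Kaplan-Meier-type fit reduces to the empirical distribution. The salinity sensitivities below are simulated, and the paper's Bayesian models combining rapid-test data with expert opinion are not reproduced here.

```r
# Parametric log-normal SSD versus a non-parametric (empirical) SSD for
# simulated salinity sensitivities (illustration only).
library(MASS)
set.seed(21)
lc50 <- 10^rnorm(40, mean = 0.8, sd = 0.5)    # simulated salinity LC50s (mS/cm)

fit_ln <- fitdistr(log10(lc50), "normal")     # log-normal SSD fit
conc   <- 10^seq(-1, 2.5, length.out = 200)

# Fraction of species affected as a function of concentration under each fit
frac_ln  <- pnorm(log10(conc), fit_ln$estimate["mean"], fit_ln$estimate["sd"])
frac_emp <- ecdf(lc50)(conc)

# Hazardous concentration for 5% of species under each fit
hc5_ln  <- 10^qnorm(0.05, fit_ln$estimate["mean"], fit_ln$estimate["sd"])
hc5_emp <- unname(quantile(lc50, 0.05, type = 6))
c(hc5_lognormal = unname(hc5_ln), hc5_empirical = hc5_emp)
```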


Subject(s)
Bayes Theorem , Environmental Pollutants/toxicity , Sodium Chloride/toxicity , Toxicity Tests, Acute/methods , Animals , Ecology , Species Specificity
19.
Int J Biostat ; 14(1)2018 01 31.
Article in English | MEDLINE | ID: mdl-29389664

ABSTRACT

Methodological development and clinical application of joint models of longitudinal and time-to-event outcomes have grown substantially over the past two decades. However, much of this research has concentrated on a single longitudinal outcome and a single event time outcome. In clinical and public health research, patients who are followed up over time may often experience multiple, recurrent, or a succession of clinical events. Models that utilise such multivariate event time outcomes are quite valuable in clinical decision-making. We comprehensively review the literature for implementation of joint models involving more than a single event time per subject. We consider the distributional and modelling assumptions, including the association structure, estimation approaches, software implementations, and clinical applications. Research into this area is proving highly promising, but to date remains in its infancy.


Subject(s)
Data Interpretation, Statistical , Longitudinal Studies , Models, Statistical , Outcome Assessment, Health Care , Humans , Outcome Assessment, Health Care/methods
20.
Stat Methods Med Res ; 27(1): 185-197, 2018 01.
Article in English | MEDLINE | ID: mdl-27460537

ABSTRACT

A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time, and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
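Two of the simple coefficient-updating strategies can be sketched in a few lines of R; the existing model's linear predictor and the new-population outcomes below are simulated placeholders, not the cardiac surgery registry data.

```r
# Simple coefficient-updating strategies for an existing logistic clinical
# prediction model: intercept-only recalibration and logistic recalibration
# (intercept + slope). 'lp' is the old model's linear predictor (simulated).
set.seed(17)
n  <- 1000
lp <- rnorm(n, -2, 1)                       # linear predictor from old CPM
y  <- rbinom(n, 1, plogis(lp - 0.5))        # outcomes in the new population

# Strategy 1: update the intercept only (calibration-in-the-large)
fit_int <- glm(y ~ 1, offset = lp, family = binomial)

# Strategy 2: logistic recalibration (re-estimate intercept and slope on lp)
fit_slope <- glm(y ~ lp, family = binomial)

coef(fit_int)     # intercept correction
coef(fit_slope)   # calibration intercept and slope

# Updated predicted risks under each strategy
risk_int   <- plogis(coef(fit_int)[1] + lp)
risk_slope <- plogis(coef(fit_slope)[1] + coef(fit_slope)[2] * lp)
```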


Subject(s)
Forecasting , Models, Statistical , Cardiac Surgical Procedures/mortality , Humans , Registries , Regression Analysis , Reproducibility of Results , United Kingdom