ABSTRACT
The role of exercise in preventing osteoporotic fractures remains unclear, and recommendations for optimized exercise protocols are scarce. In the present work, we provide evidence for a favorable effect of exercise on the number of osteoporotic fractures in adults, albeit without observing any significant relevance of intensity progression or study duration. INTRODUCTION: Osteoporotic fractures are a major challenge confronting our aging society. Exercise might be an effective intervention for reducing osteoporotic fractures in older adults, but the most promising exercise protocol for that purpose has yet to be identified. The present meta-analysis therefore aimed to identify important predictors of the exercise effect on osteoporotic fractures in adults. METHODS: Following the PRISMA guideline, we conducted a systematic search of six literature databases for controlled exercise studies that reported the number of low-trauma major osteoporotic fractures separately for exercise (EG) and control (CG) groups. The primary study outcome was the incidence ratio (IR) for major osteoporotic fractures. Sub-analyses were conducted for progression of intensity during the trial (yes vs. no) and for study duration (≤ 12 months vs. > 12 months). RESULTS: In summary, 11 studies with a pooled total of 9715 participant-years in the EG and 9592 in the CG were included. The mixed-effects conditional Poisson regression revealed a favorable exercise effect on major osteoporotic fractures (RR: 0.75, 95% CI: 0.54-0.94, p = .006). Although studies with intensity progression were more favorable, our subgroup analysis did not detect significant differences for diverging intensity progression (p = .133) or study duration (p = .883). Heterogeneity among the trials of the subgroups (I² = 0-7.1%) was negligible. CONCLUSION: The present systematic review and meta-analysis provides significant evidence for a favorable effect of exercise on major osteoporotic fractures.
However, diverging study and exercise characteristics, along with the close interaction of exercise parameters, prevented the derivation of reliable recommendations for exercise protocols for fracture reduction. PROSPERO ID: CRD42021250467.
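As a rough illustration of the rate-ratio arithmetic underlying such a meta-analysis, the sketch below computes an incidence rate ratio and a Wald 95% CI from event counts and participant-years. The counts are hypothetical, not taken from the included trials, and a real analysis would use the mixed-effects conditional Poisson model named above rather than this single-stratum calculation.

```python
import math

def rate_ratio_ci(events_eg, py_eg, events_cg, py_cg, z=1.959964):
    """Incidence rate ratio (EG vs. CG) with a Wald 95% CI on the log scale."""
    irr = (events_eg / py_eg) / (events_cg / py_cg)
    se = math.sqrt(1 / events_eg + 1 / events_cg)  # SE of log(IRR)
    lo, hi = (math.exp(math.log(irr) + s * z * se) for s in (-1, 1))
    return irr, lo, hi

# Hypothetical counts: 50 fractures over 9715 participant-years (EG)
# vs. 70 fractures over 9592 participant-years (CG).
irr, lo, hi = rate_ratio_ci(50, 9715, 70, 9592)
```

A CI whose upper bound crosses 1 (as here) would not, on its own, demonstrate a protective effect; pooling across trials is what gives the meta-analysis its precision.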
Subjects
Osteoporotic Fractures , Humans , Aged , Osteoporotic Fractures/etiology , Osteoporotic Fractures/prevention & control , Exercise , Exercise Therapy/methods , Aging , Quality of Life
ABSTRACT
Understanding the mechanistic causes of population change is critical for managing and conserving species. Integrated population models (IPMs) allow for quantifying population changes while directly relating environmental drivers to vital rates, but the power of IPMs to detect trends and environmental effects on vital rates remains understudied. We simulated data for an IPM under 41 scenarios to determine the power to detect trends and environmental effects on vital rates as a function of study duration, sample size, detection probability, and effect size. Our results indicated that the temporal duration of a study and the effect size, rather than the sample size of each individual data set or the detection probability, had the greatest influence on the power to identify trends in adult survival and fecundity. When using only 10 years of data, we were unable to identify a 50% increase in adult survival but were able to identify this increase with 22 years of data. When using only capture-recapture data in a traditional Cormack-Jolly-Seber analysis, we lacked sufficient power to identify trends in survival, and the power of the Cormack-Jolly-Seber model was always less than that of the IPM. The IPM had greater power to identify trends and environmental effects on fecundity (e.g., we detected a 58% change in fecundity using 12 years of data). Models with effects of environmental variables on vital rates had less power than models with trends alone, likely due to increased annual variation in the vital rate when modeling responses to environmental effects that varied by year. The lack of power in the Cormack-Jolly-Seber analysis could be due to the relatively small variability in adult survival compared with fecundity, given the life history of our simulated species.
As interannual variation in environmental conditions will probably increase with climate change, this type of analysis can help to inform the study duration needed, which may be a shifting target given future climate uncertainty and the complex nature of environmental correlations with demography.
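The duration effect reported above can be illustrated with a back-of-envelope calculation: a normal-approximation power formula for detecting a linear trend in a yearly vital-rate estimate. This is a deliberately crude sketch with an invented noise level, far simpler than fitting a full IPM, but it reproduces the qualitative finding that the same total change is detectable with 22 years of data yet not with 10.

```python
from statistics import NormalDist

def trend_power(total_change, years, sigma, alpha=0.05):
    """Approximate power to detect a linear trend in a yearly estimate.

    total_change: change in the vital rate over the whole study
    sigma:        SD of each yearly estimate (sampling + process error)
    One-sided normal approximation; the negligible opposite-tail term is ignored.
    """
    slope = total_change / (years - 1)
    sxx = years * (years ** 2 - 1) / 12   # sum of (t - mean(t))^2 for t = 0..years-1
    se_slope = sigma / sxx ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(slope) / se_slope - z_crit)

# A 50% rise in survival (0.50 -> 0.75), assumed yearly estimate SD of 0.12:
p10 = trend_power(0.25, 10, 0.12)   # underpowered at 10 years
p22 = trend_power(0.25, 22, 0.12)   # above the conventional 0.8 at 22 years
```

The yearly SD of 0.12 is an assumption chosen for illustration; in a real IPM the effective noise depends on all data sets jointly.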
Subjects
Climate Change , Sample Size , Probability , Population Dynamics
ABSTRACT
In event-driven clinical trials comparing the survival functions of two groups, the number of events required to achieve the desired power is usually calculated using the Freedman formula or the Schoenfeld formula. The sample size and the study duration are then derived from the required number of events; however, their combination is not uniquely determined. In practice, various combinations are examined considering the enrollment speed, study duration, and cost of enrollment. However, effective methods for visually representing their relationships and for evaluating the uncertainty in study duration are lacking. We developed a graphical approach for examining the relationship between sample size and study duration. To evaluate the uncertainty in study duration under a given sample size, we also derived the probability density function of the study duration and a method for updating it according to the observed number of events (i.e., information time). The proposed methods are expected to improve the operation and management of clinical trials with a time-to-event endpoint.
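For reference, both event-count formulas mentioned above are short enough to sketch. The versions below assume a two-sided significance level and 1:1 allocation, which is the textbook form of each formula:

```python
import math
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80):
    """Required number of events for a two-arm log-rank comparison
    under the Schoenfeld and Freedman formulas (1:1 allocation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    schoenfeld = 4 * z ** 2 / math.log(hr) ** 2
    freedman = ((1 + hr) / (1 - hr)) ** 2 * z ** 2
    return math.ceil(schoenfeld), math.ceil(freedman)

# Target hazard ratio 0.7, two-sided alpha 0.05, 80% power:
d_schoenfeld, d_freedman = required_events(0.7)
```

Freedman's formula is slightly more conservative than Schoenfeld's for hazard ratios below 1, which is visible in the two event counts this example returns.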
Subjects
Research Design , Humans , Sample Size , Uncertainty
ABSTRACT
The influence of exposure duration on chemical toxicity has important implications for risk assessment. Although a default 10-fold extrapolation factor is commonly applied when the toxicological dataset includes a subchronic (90-day) study but lacks studies of chronic duration, little consensus has been reached on an appropriate extrapolation factor to apply when the dataset includes a 28-day study but lacks studies of longer durations. The goal of the present assessment was to identify a 28-day to 90-day extrapolation factor by analyzing distributions of ratios of No-Observed-Adverse-Effect Levels (NOAELs) and Benchmark Doses (BMDs) derived from 28-day and 90-day studies. The results of this analysis suggest that a default 10-fold extrapolation factor in chemical risk assessment applications is sufficient to account for the uncertainty associated with evaluating human health risk based on results from a 28-day study in the absence of results from a 90-day study. This analysis adds significantly to the growing body of literature interpreting the influence of exposure duration on chemical toxicity that will likewise facilitate discussions on the future state of testing requirements in the international regulatory community.
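The ratio-distribution approach described above reduces to simple log-scale summary statistics. The sketch below uses invented 28-day/90-day NOAEL ratios purely to show the mechanics; an upper percentile of the fitted lognormal is one candidate basis for an extrapolation factor, which is an assumption of this illustration rather than the assessment's exact method.

```python
import math
import statistics

# Hypothetical NOAEL(28-day) / NOAEL(90-day) ratios for a set of chemicals.
ratios = [1.0, 2.0, 3.2, 1.5, 4.0, 2.5, 1.2, 5.0]

log_ratios = [math.log(r) for r in ratios]
mu, sd = statistics.fmean(log_ratios), statistics.stdev(log_ratios)
gm = math.exp(mu)                      # geometric mean ratio
gsd = math.exp(sd)                     # geometric standard deviation
p95 = math.exp(mu + 1.645 * sd)        # 95th percentile of the fitted lognormal
```

With these made-up data the 95th percentile stays well below 10, consistent with the abstract's conclusion that a default 10-fold factor would be sufficient.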
Subjects
Hazardous Substances/administration & dosage , Hazardous Substances/toxicity , Animals , Benchmarking/methods , Dose-Response Relationship, Drug , Humans , No-Observed-Adverse-Effect Level , Risk Assessment/methods , Time Factors , Uncertainty
ABSTRACT
The 1-year dog toxicity study is no longer required by certain pesticide regulatory jurisdictions, including the United States and the European Union. Health Canada's Pest Management Regulatory Agency (PMRA) examined its current requirement for this study to determine if it could be refined or eliminated. A retrospective analysis was conducted to examine the impact of the 1-year dog study on human health risk assessment. The Acceptable Daily Intake (ADI), a measure of the amount of a pesticide in food that can be ingested on a daily basis over a lifetime without an appreciable health risk, was the metric for this analysis. For 143 pesticides evaluated by the PMRA between 2008 and 2015, the supporting toxicology databases were examined to determine if other toxicology studies were protective of the findings in the 1-year dog study. When this criterion was not met, further investigation was undertaken to determine the potential impact of not having the 1-year dog study. For most of the pesticides, effect levels in the 1-year dog study were not substantially different from those in other toxicology studies, when considering factors such as dose-spacing and known experimental variability. The results of this analysis suggest that absence of the 1-year dog study would have minimal impact on the assessment of human health risk. Therefore, Health Canada's PMRA has removed the routine requirement for the 1-year dog study from its pesticide data requirements.
Subjects
Hazardous Substances/toxicity , Pesticides/toxicity , Toxicity Tests/methods , Animals , Canada , Dogs , European Union , Humans , No-Observed-Adverse-Effect Level , Risk Assessment/methods , United States
ABSTRACT
Over 400 pesticide active ingredients are registered in Japan (FAMIC 2013). The results of dog toxicity studies (usually the 1-year study) were used by the Japanese regulatory authorities to establish the acceptable daily intake (ADI) for 45 of these active ingredients (about 9%). A retrospective review of ADIs established in Japan with dog studies as pivotal data was performed: the ADIs were reassessed under the assumption that the 1-year dog study would not be available, and an alternate ADI was derived from the remaining toxicology database. In 35 of the 45 cases (77.8%), the ADI resulting from the absence of the 1-year dog study was no greater than twice the Japanese ADI, a difference considered not to be of biological significance. In 6 cases (13%), the resulting ADI was 2-5 times higher, a difference considered of questionable biological relevance. On further evaluation of the database, three of these six cases were judged to show no clear difference; for the other three, additional studies would have been required to clarify uncertain findings. In 3 of the 45 cases (7%), there may be a real difference within the ADI ratio range of 2-5. In only 1 case (2.2%) was the alternate ADI five times higher than the established ADI. Accordingly, the absence of a 1-year dog study does not appear to influence the ADI derivation in a relevant manner in approximately 98% of cases. For the four compounds with a real difference in ADI, consumer exposure would still be well below the alternative ADI. Therefore, a strong case can be made that the standard mandatory requirement to conduct a 1-year dog study, in addition to the 3-month study, is not justified and adds no value in protecting human health. In addition, a substantial reduction in test animals could be achieved.
Subjects
Pesticides/toxicity , Toxicity Tests , Animals , Databases, Factual , Disease Models, Animal , Dogs , Humans , Japan , No-Observed-Adverse-Effect Level , Risk Assessment
ABSTRACT
There is a growing need for study designs that can evaluate efficacy and toxicity outcomes simultaneously in phase I or phase I/II cancer clinical trials. Many dose-finding approaches have been proposed; however, most assume binary efficacy and toxicity outcomes, such as dose-limiting toxicity (DLT) and objective response. DLTs are often defined for short time periods. In contrast, objective responses are often defined for longer periods because of practical limitations on confirmation and the criteria used to define 'confirmation'. This means that studies have to be carried out for unacceptably long periods of time. Previous studies have not proposed a satisfactory solution to this specific problem, which may be a barrier for practitioners who want to implement notable earlier dose-finding approaches. To cope with this problem, we propose an approach that uses unconfirmed early responses as a surrogate efficacy outcome for the confirmed outcome. Because it is reasonable to expect moderate positive correlation between the two outcomes, and because the method replaces the surrogate outcome with the confirmed outcome once it becomes available, the proposed approach can reduce irrelevant dose selection and the accumulation of bias. Moreover, it is expected to shorten study duration significantly. Using simulation studies, we demonstrate the utility of the proposed approach and provide three variations of it, all of which can be easily implemented with modified likelihood functions and outcome variable definitions.
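The replace-when-available bookkeeping at the heart of the proposal can be sketched independently of any particular dose-finding model. The names and structure below are illustrative inventions, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientOutcome:
    dose_level: int
    early_response: bool                       # unconfirmed surrogate, available quickly
    confirmed_response: Optional[bool] = None  # replaces the surrogate once observed

    @property
    def working_response(self) -> bool:
        """Confirmed response once available, otherwise the surrogate."""
        if self.confirmed_response is None:
            return self.early_response
        return self.confirmed_response

def response_rate(patients, dose_level):
    """Current working response rate at a dose level."""
    obs = [p.working_response for p in patients if p.dose_level == dose_level]
    return sum(obs) / len(obs) if obs else None

patients = [PatientOutcome(1, True), PatientOutcome(1, False), PatientOutcome(1, True)]
rate_before = response_rate(patients, 1)    # surrogate-based estimate
patients[0].confirmed_response = False      # confirmation overturns the early read
rate_after = response_rate(patients, 1)     # estimate after replacement
```

In the actual method this substitution happens inside the likelihood used for dose selection; the point here is only that early information is used provisionally and corrected, not discarded.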
Subjects
Antineoplastic Agents/administration & dosage , Clinical Trials, Phase I as Topic/statistics & numerical data , Dose-Response Relationship, Drug , Neoplasms/drug therapy , Antineoplastic Agents/adverse effects , Biomarkers , Humans , Neoplasms/epidemiology , Time Factors , Treatment Outcome
ABSTRACT
It is currently unknown how quantitative diffusion and myelin MRI designs affect the results of a longitudinal study. We used two independent datasets containing 6 monthly MRI measurements from 20 healthy controls and 20 relapsing-remitting multiple sclerosis (RR-MS) patients. Six designs were tested, including 3 MRI acquisitions, either over 6 months or over a shorter study duration, with balanced (same interval) or unbalanced (different interval) time intervals between MRI acquisitions. First, we show that in RR-MS patients, the brain changes over time obtained with 3 MRI acquisitions were similar to those observed with 5 MRI acquisitions and that designs with an unbalanced time interval showed the highest similarity, regardless of study duration. No significant brain changes were found in the healthy controls over the same periods. Second, the study duration affects the sample size in the RR-MS dataset; a longer study requires more subjects and vice versa. Third, the number of follow-up acquisitions and study duration affect the sensitivity and specificity of the associations with clinical parameters, and these depend on the white matter bundle and MRI measure considered. Together, this suggests that the optimal design depends on the assumption of the dynamics of change in the target population and the accuracy required to capture these dynamics. Thus, this work provides a better understanding of key factors to consider in a longitudinal study and provides clues for better strategies in clinical trial design.
Subjects
Multiple Sclerosis, Relapsing-Remitting , Multiple Sclerosis , Humans , Brain/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Follow-Up Studies , Longitudinal Studies , Magnetic Resonance Imaging/methods , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis, Relapsing-Remitting/diagnostic imaging , Myelin Sheath
ABSTRACT
An intrauterine system (IUS) can be implanted in the uterus to deliver drug directly at the site of pharmacological action. Mirena was the first FDA-approved levonorgestrel (LNG)-releasing IUS and has no approved generic form. Its 5-year application duration presents challenges for bioequivalence (BE) assessment using conventional in vivo studies with pharmacokinetic and/or comparative clinical endpoints. Conventionally, along with other conditions, BE could be established if the 90% confidence interval (CI) of the ratio of geometric means of residual LNG at the end of 5 years is within the BE limits of 80.00% and 125.00%. Modeling and simulation were conducted to identify a shortened BE study duration and a corresponding BE acceptance limit that can serve as a surrogate for the conventional limit of a 5-year study. Simulation results suggest that having the 90% CI of residual LNG at 12 months post insertion within 95.00-105.26% would ensure that the residual LNG amount at 5 years is within 80.00-125.00%. This modeling and simulation exercise leads to the current BE recommendation: if a test IUS is made of the same material in the same concentration and has the same physical dimensions as Mirena, its BE could be established by showing (1) comparative physicochemical and mechanical properties; (2) comparative in vitro drug release behavior over 5 years; and (3) performance in a comparative short-term in vivo study, with BE based on the 90% confidence interval of the test-to-reference ratio of residual LNG being within 95.00-105.26% at month 12.
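The month-12 acceptance check reduces to a two-sample comparison on the log scale. The sketch below uses invented residual-LNG values and a hard-coded t quantile (1.860 for a 90% two-sided CI with 8 degrees of freedom), so it illustrates the arithmetic of a geometric-mean-ratio CI rather than the regulatory procedure itself:

```python
import math
import statistics

def gmr_90ci(test_vals, ref_vals, t_crit):
    """90% CI (as percentages) for the ratio of geometric means,
    assuming lognormal data and equal variances (pooled SE)."""
    lt = [math.log(x) for x in test_vals]
    lr = [math.log(x) for x in ref_vals]
    mt, mr = statistics.fmean(lt), statistics.fmean(lr)
    ss = sum((x - mt) ** 2 for x in lt) + sum((x - mr) ** 2 for x in lr)
    se = math.sqrt(ss / (len(lt) + len(lr) - 2) * (1 / len(lt) + 1 / len(lr)))
    lo, hi = (100 * math.exp(mt - mr + s * t_crit * se) for s in (-1, 1))
    return lo, hi

# Hypothetical residual LNG amounts (mg) at month 12, n = 5 per arm.
lo, hi = gmr_90ci([10.2, 9.8, 10.1, 10.4, 9.9],
                  [10.0, 10.1, 9.7, 10.3, 10.0], t_crit=1.860)
within_limits = 95.00 <= lo and hi <= 105.26
```

Note how much tighter the 95.00-105.26% surrogate window is than the conventional 80.00-125.00% limits: the shortened study must be far more precise to guarantee the 5-year criterion.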
Subjects
Contraceptive Agents, Female , Intrauterine Devices, Medicated , Contraceptive Agents, Female/pharmacokinetics , Female , Humans , Levonorgestrel/pharmacokinetics , Therapeutic Equivalency , Time Factors
ABSTRACT
BACKGROUND: The gold standard for recording sleep is polysomnography, performed in a hospital environment for 1 night. This requires individuals to sleep with a device and several sensors attached to their face, scalp, and body, which is both cumbersome and expensive. Self-trackers, such as wearable sensors (e.g., smartwatches) and nearable sensors (e.g., sleep mattresses), can measure a broad range of physiological parameters under free-living sleep conditions; however, the optimal duration of such a self-tracker measurement is not known. Free-living sleep studies with actigraphy typically use 3 to 14 days of data collection. OBJECTIVE: The primary goal of this study is to investigate whether 3 to 14 days of sleep data collection is sufficient when using self-trackers. The secondary goal is to investigate whether there is a relationship among sleep quality, physical activity, and heart rate. Specifically, we study whether individuals who exhibit similar activity can be clustered together and to what extent individuals' sleep patterns vary with seasonality. METHODS: Data on sleep, physical activity, and heart rate were collected over 6 months from 54 individuals aged 52 to 86 years. The Withings Aura sleep mattress (nearable; Withings Inc) and Withings Steel HR smartwatch (wearable; Withings Inc) were used. At the individual level, we investigated the consistency of various physical activity and sleep metrics over different time spans to illustrate how sensor data from self-trackers can be used to illuminate trends. We used exploratory data analysis and unsupervised machine learning at both the cohort and individual levels. RESULTS: Significant variability in standard metrics of sleep quality was found between different periods throughout the study.
We showed specifically that an evaluation period longer than the typical 3 to 14 days is necessary to obtain robust individual assessments of sleep and physical activity patterns through self-trackers. In addition, we found seasonal patterns in sleep data related to the clock change for daylight saving time. CONCLUSIONS: We demonstrate that more than 2 months' worth of self-tracking data are needed to provide a representative summary of daily activity and sleep patterns. By doing so, we challenge the current standard of 3 to 14 days for sleep quality assessment and call for a rethinking of standards when collecting data for research purposes. Seasonal patterns and the daylight saving time clock change also need to be taken into consideration when choosing a data collection period and designing studies on sleep. Furthermore, we suggest using self-trackers (wearable and nearable) to support longer-term evaluations of sleep and physical activity for research purposes and, possibly, clinical purposes in the future.
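The window-length argument can be illustrated with a toy seasonal signal: when the underlying sleep pattern drifts slowly, the mean from a 7-day window depends heavily on when it is sampled, while a 60-day window tracks the long-run average much more closely. The sinusoid below is synthetic, not study data:

```python
import math

# Synthetic nightly sleep duration (hours) with an annual seasonal swing.
days = 180
sleep = [7.0 + 0.5 * math.sin(2 * math.pi * d / 365) for d in range(days)]
overall = sum(sleep) / days

def max_window_error(series, window, reference):
    """Largest absolute gap between any window mean and the long-run mean."""
    means = [sum(series[i:i + window]) / window
             for i in range(len(series) - window + 1)]
    return max(abs(m - reference) for m in means)

err_7 = max_window_error(sleep, 7, overall)    # a single week can badly misestimate
err_60 = max_window_error(sleep, 60, overall)  # two months averages out the drift
```

The toy example only shows the averaging effect; the study's case for >2 months also rests on observed week-to-week variability and DST-related shifts in real data.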
ABSTRACT
BACKGROUND: N-of-1 trials promise to help individuals make more informed decisions about treatment selection through structured experiments that compare treatment effectiveness by alternating treatments and measuring their impacts in a single individual. We created a digital platform that automates the design, administration, and analysis of N-of-1 trials. Our first N-of-1 trial, the app-based Brain Boost Study, invited individuals to compare the impacts of two commonly consumed substances (caffeine and L-theanine) on their cognitive performance. OBJECTIVE: The purpose of this study is to evaluate critical factors that may impact the completion of N-of-1 trials to inform the design of future app-based N-of-1 trials. We will measure study completion rates for participants that begin the Brain Boost Study and assess their associations with study duration (5, 15, or 27 days) and notification level (light or moderate). METHODS: Participants will be randomized into three study durations and two notification levels. To sufficiently power the study, a minimum of 640 individuals must begin the study, and 97 individuals must complete the study. We will use a multiple logistic regression model to discern whether the study length and notification level are associated with the rate of study completion. For each group, we will also compare participant adherence and the proportion of trials that yield statistically meaningful results. RESULTS: We completed the beta testing of the N1 app on a convenience sample of users. The Brain Boost Study on the N1 app opened enrollment to the public in October 2019. More than 30 participants enrolled in the first month. CONCLUSIONS: To our knowledge, this will be the first study to rigorously evaluate critical factors associated with study completion in the context of app-based N-of-1 trials. TRIAL REGISTRATION: ClinicalTrials.gov NCT04056650; https://clinicaltrials.gov/ct2/show/NCT04056650. 
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/16362.
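The 3 duration × 2 notification factorial described above implies six arms. A balanced allocation helper can be sketched as follows; this is a hypothetical illustration, not the N1 app's actual randomization code:

```python
import random
from collections import Counter
from itertools import product

def allocate(n_participants, seed=0):
    """Assign participants to the 6 factorial arms as evenly as n allows."""
    arms = list(product([5, 15, 27], ["light", "moderate"]))  # duration x notifications
    # Repeat the arm list until it covers everyone, then shuffle the order.
    assignments = (arms * (n_participants // len(arms) + 1))[:n_participants]
    random.Random(seed).shuffle(assignments)
    return assignments

# The protocol's minimum enrollment of 640 splits into arms of 106 or 107.
counts = Counter(allocate(640))
```

Because 640 is not divisible by 6, perfect balance is impossible; this scheme caps the imbalance at one participant per arm.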
ABSTRACT
We assessed evidence for changes in efficacy of food-based interventions aimed at reducing appetite or energy intake (EI), and whether this could be used to provide guidance on trial design. A systematic search identified randomized controlled trials testing sustained efficacy of diets, foods, supplements or food ingredients on appetite and/or EI. Trials had to include sufficient exposure duration (≥3 days) with appetite and/or EI measured after both acute and repeated exposures. Twenty-six trials met the inclusion criteria and reported data allowing for assessment of the acute and chronic effects of interventions. Most (21/26) measured appetite outcomes and over half (14/26) had objective measures of EI. A significant acute effect of the intervention was retained in 10 of 12 trials for appetite outcomes, and six of nine studies for EI. Initial effects were most likely retained where these were more robust and studies adequately powered. Where the initial, acute effect was not statistically significant, a significant effect was later observed in only two of nine studies for appetite and none of five studies for EI. Maintenance of intervention effects on appetite or EI needs to be confirmed but seems likely where acute effects are robust and replicable in adequately powered studies.