ABSTRACT
Malaria is an infectious disease affecting a large population across the world, and interventions need to be efficiently applied to reduce the burden of malaria. We develop a framework to help policy-makers decide how to allocate limited resources in real time for malaria control. We formalize a policy for the resource allocation as a sequence of decisions, one per intervention period, that map up-to-date disease-related information to a resource allocation. An optimal policy must control the spread of the disease while being interpretable and viewed as equitable by stakeholders. We construct an interpretable class of resource allocation policies that can accommodate allocation of resources residing in a continuous domain and combine a hierarchical Bayesian spatiotemporal model for disease transmission with a policy-search algorithm to estimate an optimal policy for resource allocation within the pre-specified class. The estimated optimal policy under the proposed framework improves the cumulative long-term outcome compared with naive approaches in both simulation experiments and application to malaria interventions in the Democratic Republic of the Congo.
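To make the policy-search idea concrete, here is a minimal Python sketch: an interpretable one-parameter allocation policy is searched over a grid and scored by a toy simulated outcome model. The softmax policy, the regional risk scores, and the burden model are illustrative assumptions, not the paper's hierarchical Bayesian spatiotemporal model.

```python
import numpy as np

rng = np.random.default_rng(0)

def allocate(risk, theta):
    """Interpretable one-parameter policy: allocation shares are a softmax
    of regional risk; theta controls how aggressively the fixed budget
    concentrates on high-risk regions (theta = 0 splits it evenly)."""
    w = np.exp(theta * risk)
    return w / w.sum()

def simulated_burden(risk, alloc):
    """Toy stand-in for the transmission model: expected cases in a region
    fall with the share of resources it receives, with diminishing returns."""
    return float(np.sum(risk * np.exp(-3.0 * alloc)) + rng.normal(0.0, 0.01))

risk = np.array([0.10, 0.25, 0.40, 0.25])   # hypothetical regional risk scores

# Policy search: pick theta minimizing the simulated disease burden.
thetas = np.linspace(0.0, 10.0, 101)
best = min(thetas, key=lambda t: simulated_burden(risk, allocate(risk, t)))
print("estimated optimal theta:", round(best, 2))
print("allocation under that policy:", allocate(risk, best).round(3))
```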
Subject(s)
Malaria , Bayes Theorem , Humans , Malaria/prevention & control , Resource Allocation
ABSTRACT
BACKGROUND: Frailty is prevalent among patients with heart failure (HF) and is associated with increased mortality rates and worse patient-centered outcomes. Hand grip strength (GS) has been proposed as a single-item marker of frailty and a potential screening tool to identify patients most likely to benefit from therapies that target frailty so as to improve quality of life (QoL) and clinical outcomes. We assessed the association of longitudinal decline in GS with all-cause mortality and QoL, hypothesizing that decline in GS is associated with increased risk of all-cause mortality and worse overall and domain-specific (physical, functional, emotional, social) QoL among patients with advanced HF. METHODS: We used data from a prospective, observational cohort of patients with New York Heart Association class III or IV HF in Singapore. Patients' overall and domain-specific QoL were assessed, and GS was measured, every 4 months. We constructed a Kaplan-Meier plot with GS at baseline dichotomized into categories of weak (≤ 5th percentile) and normal (> 5th percentile) based on the GS in a healthy Singapore population of the same sex and age. Missing GS measurements were imputed using chained equations. We jointly modeled longitudinal GS measurements and survival time, adjusting for comorbidities, and used mixed effects models to evaluate the associations between GS and QoL. RESULTS: Among 251 patients (mean age 66.5 ± 12.0 years; 28.3% female), all-cause mortality occurred in 58 (23.1%) patients over a mean follow-up duration of 3.0 ± 1.3 years. Patients with weak GS had decreased survival rates compared to those with normal GS (log-rank P = 0.033). In the joint model of longitudinal GS and survival time, a decrease of 1 unit in GS was associated with a 12% increase in the rate of mortality (hazard ratio: 1.12; 95% confidence interval: 1.05-1.20; P < 0.001). Higher GS was associated with higher overall QoL (β [SE] = 0.36 [0.07]; P < 0.001) and higher domain-specific QoL, including physical (β [SE] = 0.13 [0.03]; P < 0.001), functional (β [SE] = 0.12 [0.03]; P < 0.001), and emotional QoL (β [SE] = 0.08 [0.02]; P < 0.001). Higher GS was also associated with higher social QoL, but this association was not statistically significant (β [SE] = 0.04 [0.03]; P = 0.122). CONCLUSIONS: Among patients with advanced HF, longitudinal decline in GS was associated with worse survival rates and QoL. Further studies are needed to evaluate whether incorporating GS into patient selection for HF therapies leads to improved survival rates and patient-centered outcomes.
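A minimal sketch of the survival comparison using the lifelines Python library: Kaplan-Meier curves by dichotomized baseline grip strength plus a log-rank test. The data are simulated and the weak/normal split is an assumption; the study's joint longitudinal-survival and mixed effects models are beyond this sketch.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 251
weak = rng.random(n) < 0.2                         # GS <= 5th percentile of healthy referents
time = rng.exponential(np.where(weak, 2.0, 4.0))   # years; weak group dies sooner (assumed)
event = rng.random(n) < 0.4                        # True = death observed, False = censored

km = KaplanMeierFitter()
for label, mask in [("weak GS", weak), ("normal GS", ~weak)]:
    km.fit(time[mask], event_observed=event[mask], label=label)
    print(label, "median survival:", km.median_survival_time_)

res = logrank_test(time[weak], time[~weak], event[weak], event[~weak])
print("log-rank p-value:", res.p_value)
```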
Subject(s)
Frailty , Heart Failure , Aged , Female , Humans , Male , Middle Aged , Hand Strength , Prospective Studies , Quality of Life , Singapore/epidemiology
ABSTRACT
Many problems that appear in biomedical decision-making, such as diagnosing disease and predicting response to treatment, can be expressed as binary classification problems. The support vector machine (SVM) is a popular classification technique that is robust to model misspecification and effectively handles high-dimensional data. The relative costs of false positives and false negatives can vary across application domains. The receiver operating characteristic (ROC) curve provides a visual representation of the trade-off between these two types of errors. Because the SVM does not produce a predicted probability, an ROC curve cannot be constructed in the traditional way of thresholding a predicted probability. However, a sequence of weighted SVMs can be used to construct an ROC curve. Although ROC curves constructed using weighted SVMs have great potential for enabling ROC analyses that cannot be performed by thresholding predicted probabilities, their theoretical properties have heretofore been underdeveloped. We propose a method for constructing confidence bands for the SVM ROC curve and provide theoretical justification for the SVM ROC curve by showing that the risk function of the estimated decision rule is uniformly consistent across the weight parameter. We demonstrate the proposed confidence band method using simulation studies and present a predictive model for treatment response in breast cancer as an illustrative example.
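A minimal sketch of tracing an ROC curve from a sequence of class-weighted SVMs with scikit-learn. Each weight re-balances the hinge loss between classes, yielding one operating point; sweeping the weight traces the curve. This gives the point estimate only, not the paper's confidence bands, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

# Each weight pi tilts the SVM toward the positive class; the resulting
# classifiers span the (FPR, TPR) trade-off without any predicted probability.
points = []
for pi in np.linspace(0.05, 0.95, 19):
    clf = SVC(kernel="rbf", class_weight={0: 1 - pi, 1: pi}).fit(Xtr, ytr)
    pred = clf.predict(Xte)
    tpr = np.mean(pred[yte == 1] == 1)
    fpr = np.mean(pred[yte == 0] == 1)
    points.append((fpr, tpr))

for fpr, tpr in sorted(points):
    print(f"FPR = {fpr:.2f}, TPR = {tpr:.2f}")
```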
Subject(s)
Breast Neoplasms , Support Vector Machine , Breast Neoplasms/diagnosis , Computer Simulation , Female , Humans , Probability , ROC Curve
ABSTRACT
Precision medicine seeks to provide treatment only if, when, to whom, and at the dose it is needed. Thus, precision medicine is a vehicle by which healthcare can be made both more effective and efficient. Individualized treatment rules operationalize precision medicine as a map from current patient information to a recommended treatment. An optimal individualized treatment rule is defined as maximizing the mean of a pre-specified scalar outcome. However, in settings with multiple outcomes, choosing a scalar composite outcome by which to define optimality is difficult. Furthermore, when there is heterogeneity across patient preferences for these outcomes, it may not be possible to construct a single composite outcome that leads to high-quality treatment recommendations for all patients. We simultaneously estimate the optimal individualized treatment rule for all composite outcomes representable as a convex combination of the (suitably transformed) outcomes. For each patient, we use a preference elicitation questionnaire and item response theory to derive the posterior distribution over preferences for these composite outcomes and subsequently derive an estimator of an optimal individualized treatment rule tailored to patient preferences. We prove that as the number of subjects and items on the questionnaire diverge, our estimator is consistent for an oracle optimal individualized treatment rule wherein each patient's preference is known a priori. We illustrate the proposed method using data from a clinical trial on antipsychotic medications for schizophrenia.
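A minimal sketch of estimating rules for every composite outcome w·Y1 + (1 − w)·Y2 with linear working models: fit one regression per outcome, then, for any preference weight w, recommend the arm with the larger predicted composite mean. The preference-elicitation and item-response-theory machinery is omitted, and all data and model forms are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)                 # patient covariate
a = rng.integers(0, 2, n)              # randomized treatment
# Two competing outcomes with opposite treatment-by-covariate interactions:
y1 = a * x + rng.normal(size=n)        # e.g., symptom relief
y2 = -a * x + rng.normal(size=n)       # e.g., side-effect burden (oriented so higher = better)

feats = np.column_stack([x, a, a * x])
m1 = LinearRegression().fit(feats, y1)
m2 = LinearRegression().fit(feats, y2)

def optimal_treatment(x0, w):
    """Rule for the composite w*Y1 + (1 - w)*Y2: choose the arm with the
    larger predicted composite mean; w would come from elicited preferences."""
    def score(a0):
        f = [[x0, a0, a0 * x0]]
        return w * m1.predict(f)[0] + (1 - w) * m2.predict(f)[0]
    return int(score(1) > score(0))

for w in (0.2, 0.5, 0.8):
    print(f"w={w}: x=+1 -> treat {optimal_treatment(1.0, w)}, x=-1 -> treat {optimal_treatment(-1.0, w)}")
```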
Subject(s)
Models, Statistical , Patient Preference/statistics & numerical data , Precision Medicine/methods , Antipsychotic Agents/therapeutic use , Humans , Precision Medicine/statistics & numerical data , Schizophrenia/drug therapy , Surveys and Questionnaires , Treatment Outcome
ABSTRACT
There is growing interest and investment in precision medicine as a means to provide the best possible health care. A treatment regime formalizes precision medicine as a sequence of decision rules, one per clinical intervention period, that specify if, when, and how current treatment should be adjusted in response to a patient's evolving health status. It is standard to define a regime as optimal if, when applied to a population of interest, it maximizes the mean of some desirable clinical outcome, such as efficacy. However, in many clinical settings, a high-quality treatment regime must balance multiple competing outcomes, e.g., when a high dose is associated with substantial symptom reduction but a greater risk of an adverse event. We consider the problem of estimating the most efficacious treatment regime subject to constraints on the risk of adverse events. We combine nonparametric Q-learning with policy search to estimate a high-quality yet parsimonious treatment regime. This estimator applies to both observational and randomized data, as well as to settings with variable, outcome-dependent follow-up, mixed treatment types, and multiple time points. This work is motivated by and framed in the context of dosing for chronic pain; however, the proposed framework can be applied generally to estimate a treatment regime that maximizes the mean of one primary outcome subject to constraints on one or more secondary outcomes. We illustrate the proposed method using data pooled from 5 open-label flexible-dosing clinical trials for chronic pain.
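A minimal sketch of the constrained policy-search idea: a grid of parsimonious threshold rules is scored by inverse-probability-weighted (IPW) estimates, keeping only rules whose estimated adverse-event rate satisfies the constraint. The simulated data, single-threshold rule class, and 15% cap are assumptions, not the paper's nonparametric Q-learning estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(-1, 1, n)              # pain severity (standardized)
a = rng.integers(0, 2, n)              # 1 = high dose, randomized 50/50
relief = 1.0 + a * (0.5 + x) + rng.normal(0, 1, n)       # efficacy outcome
adverse = rng.random(n) < np.where(a == 1, 0.25, 0.05)   # adverse event indicator

def ipw_value(outcome, threshold):
    """IPW estimate of E[outcome] under the rule 'high dose iff x > threshold'
    (known randomization probability of 0.5 per arm)."""
    follows_rule = (a == (x > threshold).astype(int))
    return np.mean(follows_rule * outcome / 0.5)

# Policy search over thresholds, keeping only rules whose estimated
# adverse-event rate stays below an (assumed) 15% cap.
best, best_val = None, -np.inf
for t in np.linspace(-1, 1, 41):
    if ipw_value(adverse, t) <= 0.15 and ipw_value(relief, t) > best_val:
        best, best_val = t, ipw_value(relief, t)
print(f"best threshold = {best:.2f}, estimated mean relief = {best_val:.2f}")
```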
Subject(s)
Analgesics, Opioid/administration & dosage , Chronic Pain/drug therapy , Drug Dosage Calculations , Analgesics, Opioid/adverse effects , Analgesics, Opioid/therapeutic use , Humans , Long-Term Care , Models, Statistical , Precision Medicine/methods , Statistics as Topic , Statistics, Nonparametric
ABSTRACT
Biomarkers associated with heterogeneity in subject responses to treatment hold potential for treatment selection. In practice, the decision regarding whether to adopt a treatment-selection marker depends on the effect of using the marker on the rate of targeted disease and on the cost associated with treatment. We propose an expected benefit measure that incorporates both effects to quantify a marker's treatment-selection capacity. This measure builds upon an existing decision-theoretic framework but is expanded to account for the fact that the optimal treatment absent marker information varies with the cost of treatment. In addition, we establish upper and lower bounds on the expected benefit of a perfect treatment-selection rule, which provides the basis for a standardized expected benefit measure. We develop model-based estimators for these measures in a randomized trial setting and evaluate their asymptotic properties. An adaptive bootstrap confidence interval is proposed for inference in the presence of non-regularity. Alternative estimators robust to risk model misspecification are also investigated. We illustrate our methods using the Diabetes Control and Complications Trial, where we evaluate the expected benefit of baseline hemoglobin A1C in selecting diabetes treatment.
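A toy computation in the spirit of the expected benefit measure: compare the mean disease-plus-cost burden under the best marker-free rule (treat everyone or no one, whichever is cheaper at the given treatment cost) against the marker-based rule that treats exactly when the predicted risk reduction exceeds the cost. The risk models and cost value are illustrative assumptions, not the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
marker = rng.uniform(0, 1, n)          # hypothetical baseline biomarker (scaled)

# Assumed risk models: treatment helps only marker-high patients.
risk_untreated = 0.2 + 0.3 * marker
risk_treated = 0.2 + 0.3 * marker - 0.25 * (marker > 0.5)

cost = 0.08  # treatment cost, expressed on the disease-risk scale

# Optimal marker-free rule: treat everyone or no one, whichever is cheaper.
treat_all = risk_treated.mean() + cost
treat_none = risk_untreated.mean()
no_marker = min(treat_all, treat_none)

# Marker-based rule: treat exactly when the risk reduction exceeds the cost.
use_treatment = (risk_untreated - risk_treated) > cost
with_marker = np.where(use_treatment, risk_treated + cost, risk_untreated).mean()

print(f"expected benefit of the marker: {no_marker - with_marker:.4f}")
```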
Subject(s)
Biomarkers , Clinical Decision-Making/methods , Models, Statistical , Diabetes Mellitus/therapy , Glycated Hemoglobin/analysis , Humans , Randomized Controlled Trials as Topic/statistics & numerical data
ABSTRACT
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials in which one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness.
Subject(s)
Randomized Controlled Trials as Topic/statistics & numerical data , Biostatistics , Computer Simulation , Confidence Intervals , Data Interpretation, Statistical , Evidence-Based Practice/statistics & numerical data , Female , Fertility , Humans , Male , Models, Statistical , Pilot Projects , Precision Medicine/statistics & numerical data , Pregnancy , Regression Analysis , Sample Size
ABSTRACT
We spend the majority of our lives indoors; yet, we currently lack a comprehensive understanding of how the microbial communities found in homes vary across broad geographical regions and what factors are most important in shaping the types of microorganisms found inside homes. Here, we investigated the fungal and bacterial communities found in settled dust collected from inside and outside approximately 1200 homes located across the continental US, homes that represent a broad range of home designs and span many climatic zones. Indoor and outdoor dust samples harboured distinct microbial communities, but these differences were larger for bacteria than for fungi, with most indoor fungi originating outside the home. Indoor fungal communities and the distribution of potential allergens varied predictably across climate and geographical regions; where you live determines what fungi live with you inside your home. By contrast, bacterial communities in indoor dust were more strongly influenced by the number and types of occupants living in the homes. In particular, the female:male ratio and whether a house had pets had a significant influence on the types of bacteria found inside our homes, highlighting that who you live with determines what bacteria are found inside your home.
Subject(s)
Bacteria/isolation & purification , Dust , Fungi/isolation & purification , Housing , Allergens/isolation & purification , Animals , Bacteria/classification , Family Characteristics , Female , Fungi/classification , Geography , Humans , Male , Pets , United States
ABSTRACT
A treatment regime formalizes personalized medicine as a function from individual patient characteristics to a recommended treatment. A high-quality treatment regime can improve patient outcomes while reducing cost, resource consumption, and treatment burden. Thus, there is tremendous interest in estimating treatment regimes from observational and randomized studies. However, the development of treatment regimes for application in clinical practice requires the long-term, joint effort of statisticians and clinical scientists. In this collaborative process, the statistician must integrate clinical science into the statistical models underlying a treatment regime and the clinician must scrutinize the estimated treatment regime for scientific validity. To facilitate meaningful information exchange, it is important that estimated treatment regimes be interpretable in a subject-matter context. We propose a simple, yet flexible class of treatment regimes whose members are representable as a short list of if-then statements. Regimes in this class are immediately interpretable and are therefore an appealing choice for broad application in practice. We derive a robust estimator of the optimal regime within this class and demonstrate its finite sample performance using simulation experiments. The proposed method is illustrated with data from two clinical trials.
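A minimal sketch of a regime in this class, written as a short if-then list and evaluated with an IPW value estimate under 50/50 randomization. The covariates, clauses, and generative model are assumptions; the paper's robust estimator of the optimal list within the class is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
age = rng.uniform(20, 80, n)
severity = rng.uniform(0, 10, n)
a = rng.integers(0, 2, n)              # treatment randomized 50/50
# Assumed truth: treatment helps severe patients, and the young among the rest.
effect = np.where(severity > 6, 2.0, np.where(age < 30, 1.0, -1.0))
y = 5.0 + a * effect + rng.normal(0, 1, n)

def decision_list(age_i, severity_i):
    """A regime representable as a short list of if-then clauses, in the
    spirit of the interpretable class described in the abstract."""
    if severity_i > 6:
        return 1      # if severe disease: treat
    if age_i < 30:
        return 1      # else if young: treat
    return 0          # else: control

rec = np.array([decision_list(g, s) for g, s in zip(age, severity)])
value = np.mean((a == rec) * y / 0.5)   # IPW estimate of the regime's value
print(f"estimated value of the decision list: {value:.2f}")
```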
Subject(s)
Clinical Protocols , Decision Trees , Biometry/methods , Breast Neoplasms/drug therapy , Clinical Trials as Topic/statistics & numerical data , Computer Simulation , Depression/therapy , Evidence-Based Medicine/statistics & numerical data , Female , Humans , Models, Statistical , Precision Medicine/statistics & numerical data
ABSTRACT
Chronic illness treatment strategies must adapt to the evolving health status of the patient receiving treatment. Data-driven dynamic treatment regimes can offer guidance for clinicians and intervention scientists on how to treat patients over time in order to bring about the most favorable clinical outcome on average. Methods for estimating optimal dynamic treatment regimes, such as Q-learning, typically require modeling nonsmooth, nonmonotone transformations of data. Thus, building well-fitting models can be challenging and in some cases may result in a poor estimate of the optimal treatment regime. Interactive Q-learning (IQ-learning) is an alternative to Q-learning that requires modeling only smooth, monotone transformations of the data. The R package iqLearn provides functions for implementing both the IQ-learning and Q-learning algorithms. We demonstrate how to estimate a two-stage optimal treatment policy with iqLearn using the generated data set bmiData, which mimics a two-stage randomized body mass index reduction trial with binary treatments at each stage.
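iqLearn is an R package; as a language-neutral illustration, here is a minimal Python sketch of the two-stage Q-learning comparator (backward induction with linear working models, including the nonsmooth max that motivates IQ-learning). IQ-learning itself, which models smooth, monotone transformations at stage 1, is not implemented here; the simulated data are a stand-in for bmiData.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 2000
x1 = rng.normal(size=n)                        # baseline BMI (standardized)
a1 = rng.choice([-1, 1], n)                    # stage-1 treatment
x2 = 0.5 * x1 + 0.3 * a1 + rng.normal(size=n)  # interim BMI change
a2 = rng.choice([-1, 1], n)                    # stage-2 treatment
y = -(x2 + a2 * x2) + rng.normal(size=n)       # final outcome (higher = better)

# Stage 2: regress Y on history, then take the max over a2 (the nonsmooth step).
h2 = np.column_stack([x1, a1, x2, a2, a2 * x2])
q2 = LinearRegression().fit(h2, y)
def q2_pred(a2v):
    return q2.predict(np.column_stack([x1, a1, x2, np.full(n, a2v), a2v * x2]))
v2 = np.maximum(q2_pred(-1), q2_pred(1))       # optimal stage-2 pseudo-outcome

# Stage 1: regress the stage-2 value on stage-1 history and maximize over a1.
h1 = np.column_stack([x1, a1, a1 * x1])
q1 = LinearRegression().fit(h1, v2)
def q1_pred(a1v):
    return q1.predict(np.column_stack([x1, np.full(n, a1v), a1v * x1]))
opt_a1 = np.where(q1_pred(1) >= q1_pred(-1), 1, -1)
print("share recommended a1 = +1:", opt_a1.mean() * 0.5 + 0.5)
```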
ABSTRACT
In clinical practice, physicians make a series of treatment decisions over the course of a patient's disease based on his/her baseline and evolving characteristics. A dynamic treatment regime is a set of sequential decision rules that operationalizes this process. Each rule corresponds to a decision point and dictates the next treatment action based on the accrued information. Using existing data, a key goal is estimating the optimal regime, that is, the regime that, if followed by the patient population, would yield the most favorable outcome on average. Q-learning and A-learning are the two main approaches for this purpose. We provide a detailed account of these methods, study their performance, and illustrate them using data from a depression study.
ABSTRACT
Dynamic treatment regimes (DTRs) operationalize the clinical decision process as a sequence of functions, one for each clinical decision, where each function maps up-to-date patient information to a single recommended treatment. Current methods for estimating optimal DTRs, for example, Q-learning, require the specification of a single outcome by which the "goodness" of competing dynamic treatment regimes is measured. However, this is an over-simplification of the goal of clinical decision making, which aims to balance several potentially competing outcomes, for example, symptom relief and side-effect burden. When there are competing outcomes and patients do not know or cannot communicate their preferences, formation of a single composite outcome that correctly balances the competing outcomes is not possible. This problem also occurs when patient preferences evolve over time. We propose a method for constructing DTRs that accommodates competing outcomes by recommending sets of treatments at each decision point. Formally, we construct a sequence of set-valued functions that take as input up-to-date patient information and give as output a recommended subset of the possible treatments. For a given patient history, the recommended set of treatments contains all treatments that produce non-inferior outcome vectors. Constructing these set-valued functions requires solving a non-trivial enumeration problem. We offer an exact enumeration algorithm by recasting the problem as a linear mixed integer program. The proposed methods are illustrated using data from the CATIE schizophrenia study.
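A minimal sketch of the set-valued recommendation: given predicted outcome vectors for one patient history, return all treatments whose vectors are non-dominated. The brute-force pairwise check below stands in for the paper's mixed-integer-program enumeration, and the treatments and predicted values are assumptions.

```python
import numpy as np

# Predicted (efficacy, tolerability) for each candidate treatment given one
# patient's history; both coordinates oriented so larger is better.
predicted = {
    "drug A": np.array([0.80, 0.30]),
    "drug B": np.array([0.60, 0.70]),
    "drug C": np.array([0.55, 0.65]),   # dominated by drug B
    "drug D": np.array([0.20, 0.90]),
}

def non_inferior(options):
    """Return treatments whose outcome vectors are not dominated, i.e., no
    other option is at least as good on every coordinate and strictly
    better on at least one."""
    keep = {}
    for name, v in options.items():
        dominated = any(np.all(u >= v) and np.any(u > v)
                        for other, u in options.items() if other != name)
        if not dominated:
            keep[name] = v
    return keep

print(sorted(non_inferior(predicted)))   # -> ['drug A', 'drug B', 'drug D']
```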
Subject(s)
Clinical Protocols , Clinical Trials as Topic/methods , Decision Making , Models, Statistical , Treatment Outcome , Algorithms , Antipsychotic Agents/administration & dosage , Antipsychotic Agents/adverse effects , Antipsychotic Agents/therapeutic use , Body Mass Index , Humans , Schizophrenia/drug therapy
ABSTRACT
BACKGROUND: Recent advances in medical research suggest that optimal treatment rules should adapt to patients over time. This has led to increasing interest in dynamic treatment regimes: sequences of individualized treatment rules, one per stage of clinical intervention, that map present patient information to a recommended treatment. There has been a recent surge of statistical work on estimating optimal dynamic treatment regimes from randomized and observational studies. The purpose of this article is to review recent methodological progress and applied issues associated with estimating optimal dynamic treatment regimes. METHODS: We discuss sequential multiple assignment randomized trials, a clinical trial design used to study treatment sequences. We use a common estimator of an optimal dynamic treatment regime that applies to sequential multiple assignment randomized trial data as a platform to discuss several practical and methodological issues. RESULTS: We provide a limited survey of practical issues associated with modeling sequential multiple assignment randomized trial data. We review some existing estimators of optimal dynamic treatment regimes and discuss practical issues associated with these methods, including model building, missing data, statistical inference, and choosing an outcome when only non-responders are re-randomized. We focus mainly on the estimation and inference of dynamic treatment regimes using sequential multiple assignment randomized trial data. Dynamic treatment regimes can also be constructed from observational data, which may be easier to obtain in practice; however, care must be taken to account for potential confounding.
ABSTRACT
BACKGROUND: A dynamic treatment regime (DTR) comprises a sequence of decision rules, one per stage of intervention, that recommends how to individualize treatment to patients based on evolving treatment and covariate history. These regimes are useful for managing chronic disorders, and fit into the larger paradigm of personalized medicine. The Value of a DTR is the expected outcome when the DTR is used to assign treatments to a population of interest. PURPOSE: The Value of a data-driven DTR, estimated using data from a Sequential Multiple Assignment Randomized Trial, is both a data-dependent parameter and a non-smooth function of the underlying generative distribution. These features introduce additional variability that is not accounted for by standard methods for conducting statistical inference, for example, the bootstrap or normal approximations, if applied without adjustment. Our purpose is to provide a feasible method for constructing valid confidence intervals (CIs) for this quantity of practical interest. METHODS: We propose a conceptually simple and computationally feasible method for constructing valid CIs for the Value of an estimated DTR based on subsampling. The method is self-tuning by virtue of an approach called the double bootstrap. We demonstrate the proposed method using a series of simulated experiments. RESULTS: The proposed method offers considerable improvement in terms of coverage rates of the CIs over the standard bootstrap approach. LIMITATIONS: In this article, we have restricted our attention to Q-learning for estimating the optimal DTR. However, other methods can be employed for this purpose; to keep the discussion focused, we have not explored these alternatives. CONCLUSION: Subsampling-based CIs provide much better performance compared to standard bootstrap for the Value of an estimated DTR.
ABSTRACT
A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches to inference, such as the bootstrap or Taylor series arguments, to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method is conceptually and computationally much simpler than competing methods possessing this same theoretical property. We provide an extensive simulation study comparing the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R package, available on the Comprehensive R Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example.
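A toy demonstration of the m-out-of-n idea on a simple nonsmooth functional, max(μ, 0), at its nonregular point μ = 0: resample m < n observations with replacement and rescale by √m. The fixed m = √n below is an assumption standing in for the paper's adaptive, data-driven choice of m.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(0.0, 1.0, n)          # true mean 0: a nonregular point for max(mu, 0)
theta_hat = max(x.mean(), 0.0)       # nonsmooth functional of the data

def boot_ci(m, B=2000, alpha=0.05):
    """Percentile-type CI from m-out-of-n resampling: draw m of the n points
    with replacement, center at theta_hat, and scale by sqrt(m)."""
    stats = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=m, replace=True)
        stats[b] = np.sqrt(m) * (max(xb.mean(), 0.0) - theta_hat)
    lo, hi = np.quantile(stats, [1 - alpha / 2, alpha / 2])
    return theta_hat - lo / np.sqrt(n), theta_hat - hi / np.sqrt(n)

print("standard n-out-of-n bootstrap CI:", boot_ci(m=n))   # can misbehave here
print("m-out-of-n bootstrap CI (m = n^0.5):", boot_ci(m=int(n ** 0.5)))
```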
Subject(s)
Biometry/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Antidepressive Agents/therapeutic use , Artificial Intelligence , Computer Simulation , Confidence Intervals , Decision Theory , Depressive Disorder, Major/drug therapy , Humans , Linear Models , Logistic Models , Models, Statistical , Monte Carlo Method
ABSTRACT
Because the number of patients waiting for organ transplants exceeds the number of organs available, a better understanding of how transplantation affects the distribution of residual lifetime is needed to improve organ allocation. However, there has been little work to assess the survival benefit of transplantation from a causal perspective. Previous methods developed to estimate the causal effects of treatment in the presence of time-varying confounders have assumed that treatment assignment was independent across patients, which is not true for organ transplantation. We develop a version of G-estimation that accounts for the fact that treatment assignment is not independent across individuals to estimate the parameters of a structural nested failure time model. We derive the asymptotic properties of our estimator and confirm through simulation studies that our method leads to valid inference of the effect of transplantation on the distribution of residual lifetime. We demonstrate our method on the survival benefit of lung transplantation using data from the United Network for Organ Sharing.
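A toy point-treatment G-estimation sketch: under a structural model T = T0·exp(ψA), the transformed lifetime U(ψ) = T·exp(−ψA) recovers the untreated lifetime, which is independent of treatment given confounders at the true ψ; ψ is then estimated as the root of an estimating equation. Censoring, the full structural nested failure time machinery, and the paper's correction for dependent treatment assignment are all omitted, and the propensity is taken as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000
l = rng.normal(size=n)                     # confounder (e.g., disease severity)
p_treat = 1 / (1 + np.exp(-l))             # sicker patients more likely treated
a = rng.random(n) < p_treat                # treatment (e.g., transplant)
psi_true = 0.7                             # treatment multiplies lifetime by e^psi
t0 = rng.exponential(np.exp(-0.5 * l))     # untreated ("baseline") lifetime
t = t0 * np.exp(psi_true * a)              # observed lifetime

def estimating_eq(psi):
    """U(psi) = T * exp(-psi * A); at the true psi, U is independent of A
    given L, so the covariance of A - P(A|L) with U should be zero."""
    u = t * np.exp(-psi * a)
    return np.mean((a - p_treat) * u)

# Grid search for the root of the estimating equation.
grid = np.linspace(0.0, 1.5, 301)
psi_hat = grid[np.argmin(np.abs([estimating_eq(p) for p in grid]))]
print(f"true psi = {psi_true}, G-estimate = {psi_hat:.3f}")
```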
Subject(s)
Data Interpretation, Statistical , Life Expectancy , Lung Diseases/mortality , Lung Diseases/surgery , Lung Transplantation/mortality , Outcome Assessment, Health Care/methods , Survival Rate , Adolescent , Adult , Age Distribution , Aged , Aged, 80 and over , Causality , Humans , Internationality , Middle Aged , Young Adult
ABSTRACT
Uncontrolled glycated hemoglobin (HbA1c) levels are associated with adverse events among complex diabetic patients. These adverse events present serious health risks to affected patients and are associated with significant financial costs. Thus, a high-quality predictive model that could identify high-risk patients so as to inform preventative treatment has the potential to improve patient outcomes while reducing healthcare costs. Because the biomarker information needed to predict risk is costly and burdensome, it is desirable that such a model collect only as much information as is needed on each patient to render an accurate prediction. We propose a sequential predictive model that uses accumulating patient longitudinal data to classify patients as high-risk, low-risk, or uncertain. Patients classified as high-risk are then recommended to receive preventative treatment, and those classified as low-risk are recommended standard care. Patients classified as uncertain are monitored until a high-risk or low-risk determination is made. We construct the model using claims and enrollment files from Medicare, linked with patient electronic health record (EHR) data. The proposed model uses functional principal components to accommodate noisy longitudinal data and weighting to deal with missingness and sampling bias. The proposed method demonstrates higher predictive accuracy and lower cost than competing methods in a series of simulation experiments and in application to data on complex patients with diabetes.
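A minimal sketch of the sequential three-way classification: average accumulating noisy risk signals and stop as soon as the running estimate exits the "uncertain" band, so information is collected only as long as needed. The thresholds and signals are assumptions; the functional principal components and missingness weighting are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

def classify(measurements, hi=0.7, lo=0.3):
    """Sequentially average noisy risk signals; stop as soon as the running
    estimate leaves the 'uncertain' band (lo, hi)."""
    for k in range(1, len(measurements) + 1):
        est = np.mean(measurements[:k])
        if est >= hi:
            return "high-risk", k
        if est <= lo:
            return "low-risk", k
    return "uncertain", len(measurements)

# Hypothetical longitudinal risk signals for three patients.
patients = {
    "patient 1": 0.8 + 0.1 * rng.normal(size=6),   # truly high risk
    "patient 2": 0.2 + 0.1 * rng.normal(size=6),   # truly low risk
    "patient 3": 0.5 + 0.1 * rng.normal(size=6),   # genuinely ambiguous
}
for name, ms in patients.items():
    label, k = classify(ms)
    print(f"{name}: {label} after {k} measurement(s)")
```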
ABSTRACT
Pain coping skills training (PCST) is efficacious in patients with cancer, but clinical access is limited. To inform implementation, as a secondary outcome, we estimated the cost-effectiveness of 8 dosing strategies of PCST evaluated in a sequential multiple assignment randomized trial among women with breast cancer and pain (N = 327). Women were randomized to initial doses and re-randomized to subsequent doses based on their initial response (i.e., ≥30% pain reduction). A decision-analytic model was designed to incorporate costs and benefits associated with 8 different PCST dosing strategies. In the primary analysis, costs were limited to resources required to deliver PCST. Quality-adjusted life-years (QALYs) were modeled based on utility weights measured with the EuroQol-5 dimension 5-level at 4 assessments over 10 months. A probabilistic sensitivity analysis was performed to account for parameter uncertainty. Implementation of PCST initiated with the 5-session protocol was more costly ($693-853) than strategies initiated with the 1-session protocol ($288-496). QALYs for strategies beginning with the 5-session protocol were greater than for strategies beginning with the 1-session protocol. With the goal of implementing PCST as part of comprehensive cancer treatment, and at willingness-to-pay thresholds of $20,000 per QALY and beyond, the strategy most likely to provide the greatest number of QALYs at an acceptable cost was a 1-session PCST protocol followed by either 5 maintenance telephone calls for responders or 5 sessions of PCST for nonresponders. A PCST program with 1 initial session and subsequent dosing based on response provides good value and improved outcomes. PERSPECTIVE: This article presents the results of a cost analysis of the delivery of PCST, a nonpharmacological intervention, to women with breast cancer and pain. Results could potentially provide important cost-related information to health care providers and systems on the use of an efficacious and accessible nonmedication strategy for pain management. TRIAL REGISTRATION: ClinicalTrials.gov: NCT02791646, registered 6/2/2016.
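A minimal sketch of the decision-analytic comparison via net monetary benefit (NMB = willingness-to-pay × QALYs − cost): at a given willingness-to-pay threshold, the preferred strategy maximizes NMB. The strategy names echo the abstract, but all costs and QALYs below are illustrative placeholders, not the trial's estimates.

```python
# Net-monetary-benefit comparison of dosing strategies (illustrative
# numbers only; the trial's actual costs and QALYs are in the article).
strategies = {
    "1-session, then maintenance calls / 5 sessions": {"cost": 400.0, "qalys": 0.640},
    "1-session, then no dose / 5 sessions":           {"cost": 300.0, "qalys": 0.630},
    "5-session, then maintenance":                    {"cost": 800.0, "qalys": 0.645},
}

wtp = 20_000.0  # willingness to pay per QALY (USD)

# The preferred strategy maximizes NMB = WTP * QALYs - cost.
for name, s in strategies.items():
    nmb = wtp * s["qalys"] - s["cost"]
    print(f"{name}: NMB = ${nmb:,.0f}")

best = max(strategies, key=lambda k: wtp * strategies[k]["qalys"] - strategies[k]["cost"])
print("preferred at this WTP:", best)
```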
Subject(s)
Breast Neoplasms , Cost-Effectiveness Analysis , Humans , Female , Breast Neoplasms/complications , Adaptation, Psychological , Pain , Pain Management/methods
ABSTRACT
ABSTRACT: Behavioral pain management interventions are efficacious for reducing pain in patients with cancer. However, optimal dosing of behavioral pain interventions for pain reduction is unknown, and this hinders routine clinical use. A Sequential Multiple Assignment Randomized Trial (SMART) was used to evaluate whether varying doses of Pain Coping Skills Training (PCST) and response-based dose adaptation can improve pain management in women with breast cancer. Participants (N = 327) had stage I-IIIC breast cancer and a worst pain score of >5/10. Pain severity (the a priori primary outcome) was assessed before initial randomization (1:1 allocation) to PCST-Full (5 sessions) or PCST-Brief (1 session) and 5 to 8 weeks later. Responders (>30% pain reduction) were re-randomized to a maintenance dose or no dose, and nonresponders (<30% pain reduction) to an increased or maintenance dose. Pain severity was assessed again 5 to 8 weeks later (assessment 3) and 6 months later (assessment 4). As hypothesized, PCST-Full resulted in greater mean percent pain reduction than PCST-Brief (M [SD] = -28.5% [39.6%] vs M [SD] = -14.8% [71.8%]; P = 0.041). At assessment 3, after second dosing, all intervention sequences evidenced pain reduction from assessment 1 with no differences between sequences. At assessment 4, all sequences evidenced pain reduction from assessment 1 with differences between sequences (P = 0.027). Participants initially receiving PCST-Full had greater pain reduction at assessment 4 (P = 0.056). Varying PCST doses led to pain reduction over time. The intervention sequences demonstrating the most durable pain reductions included PCST-Full. Pain Coping Skills Training with intervention adjustment based on response can produce sustainable pain reduction.
Subject(s)
Breast Neoplasms , Cancer Pain , Humans , Female , Cancer Pain/drug therapy , Adaptation, Psychological , Behavior Therapy/methods , Pain
ABSTRACT
Exercise is a cornerstone of preventive medicine and a promising strategy to intervene on the biology of aging. Variation in the response to exercise is a widely accepted concept that dates back to the 1980s, when classic genetic studies identified sequence variations as modifiers of the VO2max response to training. Since that time, the literature on exercise response variance has been populated with retrospective analyses of existing datasets, very few of which have included older adults; these analyses are limited by a lack of statistical power stemming from technical error of the measurements and small sample sizes, as well as by diffuse outcomes. Prospective studies that are appropriately designed to interrogate exercise response variation in key outcomes identified a priori, and that include individuals over the age of 70, are long overdue. Understanding the underlying intrinsic (e.g., genetics and epigenetics) and extrinsic (e.g., medication use, diet, chronic disease) factors that determine robust versus poor responses to exercise will inform exercise prescriptions that target the pillars of aging and optimize the clinical efficacy of exercise training in older adults. This review summarizes the proceedings of the NIA-sponsored workshop entitled "Understanding Heterogeneity of Responses to, and Optimizing Clinical Efficacy of, Exercise Training in Older Adults" and highlights the importance and current state of exercise response variation research, particularly in older adults, prevailing challenges, and future directions.