ABSTRACT
BACKGROUND AND AIMS: In biliary atresia, serum bilirubin is commonly used to predict outcomes after Kasai portoenterostomy (KP). Infants with persistently high levels invariably need liver transplant, but those achieving normalized levels have a less certain disease course. We hypothesized that serum bile acid levels could help predict outcomes in the latter group. APPROACH AND RESULTS: Participants with biliary atresia from the Childhood Liver Disease Research Network were included if they had normalized bilirubin levels 6 months after KP and stored serum samples from the 6-month post-KP clinic visit (n = 137). Bile acids were measured from the stored serum samples and used to divide participants into ≤40 µmol/L (n = 43) or >40 µmol/L (n = 94) groups. At 2 years of age, the ≤40 µmol/L group had significantly lower total bilirubin, aspartate aminotransferase, alanine aminotransferase, gamma-glutamyltransferase, bile acids, and spleen size than the >40 µmol/L group, as well as significantly higher albumin and platelet counts. Furthermore, during 734 person-years of follow-up, those in the ≤40 µmol/L group were significantly less likely to develop splenomegaly, ascites, gastrointestinal bleeding, or clinically evident portal hypertension. The ≤40 µmol/L group had a 10-year cumulative incidence of liver transplant/death of 8.5% (95% CI: 1.1%-26.1%), compared with 42.9% (95% CI: 28.6%-56.4%) for the >40 µmol/L group (p = 0.001). CONCLUSIONS: Serum bile acid levels may be a useful prognostic biomarker for infants achieving normalized bilirubin levels after KP.
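The survival comparison reported here can be illustrated with a brief sketch. Below is a minimal, hypothetical Python example (using lifelines) of estimating group-wise cumulative incidence of transplant/death and comparing groups with a log-rank test; the follow-up times are simulated, not the study's data, and competing-risks refinements are omitted.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)

# Synthetic follow-up times (years) and transplant/death indicators;
# the low-bile-acid group is simulated with fewer events.
t_low = rng.exponential(scale=60, size=43).clip(max=10)
e_low = (t_low < 10) & (rng.random(43) < 0.3)
t_high = rng.exponential(scale=15, size=94).clip(max=10)
e_high = (t_high < 10) & (rng.random(94) < 0.7)

km_low, km_high = KaplanMeierFitter(), KaplanMeierFitter()
km_low.fit(t_low, e_low, label="bile acids <= 40 umol/L")
km_high.fit(t_high, e_high, label="bile acids > 40 umol/L")

# 10-year cumulative incidence = 1 - survival probability at 10 years.
print(1 - km_low.predict(10), 1 - km_high.predict(10))
print(logrank_test(t_low, t_high, e_low, e_high).p_value)
```

The study's 10-year estimates (8.5% vs. 42.9%) come from the actual cohort; the sketch only shows the mechanics.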
Subject(s)
Biliary Atresia , Infant , Humans , Child , Biliary Atresia/surgery , Portoenterostomy, Hepatic , Prognosis , Bilirubin , Bile Acids and Salts , Biomarkers , Treatment Outcome , Retrospective Studies
ABSTRACT
BACKGROUND: In 2013, the Japanese Society for Dialysis Therapy (JSDT) published its inaugural hemodialysis (HD) guidelines. Specific targets include a single-pool Kt/V (spKt/V) of 1.4 with a minimum dose of 1.2, a minimum dialysis session length of 4 hours, a minimum blood flow rate (BFR) of 200 mL/min, a fluid removal rate of no more than 15 mL/kg/hr, and hemodiafiltration (HDF) therapy for certain identified symptoms. We evaluated the effect of these guidelines on actual practice in the years spanning 2005-2018. METHODS: Analyses described trends in the above HD prescription practices from December 2005 to April 2013 (before guideline publication) and onward to August 2018, based on prevalent patient cross-sections from approximately 60 randomly selected HD facilities participating in the Japan Dialysis Outcomes and Practice Patterns Study. RESULTS: From April 2006 to August 2017, mean spKt/V rose continually (from 1.35 to 1.49), as did the percentage of patients with spKt/V >1.2 (71% to 85%). Mean BFR increased over time from 198.3 mL/min (April 2006) to 218.4 mL/min (August 2017), along with the percentage of patients with BFR >200 mL/min (65% to 85%). HDF use increased slightly from 6% (April 2006 and August 2009) to 8% by April 2013, but increased greatly thereafter to 23% by August 2017. In contrast, mean HD treatment time showed little change from 2006 to 2017, whereas mean ultrafiltration rate (UFR) declined from 11.3 mL/kg/hr in 2006 to 8.4 mL/kg/hr in 2017. CONCLUSIONS: From 2006 to 2018, Japanese HD patients experienced marked improvement in reaching the spKt/V target specified by the 2013 JSDT guidelines. This may have been due to a moderate increase in mean BFR, even though mean HD session length changed little. In addition, HDF use increased dramatically in this period. Other HD delivery changes during this time, such as increased use of super high-flux dialyzers, also merit study. While we cannot definitively conclude a causal relationship between publication of the guidelines and the subsequent practice changes in Japan, those changes moved practice closer to the guidelines' recommendations.
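As a rough illustration of how guideline adherence can be tracked across repeated prevalent cross-sections, here is a small pandas sketch; the column names and values are invented placeholders, not JDOPPS data.

```python
import pandas as pd

# One row per patient per cross-section (hypothetical values).
df = pd.DataFrame({
    "cross_section": ["2006-04", "2006-04", "2013-04", "2017-08"],
    "spktv":         [1.10,      1.45,      1.30,      1.55],
    "bfr_ml_min":    [180,       220,       205,       230],
})

# Mean dose and percent of patients meeting each guideline target,
# by cross-section, mirroring the trend tables described above.
summary = df.groupby("cross_section").agg(
    mean_spktv=("spktv", "mean"),
    pct_spktv_over_1_2=("spktv", lambda s: 100 * (s > 1.2).mean()),
    pct_bfr_over_200=("bfr_ml_min", lambda s: 100 * (s > 200).mean()),
)
print(summary)
```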
Subject(s)
Practice Guidelines as Topic , Practice Patterns, Physicians'/trends , Prescriptions/standards , Renal Dialysis/standards , Aged , Female , Humans , Japan , Male , Middle Aged
ABSTRACT
OBJECTIVES: The aim of the study was to assess neurodevelopmental outcomes among children with biliary atresia (BA) surviving with their native liver at ages 3 to 12 years and to evaluate variables that associate with neurodevelopment. METHODS: Participants (ages 3-12 years) in a prospective, longitudinal, multicenter study underwent neurodevelopmental testing with the Wechsler Preschool and Primary Scale of Intelligence, 3rd edition (WPPSI-III, ages 3-5 years) and the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, ages 6-12 years). Continuous scores were compared with a normal distribution (mean = 100 ± 15) using Kolmogorov-Smirnov tests. The effect of covariates on Full-Scale Intelligence Quotient (FSIQ) was analyzed using linear regression. RESULTS: Ninety-three participants completed 164 WPPSI-III (mean age 3.9) and 51 WISC-IV (mean age 6.9) tests. WPPSI-III FSIQ (104 ± 14, P < 0.02), Verbal IQ (106 ± 14, P < 0.001), and General Language Composite (107 ± 16, P < 0.001) distributions were shifted higher compared with test norms. WISC-IV FSIQ (105 ± 12, P < 0.01), Perceptual Reasoning Index (107 ± 12, P < 0.01), and Processing Speed Index (105 ± 10, P < 0.02) were also shifted upward. In univariate and multivariable analyses, parent education (P < 0.01) was a significant predictor of FSIQ on the WPPSI-III and positively associated with WISC-IV FSIQ. Male sex and higher total bilirubin and gamma-glutamyl transferase (GGT) predicted lower WPPSI-III FSIQ. Portal hypertension was predictive of lower WISC-IV FSIQ. CONCLUSIONS: This cohort of children with BA and native liver did not demonstrate a higher prevalence of neurodevelopmental delays. Markers of advanced liver disease (higher total bilirubin and GGT for age ≤5 years; portal hypertension for age ≥6 years) correlate with lower FSIQ and may identify a vulnerable subset of patients who would benefit from intervention.
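The two analyses named in METHODS map directly onto standard tooling. Here is a hedged sketch with simulated scores: a one-sample Kolmogorov-Smirnov test against the normative N(100, 15) distribution, and an ordinary least squares regression of FSIQ on illustrative covariates. The covariate names are assumptions for illustration, not the study's exact variable set.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "fsiq": rng.normal(104, 14, size=93),             # simulated scores
    "parent_education_yrs": rng.integers(8, 21, 93),  # hypothetical covariates
    "total_bilirubin": rng.lognormal(0, 0.5, 93),
    "male": rng.integers(0, 2, 93),
})

# (1) Is the FSIQ distribution shifted relative to test norms N(100, 15)?
print(stats.kstest(df["fsiq"], "norm", args=(100, 15)))

# (2) Which covariates predict FSIQ?
model = smf.ols("fsiq ~ parent_education_yrs + total_bilirubin + male", df).fit()
print(model.summary())
```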
Subject(s)
Biliary Atresia/psychology , Neurodevelopmental Disorders/epidemiology , Biliary Atresia/blood , Biliary Atresia/pathology , Bilirubin/blood , Child , Child Development , Child, Preschool , Educational Status , Female , Humans , Hypertension, Portal/etiology , Hypertension, Portal/psychology , Liver/pathology , Longitudinal Studies , Male , Neurodevelopmental Disorders/etiology , Prevalence , Prospective Studies , Risk Factors , Wechsler Scales , gamma-Glutamyltransferase/blood
ABSTRACT
IMPORTANCE: Most individuals who die of sudden cardiac death (SCD) display very advanced atherosclerotic lesions in their coronary arteries. We therefore sought to identify and characterize a putative subpopulation of young individuals exhibiting accelerated coronary artery atherosclerosis. OBJECTIVE: Our analysis of the Pathobiological Determinants of Atherosclerosis in Youth (PDAY) study, which examined 2651 individuals and obtained quantitative measurements of traditional risk factors for coronary heart disease (CHD), aimed to identify individuals with advanced coronary artery lesions and to determine whether risk factors could account for such rapid disease progression. DESIGN: Using the cross-sectional PDAY study data, an exploratory de facto analysis stratified the population by age and observed number of coronary raised lesions and examined these groups via Poisson regression modeling. A separate de novo approach used Poisson mixture modeling to generate low- and high-growth groups based on measurements of traditional risk factors and identified factors contributing to disease progression. PARTICIPANTS: Participants (n = 2651 individuals aged 15-34 years who had died of non-cardiac causes) were recruited post mortem. Tissues and other samples were harvested for analysis (details in previously published PDAY studies). MAIN OUTCOMES AND MEASURES: Using quantitative measurements of raised coronary lesions and traditional CHD risk factors, we sought to identify which risk factors account for disease progression. RESULTS: A group of approximately 13% of the PDAY population exhibits accelerated coronary atherosclerosis despite their young age. Several traditional risk factors were associated with increased odds of inclusion in this subgroup, reflecting current understanding of these markers of disease. However, only age was a significant contributing factor to the observed coronary lesion burden. CONCLUSIONS: While a range of traditional risk factors contribute to an individual's inclusion in the identified subgroup with accelerated atherosclerosis, these factors, with the exception of age, are not able to predict an individual's lesion burden. Moreover, unattributed variance in the observations indicates the need to study novel risk factors. SHORT SUMMARY: HYPOTHESIS: The extent of coronary atherosclerotic disease is limited and homogeneous within youth, and its progression can be accounted for by traditional risk factors in this population. FINDINGS: A subpopulation (~13%) of the Pathobiological Determinants of Atherosclerosis in Youth cohort exhibited accelerated coronary artery atherosclerosis. While several traditional risk factors contribute to an individual's inclusion in this subgroup, these factors, with the exception of age, do not accurately predict an individual's lesion burden. Critically, unattributed variance in the observations indicates the need to identify novel risk factors. MEANING: Screening the general population at a young age for high-risk group membership could provide an opportunity for disease prevention and avoidance of the worst complications, such as myocardial infarction and sudden cardiac death, later in life.
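The "de novo" approach described here rests on a two-component Poisson mixture. A compact EM fit of such a mixture, run on simulated lesion counts and without the study's covariate structure, might look like this:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
# Simulated raised-lesion counts: a large low-rate group plus a
# smaller high-rate ("accelerated") group, roughly 13% of the sample.
counts = np.concatenate([rng.poisson(1.0, 870), rng.poisson(8.0, 130)])

pi, lam = 0.5, np.array([0.5, 5.0])  # initial mixing weight and Poisson rates
for _ in range(200):
    # E-step: responsibility of the high-rate component for each count.
    p0 = (1 - pi) * poisson.pmf(counts, lam[0])
    p1 = pi * poisson.pmf(counts, lam[1])
    r = p1 / (p0 + p1)
    # M-step: update the mixing weight and the component rates.
    pi = r.mean()
    lam[0] = counts @ (1 - r) / (1 - r).sum()
    lam[1] = counts @ r / r.sum()

print(f"high-growth fraction ~ {pi:.2%}, rates = {lam.round(2)}")
```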
Subject(s)
Age Factors , Coronary Artery Disease/pathology , Disease Progression , Plaque, Atherosclerotic/pathology , Adolescent , Adult , C-Reactive Protein , Cause of Death , Coronary Artery Disease/etiology , Coronary Artery Disease/mortality , Cross-Sectional Studies , Female , Humans , Male , Plaque, Atherosclerotic/etiology , Plaque, Atherosclerotic/mortality , Poisson Distribution , Risk Factors , Time Factors , Young Adult
ABSTRACT
Portal hypertension (PHT) is a major cause of morbidity and mortality in pediatric liver diseases. Thus, research into causes and disease modifiers in PHT in these conditions is vitally important. PHT is rarely directly or indirectly measured in the assessment of children with chronic liver disease. A straightforward, reproducible definition of PHT could be invaluable for consistently identifying patients with PHT and for grouping these patients according to their risk of complications from their disease. We propose the term Clinically Evident Portal Hypertension (CEPH) to denote clinical findings that demonstrate evidence of elevated portal pressure. When CEPH criteria are met, PHT is highly likely to be present, although it is likely that PHT exists for variable periods of time before meeting CEPH criteria. Use of this research definition of CEPH will allow for consistent identification of these patients by clinicians in nearly any clinical setting and serve as a clinical milepost that may dictate future prognosis in pediatric patients with cirrhosis.
Subject(s)
Gastroenterology/standards , Hypertension, Portal/diagnosis , Pediatrics/standards , Symptom Assessment/standards , Terminology as Topic , Child , Female , Gastroenterology/methods , Humans , Hypertension, Portal/classification , Liver/blood supply , Male , Pediatrics/methods
ABSTRACT
Femoroacetabular impingement (FAI) is a condition in which subtle deformities of the femoral head and acetabulum (hip socket) result in pathological abutment during hip motion. FAI is a common cause of hip pain and can lead to acetabular cartilage damage and osteoarthritis. For some patients with FAI, surgical intervention is indicated, and it can improve quality of life and potentially delay the onset of osteoarthritis. For other patients, however, surgery is contraindicated because significant cartilage damage has already occurred. Unfortunately, current imaging modalities (X-rays and conventional MRI) are subjective and lack the sensitivity to distinguish these two groups reliably. In this paper, we describe the pairing of T2* mapping data (an investigational, objective MRI sequence) and a spatial proportional odds model for surgically obtained ordinal outcomes (Beck's scale of cartilage damage). Each hip in the study is assigned its own spatial dependence parameter, and a Dirichlet process prior distribution permits clustering of said parameters. Using the fitted model, we produce a six-color, patient-specific predictive map of the entire acetabular cartilage. Such maps will facilitate patient education and clinical decision making.
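The paper's full model is a Bayesian spatial proportional odds model with a Dirichlet process prior over hip-specific spatial dependence parameters, which is beyond a short sketch. The snippet below shows only its non-spatial core: an ordinal (proportional odds) regression of a Beck-style cartilage grade on a T2* value, with hypothetical variable names and simulated data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 200
t2star = rng.normal(30, 5, size=n)              # ms, one value per region
latent = -0.3 * t2star + rng.logistic(size=n)   # latent damage propensity
grade = pd.cut(latent, bins=[-np.inf, -12, -9, -6, np.inf],
               labels=[0, 1, 2, 3]).astype(int)  # ordinal Beck-like grade

# Proportional odds (ordered logit) fit: one slope, shared cutpoints.
model = OrderedModel(grade, t2star[:, None], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)  # slope for t2star plus threshold cutpoints
```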
Subject(s)
Cartilage, Articular/diagnostic imaging , Femoracetabular Impingement/classification , Magnetic Resonance Imaging/statistics & numerical data , Severity of Illness Index , Acetabulum/diagnostic imaging , Acetabulum/pathology , Adolescent , Adult , Arthroscopy , Cartilage, Articular/pathology , Child , Data Interpretation, Statistical , Female , Femoracetabular Impingement/diagnostic imaging , Femoracetabular Impingement/pathology , Femur Head/diagnostic imaging , Femur Head/pathology , Humans , Male , Middle Aged , Models, Statistical , Young Adult
ABSTRACT
In many areas of clinical investigation there is great interest in identifying and validating surrogate endpoints, biomarkers that can be measured a relatively short time after a treatment has been administered and that can reliably predict the effect of treatment on the clinical outcome of interest. However, despite dramatic advances in the ability to measure biomarkers, the recent history of clinical research is littered with failed surrogates. In this paper, we present a statistical perspective on why identifying surrogate endpoints is so difficult. We view the problem from the framework of causal inference, with a particular focus on the technique of principal stratification (PS), an approach which is appealing because the resulting estimands are not biased by unmeasured confounding. In many settings, PS estimands are not statistically identifiable and their degree of non-identifiability can be thought of as representing the statistical difficulty of assessing the surrogate value of a biomarker. In this work, we examine the identifiability issue and present key simplifying assumptions and enhanced study designs that enable the partial or full identification of PS estimands. We also present example situations where these assumptions and designs may or may not be feasible, providing insight into the problem characteristics which make the statistical evaluation of surrogate endpoints so challenging.
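The non-identifiability described here can be made concrete with a toy simulation: two data-generating processes that share identical observable arm-wise distributions but differ in the unobservable correlation between the potential surrogates S(0) and S(1), and hence in the principal-stratum quantities. Everything below is illustrative and is not an analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 400_000

def stratum_summary(rho):
    # Latent potential surrogates; rho is their unidentified correlation.
    z0, z1 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n).T
    s0, s1 = z0 > 0, z1 > 0        # potential surrogate responses S(0), S(1)
    y1, y0 = z1, np.zeros(n)       # potential clinical outcomes Y(1), Y(0)
    stratum = s0 & s1              # the "always-responders" stratum
    return stratum.mean(), (y1 - y0)[stratum].mean()

# Observables (S(1), Y(1)) in the treated arm and (S(0), Y(0)) in the
# control arm have the same distribution for every rho, yet both the
# stratum size and the stratum-specific effect below change with rho.
for rho in (0.0, 0.5, 0.9):
    size, effect = stratum_summary(rho)
    print(f"rho={rho}: stratum share={size:.3f}, stratum effect={effect:.3f}")
```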
ABSTRACT
INTRODUCTION: Ferritin level and erythropoiesis-stimulating agent (ESA) responsiveness are each associated with hemodialysis patient survival. We assessed the interrelationship of these two factors with survival. METHODS: Patients in the Japan Dialysis Outcomes and Practice Patterns Study, phases 4-6 (2009-2018), were included. All-cause mortality associations were assessed with progressive adjustment to evaluate covariate influence. RESULTS: During follow-up (median 2.6 years), 773 of 5154 patients died. After covariate adjustment, the mortality hazard ratio (HR) was 0.99 (95% CI: 0.81, 1.20) for low serum ferritin and 1.12 (95% CI: 0.89, 1.41) for high serum ferritin. By contrast, the mortality risk associated with an elevated ESA resistance index (ERI) persisted after covariate adjustment (HR: 1.44, 95% CI: 1.17, 1.78). The serum ferritin-ERI interaction was not significant (p > 0.96 across all models). CONCLUSIONS: Japanese hemodialysis patients with high ERI experienced worse survival independent of serum ferritin levels, highlighting the importance of identifying and mitigating ESA hyporesponsiveness among dialysis patients.
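A sketch of this modeling strategy, a Cox model with a ferritin-ERI interaction term and covariate adjustment, is shown below using lifelines on simulated data; the column names are placeholders rather than JDOPPS variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "years": rng.exponential(4, n).clip(max=6),  # follow-up time
    "died": rng.integers(0, 2, n),               # all-cause mortality
    "high_eri": rng.integers(0, 2, n),           # elevated ERI indicator
    "high_ferritin": rng.integers(0, 2, n),      # high serum ferritin
    "age": rng.normal(68, 10, n),                # example adjustment covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died",
        formula="high_eri * high_ferritin + age")
cph.print_summary()  # the interaction row tests the ferritin-ERI interplay
```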
ABSTRACT
BACKGROUND: Alterations in both mitochondrial DNA (mtDNA) and nuclear DNA genes affect mitochondrial function, causing a range of liver-based conditions termed mitochondrial hepatopathies (MH), which are subcategorized as mtDNA depletion, RNA translation, mtDNA deletion, and enzymatic disorders. We aimed to enhance understanding of the pathogenesis and natural history of MH. METHODS: We analyzed data from patients with MH phenotypes to identify genetic causes, characterize the spectrum of clinical presentation, and determine outcomes. RESULTS: Three enrollment phenotypes were analyzed: acute liver failure (ALF, n = 37), chronic liver disease (Chronic, n = 40), and post-liver transplant (n = 9). Patients with ALF were younger than those with Chronic disease [median 0.8 y (range 0.0-9.4) vs. 3.4 y (0.2-18.6), p < 0.001] and had fewer neurodevelopmental delays (40.0% vs. 81.3%, p < 0.001). Comprehensive testing was performed more often in Chronic than in ALF (90.0% vs. 43.2%); however, an etiology was identified more often in ALF (81.3% vs. 61.1%), with mtDNA depletion being most common (ALF: 77% vs. Chronic: 41%). Of the sequenced cohort (n = 60), 63% had an identified mitochondrial disorder. Cluster analysis identified a subset without an underlying genetic etiology despite comprehensive testing. Liver transplant-free survival was 40% at 2 years (ALF vs. Chronic: 16% vs. 65%, p < 0.001). Eighteen patients (21%) underwent transplantation. With 33 patient-years of follow-up after transplant, 3 deaths were reported. CONCLUSIONS: Differences between the ALF and Chronic MH phenotypes included age at diagnosis, systemic involvement, transplant-free survival, and genetic etiology, underscoring the need for ultra-rapid sequencing in the appropriate clinical setting. Cluster analysis revealed a group meeting enrollment criteria but without an identified genetic or enzymatic diagnosis, highlighting the need to identify other etiologies.
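As a loose illustration of the cluster analysis mentioned in RESULTS, the following sketch applies hierarchical (Ward) clustering to a small matrix of invented clinical features; the features and cluster count are assumptions for illustration, not the study's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)
# Rows: patients; columns: e.g., standardized age at presentation,
# neurodevelopmental delay (0/1), mtDNA depletion identified (0/1).
X = np.column_stack([
    rng.normal(0, 1, 60),
    rng.integers(0, 2, 60),
    rng.integers(0, 2, 60),
])

Z = linkage(X, method="ward")                 # agglomerative tree
labels = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 clusters
print(np.bincount(labels)[1:])                # cluster sizes
```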
Subject(s)
Liver Failure, Acute , Liver Transplantation , Humans , Liver Failure, Acute/diagnosis , Liver Failure, Acute/genetics , Liver Transplantation/adverse effects , DNA, Mitochondrial/genetics , Phenotype
ABSTRACT
Introduction: The incidence of kidney replacement therapy (KRT) varies widely across countries. Its relationship to individual characteristics, nephrology practices for slowing chronic kidney disease (CKD) progression, and KRT access remains unclear. Methods: We investigated intercountry differences in kidney failure (KF) rate, defined by a sustained estimated glomerular filtration rate (eGFR) <15 mL/min per 1.73 m², and separately in KRT incidence, before and after adjusting for risk factors and for blood pressure (BP) control or renin-angiotensin-aldosterone system inhibitor (RAASi) prescription practices, in the CKD Outcomes and Practice Patterns Study (CKDopps) cohort. Results: Among 7381 patients with CKD stage 3 to 4 at enrollment, 1297 progressed to KF and 947 initiated KRT over a 3-year follow-up period. Compared with the United States, demographic- and eGFR-adjusted hazard ratios (HRs) for a sustained low eGFR were 0.77 (95% CI: 0.57-1.02) in Brazil, 0.90 (95% CI: 0.75-1.08) in France, and 1.03 (95% CI: 0.86-1.03) in Germany. Further adjustment for comorbidities, albuminuria, systolic BP, and RAASi prescription did not substantially change these HRs. In contrast, compared with the United States, the fully adjusted HR for KRT remained significantly lower in Brazil (0.55, 95% CI: 0.39-0.79), higher in Germany (1.36, 95% CI: 1.09-1.69), and similar in France (1.07, 95% CI: 0.81-1.39). Conclusion: Individual risk factors for CKD progression in nephrology patients appeared to explain most intercountry variation in KF incidence but not in KRT incidence. This suggests a prominent role for differences in practices related to KRT initiation or access, rather than in those for slowing disease progression. This study also shows that using KRT as a surrogate for KF may bias estimates of associations with CKD progression risk factors.
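One practical detail in the Methods is the "sustained eGFR <15" definition of KF. A minimal pandas sketch of flagging such an event from longitudinal labs, assuming a simple two-consecutive-visits persistence rule (the study's exact rule may differ), follows:

```python
import pandas as pd

# Hypothetical longitudinal eGFR measurements, two patients.
labs = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2],
    "date": pd.to_datetime(["2020-01-01", "2020-04-01", "2020-07-01",
                            "2020-01-15", "2020-05-15", "2020-09-15"]),
    "egfr": [18, 14, 13, 16, 14, 17],
})

labs = labs.sort_values(["patient", "date"])
below = labs["egfr"] < 15
# Sustained = below threshold at both this visit and the previous one.
labs["sustained_kf"] = below & below.groupby(labs["patient"]).shift(fill_value=False)
print(labs[labs["sustained_kf"]])  # patient 1 qualifies; patient 2 does not
```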
ABSTRACT
Biological, ecological, social, and technological systems are complex structures with multiple interacting parts, often represented by networks. Correlation matrices describing interdependency of the variables in such structures provide key information for comparison and classification of such systems. Classification based on correlation matrices could supplement or improve classification based on variable values, since the former reveals similarities in system structures, while the latter relies on the similarities in system states. Importantly, this approach of clustering correlation matrices is different from clustering elements of the correlation matrices, because our goal is to compare and cluster multiple networks-not the nodes within the networks. A novel approach for clustering correlation matrices, named "snakes-&-dragons," is introduced and illustrated by examples from neuroscience, human microbiome, and macroeconomics.
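A minimal sketch of the core idea, assuming the "snake" is the vectorized upper triangle of each system's correlation matrix: compute one snake per system, then cluster the snakes, so that whole networks rather than single nodes are grouped. The data and the two-cluster setup are synthetic, and the paper's additional snake and dragon encodings are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def snake(data):
    """Upper-triangle vector of the correlation matrix of one system."""
    c = np.corrcoef(data, rowvar=False)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

# 20 systems of two structural types, each observed as 100 samples x 6 variables;
# the second type shares a common latent factor, so its variables correlate.
systems = [rng.normal(size=(100, 6)) for _ in range(10)]
shared = rng.normal(size=(100, 1))
systems += [rng.normal(size=(100, 6)) + 3 * shared for _ in range(10)]

snakes = np.array([snake(s) for s in systems])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snakes)
print(labels)  # the two structural types separate cleanly
```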
Subject(s)
Algorithms , Adolescent , Adult , Brain/physiology , Cluster Analysis , Computer Simulation , Female , Genomics , Humans , Male , Microbiota , Nerve Net/physiology , Young Adult
ABSTRACT
BACKGROUND: Abnormal chronic kidney disease-mineral and bone disorder (CKD-MBD) markers have been associated with adverse outcomes in hemodialysis (HD) patients. Dialysate calcium concentration (D-Ca) likely influences serum calcium and phosphorus levels, yet the optimal D-Ca level remains unclear. We hypothesized that higher D-Ca is associated with cardiovascular events and mortality among Japanese HD patients. METHODS: Enrollment data from chronic HD patients in the prospective observational study JDOPPS, phases 1-5 (1999-2015), provided exposures and covariates. All-cause mortality, non-arrhythmic cardiovascular events (NonAR-CVE), and their composite were analyzed by D-Ca level, categorized as 2.5, 2.75, or 3.0 mEq/L. To minimize confounding by indication, analyses were restricted to facilities in which at least 90% of patients received the same D-Ca prescription. The association of D-Ca level with outcomes was evaluated in Cox models stratified by study phase and accounting for facility clustering, with adjustment for patient demographics, comorbidities, laboratory values, CKD-MBD therapy, and facility attributes. RESULTS: Of the 9,201 patients included, 25.0% had a D-Ca of 2.5 mEq/L; 6.8%, 2.75 mEq/L; and 68.2%, 3.0 mEq/L. Median follow-up was 2.03 years. D-Ca was not associated with all-cause mortality (hazard ratio for 2.5 vs. 3.0 mEq/L: 0.90, 95% CI: 0.73-1.11) or with the other outcomes. One effect modification was observed: lower D-Ca appeared protective against NonAR-CVE in the absence of cardiovascular comorbidities (p = 0.032), although the corresponding D-Ca effects were not significant after adjustment for multiple comparisons (p = 0.261 [D-Ca 2.5] and p = 0.125 [D-Ca 2.75]). CONCLUSION: Lowering the D-Ca level below 3.0 mEq/L does not appear to have a meaningful effect on patient outcomes.
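The Cox specification described in METHODS, stratified by study phase with facility-clustered variance, maps onto lifelines directly; the sketch below uses invented columns as placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(8)
n = 2000
df = pd.DataFrame({
    "years": rng.exponential(2, n).clip(max=5),
    "died": rng.integers(0, 2, n),
    "dca_2_5": rng.integers(0, 2, n),   # D-Ca 2.5 vs. reference 3.0 mEq/L
    "age": rng.normal(65, 12, n),       # example adjustment covariate
    "phase": rng.integers(1, 6, n),     # study phase (stratification)
    "facility": rng.integers(0, 60, n), # clustering unit
})

cph = CoxPHFitter()
# strata gives phase-specific baseline hazards; cluster_col yields
# facility-robust (sandwich) standard errors.
cph.fit(df, duration_col="years", event_col="died",
        formula="dca_2_5 + age",
        strata=["phase"], cluster_col="facility")
cph.print_summary()
```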
Subject(s)
Calcium/analysis , Chronic Kidney Disease-Mineral and Bone Disorder/metabolism , Renal Dialysis , Aged , Aged, 80 and over , Biomarkers , Cardiovascular Diseases/etiology , Cardiovascular Diseases/mortality , Chronic Kidney Disease-Mineral and Bone Disorder/drug therapy , Chronic Kidney Disease-Mineral and Bone Disorder/mortality , Female , Humans , Japan/epidemiology , Middle Aged , Practice Patterns, Physicians' , Prospective Studies , Renal Dialysis/mortality , Treatment Outcome
ABSTRACT
Medical education increasingly involves online learning experiences to facilitate the standardization of curricula across time and space. In the classroom, delivering material by lecture is less effective at promoting student learning than engaging students in active learning experiences, and it is unclear whether this difference also holds online. We sought to evaluate medical student preferences for online lecture versus online active learning formats and the impact of format on short- and long-term learning gains. Students participated online in either lecture or constructivist learning activities in a first-year neurologic sciences course at a US medical school. In 2012, students selected which format to complete; in 2013, students were randomly assigned to the modules in a crossover fashion. In the first iteration, students strongly preferred the lecture modules and valued being told "what they need to know" rather than figuring it out independently. In the crossover iteration, learning gains and knowledge retention were equivalent regardless of format, and students uniformly demonstrated a strong preference for the lecture format, which on average also took less time to complete. When given a choice of online modules, students prefer passive lecture to constructivist activities, and in the time-limited environment of medical school, this choice yields similar performance on multiple-choice examinations with less time invested. Instructors need to look more carefully at whether assessments and learning strategies are helping students obtain self-directed learning skills and should consider strategies to help students learn to value active learning in an online environment.
ABSTRACT
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent; this is not the perception with IRT. For this reason, the IRT framework is considered theoretically superior to the CTT framework for estimating person and item parameters. In previous simulation studies, IRT models were used both as generating and as fitting models; hence, results favoring the IRT framework could be attributed to IRT being the data-generation framework. Moreover, previous studies considered only the traditional CTT framework for the comparison, yet there is considerable literature suggesting that it may be more appropriate to use CTT statistics based on an underlying normal variable (UNV) assumption. The current study relates the class of CTT-based models with the UNV assumption to that of IRT, using confirmatory factor analysis to delineate the connections. A small Monte Carlo study was carried out to assess the comparability of the item and person statistics obtained from the IRT framework and from the CTT framework with the UNV assumption. Results show the two frameworks to be quite comparable, with neither showing an advantage over the other.
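The Monte Carlo comparison can be sketched in miniature: simulate responses from a two-parameter logistic (2PL) IRT model, compute classical item statistics, and see how closely they track the generating parameters. This single replication with assumed parameter ranges stands in for the study's fuller design, and its UNV-based CTT estimation via confirmatory factor analysis is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_persons, n_items = 2000, 20

theta = rng.normal(size=(n_persons, 1))       # person ability
a = rng.uniform(0.8, 2.0, size=n_items)       # item discrimination
b = rng.normal(0, 1, size=n_items)            # item difficulty
p = 1 / (1 + np.exp(-a * (theta - b)))        # 2PL response probabilities
x = (rng.random(p.shape) < p).astype(int)     # scored (0/1) responses

# CTT statistics: item difficulty (proportion correct) and corrected
# item-total point-biserial discrimination.
difficulty = x.mean(axis=0)
total = x.sum(axis=1, keepdims=True)
disc = [np.corrcoef(x[:, j], (total - x[:, [j]]).ravel())[0, 1]
        for j in range(n_items)]

print("corr(IRT b, CTT difficulty):", np.corrcoef(b, difficulty)[0, 1])
print("corr(IRT a, CTT discrimination):", np.corrcoef(a, disc)[0, 1])
```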
ABSTRACT
Students Against Nicotine and Tobacco Addiction is a community-based participatory research project that engages local medical and mental health providers in partnership with students, teachers, and administrators at the Minnesota-based Job Corps. The intervention contains multiple, synchronous elements designed to allay the stress to which students attribute their smoking, including physical activities, nonphysical activities, purposeful modifications to the campus's environment and rules/policies, and on-site smoking cessation education and peer support. The intent of the present investigation was to evaluate (a) the types of stress most predictive of smoking behavior and/or nicotine dependence, (b) which activities students participated in, and (c) which activities were most predictive of behavior change (or readiness to change). Quantitative data were collected through 5 campus-wide surveys, each with a response rate exceeding 85%. The most commonly cited stressors included struggles to find a job, financial problems, family conflict, lack of privacy or freedom, missing family or being homesick, dealing with Job Corps rules, and other/unspecified. The most popular activities in which students took part were physically active ones; however, the activities most predictive of beneficial change were nonphysical. Approximately one third of respondents were nicotine dependent at baseline. Nearly half intended to quit within 1 month, and 74% intended to quit within 6 months. The interventions perceived as most helpful for reducing smoking were nonphysical in nature. Future efforts with this and comparable populations should engage youth in advancing such activities within a broader range of activity choices, alongside conventional education and support.
Subject(s)
Community-Based Participatory Research , Smoking Prevention , Smoking/epidemiology , Adolescent , Demography , Female , Humans , Male , Minnesota/epidemiology , Motor Activity , Patient Education as Topic , Prevalence , Program Evaluation , Stress, Psychological/epidemiology , Treatment Outcome , Young Adult
ABSTRACT
This study is a feasibility test of whether incorporating trauma-sensitive yoga into group therapy for female victims of partner violence improves symptoms of anxiety, depression, and posttraumatic stress disorder (PTSD) beyond that achieved with group therapy alone. Seventeen (9 control, 8 intervention) adult female clients seeking group psychotherapy were enrolled. A 12-week trauma-sensitive yoga protocol was administered once weekly for 30-40 min at the end of each group therapy session. The control group received typical group psychotherapy. Feasibility was assessed through recruitment and retention rates as well as participants' self-reported perceptions of the safety and utility of the study. The study enrolled 85% (17/20) of those screened eligible. Loss to follow-up was 30% (5/17). No one reported emotional or physical harm. All of the respondents reported that the study was personally meaningful and that the results would be useful to others.
Subject(s)
Anxiety/therapy , Depression/therapy , Domestic Violence , Psychotherapy, Group/methods , Psychotherapy/methods , Yoga/psychology , Adult , Feasibility Studies , Female , Humans , Mental Health , Middle Aged , Randomized Controlled Trials as Topic
ABSTRACT
OBJECTIVE: To compare clinical characteristics and patient-reported outcomes in seropositive versus seronegative primary Sjögren's syndrome (SS) patients and to investigate the effect of serologic status on chronic pain, comorbidity, and health quality. METHODS: Pain severity and neuropathic pain symptoms, comorbidity, and health status were assessed in 108 primary SS patients. Differences between patient groups were assessed by t-test and chi-square test, and adjusted pain-affect associations were examined. The effect of predictor variables on pain severity was examined with multivariate regression. RESULTS: Pain severity was greater (P = 0.003) and physical function was reduced (P = 0.023) in the seronegative patients. The prevalence of neuropathic pain, depression, anxiety, and disability was similar between groups. Chronic pain, defined as daily pain for >3 months, was reported by 65% of seropositive (n = 65) and 75% of seronegative (n = 40) patients. After adjustment for age, sleep quality, and psychological distress, the difference in pain severity between seropositive and seronegative patients remained significant. CONCLUSION: Chronic pain is pervasive in both seropositive and seronegative primary SS patients, while pain severity and functional impairment are greater in seronegative patients. Neuropathic pain is equally prevalent and is the predominant pain phenotype in patients with moderate to severe pain. Accurate assessment of pain phenotypes is needed for more effective management of chronic pain in primary SS. Future research should standardize pain assessment and identify the factors contributing to more severe pain in seronegative patients.