ABSTRACT
BACKGROUND: The Global Diet Quality Score (GDQS) was developed as a simple, timely and cost-effective tool to track, simultaneously, nutritional deficiency and non-communicable disease risks from diet in diverse settings. The objective was to investigate the performance of the GDQS as an indicator of adequate nutrient intake and dietary quality in a nationally representative sample of the Brazilian population. METHODS: Nationally representative data from 44,744 men and non-pregnant, non-lactating women aged ≥ 10 years from the Brazilian National Dietary Survey were used. Dietary data were collected through two 24-h recalls (24HR). The GDQS was calculated and compared to a proxy indicator of adequate nutrient intake (the Minimum Dietary Diversity for Women, MDD-W) and to an indicator of a high-risk diet for non-communicable diseases (caloric contribution from ultra-processed foods, UPF). To estimate the odds of overall nutrient inadequacy across MDD-W and GDQS quintiles, multiple logistic regression was applied, and the two metrics' performance was compared using Wald's post-test. RESULTS: The mean GDQS for Brazilians was 14.5 (possible range 0-49), and only 1% of the population had a low-risk diet (GDQS ≥ 23). The mean GDQS was higher in women, elderly individuals and higher-income households. An inverse correlation was found between the GDQS and UPF (rho (95% CI) = -0.20 (-0.21; -0.19)). The odds of nutrient inadequacy decreased across increasing quintiles of the GDQS and MDD-W (p-trend < 0.001), and the MDD-W performed slightly better than the GDQS (p-diff < 0.001). Having a low-risk GDQS (≥ 23) lowered the odds of nutrient inadequacy by 74% (95% CI: 63%-81%). CONCLUSION: The GDQS is a good indicator of overall nutrient adequacy and correlates well with UPF intake in a nationally representative sample of Brazilians.
Future studies should investigate the relationship between the GDQS and clinical endpoints, strengthening the recommendation to use this metric for the surveillance of dietary risks.
Subject(s)
Diet , Malnutrition , South American People , Male , Humans , Female , Aged , Energy Intake , Eating
ABSTRACT
CONTEXT: Goals of care conversations can promote high-value care for patients with serious illness, yet documented discussions infrequently occur in hospital settings. OBJECTIVES: We sought to develop a quality improvement initiative to improve goals of care documentation for hospitalized patients. METHODS: Implementation occurred at an academic medical center in Pittsburgh, Pennsylvania. The intervention included integration of a 90-day mortality prediction model grouping patients into low, intermediate, and high risk; a centralized goals of care note; and automated notifications and targeted palliative consults. We compared documented goals of care discussions by risk score before and after implementation. RESULTS: Of the 12,571 patients hospitalized preimplementation and 10,761 postimplementation, 1% were designated high risk and 11% intermediate risk of mortality. Postimplementation, goals of care documentation increased for high-risk (17.6%-70.8%, P < 0.0001) and intermediate-risk patients (9.6%-28.0%, P < 0.0001). For intermediate-risk patients, the percentage of goals of care documentation performed by palliative medicine specialists increased from pre- to postimplementation (52.3%-71.2%, P = 0.0002). For high-risk patients, the percentage of goals of care documentation completed by the primary service increased from pre- to postimplementation (36.8%-47.1%, P = 0.5898), with documentation performed by palliative medicine specialists slightly decreasing from pre- to postimplementation (63.2%-52.9%, P = 0.5898). CONCLUSIONS: Implementation of a goals of care initiative using a mortality prediction model significantly increased goals of care documentation, especially among high-risk patients. Further study is needed to assess strategies to increase goals of care documentation for intermediate-risk patients, especially by nonspecialty palliative care clinicians.
Subject(s)
Hospitals , Palliative Care , Humans , Communication , Patient Care Planning , Documentation
ABSTRACT
OBJECTIVE: To assess the prospective association of two diet quality scores based on the Nova food classification with BMI gain. DESIGN: The NutriNet-Brasil cohort is an ongoing web-based prospective study with continuous recruitment of participants aged ≥ 18 years since January 2020. A short 24-h dietary recall screener including 'yes/no' questions about the consumption of whole plant foods (WPF) and ultra-processed foods (UPF) was completed by participants at baseline. The Nova-WPF and Nova-UPF scores were computed by adding up positive responses regarding the consumption of thirty-three varieties of WPF and twenty-three varieties of UPF, respectively. Participants reported their height at baseline and their weight at both baseline and after approximately 15 months of follow-up. A 15-month BMI (kg/m2) increase of ≥5 % was coded as BMI gain. SETTING: Brazil. PARTICIPANTS: 9551 participants from the NutriNet-Brasil cohort. RESULTS: Increasing quintiles of the Nova-UPF score were linearly associated with higher risk of BMI gain (relative risk Q5/Q1 = 1·34; 95 % CI 1·15, 1·56), whereas increasing quintiles of the Nova-WPF score were linearly associated with lower risk (relative risk Q5/Q1 = 0·80; 95 % CI 0·69, 0·94). We identified a moderate inverse correlation between the two scores (-0·33) and a partial mediating effect of the alternative score: 15 % for the total effect of the Nova-UPF score and 25 % for the total effect of the Nova-WPF score. CONCLUSIONS: The Nova-UPF and Nova-WPF scores are independently associated with mid-term BMI gain, further justifying their use in diet quality monitoring systems.
Subject(s)
Fast Foods , Food Handling , Humans , Cohort Studies , Prospective Studies , Brazil , Diet , Weight Gain
ABSTRACT
BACKGROUND: Evidence on concurrent changes in overall diet quality and weight and waist circumference among women of reproductive age in low- and middle-income countries is limited. OBJECTIVES: We examined the associations of changes in the Global Diet Quality Score (GDQS) and each GDQS food group with concurrent weight and waist circumference change in Mexican women. METHODS: We prospectively followed 8967 nonpregnant, nonlactating women aged 25-49 y in the Mexican Teachers' Cohort between 2006 and 2008. We assessed diet using an FFQ covering the previous year, and anthropometric measures were self-reported. Regression models were used to examine 2-y changes in the GDQS and each food group (servings/d) in relation to weight and waist circumference changes within the same period, adjusting for demographic and lifestyle factors. RESULTS: Compared with those with little change in the GDQS (-2 to 2 points), women with the largest increase in the GDQS (>5 points) had less weight (β: -0.81 kg/2 y; 95% CI: -1.11, -0.51 kg/2 y) and waist circumference gain (β: -1.05 cm/2 y; 95% CI: -1.62, -0.48 cm/2 y); likewise, women with the largest decrease in the GDQS (<-5 points) had more weight (β: 0.36 kg/2 y; 95% CI: 0.06, 0.66 kg/2 y) and waist circumference gain (β: 0.71 cm/2 y; 95% CI: 0.09, 1.32 cm/2 y). Increased intake of dark green leafy vegetables, cruciferous vegetables, deep orange vegetables, citrus fruits, and fish and shellfish was associated with less weight gain. In addition, deep orange vegetables, low-fat and high-fat dairy, whole grains, and fish were associated with less waist circumference gain within the 2-y period. CONCLUSIONS: Improvements in diet quality over a 2-y period, reflected by an increase in the GDQS, and changes in consumption of specific GDQS components were associated with less weight and waist circumference gain in Mexican women.
Subject(s)
Body Weight , Diet, Healthy/trends , Diet/trends , Waist Circumference , Adult , Female , Humans , Longitudinal Studies , Mexico , Middle Aged , Prospective Studies
ABSTRACT
ABSTRACT
BACKGROUND: Recent studies have shown that endoscopy fellows can perform colonoscopy effectively and safely. However, little is known about the performance of surgical residents without prior knowledge of endoscopic techniques. OBJECTIVE: To assess whether quality indicators were met at an outpatient endoscopy center and whether surgical residents without prior upper or lower endoscopy skills could perform colonoscopy adequately. METHODS: A prospective, non-randomized cohort study was undertaken. All exams were performed either by attending physicians or by residents. Quality measures were compared between these groups. RESULTS: A total of 2720 colonoscopies were analyzed. In the resident group, we observed older patients (57.7±12.7 years vs 51.5±14.5 years, P<0.001), a higher prevalence of screening colonoscopies (52% vs 39.4%, P<0.001) and a higher prevalence of colorectal cancer (6.4% vs 1.8%, P<0.001). The cecal intubation rate was higher in the attending group (99.9% vs 89.3%; P<0.001). The polyp detection rate was 40.8%, with no differences between the studied groups. The residents had a higher perforation rate (0.4% vs 0%; P=0.02). Postpolypectomy bleeding and 7-day readmission rates were the same (0.2%). All 7-day readmissions occurred due to lower gastrointestinal bleeding, and none required intervention. CONCLUSION: Quality indicators were met at a university outpatient endoscopy center; however, medical residents achieved lower rates of cecal intubation and higher rates of perforation than the attending physicians.
Subject(s)
Humans , Outpatients , Cecum , Universities , Prospective Studies , Cohort Studies , Colonoscopy , Clinical Competence
ABSTRACT
Biological evidence in real forensic casework can be found in a myriad of conditions and can present very distinct features, including key elements such as degradation level, the nature of the biological evidence, the presence of mixtures, and the deposition surface or substrate, among others. Technical protocols employed by forensic DNA analysts must consider such characteristics in order to improve the chances of successfully genotyping these materials. Massively parallel sequencing (MPS) has proved a very useful tool for forensic sample processing and genetic profile generation. However, it is not completely clear how the different features encountered in real forensic samples impact sequencing quality and, consequently, profile accuracy and reliability. In this context, the present study analyzes a set of 47 real forensic casework samples obtained from semen, saliva, blood and epithelial evidence, as well as reference oral swabs, aiming to evaluate the impact of a sample's biological nature on profiling success. All DNA extracts were standardized according to sample conditions, as assessed by traditional forensic profiling methods (real-time PCR quantitation and capillary electrophoresis-coupled STR fragment analysis). Samples were separated into groups according to their biological nature, and the resulting sequencing quality was evaluated through a series of well-established statistical tests applied specifically to six different MPS quality metrics. The results showed that certain groups of samples, especially epithelial and (to a lesser extent) saliva samples, exhibited significantly lower quality for some of the evaluated metrics. Possible reasons for this unexpected behavior are discussed. In addition, a series of calculations was performed to assess the weight of genetic evidence in Brazilian samples, and implications for data analysis and national allele frequency database construction are discussed.
Overall, the results indicate that a unified national allele frequency database can be used nationwide. Moreover, MPS genetic profiles obtained from samples of particular biological origins may benefit from meticulous manual review, and visual inspection could be an important additional step to avoid genotyping errors or misinterpretation, leading to more trustworthy and reliable results in real criminal forensic casework analysis.
Subject(s)
DNA Fingerprinting/methods , High-Throughput Nucleotide Sequencing , Polymorphism, Single Nucleotide , Blood Chemical Analysis , Brazil , Databases, Genetic , Electrophoresis, Capillary , Epithelial Cells , Gene Frequency , Genetics, Population , Humans , Microsatellite Repeats , Real-Time Polymerase Chain Reaction , Saliva , Semen , Sequence Analysis, DNA
ABSTRACT
BACKGROUND: Publicly funded prescription drug programs, such as state pharmacy assistance programs, provide critical benefits for the care of individuals, but their resources to optimize patient outcomes are frequently limited. Applying quality metrics to prescription drug claims may help to determine whether prescribers' adherence to national standards can be augmented through academic detailing. OBJECTIVE: To evaluate changes in diabetes drug prescribing patterns after an academic detailing educational intervention in 2013 and 2014 for prescribers in the Pennsylvania Pharmaceutical Assistance Contract for the Elderly (PACE) program. METHODS: We used a retrospective, quasi-experimental study design that applied interrupted time series and segmented regression analysis, and examined PACE pharmacy claims data for 1 year before and 1 year after the academic detailing intervention. Four diabetes prescribing metrics were evaluated at monthly intervals for a sample of 574 prescribers who received academic detailing and for a propensity score-matched comparison sample of 574 prescribers who did not receive the intervention. RESULTS: After the intervention, prescribing trends for the 4 diabetes metrics did not differ significantly between prescribers who received academic detailing and those who did not. The observed time series patterns suggest that diabetes-related ceiling effects were likely, with relatively little room for improvement at the group level during the study period. CONCLUSION: The results of this study did not demonstrate group differences in prescribing trends attributable to the intervention. However, many prescribers in the detailed group had been exposed to similar educational outreach by PACE before 2013, which limits the interpretation of this finding.
In addition, the diabetes quality metrics had been the standard of care during the preceding decade, with a broad dissemination of the treatment guidelines to the provider community. These results are consistent with a ceiling effect in the measured metrics, suggesting that most prescribers in both groups were largely following core diabetes guidelines before and after the intervention.
ABSTRACT
BACKGROUND: The end-stage renal disease Medical Evidence Report serves as a source of comorbid condition data for risk adjustment of quality metrics. We sought to compare comorbid condition data in the Medical Evidence Report around dialysis therapy initiation with diagnosis codes in Medicare claims. STUDY DESIGN: Observational cohort study using US Renal Data System data. SETTING & PARTICIPANTS: Medicare-enrolled elderly (≥66 years) patients who initiated maintenance dialysis therapy July 1 to December 31, 2007, 2008, or 2009. INDEX TESTS: 12 comorbid conditions ascertained from claims during the 6 months before dialysis therapy initiation, the Medical Evidence Report, and claims during the 3 months after dialysis therapy initiation. REFERENCE TEST: None. RESULTS: Comorbid condition prevalence according to claims before dialysis therapy initiation generally exceeded prevalence according to the Medical Evidence Report. The κ statistics for comorbid condition designations other than diabetes ranged from 0.06 to 0.43. Discordance of designations was associated with age, race, sex, and end-stage renal disease Network. During 23,930 patient-years of follow-up from 4 to 12 months after dialysis therapy initiation (8,930 deaths), designations from claims during the 3 months after initiation better discriminated risk of death than designations from the Medical Evidence Report (C statistics of 0.674 vs 0.616). Between the Medical Evidence Report and claims, standardized mortality ratios changed by >10% for more than half the dialysis facilities. LIMITATIONS: Neither the Medical Evidence Report nor diagnosis codes in claims constitute a gold standard of comorbid condition data; results may not apply to nonelderly patients or patients without Medicare coverage. 
CONCLUSIONS: Discordance of comorbid condition designations from the Medical Evidence Report and claims around dialysis therapy initiation was substantial and significantly associated with patient characteristics, including location. These patterns may engender bias in risk-adjusted quality metrics. In lieu of the Medical Evidence Report, claims during the 3 months after dialysis therapy initiation may constitute a useful source of comorbid condition data.