ABSTRACT
BACKGROUND: Previous research found that reliability estimates for chart-extracted quality-of-care data vary. PURPOSE: The purpose was to examine the methods and processes used to gather data on the National Database of Nursing Quality Indicators (NDNQI) pressure injury (PI) risk and prevention measures, in order to identify factors that may influence their reliability. METHODS: Study participants (N = 120) from 36 hospitals completed a 35-item online survey. Included were the NDNQI PI Survey Team member with the most experience and/or skill in patient record review from each hospital (n = 36) and 84 other NDNQI PI Survey Team members. RESULTS: In general, participants followed NDNQI PI data collection guidelines. However, deviations were noted: for example, 60 participants (50%) collected PI data on units where they worked, and 92 (76.7%) determined whether moisture management was performed by direct observation of patients rather than by chart documentation. CONCLUSIONS: Findings provide insight into how to improve the reliability of hospital-acquired PI risk and prevention measures, including clarification of the data collection guidelines.
Subjects
Data Collection/statistics & numerical data; Databases, Factual; Nursing Staff, Hospital; Pressure Ulcer/prevention & control; Quality Indicators, Health Care/statistics & numerical data; Risk Assessment/standards; Hospitals; Humans; Internet; Reproducibility of Results; Surveys and Questionnaires
ABSTRACT
In this descriptive multi-site study, we examined inter-rater agreement on 11 National Database of Nursing Quality Indicators® (NDNQI®) pressure ulcer (PrU) risk and prevention measures. One hundred twenty raters at 36 hospitals captured data from 1,637 patient records. At each hospital, agreement between the most experienced rater and each other team rater was calculated for each measure. In the ratings studied, 528 patients were rated as "at risk" for PrU and were therefore included in calculations of agreement for the prevention measures. Prevalence-adjusted kappa (PAK) was used to interpret inter-rater agreement because the prevalence of single responses was high. The PAK values for eight measures indicated "substantial" to "near perfect" agreement between the most experienced and other team raters: Skin assessment on admission (.977, 95% CI [.966, .989]), PrU risk assessment on admission (.978, 95% CI [.964, .993]), Time since last risk assessment (.790, 95% CI [.729, .852]), Risk assessment method (.997, 95% CI [.991, 1.0]), Risk status (.877, 95% CI [.838, .917]), Any prevention (.856, 95% CI [.76, .943]), Skin assessment (.956, 95% CI [.904, 1.0]), and Pressure-redistribution surface use (.839, 95% CI [.763, .916]). For three intervention measures, PAK values fell below the recommended minimum of .610: Routine repositioning (.577, 95% CI [.494, .661]), Nutritional support (.500, 95% CI [.418, .581]), and Moisture management (.556, 95% CI [.469, .643]). Areas of disagreement were identified. Findings support the reliability of 8 of the 11 measures. Further clarification of data collection procedures is needed to improve the reliability of the less reliable measures.
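For readers unfamiliar with the PAK statistic reported above, the following is a minimal Python sketch of prevalence-adjusted (bias-adjusted) kappa for two raters and binary ratings, using the standard PABAK formulation (twice the observed agreement minus one). The ratings and the Wald-style confidence interval are illustrative assumptions, not the study's data or its exact interval method; because PABAK is a linear function of the observed agreement p_o, any interval for p_o maps directly onto PABAK.

```python
# A minimal sketch of prevalence-adjusted kappa (PABAK) for two raters and
# binary ratings: PABAK = 2 * p_o - 1, where p_o is the observed proportion
# of agreement. The ratings and the Wald-style CI below are illustrative
# assumptions, not the study's data or its exact interval method.
import math

def pabak(rater_a, rater_b):
    """Prevalence-adjusted (bias-adjusted) kappa for two binary raters."""
    if len(rater_a) != len(rater_b):
        raise ValueError("rating lists must be the same length")
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
    return 2.0 * p_o - 1.0

def pabak_wald_ci(rater_a, rater_b, z=1.96):
    """Approximate 95% CI from a Wald interval on p_o, mapped linearly to PABAK."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    half = z * math.sqrt(p_o * (1.0 - p_o) / n)
    return 2.0 * max(p_o - half, 0.0) - 1.0, 2.0 * min(p_o + half, 1.0) - 1.0

# Hypothetical unit: 1 = prevention documented, 0 = not documented.
expert = [1] * 46 + [0] * 4
other  = [1] * 44 + [0] * 2 + [1] * 1 + [0] * 3  # 3 discordant ratings
print(f"PABAK = {pabak(expert, other):.3f}")     # 0.880
low, high = pabak_wald_ci(expert, other)
print(f"95% CI [{low:.3f}, {high:.3f}]")
```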
Subjects
Databases, Factual; Pressure Ulcer/prevention & control; Quality Indicators, Health Care; Adolescent; Adult; Aged; Aged, 80 and over; Female; Hospitals/statistics & numerical data; Humans; Male; Middle Aged; Nursing Process; Pressure Ulcer/epidemiology; Pressure Ulcer/etiology; Prevalence; Reproducibility of Results; Risk Assessment/organization & administration; Risk Assessment/statistics & numerical data
ABSTRACT
BACKGROUND AND PURPOSE: Efforts to establish support for the reliability of quality indicator data are ongoing. Because most patients receive recommended care, the resulting high prevalence of single response categories makes statistical analysis challenging. This article presents a novel statistical approach recently used to estimate inter-rater agreement for National Database of Nursing Quality Indicators (NDNQI) pressure injury risk and prevention data. METHODS: Inter-rater agreement was estimated with prevalence-adjusted kappa values. Data modifications were also made to overcome convergence issues caused by sparse cross-tabulations. RESULTS: Cohen's kappa values suggested low reliability despite high levels of agreement between raters. CONCLUSION: Prevalence-adjusted kappa values should be presented alongside Cohen's kappa values to evaluate inter-rater agreement when the majority of patients receive recommended care.
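The "low Cohen's kappa despite high agreement" pattern noted in the conclusion is easy to reproduce numerically. The sketch below uses an invented 2x2 agreement table (not study data) to show how a dominant response category drives chance-expected agreement so high that Cohen's kappa collapses, while PABAK tracks raw agreement.

```python
# A minimal sketch contrasting Cohen's kappa with prevalence-adjusted kappa
# (PABAK) on a 2x2 agreement table. The counts are invented to illustrate
# the "kappa paradox": high observed agreement but low Cohen's kappa when
# one response category dominates. They are not data from the study.

def agreement_stats(both_yes, a_yes_b_no, a_no_b_yes, both_no):
    """Return (observed agreement, Cohen's kappa, PABAK) for a 2x2 table."""
    n = both_yes + a_yes_b_no + a_no_b_yes + both_no
    p_o = (both_yes + both_no) / n                   # observed agreement
    p_a = (both_yes + a_yes_b_no) / n                # rater A's "yes" marginal
    p_b = (both_yes + a_no_b_yes) / n                # rater B's "yes" marginal
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)          # chance-expected agreement
    cohen = (p_o - p_e) / (1 - p_e)
    return p_o, cohen, 2 * p_o - 1

# 100 patients: both raters score "care received" for 95, "not received"
# for 1, and the raters disagree on 4.
p_o, cohen, pabak = agreement_stats(both_yes=95, a_yes_b_no=2, a_no_b_yes=2, both_no=1)
print(f"observed agreement = {p_o:.2f}")    # 0.96
print(f"Cohen's kappa      = {cohen:.2f}")  # ~0.31, despite 96% agreement
print(f"PABAK              = {pabak:.2f}")  # 0.92
```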