Results 1 - 20 of 96

1.
Age Ageing ; 53(5)2024 05 01.
Article in English | MEDLINE | ID: mdl-38796316

ABSTRACT

INTRODUCTION: This process evaluation was conducted in parallel to the randomised controlled feasibility trial of NIDUS-Professional, a manualised remote dementia training intervention for homecare workers (HCWs), delivered alongside an individualised intervention for clients living with dementia and their family carers (NIDUS-Family). The process evaluation reports on: (i) intervention reach, dose and fidelity; (ii) contexts influencing agency engagement and (iii) alignment of findings with theoretical assumptions about how the intervention might produce change. METHODS: We report proportions of eligible HCWs receiving any intervention (reach), number of sessions attended (dose; attending ≥4/6 main sessions was predefined as adhering), intervention fidelity and adherence of clients and carers to NIDUS-Family (attending all 6-8 planned sessions). We interviewed HCWs, managers, family carers and facilitators. We integrated and thematically analysed, at the homecare agency level, qualitative interview and intervention recording data. RESULTS: 32/141 (23%) of eligible HCWs and 7/42 (17%) of family carers received any intervention; most who did adhered to the intervention (89% and 71%). Intervention fidelity was high. We analysed interviews with 20/44 HCWs, 3/4 managers and 3/7 family carers, as well as intervention recordings involving 32/44 HCWs. All agencies reported structural challenges in supporting intervention delivery. Agencies with greater management buy-in had higher dose and reach. HCWs valued NIDUS-Professional for enabling group reflection and peer support, providing practical, actionable care strategies and increasing their confidence as practitioners. CONCLUSION: NIDUS-Professional was valued by HCWs. Agency management, culture and priorities were key barriers to implementation; we discuss how to address these in a future trial.


Subject(s)
Caregivers, Dementia, Home Care Services, Home Health Aides, Humans, Dementia/therapy, Dementia/psychology, Caregivers/education, Home Health Aides/education, Home Health Aides/psychology, Male, Female, Health Knowledge, Attitudes, Practice, United Kingdom, Process Assessment, Health Care, Middle Aged, Attitude of Health Personnel, Interviews as Topic
2.
J Med Internet Res ; 26: e45242, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088815

ABSTRACT

BACKGROUND: Low- and lower-middle-income countries account for a higher percentage of global epidemics and chronic diseases. In most low- and lower-middle-income countries, there is limited access to health care. The implementation of open-source electronic health records (EHRs) can be understood as a powerful enabler for low- and lower-middle-income countries because it can transform the way health care technology is delivered. Open-source EHRs can enhance health care delivery in low- and lower-middle-income countries by improving the collection, management, and analysis of health data needed to inform health care delivery, policy, and planning. While open-source EHR systems are cost-effective and adaptable, they have not proliferated rapidly in low- and lower-middle-income countries. Implementation barriers slow adoption, with existing research focusing predominantly on technical issues preventing successful implementation. OBJECTIVE: This interdisciplinary scoping review aims to provide an overview of contextual barriers affecting the adaptation and implementation of open-source EHR systems in low- and lower-middle-income countries and to identify areas for future research. METHODS: We conducted a scoping literature review following a systematic methodological framework. A total of 7 databases were selected from 3 disciplines: medicine and health sciences, computing, and social sciences. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist. The Mixed Methods Appraisal Tool and the Critical Appraisal Skills Programme checklists were used to assess the quality of relevant studies. Data were collated and summarized, and results were reported qualitatively, adopting a narrative synthesis approach. RESULTS: This review included 13 studies that examined open-source EHRs' adaptation and implementation in low- and lower-middle-income countries from 3 interrelated perspectives: socioenvironmental, technological, and organizational barriers. The studies identified key issues such as limited funding, sustainability, organizational and management challenges, infrastructure, data privacy and protection, and ownership. Data protection, confidentiality, ownership, and ethics emerged as important issues, often overshadowed by technical processes. CONCLUSIONS: While open-source EHRs have the potential to enhance health care delivery in low- and lower-middle-income-country settings, implementation is fraught with difficulty. This scoping review shows that depending on the adopted perspective to implementation, different implementation barriers come into view. A dominant focus on technology distracts from socioenvironmental and organizational barriers impacting the proliferation of open-source EHRs. The role of local implementing organizations in addressing implementation barriers in low- and lower-middle-income countries remains unclear. A holistic understanding of implementers' experiences of implementation processes is needed. This could help characterize and solve implementation problems, including those related to ethics and the management of data protection. Nevertheless, this scoping review provides a meaningful contribution to the global health informatics discipline.


Subject(s)
Developing Countries, Electronic Health Records, Humans
3.
J Clin Nurs ; 33(5): 1884-1895, 2024 May.
Article in English | MEDLINE | ID: mdl-38240045

ABSTRACT

AIMS: To explore the nature of interactions that enable older inpatients with cognitive impairments to engage with hospital staff on falls prevention. DESIGN: Ethnographic study. METHODS: Ethnographic observations on orthopaedic and older person wards in English hospitals (251.25 h) and semi-structured qualitative interviews with 50 staff, 28 patients and three carers. Findings were analysed using a framework approach. RESULTS: Interactions were often informal and personalised. Staff qualities that supported engagement in falls prevention included the ability to empathise and negotiate, taking patient perspectives into account. Although registered nurses had limited time for this, families/carers and other staff, including engagement workers, did so and passed information to nurses. CONCLUSIONS: Some older inpatients with cognitive impairments engaged with staff on falls prevention. Engagement enabled them to express their needs and collaborate, to an extent, on falls prevention activities. To support this, we recommend wider adoption in hospitals of engagement workers and developing the relational skills that underpin engagement in training programmes for patient-facing staff. IMPLICATIONS FOR PROFESSION AND PATIENT CARE: Interactions that support cognitively impaired inpatients to engage in falls prevention can involve not only nurses, but also families/carers and non-nursing staff, with potential to reduce pressures on busy nurses and improve patient safety. REPORTING METHOD: The paper adheres to EQUATOR guidelines, Standards for Reporting Qualitative Research. PATIENT OR PUBLIC CONTRIBUTION: Patient/public contributors were involved in study design, evaluation and data analysis. They co-authored this manuscript.


Subject(s)
Cognitive Dysfunction, Inpatients, Humans, Aged, Hospitals, Qualitative Research, Anthropology, Cultural
4.
Appl Nurs Res ; 76: 151785, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38641382

ABSTRACT

BACKGROUND: Heel offloading devices are widely used in clinical practice to prevent heel pressure ulcers, even though there is a lack of robust, good-quality evidence to inform their use. OBJECTIVE: To explore how, why and with what reasoning heel offloading devices are used (or not used) in populations at high risk of developing heel pressure ulcers. METHODS: An ethnographic study was conducted as part of a realist evaluation in three orthopaedic wards of a large English hospital. Twelve observations took place, covering 49 h and 35 min of patient care. A total of 32 patients were observed, 19 members of the nursing team were interviewed, and in-depth interviews were conducted with the three ward managers. RESULTS: Although the focus of the study was on offloading devices, constant low-pressure heel-specific devices were also observed in use for pressure ulcer prevention, whilst offloading devices were perceived to be for higher-risk patients or those who already had a heel pressure ulcer. Nursing staff viewed leadership from the ward manager and the influence of the Tissue Viability Nurse Specialists as key mechanisms for the proactive use of devices. CONCLUSIONS: This study informs trial design: it identified that a controlled clinical trial of both types of heel-specific device is required to inform evidence-based practice. Involving ward managers and Tissue Viability Nurse Specialists during the set-up phase, to establish clinical equipoise, could improve recruitment. Tweetable abstract: How, for whom, and in what circumstances do devices work to prevent heel pressure ulcers? Observations of clinical practice.


Subject(s)
Heel, Pressure Ulcer, Humans, Pressure Ulcer/epidemiology
5.
BMC Geriatr ; 23(1): 381, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37344760

ABSTRACT

BACKGROUND: Falls are the most common safety incident reported by acute hospitals. In England national guidance recommends delivery of a multifactorial falls risk assessment (MFRA) and interventions tailored to address individual falls risk factors. However, there is variation in how these practices are implemented. This study aimed to explore the variation by examining what supports or constrains delivery of MFRAs and tailored interventions in acute hospitals. METHODS: A realist review of literature was conducted with searches completed in three stages: (1) to construct hypotheses in the form of Context, Mechanism, Outcome configurations (CMOc) about how MFRAs and interventions are delivered, (2) to scope the breadth and depth of evidence available in Embase to test the CMOcs, and (3) following prioritisation of CMOcs, to refine search strategies for use in multiple databases. Citations were managed in EndNote; titles, abstracts, and full texts were screened, with 10% independently screened by two reviewers. RESULTS: Two CMOcs were prioritised for testing labelled: Facilitation via MFRA tools, and Patient Participation in interventions. Analysis indicated that MFRA tools can prompt action, but the number and type of falls risk factors included in tools differ across organisations leading to variation in practice. Furthermore, the extent to which tools work as prompts is influenced by complex ward conditions such as changes in patient condition, bed swaps, and availability of falls prevention interventions. Patient participation in falls prevention interventions is more likely where patient directed messaging takes individual circumstances into account, e.g., not wanting to disturb nurses by using the call bell. However, interactions that elicit individual circumstances can be resource intensive and patients with cognitive impairment may not be able to participate despite appropriately directed messaging. CONCLUSIONS: Organisations should consider how tools can be developed in ways that better support consistent and comprehensive identification of patients' individual falls risk factors and the complex ward conditions that can disrupt how tools work as facilitators. Ward staff should be supported to deliver patient directed messaging that is informed by their individual circumstances to encourage participation in falls prevention interventions, where appropriate. TRIAL REGISTRATION: PROSPERO: CRD42020184458.


Subject(s)
Cognitive Dysfunction, Hospitals, Humans, England, Risk Assessment, Risk Factors
6.
Int J Qual Health Care ; 35(4)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37750687

ABSTRACT

In the last six years, hospitals in developed countries have been trialling the use of command centres to improve organizational efficiency and patient care, but the impact of these command centres has not previously been studied systematically. This was a retrospective population-based study. Participants were patients who visited the Accident and Emergency (A&E) Department of Bradford Royal Infirmary between 1 January 2018 and 31 August 2021. Outcomes were patient flow (measured as A&E waiting time, length of stay, and clinician seen time) and data quality (measured by the proportion of missing treatment and assessment dates and of valid transitions between A&E care stages). Interrupted time-series segmented regression and process mining were used for analysis. A&E transition time from patient arrival to assessment by a clinician improved marginally during the intervention period: there were decreases of 0.9 min [95% confidence interval (CI): 0.35-1.4], 3 min (95% CI: 2.4-3.5), 9.7 min (95% CI: 8.4-11.0), and 3.1 min (95% CI: 2.7-3.5) during the 'patient flow program', 'command centre display roll-in', 'command centre activation', and 'hospital-wide training program' periods, respectively. However, the transition time from patient treatment to the conclusion of consultation increased by 11.5 min (95% CI: 9.2-13.9), 12.3 min (95% CI: 8.7-15.9), 53.4 min (95% CI: 48.1-58.7), and 50.2 min (95% CI: 47.5-52.9) over the respective four post-intervention periods. Furthermore, length of stay was not significantly affected; the change was -8.8 h (95% CI: -17.6 to 0.08), -8.9 h (95% CI: -18.6 to 0.65), -1.67 h (95% CI: -10.3 to 6.9), and -0.54 h (95% CI: -13.9 to 12.8) over the four respective post-intervention periods. The pattern was similar for waiting and clinician seen times. Data quality, as measured by the proportion of records with missing dates, was generally poor (treatment date = 42.7% and clinician seen date = 23.4%) and did not significantly improve during the intervention periods. The findings suggest that a command centre package combining process change and software technology does not appear to have a consistent positive impact on patient safety and data quality, based on the indicators and data we used. Hospitals considering introducing a command centre should therefore not assume there will be benefits in patient flow and data quality.
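Illustrative note: the analysis described above is an interrupted time-series (ITS) segmented regression. As a minimal sketch only — assuming weekly aggregated A&E times and a single change point, rather than the study's four intervention periods, data or exact model specification — such a model could be fitted as follows:

```python
# Minimal sketch of a single-interruption segmented regression (ITS).
# Assumptions: weekly mean A&E arrival-to-assessment times in a DataFrame
# with columns 'week' and 'minutes'; the intervention starts at week 100.
# Variable names, the single change point and the simulated data are
# illustrative only, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(190)
df = pd.DataFrame({"week": weeks,
                   "minutes": 60 - 0.05 * weeks + rng.normal(0, 3, weeks.size)})

t0 = 100                                            # week the intervention begins
df["post"] = (df["week"] >= t0).astype(int)         # level-change indicator
df["weeks_since"] = np.maximum(df["week"] - t0, 0)  # slope-change term

model = smf.ols("minutes ~ week + post + weeks_since", data=df).fit()
print(model.summary())
# 'post' estimates the immediate level change at the interruption;
# 'weeks_since' estimates the change in trend after it.
```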


Asunto(s)
Hospitales , Medicina Estatal , Humanos , Estudios Retrospectivos , Derivación y Consulta , Reino Unido , Servicio de Urgencia en Hospital , Tiempo de Internación
7.
J Med Internet Res ; 25: e38039, 2023 04 24.
Article in English | MEDLINE | ID: mdl-37093631

ABSTRACT

BACKGROUND: There is increasing interest in the use of artificial intelligence (AI) in pathology to increase accuracy and efficiency. To date, studies of clinicians' perceptions of AI have found only moderate acceptability, suggesting the need for further research regarding how to integrate it into clinical practice. OBJECTIVE: The aim of the study was to determine contextual factors that may support or constrain the uptake of AI in pathology. METHODS: To go beyond a simple listing of barriers and facilitators, we drew on the approach of realist evaluation and undertook a review of the literature to elicit stakeholders' theories of how, for whom, and in what circumstances AI can provide benefit in pathology. Searches were designed by an information specialist and peer-reviewed by a second information specialist. Searches were run on the arXiv.org repository, MEDLINE, and the Health Management Information Consortium, with additional searches undertaken on a range of websites to identify gray literature. In line with a realist approach, we also made use of relevant theory. Included documents were indexed in NVivo 12, using codes to capture different contexts, mechanisms, and outcomes that could affect the introduction of AI in pathology. Coded data were used to produce narrative summaries of each of the identified contexts, mechanisms, and outcomes, which were then translated into theories in the form of context-mechanism-outcome configurations. RESULTS: A total of 101 relevant documents were identified. Our analysis indicates that the benefits that can be achieved will vary according to the size and nature of the pathology department's workload and the extent to which pathologists work collaboratively; the major perceived benefit for specialist centers is in reducing workload. For uptake of AI, pathologists' trust is essential. Existing theories suggest that if pathologists are able to "make sense" of AI, engage in the adoption process, receive support in adapting their work processes, and can identify potential benefits to its introduction, it is more likely to be accepted. CONCLUSIONS: For uptake of AI in pathology, for all but the most simple quantitative tasks, measures will be required that either increase confidence in the system or provide users with an understanding of the performance of the system. For specialist centers, efforts should focus on reducing workload rather than increasing accuracy. Designers also need to give careful thought to usability and how AI is integrated into pathologists' workflow.


Subject(s)
Artificial Intelligence, Narration, Humans, Machine Learning, Pathology
8.
Surg Innov ; 29(6): 804-810, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35451350

ABSTRACT

BACKGROUND: Medical technologies have the potential to improve quality and efficiency of healthcare. The design of clinical trials should consider participants' perspectives to optimise enrolment, engagement and satisfaction. This study aims to assess patients' perceptions of their involvement in medical device trials, to inform the designs of future medical technology implementation and evaluation. METHODS: Four focus groups were undertaken with a total of 16 participants who had participated in a study testing hospital inpatient remote monitoring devices. Interviews were audio-recorded, transcribed verbatim and underwent thematic analysis. RESULTS: Four main themes emerged: patients' motivations for participating in medical device research; patients' perceptions of technology in medicine; patients' understanding of trial methodology; and patients' perceptions of the benefits of involvement in medical device trials. The appeal of new technology is a contributing factor to the decision to consent, although concerns remain regarding risks associated with technology in healthcare settings. Perceived benefits of participating in device trials include extra care, social benefits and comradery with other participants seen using the devices, although there is a perceived lack of confidence in using technology amongst older patients. CONCLUSION: Future device trials should prioritise information sharing with participants both before and after the trial. Verbal and written information alongside practical demonstrations can help to combat a lack of confidence with technology. Randomised trials and those with placebo- or sham-controlled arms should not be considered as barriers to participation. Study results should be disseminated to participants in lay format as soon as possible, subject to participant permission.


Subject(s)
Biomedical Research, Humans, Focus Groups
9.
BMC Health Serv Res ; 21(1): 702, 2021 Jul 16.
Article in English | MEDLINE | ID: mdl-34271925

ABSTRACT

BACKGROUND: Secondary use of data via integrated health information technology is fundamental to many healthcare policies and processes worldwide. However, repurposing data can be problematic and little research has been undertaken into the everyday practicalities of inter-system data sharing that helps explain why this is so, especially within (as opposed to between) organisations. In response, this article reports one of the most detailed empirical examinations undertaken to date of the work involved in repurposing healthcare data for National Clinical Audits. METHODS: Fifty-four semi-structured, qualitative interviews were carried out with staff in five English National Health Service hospitals about their audit work, including 20 staff involved substantively with audit data collection. In addition, ethnographic observations took place on wards, in 'back offices' and meetings (102 h). Findings were analysed thematically and synthesised in narratives. RESULTS: Although data were available within hospital applications for secondary use in some audit fields, which could, in theory, have been auto-populated, in practice staff regularly negotiated multiple, unintegrated systems to generate audit records. This work was complex and skilful, and involved cross-checking and double data entry, often using paper forms, to assure data quality and inform quality improvements. CONCLUSIONS: If technology is to facilitate the secondary use of healthcare data, the skilled but largely hidden labour of those who collect and recontextualise those data must be recognised. Their detailed understandings of what it takes to produce high quality data in specific contexts should inform the further development of integrated systems within organisations.


Subject(s)
Clinical Audit, State Medicine, Biomedical Technology, Data Collection, Hospitals, Humans
10.
J Med Internet Res ; 23(11): e28854, 2021 11 23.
Article in English | MEDLINE | ID: mdl-34817384

ABSTRACT

BACKGROUND: Dashboards can support data-driven quality improvements in health care. They visualize data in ways intended to ease cognitive load and support data comprehension, but how they are best integrated into working practices needs further investigation. OBJECTIVE: This paper reports the findings of a realist evaluation of a web-based quality dashboard (QualDash) developed to support the use of national audit data in quality improvement. METHODS: QualDash was co-designed with data users and installed in 8 clinical services (3 pediatric intensive care units and 5 cardiology services) across 5 health care organizations (sites A-E) in England between July and December 2019. Champions were identified to support adoption. Data to evaluate QualDash were collected between July 2019 and August 2021 and consisted of 148.5 hours of observations including hospital wards and clinical governance meetings, log files that captured the extent of use of QualDash over 12 months, and a questionnaire designed to assess the dashboard's perceived usefulness and ease of use. Guided by the principles of realist evaluation, data were analyzed to understand how, why, and in what circumstances QualDash supported the use of national audit data in quality improvement. RESULTS: The observations revealed that variation across sites in the amount and type of resources available to support data use, alongside staff interactions with QualDash, shaped its use and impact. Sites resourced with skilled audit support staff and established reporting systems (sites A and C) continued to use existing processes to report data. A number of constraints influenced use of QualDash in these sites including that some dashboard metrics were not configured in line with user expectations and staff were not fully aware how QualDash could be used to facilitate their work. In less well-resourced services, QualDash automated parts of their reporting process, streamlining the work of audit support staff (site B), and, in some cases, highlighted issues with data completeness that the service worked to address (site E). Questionnaire responses received from 23 participants indicated that QualDash was perceived as useful and easy to use despite its variable use in practice. CONCLUSIONS: Web-based dashboards have the potential to support data-driven improvement, providing access to visualizations that can help users address key questions about care quality. Findings from this study point to ways in which dashboard design might be improved to optimize use and impact in different contexts; this includes using data meaningful to stakeholders in the co-design process and actively engaging staff knowledgeable about current data use and routines in the scrutiny of the dashboard metrics and functions. In addition, consideration should be given to the processes of data collection and upload that underpin the quality of the data visualized and consequently its potential to stimulate quality improvement. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1136/bmjopen-2019-033208.


Subject(s)
Delivery of Health Care, Quality Improvement, Child, Data Collection, England, Humans, Internet
11.
Eat Weight Disord ; 26(2): 491-498, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32107745

ABSTRACT

PURPOSE: To examine the prevalence of disordered eating (DE) in elite male and female soccer players and the influence of perfectionism. METHODS: Using a cross-sectional design, elite male (n = 137) and female (n = 70) soccer players and non-athlete controls (n = 179) completed the Clinical Perfectionism Questionnaire (CPQ-12) and the Eating Attitudes Test (EAT-26) to assess perfectionism and DE risk, respectively. RESULTS: Male soccer players had higher EAT-26 scores than controls (10.4 ± 9.9 vs. 6.8 ± 6.7; P = 0.001), but there were no differences in the prevalence of clinical levels of DE (EAT-26 score ≥ 20) (15 vs. 5%, respectively; X2 = 0.079). The proportion of females at DE risk was higher in controls [EAT-26: 13.9 ± 11.6 (25% of population)] than in female players [EAT-26: 10.0 ± 9.0 (11% of population)] (X2 = 0.001). In linear regression, perfectionism explained 20% of the variation in DE risk in males (P = 0.001); in females, athletic status (player vs. control) and perfectionism were significant predictors of DE risk, together explaining 21% of the variation (P = 0.001). Male reserve-team players had higher EAT-26 (+3.5) and perfectionism (+2.7) scores than first-team players (P < 0.05). There were no differences in the prevalence of DE risk between male and female soccer players (X2 = 0.595). CONCLUSIONS: The prevalence of DE risk did not differ between elite male and female soccer players; in fact, prevalence was greatest in non-athlete female controls. Perfectionism is a significant predictor of DE risk in both males and females. LEVEL OF EVIDENCE: III, case-control study.
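Illustrative note: the regression and chi-square comparisons reported above can be sketched in code as follows; the data, group sizes and variable names here are simulated placeholders, not the study's data or analysis code:

```python
# Illustrative sketch only: regression of disordered-eating risk (EAT-26-like
# scores) on perfectionism (CPQ-12-like scores), plus a chi-square test of
# clinical-level DE prevalence between two groups. All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 200
perfectionism = rng.normal(20, 6, n)                   # CPQ-12-like scores
eat26 = 2 + 0.4 * perfectionism + rng.normal(0, 8, n)  # EAT-26-like scores
df = pd.DataFrame({"eat26": eat26, "perfectionism": perfectionism})

fit = smf.ols("eat26 ~ perfectionism", data=df).fit()
print(f"Variance explained (R-squared): {fit.rsquared:.2f}")

# 2x2 table: rows = group (players vs controls), cols = clinical DE (EAT-26 >= 20) yes/no
table = np.array([[20, 117],    # group 1: clinical DE, not clinical (made-up counts)
                  [9, 170]])    # group 2: clinical DE, not clinical (made-up counts)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```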


Subject(s)
Feeding and Eating Disorders, Soccer, Case-Control Studies, Cross-Sectional Studies, Feeding and Eating Disorders/epidemiology, Female, Humans, Male, Prevalence
12.
BMC Health Serv Res ; 20(1): 859, 2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32917202

ABSTRACT

BACKGROUND: National Clinical Audits (NCAs) are a well-established quality improvement strategy used in healthcare settings. Significant resources, including clinicians' time, are invested in participating in NCAs, yet there is variation in the extent to which the resulting feedback stimulates quality improvement. The aim of this study was to explore the reasons behind this variation. METHODS: We used realist evaluation to interrogate how context shapes the mechanisms through which NCAs work (or not) to stimulate quality improvement. Fifty-four interviews were conducted with doctors, nurses, audit clerks and other staff working with NCAs across five healthcare providers in England. In line with realist principles we scrutinised the data to identify how and why providers responded to NCA feedback (mechanisms), the circumstances that supported or constrained provider responses (context), and what happened as a result of the interactions between mechanisms and context (outcomes). We summarised our findings as Context+Mechanism = Outcome configurations. RESULTS: We identified five mechanisms that explained provider interactions with NCA feedback: reputation, professionalism, competition, incentives, and professional development. Professionalism and incentives underpinned most frequent interaction with feedback, providing opportunities to stimulate quality improvement. Feedback was used routinely in these ways where it was generated from data stored in local databases before upload to NCA suppliers. Local databases enabled staff to access data easily, customise feedback and, importantly, the data were trusted as accurate, due to the skills and experience of staff supporting audit participation. Feedback produced by NCA suppliers, which included national comparator data, was used in a more limited capacity across providers. Challenges accessing supplier data in a timely way and concerns about the quality of data submitted across providers were reported to constrain use of this mode of feedback. CONCLUSION: The findings suggest that there are a number of mechanisms that underpin healthcare providers' interactions with NCA feedback. However, there is variation in the mode, frequency and impact of these interactions. Feedback was used most routinely, providing opportunities to stimulate quality improvement, within clinical services resourced to collect accurate data and to maintain local databases from which feedback could be customised for the needs of the service.


Subject(s)
Clinical Audit/standards, Feedback, Data Accuracy, Delivery of Health Care, England, Health Personnel/psychology, Humans, Motivation, Quality Improvement
13.
J Sports Sci ; 37(20): 2356-2366, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31230518

ABSTRACT

The purpose of this study was to expand our previously published sweat normative data/analysis (n = 506) to establish sport-specific normative data for whole-body sweating rate (WBSR), sweat [Na+], and rate of sweat Na+ loss (RSSL). Data from 1303 athletes were compiled from observational testing (2000-2017) using a standardized absorbent sweat patch technique to determine local sweat [Na+] and normalized to whole-body sweat [Na+]. WBSR was determined from change in exercise body mass, corrected for food/fluid intake and urine/stool loss. RSSL was the product of sweat [Na+] and WBSR. There were significant differences between sports for WBSR, with highest losses in American football (1.51 ± 0.70 L/h), then endurance (1.28 ± 0.57 L/h), followed by basketball (0.95 ± 0.42 L/h), soccer (0.94 ± 0.38 L/h) and baseball (0.83 ± 0.34 L/h). For RSSL, American football (55.9 ± 36.8 mmol/h) and endurance (51.7 ± 27.8 mmol/h) were greater than soccer (34.6 ± 19.2 mmol/h), basketball (34.5 ± 21.2 mmol/h), and baseball (27.2 ± 14.7 mmol/h). After ANCOVA, significant between-sport differences in adjusted means for WBSR and RSSL remained. In summary, due to the significant sport-specific variation in WBSR and RSSL, American football and endurance have the greatest need for deliberate hydration strategies. Abbreviations: WBSR: whole body sweating rate; SR: sweating rate; Na+: sodium; RSSL: rate of sweat sodium loss.
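Illustrative note: the two derived measures follow directly from the definitions above — WBSR is the exercise body-mass change corrected for food/fluid intake and urine/stool loss, divided by exercise duration, and RSSL is sweat [Na+] multiplied by WBSR. A minimal worked sketch with made-up values (not study data):

```python
# Illustrative calculation of whole-body sweating rate (WBSR) and rate of
# sweat sodium loss (RSSL), following the definitions in the abstract.
# All values below are made up for the example, and 1 kg of corrected
# body-mass change is assumed to equal ~1 L of sweat.

def wbsr_l_per_h(pre_mass_kg, post_mass_kg, intake_kg, excretion_kg, duration_h):
    """WBSR = (body-mass loss + food/fluid intake - urine/stool loss) / exercise time."""
    sweat_loss_kg = (pre_mass_kg - post_mass_kg) + intake_kg - excretion_kg
    return sweat_loss_kg / duration_h            # approx. L/h

def rssl_mmol_per_h(sweat_na_mmol_per_l, wbsr):
    """RSSL = sweat sodium concentration x whole-body sweating rate."""
    return sweat_na_mmol_per_l * wbsr

# Example: a 90-minute session, 1.0 kg of fluid drunk, no urine/stool loss.
wbsr = wbsr_l_per_h(pre_mass_kg=80.0, post_mass_kg=79.2,
                    intake_kg=1.0, excretion_kg=0.0, duration_h=1.5)
rssl = rssl_mmol_per_h(sweat_na_mmol_per_l=40.0, wbsr=wbsr)
print(f"WBSR = {wbsr:.2f} L/h, RSSL = {rssl:.1f} mmol/h")  # WBSR = 1.20 L/h, RSSL = 48.0 mmol/h
```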


Subject(s)
Sodium/analysis, Sports/physiology, Sweat/chemistry, Sweating/physiology, Adolescent, Adult, Aged, Baseball/physiology, Basketball/physiology, Child, Female, Football/physiology, Humans, Male, Middle Aged, Physical Endurance/physiology, Reference Values, Retrospective Studies, Soccer/physiology, Young Adult
14.
J Med Internet Res ; 20(12): e10802, 2018 12 11.
Article in English | MEDLINE | ID: mdl-30538086

ABSTRACT

BACKGROUND: Vital signs monitoring is a universal tool for the detection of postoperative complications; however, unwell patients can be missed between traditional observation rounds. New remote monitoring technologies promise to convey the benefits of continuous monitoring to patients in general wards. OBJECTIVE: The aim of this pilot study was to evaluate whether continuous remote vital signs monitoring is a practical and acceptable way of monitoring surgical patients and to optimize the delivery of a definitive trial. METHODS: We performed a prospective, cluster-randomized, parallel-group, unblinded, controlled pilot study. Patients admitted to 2 surgical wards at a large tertiary hospital received either continuous and intermittent vital signs monitoring or intermittent monitoring alone using an early warning score system. Continuous monitoring was provided by a wireless patch, worn on the patient's chest, with data transmitted wirelessly every 2 minutes to a central monitoring station or a mobile device carried by the patient's nurse. The primary outcome measure was time to administration of antibiotics in sepsis. The secondary outcome measures included the length of hospital stay, 30-day readmission rate, mortality, and patient acceptability. RESULTS: Overall, 226 patients were randomized between January and June 2017. Of 226 patients, 140 were randomized to continuous remote monitoring and 86 to intermittent monitoring alone. On average, patients receiving continuous monitoring were administered antibiotics faster after evidence of sepsis (626 minutes, n=22, 95% CI 431.7-820.3 minutes vs 1012.8 minutes, n=12, 95% CI 425.0-1600.6 minutes), had a shorter average length of hospital stay (13.3 days, 95% CI 11.3-15.3 days vs 14.6 days, 95% CI 11.5-17.7 days), and were less likely to require readmission within 30 days of discharge (11.4%, 95% CI 6.16-16.7 vs 20.9%, 95% CI 12.3-29.5). Wide CIs suggest these differences are not statistically significant. Patients found the monitoring device to be acceptable in terms of comfort and perceived an enhanced sense of safety, despite 24% discontinuing the intervention early. CONCLUSIONS: Remote continuous vital signs monitoring on surgical wards is practical and acceptable to patients. Large, well-controlled studies in high-risk populations are required to determine whether the observed trends translate into a significant benefit for continuous over intermittent monitoring. TRIAL REGISTRATION: International Standard Randomised Controlled Trial Number ISRCTN60999823; http://www.isrctn.com /ISRCTN60999823 (Archived by WebCite at http://www.webcitation.org/73ikP6OQz).


Subject(s)
General Surgery, Monitoring, Physiologic/instrumentation, Monitoring, Physiologic/methods, Vital Signs, Wearable Electronic Devices, Wireless Technology, Adult, Female, Hospitalization, Humans, Length of Stay, Male, Patient Readmission, Pilot Projects, United Kingdom
15.
Eur J Appl Physiol ; 116(5): 867-77, 2016 May.
Article in English | MEDLINE | ID: mdl-26908041

ABSTRACT

PURPOSE: To determine effects of intensified training (IT) and carbohydrate supplementation on overreaching and immunity. METHODS: In a randomized, double-blind, crossover design, 13 male cyclists (age 25 ± 6 years, VO2max 72 ± 5 ml/kg/min) completed two 8-day periods of IT. On one occasion, participants ingested 2% carbohydrate (L-CHO) beverages before, during and after training sessions. On the second occasion, 6% carbohydrate (H-CHO) solutions were ingested before, during and after training, with the addition of 20 g of protein in the post-exercise beverage. Blood samples were collected before and immediately after incremental exercise to fatigue on days 1 and 9. RESULTS: In both trials, IT resulted in decreased peak power (375 ± 37 vs. 391 ± 37 W, P < 0.001), maximal heart rate (179 ± 8 vs. 190 ± 10 bpm, P < 0.001) and haematocrit (39 ± 2 vs. 42 ± 2%, P < 0.001), and increased plasma volume (P < 0.001). Resting plasma cortisol increased while plasma ACTH decreased following IT (P < 0.05), with no between-trial differences. Following IT, antigen-stimulated whole blood culture production of IL-1α was higher in L-CHO than H-CHO (0.70 (95% CI 0.52-0.95) pg/ml versus 0.33 (0.24-0.45) pg/ml, P < 0.01), as was production of IL-1β (9.3 (95% CI 7-10.4) pg/ml versus 6.0 (5.0-7.8) pg/ml, P < 0.05). Circulating total leukocytes (P < 0.05) and neutrophils (P < 0.01) at rest increased following IT, as did neutrophil:lymphocyte ratio and percentage CD4+ lymphocytes (P < 0.05), with no between-trial differences. CONCLUSION: IT resulted in symptoms consistent with overreaching, although immunological changes were modest. Higher carbohydrate intake was not able to alleviate physiological/immunological disturbances.


Subject(s)
Bicycling/physiology, Biomarkers/blood, Dietary Carbohydrates/immunology, Exercise/physiology, Physical Endurance/immunology, Physical Endurance/physiology, Adrenocorticotropic Hormone/blood, Adult, CD4-Positive T-Lymphocytes/immunology, Cross-Over Studies, Dietary Supplements, Double-Blind Method, Fatigue/blood, Fatigue/immunology, Humans, Hydrocortisone/blood, Interleukin-1alpha/blood, Interleukin-1beta/blood, Male
16.
Mol Ecol ; 24(9): 2194-211, 2015 May.
Article in English | MEDLINE | ID: mdl-25522096

ABSTRACT

The wild North American sunflowers Helianthus annuus and H. debilis are participants in one of the earliest identified examples of adaptive trait introgression, and the exchange is hypothesized to have triggered a range expansion in H. annuus. However, the genetic basis of the adaptive exchange has not been examined. Here, we combine quantitative trait locus (QTL) mapping with field measurements of fitness to identify candidate H. debilis QTL alleles likely to have introgressed into H. annuus to form the natural hybrid lineage H. a. texanus. Two 500-individual BC1 mapping populations were grown in central Texas, genotyped for 384 single nucleotide polymorphism (SNP) markers and then phenotyped in the field for two fitness and 22 herbivore resistance, ecophysiological, phenological and architectural traits. We identified a total of 110 QTL, including at least one QTL for 22 of the 24 traits. Over 75% of traits exhibited at least one H. debilis QTL allele that would shift the trait in the direction of the wild hybrid H. a. texanus. We identified three chromosomal regions where H. debilis alleles increased both female and male components of fitness; these regions are expected to be strongly favoured in the wild. QTL for a number of other ecophysiological, phenological and architectural traits colocalized with these three regions and are candidates for the actual traits driving adaptive shifts. G × E interactions played a modest role, with 17% of the QTL showing potentially divergent phenotypic effects between the two field sites. The candidate adaptive chromosomal regions identified here serve as explicit hypotheses for how the genetic architecture of the hybrid lineage came into existence.


Subject(s)
Genetic Fitness, Helianthus/genetics, Hybridization, Genetic, Quantitative Trait Loci, Adaptation, Biological/genetics, Alleles, Chromosome Mapping, Gene-Environment Interaction, Genetic Linkage, Genotype, Phenotype, Polymorphism, Single Nucleotide, Texas
17.
J Digit Imaging ; 28(1): 68-76, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25128321

ABSTRACT

Performing diagnoses using virtual slides can take pathologists significantly longer than with glass slides, presenting a significant barrier to the use of virtual slides in routine practice. Given the benefits in pathology workflow efficiency and safety that virtual slides promise, it is important to understand reasons for this difference and identify opportunities for improvement. The effect of display resolution on time to diagnosis with virtual slides has not previously been explored. The aim of this study was to assess the effect of display resolution on time to diagnosis with virtual slides. Nine pathologists participated in a counterbalanced crossover study, viewing axillary lymph node slides on a microscope, a 23-in 2.3-megapixel single-screen display and a three-screen 11-megapixel display consisting of three 27-in displays. Time to diagnosis and time to first target were faster on the microscope than on the single and three-screen displays. There was no significant difference between the microscope and the three-screen display in time to first target, while the time taken on the single-screen display was significantly higher than that on the microscope. The results suggest that a digital pathology workstation with an increased number of pixels may make it easier to identify where cancer is located in the initial slide overview, enabling quick location of diagnostically relevant regions of interest. However, when a comprehensive, detailed search of a slide has to be made, increased resolution may not offer any additional benefit.


Subject(s)
Computer Terminals/standards, Image Processing, Computer-Assisted/standards, Microscopy/instrumentation, Pathology, Clinical/standards, Telepathology/standards, Axilla, Cross-Over Studies, Humans, Image Processing, Computer-Assisted/methods, Lymph Nodes/pathology, Observer Variation, Telepathology/methods, Time Factors
18.
Chronobiol Int ; 41(4): 539-547, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38438323

ABSTRACT

This study aimed to quantify and compare sleep architecture before and after home and away matches in elite soccer players from the English Premier League. Across two seasons, 6 male players (age 28 ± 5 y; body mass 85.1 ± 9.5 kg; height 1.86 ± 0.09 m) wore WHOOP straps to monitor sleep across 13 matches that kicked off before 17:00 h. For each match, sleep was recorded the night before (MD-1), the night of the match (MD) and the night following the match (MD+1). Across these 3 days, total sleep time (TST), sleep efficiency (SE), sleep disturbances, wake time, light sleep, deep sleep, REM sleep, and sleep and wake onsets, alongside external load, were compared. TST was lower on MD than on MD+1 (392.9 ± 76.4 vs 459.1 ± 66.7 min, p = 0.003), but no other sleep variables differed between days (p > 0.05). TST did not differ after home (386.9 ± 75.7 min) vs. away matches (401.0 ± 78.3 min) (p = 0.475), nor did other sleep variables (p > 0.05). GPS-derived external load peaked on MD (p < 0.05). In conclusion, despite reduced TST on MD, sleep architecture was unaffected after matches played before 17:00 h, suggesting sleep quality was not significantly compromised.
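Illustrative note: the MD versus MD+1 total-sleep-time contrast above is a within-player paired comparison. Purely as a sketch of that kind of analysis — the values below are simulated and the authors' actual statistical model is not specified here — a paired t-test could look like this:

```python
# Illustrative paired comparison of total sleep time (TST) on the night of a
# match (MD) versus the following night (MD+1). Simulated minutes per match;
# not the study's data, and not necessarily the authors' statistical model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_obs = 13                                    # e.g. one observation per monitored match
tst_md = rng.normal(393, 76, n_obs)           # TST on the night of the match (min)
tst_md1 = tst_md + rng.normal(66, 40, n_obs)  # TST on the following night (min)

res = stats.ttest_rel(tst_md, tst_md1)        # paired (within-subject) t-test
print(f"mean difference = {np.mean(tst_md1 - tst_md):.1f} min, "
      f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```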


Subject(s)
Circadian Rhythm, Sleep, Soccer, Humans, Soccer/physiology, Male, Sleep/physiology, Adult, Circadian Rhythm/physiology, Athletes, Young Adult, Time Factors
19.
BMJ Qual Saf ; 33(3): 166-172, 2024 02 19.
Article in English | MEDLINE | ID: mdl-37940414

ABSTRACT

BACKGROUND: Inpatient falls are the most common safety incident reported by hospitals worldwide. Traditionally, responses have been guided by categorising patients' levels of fall risk, but multifactorial approaches are now recommended. These target individual, modifiable fall risk factors, requiring clear communication between multidisciplinary team members. Spoken communication is an important channel, but little is known about its form in this context. We aim to address this by exploring spoken communication between hospital staff about fall prevention and how this supports multifactorial fall prevention practice. METHODS: Data were collected through semistructured qualitative interviews with 50 staff and ethnographic observations of fall prevention practices (251.25 hours) on orthopaedic and older person wards in four English hospitals. Findings were analysed using a framework approach. FINDINGS: We observed staff engaging in 'multifactorial talk' to address patients' modifiable risk factors, especially during multidisciplinary meetings which were patient focused rather than risk type focused. Such communication coexisted with 'categorisation talk', which focused on patients' levels of fall risk and allocating nursing supervision to 'high risk' patients. Staff negotiated tensions between these different approaches through frequent 'hybrid talk', where, as well as categorising risks, they also discussed how to modify them. CONCLUSION: To support hospitals in implementing multifactorial, multidisciplinary fall prevention, we recommend: (1) focusing on patients' individual risk factors and actions to address them (a 'why?' rather than a 'who' approach); (2) where not possible to avoid 'high risk' categorisations, employing 'hybrid' communication which emphasises actions to modify individual risk factors, as well as risk level; (3) challenging assumptions about generic interventions to identify what individual patients need; and (4) timing meetings to enable staff from different disciplines to participate.


Subject(s)
Accidental Falls, Hospitals, Humans, Aged, Accidental Falls/prevention & control, Inpatients, Risk Factors, Communication
20.
Front Oncol ; 14: 1404860, 2024.
Article in English | MEDLINE | ID: mdl-38952557

ABSTRACT

Introduction: Evolution of a patient-reported symptom-based risk stratification system to redesign the suspected head and neck cancer (HNC) referral pathway (EVEREST-HN) will take a broad and open approach to nomenclature and symptomatology. It aims to capture and use patient-reported symptoms in a modern way to identify patients' clinical problems more effectively and to risk-stratify patients. Method: The review followed the PRISMA checklist for scoping reviews. A search was carried out using Medline, Embase and Web of Science between January 1st 2012 and October 31st 2023. All titles, abstracts and full papers were screened for eligibility, and papers were assessed for inclusion using predetermined criteria. Data were extracted pertaining to the aims, type of study, cancer type, number of patients included, and symptoms, presenting complaints or signs and symptoms. Results: The searches identified 9,331 publications; following title screening, 350 abstracts were reviewed for inclusion and 120 were considered for eligibility. Forty-eight publications met the eligibility criteria and were included in the final review, contributing data from almost 11,000 HNC patients. Twenty-one of the publications were from the UK, and most were retrospective examinations of patient records. Data were extracted and charted according to the anatomical area of the head and neck where the symptoms are subjectively and objectively found, and presented according to lay terms for symptoms, clinical terms for symptoms and the language of objective clinical findings. Discussion: Symptoms of HNC are common presenting complaints; interpreting them alongside clinical history, examination and risk factors informs a clinician's decision to refer for suspected cancer. UK head and neck specialists believe a different way of triaging referrals is needed to assess the clinical risk of an undiagnosed HNC. EVEREST-HN aims to achieve this using the patient's history of their symptoms. This review has highlighted issues in what is considered a symptom, a presenting complaint and a clinical finding or sign.
