Results 1 - 20 of 34
1.
Dimens Crit Care Nurs ; 43(4): 212-216, 2024.
Article in English | MEDLINE | ID: mdl-38787778

ABSTRACT

BACKGROUND: Clostridioides difficile (C. diff) infection causes significant morbidity for hospitalized patients. A large medical intensive care unit had an increase in C. diff infection rates. OBJECTIVES: The aim of this project was to reduce the C. diff polymerase chain reaction (PCR) test positivity rate and the rate of C. diff PCR tests ordered. Rates were compared between preintervention (July 2017 to December 2019) and postintervention (January 2021 to December 2022) timeframes. METHODS: Unit leadership led a robust quality improvement project, including use of quality improvement tools such as A3, Gemba walks, and plan-do-study-act cycles. Interventions were tailored to the barriers identified, including standardization of in-room supply carts; use of single-packaged oral care kits; new enteric precautions signage; education to staff, providers, and visitors; scripting for patients and visitors; and use of a C. diff testing algorithm. Statistical process control charts were used to assess for improvements. RESULTS: The average rate of C. diff PCR test positivity decreased from 34.9 PCR positive tests per 10 000 patient days to 12.3 in the postintervention period, a 66% reduction. The average rate of PCR tests ordered was 28 per 1000 patient days in the preintervention period; this decreased 44% to 15.7 in the postintervention period. DISCUSSION: We found clinically significant improvements in the rate of C. diff infection and PCR tests ordered as a result of implementing tailored interventions in a large medical intensive care unit. Other units should consider using robust quality improvement methods and tools to conduct similar initiatives to reduce patient harm and improve care and outcomes.
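
The rate and percent-reduction arithmetic behind these results, and the control limits behind a typical statistical process control (u-)chart, can be sketched as follows; the monthly counts are hypothetical placeholders, not study data.

```python
# Minimal sketch of the rate arithmetic and of u-chart control limits used in
# statistical process control (SPC). Monthly counts are hypothetical.

positive_tests = 14          # hypothetical C. diff PCR-positive tests in a month
patient_days = 4200          # hypothetical patient days in the same month

rate_per_10k = positive_tests / patient_days * 10_000
print(f"Positivity rate: {rate_per_10k:.1f} per 10,000 patient days")

# Percent reduction between the reported pre- and post-intervention averages.
pre, post = 34.9, 12.3       # per 10,000 patient days (values from the abstract)
print(f"Reduction: {(pre - post) / pre:.0%} "
      "(the abstract reports 66%, presumably from unrounded rates)")

# u-chart centre line and 3-sigma limits for a month with n patient days of
# exposure, a common SPC chart for count rates.
u_bar = pre / 10_000         # baseline events per patient day
n = patient_days
ucl = u_bar + 3 * (u_bar / n) ** 0.5
lcl = max(0.0, u_bar - 3 * (u_bar / n) ** 0.5)
print(f"UCL: {ucl * 10_000:.1f}, LCL: {lcl * 10_000:.1f} per 10,000 patient days")
```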


Subject(s)
Clostridium Infections , Cross Infection , Intensive Care Units , Quality Improvement , Humans , Clostridium Infections/prevention & control , Clostridium Infections/epidemiology , Clostridium Infections/diagnosis , Cross Infection/prevention & control , Clostridioides difficile/isolation & purification , Polymerase Chain Reaction , Infection Control
2.
Nutrients ; 16(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38612948

ABSTRACT

Although effective communication is fundamental to nutrition and dietetics practice, providing novice practitioners with efficacious training remains a challenge. Traditionally, human simulated patients have been utilised in health professions training; however, their use and development can be cost- and time-prohibitive. Presented here is a platform the authors have created that allows students to interact with virtual simulated patients to practise and develop their communication skills. Leveraging the structured incorporation of large language models, it is designed by pedagogical content experts and comprises individual cases based on curricula and student needs. It is targeted towards the practice of rapport building, asking of difficult questions, paraphrasing and mistake making, all of which are essential to learning. Students appreciate the individualised and immediate feedback based on validated communication tools that encourage self-reflection and improvement. Early trials have shown students are enthusiastic about this platform; however, further investigations are required to determine its impact as an experiential communication skills tool. This platform harnesses the power of artificial intelligence to bridge the gap between theory and practice in communication skills training, requiring significantly reduced costs and resources compared with traditional simulated patient encounters.


Subject(s)
Dietetics , Humans , Artificial Intelligence , Educational Status , Nutritional Status , Communication
3.
Nutr Diet ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38409526

ABSTRACT

AIM: Objective structured clinical examinations have long been used in dietetics education. This observational study aims to describe the development, deployment, feasibility and validity of assessment using an oral interview in place of traditional objective structured clinical examinations, and to determine the ability of this assessment to identify students who are either not ready for placement or may require early support and/or remediation. METHODS: Student assessment data were collected over a two-and-a-half-year period and used to test the predictive ability of an oral interview to determine dietetic placement outcomes and highlight a need for early remediation. Descriptive statistics as well as a between-group one-way ANOVA were used to describe results. RESULTS: A total of 169 students participated in the oral interview and subsequent medical nutrition therapy placement over the study period. Significant differences in oral interview score were seen between students who passed placement and students who passed with remediation or those who failed. Oral interview performance was able to predict placement outcome, yet required fewer resources than traditional objective structured clinical examinations. CONCLUSION: An oral interview may provide the same utility as the objective structured clinical examination in dietetics education.

4.
Nutr Rev ; 2024 Jan 14.
Article in English | MEDLINE | ID: mdl-38219216

ABSTRACT

BACKGROUND: Assessment for vitamin C deficiency (VCD) is rarely undertaken in an acute hospital setting in high-income countries. However, with growing interest in VCD in community settings, there is emerging evidence investigating the prevalence and impact of VCD during hospitalization. OBJECTIVES: In this scoping review, the prevalence of VCD in adult hospitalized patients is explored, patient characteristics are described, and risk factors and clinical outcomes associated with VCD are identified. METHODS: A systematic scoping review was conducted in accordance with the PRISMA-ScR framework. The Ovid MEDLINE, Ovid Embase, Scopus, CINAHL Plus, Allied and Complementary Medicine Database, and the Cochrane Library databases were searched for interventional, comparative, and case-series studies that met eligibility criteria, including adult hospital inpatients in high-income countries, as defined by the Organization for Economic Co-operation and Development, that reported VCD prevalence using World Health Organization reference standards. These standards define VCD as a plasma or serum vitamin C level <11.4 µmol/L, a whole-blood level <17 µmol/L, or a leukocyte level <57 nmol/10⁸ cells. RESULTS: Twenty-three articles were included, representing 22 studies. The cumulative prevalence of VCD was 27.7% (n = 2494; 95% confidence interval [CI], 21.3-34.0). High prevalence of VCD was observed in patients with severe acute illness and poor nutritional status. Scurvy was present in 48% to 62% of patients with VCD assessed in 2 studies (n = 71). Being retired (P = 0.015) and using excessive amounts of alcohol and tobacco (P = 0.0003) were independent risk factors for VCD (n = 184). Age was not conclusively associated with VCD (n = 631). Two studies examined nutrition associations (n = 309); results were inconsistent. Clinical outcomes for VCD included increased risk of frailty (adjusted odds ratio, 4.3; 95% CI, 1.33-13.86; P = 0.015) and cognitive impairment (adjusted odds ratio, 2.93; 95% CI, 1.05-8.19; P = 0.031) (n = 160). CONCLUSIONS: VCD is a nutritional challenge facing the healthcare systems of high-income countries. Research focused on early identification and treatment of patients with VCD is warranted. SYSTEMATIC REVIEW REGISTRATION: Open Science Framework (https://doi.org/10.17605/OSF.IO/AJGHX).
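
As a rough illustration of the reference standards and prevalence figures above, the sketch below converts the WHO plasma cut-off between common units and computes a simple binomial 95% CI for a single hypothetical study; the pooled CI reported in the review combines 22 studies and reflects between-study variation, so it is wider than any single-study interval.

```python
# Sketch: (1) converting the WHO deficiency cut-off between units, and
# (2) a Wald 95% CI for VCD prevalence in one hypothetical study.
import math

MOLAR_MASS_ASCORBATE = 176.12            # g/mol (ascorbic acid)
cutoff_umol_l = 11.4                     # WHO plasma/serum deficiency cut-off
cutoff_mg_dl = cutoff_umol_l * MOLAR_MASS_ASCORBATE / 10_000
print(f"{cutoff_umol_l} umol/L ~= {cutoff_mg_dl:.2f} mg/dL")

deficient, n = 42, 150                   # hypothetical single-study counts
p = deficient / n
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"Prevalence {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```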

5.
Nutr Diet ; 80(2): 173-182, 2023 04.
Article in English | MEDLINE | ID: mdl-36916070

ABSTRACT

AIM: To determine the safety, operational feasibility and environmental impact of collecting unopened non-perishable packaged hospital food items for reuse. METHODS: This pilot study tested packaged foods from an Australian hospital for bacterial species, and compared this to acceptable safe limits. A waste management strategy was trialled (n = 10 days) where non-perishable packaged foods returning to the hospital kitchen were collected off trays, and the time taken to do this and the number and weight of packaged foods collected were measured. Data were extrapolated to estimate the greenhouse gases produced if they were disposed of in a landfill. RESULTS: Microbiological testing (n = 66 samples) found bacteria (total colony forming units and five common species) on packaging appeared to be within acceptable limits. It took an average of 5.1 ± 10.1 sec/tray to remove packaged food items from trays returning to the kitchen, and an average of 1768 ± 19 packaged food items were collected per day, equating to 6613 ± 78 kg/year of waste which would produce 19 tonnes/year of greenhouse gases in landfill. CONCLUSIONS: A substantial volume of food items can be collected from trays without significantly disrupting current processes. Collecting and reusing or donating non-perishable packaged food items that are served but not used within hospitals is a potential strategy to divert food waste from landfill. This pilot study provides initial data addressing infection control and feasibility concerns. While food packages in this hospital appear safe, further research with larger samples and testing additional microbial species is recommended.
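
The extrapolation in these results can be reproduced with back-of-envelope arithmetic; the sketch below derives the per-item mass and emission factor implied by the reported figures, rather than quoting independent constants.

```python
# Back-of-envelope sketch of the waste extrapolation reported in the abstract.
# Daily count, annual mass and greenhouse-gas totals are taken from the
# abstract; the per-item mass and emission factor are simply what those
# figures imply, not independently sourced constants.

items_per_day = 1768          # packaged items collected per day (reported)
annual_mass_kg = 6613         # kg collected per year (reported)
ghg_tonnes = 19               # tonnes CO2e per year in landfill (reported)

items_per_year = items_per_day * 365
mean_item_mass_g = annual_mass_kg * 1000 / items_per_year
implied_factor = ghg_tonnes * 1000 / annual_mass_kg   # kg CO2e per kg of waste

print(f"~{items_per_year:,} items/year, ~{mean_item_mass_g:.0f} g per item")
print(f"Implied emission factor: ~{implied_factor:.1f} kg CO2e per kg landfilled")
```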


Subject(s)
Food , Refuse Disposal , Humans , Food Packaging , Hospitals , Pilot Projects , Australia
6.
Tob Control ; 32(e1): e125-e129, 2023 04.
Article in English | MEDLINE | ID: mdl-35064014

ABSTRACT

INTRODUCTION: Flavoured tobacco control policy exemptions and electronic cigarette products may contribute to increased youth access and tobacco use disparities. METHODS: We assessed public support among California Central Valley residents for four policies to regulate flavoured tobacco products and e-cigarettes. The probability-based, multimode survey was conducted with English-speaking and Spanish-speaking registered voters (n=845) across 11 counties between 13 and 18 August 2020. Weighted logistic regression analyses measured odds of policy support, adjusting for predictor variables (attitudes and beliefs) and covariates. RESULTS: The weighted sample was 50% female and predominantly Latino (30%) or non-Hispanic white (46%); 26% had a high school education or less, and 22% an annual household income
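
A minimal sketch of the kind of weighted logistic regression described in the methods follows; the predictors, outcomes and weights are illustrative placeholders rather than the study's dataset, and sklearn's sample_weight is used as a simple stand-in for full design-based survey inference.

```python
# Illustrative survey-weighted logistic regression of policy support.
# Data and variable meanings are placeholders, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 845
X = np.column_stack([
    rng.integers(0, 2, n),          # e.g. belief that flavours target youth (0/1)
    rng.integers(0, 2, n),          # e.g. tobacco use in the household (0/1)
])
support = rng.integers(0, 2, n)     # supports the policy (0/1)
weights = rng.uniform(0.5, 2.0, n)  # post-stratification survey weights

model = LogisticRegression().fit(X, support, sample_weight=weights)
odds_ratios = np.exp(model.coef_).ravel()
print("Adjusted odds ratios:", np.round(odds_ratios, 2))
```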

Subject(s)
Electronic Nicotine Delivery Systems , Tobacco Products , Vaping , Adolescent , Humans , Female , Male , Nicotiana , Vaping/epidemiology , Policy , California/epidemiology , Flavoring Agents
7.
J Patient Saf ; 18(4): 302-309, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35044999

ABSTRACT

OBJECTIVES: The aims of the study were to evaluate whether in situ (on-site) simulation training is associated with increased telemedicine use for patients presenting to rural emergency departments (EDs) with severe sepsis and septic shock and to evaluate the association between simulation training and telehealth with acute sepsis bundle (SEP-1) compliance and mortality. METHODS: This was a quasi-experimental study of patients presenting to 2 rural EDs with severe sepsis and/or septic shock before and after rollout of in situ simulation training that included education on sepsis management and the use of telehealth. Unadjusted and adjusted analyses were conducted to describe the association of simulation training with sepsis process of care markers and with mortality. RESULTS: The study included 1753 patients from 2 rural EDs: 629 presented before training and 1124 after training. There were no differences in patient characteristics between the 2 groups. Compliance with several SEP-1 bundle components improved after training: antibiotics within 3 hours, intravenous fluid administration, repeat lactic acid assessment, and vasopressor administration. The use of telemedicine increased from 2% to 5% after training. Use of telemedicine was associated with increases in repeat lactic acid assessment and reassessment for septic shock. We did not demonstrate an improvement in mortality across either of the 2 group comparisons. CONCLUSIONS: We demonstrate an association between simulation and improved care delivery. Implementing an in situ simulation curriculum in rural EDs was associated with a small increase in the use of telemedicine and improvements in sepsis process of care markers but did not demonstrate improvement in mortality. The small increase in telemedicine limited conclusions on its impact.


Subject(s)
Sepsis , Shock, Septic , Emergency Service, Hospital , Guideline Adherence , Hospital Mortality , Humans , Lactic Acid , Sepsis/therapy , Shock, Septic/therapy , Technology
8.
J Am Coll Health ; 70(7): 2099-2107, 2022 10.
Article in English | MEDLINE | ID: mdl-33258737

ABSTRACT

OBJECTIVE: To increase campus-wide wellness for student service members/veterans (SSM/Vs), student services professionals, healthcare providers, and faculty collaborated to implement the Social Ecological Framework (SEF) over a three-year project. PARTICIPANTS: One thousand six hundred and seventy-eight SSM/Vs enrolled at a medium-sized doctoral-granting institution with high research activity (R2). SSM/Vs were directly and indirectly impacted through a series of initiatives, including stigma reduction efforts, wellness promotion, faculty training, therapeutic services, and peer-advising. METHODS: Data collection included student success measures such as retention, student satisfaction/feedback, peer-advising meetings, psychotherapy sessions, TBI screenings, and growth measures from a mental health stigma scale. RESULTS: A significant increase in therapy sessions conducted at the Wellness Center, increased faculty trainings, new and strengthened partnerships, and an increase in SSM/V retention were observed. CONCLUSIONS: Results suggest that collaborative efforts applying the SEF can create improved educational conditions and outcomes for SSM/Vs. A review of SSM/V wellness literature and suggestions for other campuses are offered.


Subject(s)
Veterans , Faculty , Humans , Peer Group , Students/psychology , Universities , Veterans/psychology
9.
Adv Simul (Lond) ; 5: 25, 2020.
Article in English | MEDLINE | ID: mdl-32999737

ABSTRACT

BACKGROUND: New technologies for clinical staff are typically introduced via an "in-service" that focuses on knowledge and technical skill. Successful adoption of new healthcare technologies is influenced by multiple other factors as described by the Consolidated Framework for Implementation Research (CFIR). A simulation-based introduction to new technologies provides opportunity to intentionally address specific factors that influence adoption. METHODS: The new technology proposed for adoption was a telehealth cart that provided direct video communication with electronic intensive care unit (eICU) staff for a rural Emergency Department (ED). A novel 3-Act-3-Debrief in situ simulation structure was created to target predictive constructs from the CFIR and connect debriefing to specific workflows. The structure and content of the simulation in relation to the framework are described. Participants completed surveys pre-simulation/post-simulation to measure change in their readiness to adopt the new technology. RESULTS: The scenario was designed and pilot tested before implementation at two rural EDs. There were 60 interprofessional participants across the 2 sites, with 58 pre-simulation and 59 post-simulation surveys completed. The post-simulation mean ratings for each readiness measure (feasibility, quality, resource availability, role clarity, staff receptiveness, and tech usability) increased significantly as a result of the simulation experience. CONCLUSIONS: A novel 3-stage simulation-debriefing structure positively targets factors influencing the adoption of new healthcare technologies.

10.
AEM Educ Train ; 4(1): 36-42, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31989069

ABSTRACT

INTRODUCTION: Traditional simulation debriefing is both time- and resource-intensive. Shifting the degree of primary learning responsibility from the faculty to the learner through self-guided learning has received greater attention as a means of reducing this resource intensity. The aim of the study was to determine if video-assisted self-debriefing, as a form of self-guided learning, would have equivalent learning outcomes compared to standard debriefing. METHODS: This randomized cohort study consisting of 49 PGY-1 to -3 emergency medicine residents compared performance after video self-assessment utilizing an observer checklist versus standard debriefing for simulated emergency department procedural sedation (EDPS). The primary outcome measure was performance on the second EDPS scenario. RESULTS: Independent-samples t-test found that both control (standard debrief) and intervention (video self-assessment) groups demonstrated significantly increased scores on Scenario 2 (standard-t(40) = 2.20, p < 0.05; video-t(45) = 3.88, p < 0.05). There was a large and significant positive correlation between faculty and resident self-evaluation (r = 0.70, p < 0.05). There was no significant difference between faculty and residents self-assessment mean scores (t(24) = 1.90, p = 0.07). CONCLUSIONS: Residents receiving feedback on their performance via video-assisted self-debriefing improved their performance in simulated EDPS to the same degree as with standard faculty debriefing. Video-assisted self-debriefing is a promising avenue for leveraging the benefits of simulation-based training with reduced resource requirements.

11.
Simul Healthc ; 14(2): 129-136, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30730469

ABSTRACT

INTRODUCTION: With the growth of telehealth, simulation personnel will be called upon to support training that integrates these new technologies and processes. We sought to integrate remote telehealth electronic intensive care unit (eICU) personnel into in situ simulations with rural emergency department (ED) care teams. We describe how we overcame technical challenges of creating shared awareness of the patient's condition and the care team's progress among those executing the simulation, the care team, and the eICU. METHODS: The objective of the simulations was to introduce telehealth technology and new processes of engaging the eICU via telehealth during sepsis care in 2 rural EDs. Scenario development included experts in sepsis, telehealth, and emergency medicine. We describe the operational systems challenges, alternatives considered, and solutions used. Participants completed surveys on self-confidence presimulation/postsimulation in using telehealth and in managing patients with sepsis (1-10 Likert scale, with 10 "completely confident"). Pre-post responses were compared by two-tailed paired t test. RESULTS: We successfully engaged the staff of two EDs: 42 nurses, 9 physicians or advanced practice providers, and 9 technicians (N = 60). We used a shared in situ simulation clinical actions observational checklist, created within an off-the-shelf survey software program, completed during the simulations by an on-site observer, and shared with the eICU team via teleconferencing software, to message and cue eICU nurse engagement. The eICU nurse also participated in debriefing via the telehealth video system with successful simulation engagement. These solutions avoided interfering with real ED or eICU operations. The postsimulation mean ± SD ratings of confidence using telehealth increased from 5.3 ± 2.9 to 8.9 ± 1.1 (Δ3.5, P < 0.05) and in managing patients with sepsis increased from 7.1 ± 2.5 to 8.9 ± 1.1 (Δ1.8, P < 0.05). CONCLUSIONS: We created shared awareness between remote eICU personnel and in situ simulations in rural EDs via a low-cost method using survey software combined with teleconferencing methods.
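
The pre/post confidence comparison described here amounts to a two-tailed paired t test; a minimal sketch with placeholder confidence scores follows.

```python
# Sketch of a two-tailed paired t test on pre/post simulation confidence
# ratings, as described in the abstract. Scores below are illustrative
# placeholders on the 1-10 scale, not study data.
import numpy as np
from scipy import stats

pre = np.array([5, 3, 7, 6, 4, 8, 5, 6, 2, 7], dtype=float)    # pre-simulation
post = np.array([8, 7, 9, 9, 8, 10, 9, 8, 7, 9], dtype=float)  # post-simulation

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```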


Subject(s)
Emergency Medicine/education , Emergency Service, Hospital/organization & administration , Hospitals, Rural/organization & administration , Patient Care Team/organization & administration , Simulation Training/organization & administration , Telemedicine/organization & administration , Clinical Competence , Health Personnel/education , Humans , Sepsis/therapy , Simulation Training/economics
12.
Arch Dis Child Fetal Neonatal Ed ; 104(2): F182-F186, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29588296

ABSTRACT

OBJECTIVE: To predict length of stay in neonatal care for all admissions of very preterm singleton babies. SETTING: All neonatal units in England. PATIENTS: Singleton babies born at 24-31 weeks gestational age from 2011 to 2014. Data were extracted from the National Neonatal Research Database. METHODS: Competing risks methods were used to investigate the competing outcomes of death in neonatal care or discharge from the neonatal unit. The occurrence of one event prevents the other from occurring. This approach can be used to estimate the percentage of babies alive, or who have been discharged, over time. RESULTS: A total of 20 571 very preterm babies were included. In the competing risks model, gestational age was adjusted for as a time-varying covariate, allowing the difference between weeks of gestational age to vary over time. The predicted percentages of death or discharge from the neonatal unit were estimated and presented graphically by week of gestational age. From these percentages, estimates of length of stay are provided as the number of days following birth and corrected gestational age at discharge. CONCLUSIONS: These results can be used in the counselling of parents about length of stay and the risk of mortality.
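
A minimal, unadjusted sketch of the competing-risks idea follows: a cumulative incidence estimator that treats death and discharge as competing events, applied to made-up data and omitting the gestational-age adjustment used in the study.

```python
# Aalen-Johansen-style cumulative incidence on hypothetical data: for each day
# after birth, estimate the probability that a baby has been discharged alive
# versus died in neonatal care, treating the two outcomes as competing events.
import numpy as np

# Hypothetical data: days in neonatal care and outcome
# (0 = censored, 1 = discharged alive, 2 = died in unit).
time = np.array([12, 30, 45, 45, 60, 75, 90, 20, 55, 110])
event = np.array([1, 1, 2, 1, 1, 1, 2, 0, 1, 1])

def cumulative_incidence(time, event, cause):
    """P(event of `cause` has occurred by t) in the presence of competing risks."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n_at_risk = len(time)
    surv = 1.0        # all-cause Kaplan-Meier survival just before t
    cif = 0.0
    out = []
    for t in np.unique(time):
        at_t = time == t
        d_all = np.sum(at_t & (event > 0))         # any event at t
        d_cause = np.sum(at_t & (event == cause))  # events of this cause at t
        if n_at_risk > 0:
            cif += surv * d_cause / n_at_risk
            surv *= 1 - d_all / n_at_risk
        n_at_risk -= np.sum(at_t)                  # events and censorings leave the risk set
        out.append((int(t), cif))
    return out

for t, p in cumulative_incidence(time, event, cause=1):
    print(f"day {t:3d}: P(discharged) = {p:.2f}")
```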


Subject(s)
Infant, Extremely Premature , Length of Stay/statistics & numerical data , England/epidemiology , Gestational Age , Humans , Infant , Infant Mortality , Infant, Newborn , Risk Factors
13.
J Chiropr Med ; 18(4): 305-310, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32952476

ABSTRACT

OBJECTIVE: The purpose of this study was to determine what the peer-reviewed literature says about the clinical applications, therapeutic dosages, bioavailability, efficacy, and safety of monolaurin as a dietary supplement. METHODS: This was a narrative review using the PubMed database and the terms "monolaurin" and its chemical synonyms. Commercial websites that sell monolaurin were also searched for pertinent references. The reference sections of the newer articles were searched for any other relevant articles. Consensus was reached among the authors as to what articles had clinical relevance. RESULTS: Twenty-eight articles were found that appeared to address the clinical use of monolaurin. CONCLUSION: There are many articles that address the antimicrobial effects of monolaurin in vitro. Only 3 peer-reviewed papers that evidence in vivo antimicrobial effects of monolaurin in humans were located, and these were only for intravaginal and intraoral (that is, topical) use. No peer-reviewed evidence was found for the clinical use of monolaurin as a human dietary supplement other than as a nutrient.

14.
Acad Emerg Med ; 25(2): 205-220, 2018 02.
Article in English | MEDLINE | ID: mdl-28833892

ABSTRACT

OBJECTIVES: All residency programs in the United States are required to report their residents' progress on the milestones to the Accreditation Council for Graduate Medical Education (ACGME) biannually. Since the development and institution of this competency-based assessment framework, residency programs have been attempting to ascertain the best ways to assess resident performance on these metrics. Simulation was recommended by the ACGME as one method of assessment for many of the milestone subcompetencies. We developed three simulation scenarios with scenario-specific milestone-based assessment tools. We aimed to gather validity evidence for this tool. METHODS: We conducted a prospective observational study to investigate the validity evidence for three mannequin-based simulation scenarios for assessing individual residents on emergency medicine (EM) milestones. The subcompetencies (i.e., patient care [PC]1, PC2, PC3) included were identified via a modified Delphi technique using a group of experienced EM simulationists. The scenario-specific checklist (CL) items were designed based on the individual milestone items within each EM subcompetency chosen for assessment and reviewed by experienced EM simulationists. Two independent live raters who were EM faculty at the respective study sites scored each scenario following brief rater training. The inter-rater reliability (IRR) of the assessment tool was determined by measuring intraclass correlation coefficient (ICC) for the sum of the CL items as well as the global rating scales (GRSs) for each scenario. Comparing GRS and CL scores between various postgraduate year (PGY) levels was performed with analysis of variance. RESULTS: Eight subcompetencies were chosen to assess with three simulation cases, using 118 subjects. Evidence of test content, internal structure, response process, and relations with other variables was found. The ICCs for the sum of the CL items and the GRSs were >0.8 for all cases, with one exception (clinical management GRS = 0.74 in sepsis case). The sum of CL items and GRSs (p < 0.05) discriminated between PGY levels on all cases. However, when the specific CL items were mapped back to milestones in various proficiency levels, the milestones in the higher proficiency levels (level 3 [L3] and 4 [L4]) did not often discriminate between various PGY levels. L3 milestone items discriminated between PGY levels on five of the 12 occasions they were assessed, and L4 items discriminated on only two of the 12 times they were assessed. CONCLUSION: Three simulation cases with scenario-specific assessment tools allowed evaluation of EM residents on proficiency L1 to L4 within eight of the EM milestone subcompetencies. Evidence of test content, internal structure, response process, and relations with other variables was found. Good to excellent IRR and the ability to discriminate between various PGY levels were found for both the sum of CL items and the GRSs.
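
The between-level comparison reported here is a one-way ANOVA of checklist and global rating scores across PGY levels; a minimal sketch with placeholder scores follows.

```python
# One-way ANOVA of checklist sum scores across PGY levels, as in the
# abstract's analysis. Scores are illustrative placeholders, not study data.
from scipy import stats

pgy1 = [10, 12, 11, 13, 9, 12]   # hypothetical checklist sums, PGY-1
pgy2 = [13, 14, 12, 15, 14, 13]  # hypothetical checklist sums, PGY-2
pgy3 = [16, 15, 17, 14, 16, 18]  # hypothetical checklist sums, PGY-3

f_stat, p_value = stats.f_oneway(pgy1, pgy2, pgy3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```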


Subject(s)
Education, Medical, Graduate/standards , Educational Measurement/methods , Emergency Medicine/education , Internship and Residency/standards , Accreditation/standards , Benchmarking , Clinical Competence/standards , Female , Humans , Manikins , Prospective Studies , Reproducibility of Results , Simulation Training/methods , United States
15.
Med Teach ; 39(9): 967-974, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28562135

ABSTRACT

INTRODUCTION: During residency, some trainees require the identification and remediation of deficiencies to achieve the knowledge, skills and attitudes necessary for independent practice. Given the limited published frameworks for remediation, we characterize remediation from the perspective of educators and propose a holistic framework to guide the approach to remediation. METHODS: We conducted semistructured focus groups to: explore methods for identifying struggling residents; categorize common domains of struggle; describe personal factors that contribute to difficulties; define remediation interventions and understand what constitutes successful completion. Data were analyzed through conventional content analysis. RESULTS: Nineteen physicians across multiple specialties and institutions participated in seven focus groups. Thirteen categories emerged around remediation. Some themes addressed practical components of remediation, while others reflected barriers to the process and the impact of remediation on the resident and program. The themes were used to inform development of a novel holistic framework for remediation. CONCLUSIONS: The approach to remediation requires comprehensive identification of individual factors impacting performance. The intervention should not only include a tailored learning plan but also address confounders that impact likelihood of remediation success. Our holistic framework intends to guide educators creating remediation plans to ensure all domains are addressed.


Subject(s)
Clinical Competence , Faculty, Medical , Internship and Residency , Physicians , Focus Groups , Humans , Qualitative Research
16.
BMJ Open ; 6(10): e010466, 2016 10 18.
Article in English | MEDLINE | ID: mdl-27797978

ABSTRACT

OBJECTIVE: In the UK, 1 in 10 babies require specialist neonatal care. This care can last from hours to months depending on the need of the baby. The increasing survival of very preterm babies has increased neonatal care resource use. Evidence from multiple studies is crucial to identify factors which may be important for predicting length of stay (LOS). The ability to predict LOS is vital for resource planning, decision-making and parent counselling. The objective of this review was to identify which factors are important to consider when predicting LOS in the neonatal unit. DESIGN: A systematic review was undertaken which searched MEDLINE, EMBASE and Scopus for papers from 1994 to 2016 (May) for research investigating prediction of neonatal LOS. Strict inclusion and exclusion criteria were applied. Quality of each study was discussed, but not used as a reason for exclusion from the review. MAIN OUTCOME MEASURE: Prediction of LOS in the neonatal unit. RESULTS: 9 studies were identified which investigated the prediction of neonatal LOS indicating a lack of evidence in the area. Inherent factors, particularly birth weight, sex and gestational age allow for a simple and objective prediction of LOS, which can be calculated on the first day of life. However, other early occurring factors may well also be important and estimates may need revising throughout the baby's stay in hospital. CONCLUSIONS: Predicting LOS is vital to aid the commissioning of services and to help clinicians in their counselling of parents. The lack of evidence in this area indicates a need for larger studies to investigate methods of accurately predicting LOS.


Subject(s)
Counseling/methods , Intensive Care Units, Neonatal , Length of Stay/statistics & numerical data , Parents/psychology , Respiration, Artificial/statistics & numerical data , Adult , Birth Weight , England/epidemiology , Female , Health Care Surveys , Humans , Infant, Newborn , Infant, Premature , Infant, Very Low Birth Weight , Intensive Care Units, Neonatal/statistics & numerical data , Length of Stay/trends , Male , Pregnancy , Prognosis , Risk Factors
17.
PLoS One ; 11(10): e0165202, 2016.
Article in English | MEDLINE | ID: mdl-27764232

ABSTRACT

Modelling length of stay in neonatal care is vital to inform service planning and the counselling of parents. Preterm babies, at the highest risk of mortality, can have long stays in neonatal care and require high resource use. Previous work has incorporated babies that die into length of stay estimates, but this still overlooks the levels of care required during their stay. This work incorporates all babies, and the levels of care they require, into length of stay estimates. Data were obtained from the National Neonatal Research Database for singleton babies born at 24-31 weeks gestational age discharged from a neonatal unit in England from 2011 to 2014. A Cox multistate model, adjusted for gestational age, was used to consider a baby's two competing outcomes: death or discharge from neonatal care, whilst also considering the different levels of care required: intensive care; high dependency care and special care. The probabilities of receiving each of the levels of care, or having died or been discharged from neonatal care are presented graphically overall and adjusted for gestational age. Stacked predicted probabilities produced for each week of gestational age provide a useful tool for clinicians when counselling parents about length of stay and for commissioners when considering allocation of resources. Multistate modelling provides a useful method for describing the entire neonatal care pathway, where rates of in-unit mortality can be high. For a healthcare service focussed on costs, it is important to consider all babies that contribute towards workload, and the levels of care they require.


Subject(s)
Intensive Care, Neonatal , Models, Theoretical , Databases, Factual , Female , Gestational Age , Humans , Infant, Newborn , Infant, Premature , Male , Proportional Hazards Models
18.
MMWR Morb Mortal Wkly Rep ; 64(28): 771-2, 2015 Jul 24.
Article in English | MEDLINE | ID: mdl-26203632

ABSTRACT

In March 2014, the Colorado Department of Public Health and Environment (CDPHE) learned of the death of a man aged 19 years after consuming an edible marijuana product. CDPHE reviewed autopsy and police reports to assess factors associated with his death and to guide prevention efforts. The decedent's friend, aged 23 years, had purchased marijuana cookies and provided one to the decedent. A police report indicated that initially the decedent ate only a single piece of his cookie, as directed by the sales clerk. Approximately 30-60 minutes later, not feeling any effects, he consumed the remainder of the cookie. During the next 2 hours, he reportedly exhibited erratic speech and hostile behaviors. Approximately 3.5 hours after initial ingestion, and 2.5 hours after consuming the remainder of the cookie, he jumped off a fourth floor balcony and died from trauma. The autopsy, performed 29 hours after time of death, found marijuana intoxication as a chief contributing factor. Quantitative toxicologic analyses for drugs of abuse, synthetic cannabinoid, and cathinones ("bath salts") were performed on chest cavity blood by gas chromatography and mass spectrometry. The only confirmed findings were cannabinoids (7.2 ng/mL delta-9 tetrahydrocannabinol [THC] and 49 ng/mL delta-9 carboxy-THC, an inactive marijuana metabolite). The legal whole blood limit of delta-9 THC for driving a vehicle in Colorado is 5.0 ng/mL. This was the first reported death in Colorado linked to marijuana consumption without evidence of polysubstance use since the state approved recreational use of marijuana in 2012.


Subject(s)
Cannabis/toxicity , Eating , Colorado , Fatal Outcome , Humans , Male , Young Adult
20.
Arch Dis Child Fetal Neonatal Ed ; 99(6): F505-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25096292

ABSTRACT

Healthcare improvement is synonymous with quality measurement and involves assessing how well care is delivered and the results achieved in comparison with a desired standard or standards. Quality measurement has become part of routine healthcare in the developed world as a means of detecting inadequate quality performance which, if not dealt with promptly, can have far-reaching consequences as seen in recent well publicised UK examples. The growth in the use of quality measurement has led to increasing attention on the processes and measures employed, in particular how measures are chosen, reported and used. This has included consideration of the attributes that make a good quality measure, using testing protocols to ensure that any potential measure is fit for purpose, and using summative reporting frameworks. All of these tools are already used in some specialties outside neonatal care. This article explores this wider experience and considers how the lessons learnt might helpfully be applied to neonatal care.


Subject(s)
Perinatal Care/standards , Quality of Health Care , Delivery of Health Care/standards , Humans , Infant, Newborn , Quality Improvement , Quality Indicators, Health Care , United Kingdom