1.
Front Public Health ; 12: 1257163, 2024.
Article in English | MEDLINE | ID: mdl-38362210

ABSTRACT

Importance: The United States (US) Medicare claims files are valuable sources of national healthcare utilization data with over 45 million beneficiaries each year. Due to their massive size and the costs involved in obtaining the data, a method of randomly drawing a representative sample for retrospective cohort studies with multi-year follow-up is not well-documented. Objective: To present a method to construct longitudinal patient samples from Medicare claims files that are representative of Medicare populations each year. Design: Retrospective cohort and cross-sectional designs. Participants: US Medicare beneficiaries with diabetes over a 10-year period. Methods: Medicare Master Beneficiary Summary Files were used to identify eligible patients for each year over a 10-year period. We targeted a sample of ~900,000 patients per year. The first year's sample was stratified by county and race/ethnicity (white vs. minority) and targeted at least 250 patients in each stratum, with the remaining sample allocated in proportion to county population size and with oversampling of minorities. Patients who were alive, did not move between counties, and stayed enrolled in Medicare fee-for-service (FFS) were retained in the sample for subsequent years. Non-retained patients (those who died or were dropped from Medicare) were replaced with a sample of patients in their first year of Medicare FFS eligibility or patients who moved into a sampled county during the previous year. Results: The resulting sample contains an average of 899,266 ± 408 patients each year over the 10-year study period and closely matches population demographics and chronic conditions. For all years in the sample, the weighted average sample age and the population average age differ by <0.01 years; the proportion white is within 0.01%; and the proportion female is within 0.08%. Rates of 21 comorbidities estimated from the samples for all 10 years were within 0.12% of the population rates.
Longitudinal cohorts based on samples also closely resembled the cohorts based on populations remaining after 5- and 10-year follow-up. Conclusions and relevance: This sampling strategy can be easily adapted to other projects that require random samples of Medicare beneficiaries or other national claims files for longitudinal follow-up with possible oversampling of sub-populations.
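The retain-and-replace sampling procedure described in the Methods above can be sketched as follows (a minimal illustration under stated assumptions; all function and variable names are hypothetical, not taken from the study's code):

```python
import random

def draw_year1_sample(strata, total_target, min_per_stratum=250):
    """Year-1 allocation: a floor of `min_per_stratum` patients per
    (county x race/ethnicity) stratum, with the remainder spread in
    proportion to stratum population size."""
    floor = {s: min(min_per_stratum, len(p)) for s, p in strata.items()}
    remaining = total_target - sum(floor.values())
    pop_total = sum(len(p) for p in strata.values())
    sample = {}
    for s, patients in strata.items():
        extra = round(remaining * len(patients) / pop_total) if pop_total else 0
        k = max(0, min(len(patients), floor[s] + extra))
        sample[s] = random.sample(patients, k)
    return sample

def refresh_sample(prev_sample, still_eligible, new_entrants, rng=random):
    """Year t+1: retain patients still alive, resident, and enrolled in FFS;
    replace attrition with newly eligible patients (first-year FFS enrollees
    or movers into a sampled county)."""
    retained = [p for p in prev_sample if p in still_eligible]
    n_replace = len(prev_sample) - len(retained)
    pool = [p for p in new_entrants if p not in retained]
    return retained + rng.sample(pool, min(n_replace, len(pool)))
```

Oversampling of minorities would enter through larger targets for minority strata; survey weights then correct the estimates back to the population.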


Subject(s)
Fee-for-Service Plans , Medicare , Aged , Female , Humans , Cross-Sectional Studies , Health Expenditures , Retrospective Studies , United States , Male
2.
Accid Anal Prev ; 190: 107139, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37320981

ABSTRACT

OBJECTIVE: Automated Driving System (ADS) fleets are currently being deployed in several dense-urban operational design domains within the United States. In these dense-urban areas, pedestrians have historically comprised a significant portion, and sometimes the majority, of injury and fatal collisions. An expanded understanding of the injury risk in collision events involving pedestrians and human-driven vehicles can inform continued ADS development and safety benefits evaluation. There is no current systematic investigation of United States pedestrian collisions, so this study used reconstruction data from the German In-Depth Accident Study (GIDAS) to develop mechanistic injury risk models for pedestrians involved in collisions with vehicles. DATA SOURCE: The study queried the GIDAS database for cases from 1999 to 2021 involving passenger vehicle or heavy vehicle collisions with pedestrians. METHODS: We describe the injury patterns and frequencies for passenger vehicle-to-pedestrian and heavy vehicle-to-pedestrian collisions, where heavy vehicles included heavy trucks and buses. Injury risk functions were developed at the AIS2+, 3+, 4+ and 5+ levels for pedestrians involved in frontal collisions with passenger vehicles and separately for frontal collisions with heavy vehicles. Model predictors included mechanistic factors of collision speed, pedestrian age, sex, pedestrian height relative to vehicle bumper height, and vehicle acceleration before impact. Children (≤17 y.o.) and elderly (≥65 y.o.) pedestrians were included. We further conducted weighted and imputed analyses to understand the effects of missing data elements and of weighting towards the overall population of German pedestrian crashes. RESULTS: We identified 3,112 pedestrians involved in collisions with passenger vehicles, where 2,524 of those collisions were frontal vehicle strikes. 
Furthermore, we identified 154 pedestrians involved in collisions with heavy vehicles, where 87 of those collisions were frontal vehicle strikes. Children were found to be at higher risk of injury compared to young adults, and the highest risk of serious injuries (AIS 3+) existed for the oldest pedestrians in the dataset. Collisions with heavy vehicles were more likely to produce serious (AIS 3+) injuries at low speeds than collisions with passenger vehicles. Injury mechanisms differed between collisions with passenger vehicles and with heavy vehicles. The initial engagement caused 36% of pedestrians' most-severe injuries in passenger vehicle collisions, compared with 23% in heavy vehicle collisions. Conversely, the vehicle underside caused 6% of the most-severe injuries in passenger vehicle collisions and 20% in heavy vehicle collisions. SIGNIFICANCE: U.S. pedestrian fatalities have risen 59% since their recent recorded low in 2009. It is imperative that we understand and describe injury risk so that we can target effective strategies for injury and fatality reduction. This study builds on previous analyses by including the most modern vehicles, including children and elderly pedestrians, incorporating additional mechanistic predictors, broadening the scope of included crashes, and using multiple imputation and weighting to better estimate these effects relative to the entire population of German pedestrian collisions. This study is the first to investigate the risk of injury to pedestrians in collisions with heavy vehicles based on field data.
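Injury risk functions of the kind developed here are conventionally logistic curves in collision speed, pedestrian age, and other predictors. A sketch of that functional form follows; the coefficients are illustrative placeholders, NOT the fitted GIDAS estimates:

```python
import math

def ais3plus_risk(speed_kph, age, beta0=-6.0, b_speed=0.09, b_age=0.03):
    """Logistic injury risk function:
    P(AIS 3+) = 1 / (1 + exp(-(beta0 + b_speed*speed + b_age*age))).
    Risk rises monotonically with collision speed and pedestrian age."""
    z = beta0 + b_speed * speed_kph + b_age * age
    return 1.0 / (1.0 + math.exp(-z))
```

A fitted model would estimate these coefficients by maximum likelihood from the reconstructed cases, with weighting or imputation to adjust for missing data, as the abstract describes.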


Subject(s)
Pedestrians , Wounds and Injuries , Child , Young Adult , Humans , Aged , Accidents, Traffic , Motor Vehicles , Wounds and Injuries/epidemiology
3.
IEEE Trans Biomed Eng ; 70(6): 1750-1757, 2023 06.
Article in English | MEDLINE | ID: mdl-37015585

ABSTRACT

Automated eye-tracking technology could enhance diagnosis for many neurological diseases, including stroke. Current literature focuses on gaze estimation through a form of calibration. However, patients with neuro-ocular abnormalities may have difficulty completing a calibration procedure due to inattention or other neurological deficits. OBJECTIVE: We investigated 1) the need for calibration to measure eye movement symmetry in healthy controls and 2) the potential of eye movement symmetry to distinguish between healthy controls and patients. METHODS: We analyzed fixations, smooth pursuits, saccades, and conjugacy measured by a Spearman correlation coefficient and utilized a linear mixed-effects model to estimate the effect of calibration. RESULTS: Healthy participants (n = 18) did not differ in correlations between calibrated and non-calibrated conditions for all tests. The calibration condition did not improve the linear mixed effects model (log-likelihood ratio test p = 0.426) in predicting correlation coefficients. Interestingly, the patient group (n = 17) differed in correlations for the DOT (0.844 [95% CI 0.602, 0.920] vs. 0.98 [95% CI 0.976, 0.985]), H (0.903 [95% CI 0.746, 0.958] vs. 0.979 [95% CI 0.971, 0.986]), and OKN (0.898 [95% CI 0.785, 0.958] vs. 0.993 [95% CI 0.987, 0.996]) tests compared to healthy controls along the x-axis. These differences were not observed along the y-axis. SIGNIFICANCE: This study suggests that automated eye tracking can be deployed without calibration to measure eye movement symmetry. It may be a good discriminator between normal and abnormal eye movement symmetry. Validation of these findings in larger populations is required.
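The conjugacy measure in this study is a Spearman correlation between the two eyes' position traces. A pure-Python sketch of that computation (illustrative only, not the authors' implementation):

```python
def _ranks(xs):
    # Assign average (midrank) ranks, handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(left_eye, right_eye):
    """Spearman rho between left- and right-eye position samples;
    values near 1 indicate conjugate (symmetric) eye movement."""
    rx, ry = _ranks(left_eye), _ranks(right_eye)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because ranks are invariant to monotone transformations, this measure needs no gaze calibration, which is the property the study exploits.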


Subject(s)
Eye Movements , Stroke , Humans , Fixation, Ocular , Saccades , Stroke/diagnosis , Calibration
4.
Accid Anal Prev ; 186: 107047, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37003164

ABSTRACT

Motor vehicle collisions (MVCs) are a leading cause of acute spinal injuries. Chronic spinal pathologies are common in the population. Thus, determining the incidence of different types of spinal injuries due to MVCs and understanding the biomechanical mechanisms of these injuries is important for distinguishing acute injuries from chronic degenerative disease. This paper describes methods for determining causation of spinal pathologies from MVCs based on rates of injury and analysis of the biomechanics required to produce these injuries. Rates of spinal injuries in MVCs were determined using two distinct methodologies and interpreted using a focused review of salient biomechanical literature. One methodology used incidence data from the Nationwide Emergency Department Sample and exposure data from the Crash Report Sample System supplemented with a telephone survey to estimate total national exposure to MVCs. The other used incidence and exposure data from the Crash Investigation Sampling System. Linking the clinical and biomechanical findings yielded several conclusions. First, spinal injuries caused by an MVC are relatively rare (511 injured occupants per 10,000 exposed to an MVC), which is consistent with the biomechanical forces required to generate injury. Second, spinal injury rates increase as impact severity increases, and fractures are more common in higher-severity exposures. Third, the rate of sprain/strain in the cervical spine is greater than in the lumbar spine.
Fourth, spinal disc injuries are extremely rare in MVCs (0.01 occupants per 10,000 exposed) and typically occur with concomitant trauma, which is consistent with the biomechanical findings that 1) disc herniations are fatigue injuries caused by cyclic loading, 2) the disc is almost never the first structure to be injured in impact loading unless it is highly flexed and compressed, and 3) most crashes involve predominantly tensile loading in the spine, which does not cause isolated disc herniations. These biomechanical findings illustrate that determining causation when an MVC occupant presents with disc pathology must be based on the specifics of that presentation and the crash circumstances and, more broadly, that any causation determination must be informed by competent biomechanical analysis.


Subject(s)
Fractures, Bone , Intervertebral Disc Displacement , Spinal Injuries , Humans , Accidents, Traffic , Intervertebral Disc Displacement/complications , Spinal Injuries/epidemiology , Spinal Injuries/etiology , Motor Vehicles
5.
Article in English | MEDLINE | ID: mdl-38764700

ABSTRACT

Objective: While rates of non-traumatic lower extremity amputations (LEA) have been declining, concerns exist over disparities. Our objectives are to track major LEA (MLEA) rates over time among Medicare beneficiaries residing in a high diabetes prevalence region in the southeastern USA (the diabetes belt) and surrounding areas. Methods: We used Medicare claims files for ~900,000 fee-for-service beneficiaries aged ≥65 years in 2006-2015 to track MLEA rates per 1000 patients with diabetes. We additionally conducted a cross-sectional analysis of data for 2015 to compare regional and racial disparities in major amputation risks after adjusting for demographic, socioeconomic, access-to-care, foot complication, and other health factors. The Centers for Disease Control and Prevention defined the diabetes belt as 644 Appalachian and southeastern US counties with high diabetes prevalence. Results: MLEA rates were 3.9 per 1000 in the Belt compared with 2.8 in the surrounding counties in 2006 and decreased to 2.3 and 1.6, respectively, in 2015. Non-Hispanic black patients had 8.5 and 6.9 MLEAs per 1000 in 2006 and 4.8 and 3.5 in 2015 in the Belt and surrounding counties, respectively, while the rates were similar for non-Hispanic white patients in the two areas. Although amputation rates declined rapidly in both areas, non-Hispanic black patients in the Belt consistently had >3 times higher rates than non-Hispanic white patients in the Belt. After adjusting for patient demographics, foot complications, and healthcare access, non-Hispanic black patients in the Belt had about twice the odds of MLEA compared with non-Hispanic white patients in the surrounding areas. Discussion: Our data show persistent disparities in major amputation rates between the diabetes belt and surrounding counties. Racial disparities were much larger in the Belt. Targeted policies to prevent MLEAs among non-Hispanic black patients are needed to reduce persistent disparities in the Belt.

6.
Article in English | MEDLINE | ID: mdl-35991000

ABSTRACT

Objective: To examine whether Annual Wellness Visits (AWVs) were associated with increased use of preventive services in Medicare patients with diabetes living in the Diabetes Belt. Methods: We used a case-control design where outcomes were utilization of preventive services recommended for patients with diabetes (foot exam, eye exam, A1c test, and microalbuminuria test) and the exposure was AWVs, using data for Medicare patients with diabetes in 2014-2015 residing in the Diabetes Belt (N = 412,009). Results: Only 13.4% of patients in 2014 and 17.4% in 2015 used AWVs. Eye exams (61% vs 53%), foot exams (93% vs 79%), A1c tests (81% vs 71%), and microalbuminuria tests (45% vs 28%) were more common among patients who had an AWV in the preceding year compared with those who did not. These differences remained significant after adjusting for patient demographics, comorbidities, county-level medical resources, and geographic factors. Conclusions: AWVs were significantly associated with increased preventive care use among patients with diabetes living in the Diabetes Belt. Low AWV utilization by patients with diabetes in and around the Diabetes Belt is concerning.

7.
Proc Natl Acad Sci U S A ; 119(36): e2208972119, 2022 09 06.
Article in English | MEDLINE | ID: mdl-36037372

ABSTRACT

Children in low-resource settings carry enteric pathogens asymptomatically and are frequently treated with antibiotics, resulting in opportunities for pathogens to be exposed to antibiotics when not the target of treatment (i.e., bystander exposure). We quantified the frequency of bystander antibiotic exposures for enteric pathogens and estimated associations with resistance among children in eight low-resource settings. We analyzed 15,697 antibiotic courses from 1,715 children aged 0 to 2 y from the MAL-ED birth cohort. We calculated the incidence of bystander exposures and attributed exposures to respiratory and diarrheal illnesses. We associated bystander exposure with phenotypic susceptibility of E. coli isolates in the 30 d following exposure and at the level of the study site. There were 744.1 subclinical pathogen exposures to antibiotics per 100 child-years. Enteroaggregative Escherichia coli was the most frequently exposed pathogen, with 229.6 exposures per 100 child-years. Almost all antibiotic exposures for Campylobacter (98.8%), enterotoxigenic E. coli (95.6%), and typical enteropathogenic E. coli (99.4%), and the majority for Shigella (77.6%), occurred when the pathogens were not the target of treatment. Respiratory infections accounted for half (49.9%) and diarrheal illnesses accounted for one-fourth (24.6%) of subclinical enteric bacteria exposures to antibiotics. Bystander exposure of E. coli to class-specific antibiotics was associated with the prevalence of phenotypic resistance at the community level. Antimicrobial stewardship and illness-prevention interventions among children in low-resource settings would have a large ancillary benefit of reducing bystander selection that may contribute to antimicrobial resistance.
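The incidence measure used throughout this abstract (exposures per 100 child-years of follow-up) reduces to a simple rate calculation. A hypothetical helper, not the study's code:

```python
def incidence_per_100_child_years(n_events, total_follow_up_days):
    """Events per 100 child-years, with follow-up time accumulated in days
    across all children in the cohort (one child-year = 365.25 days)."""
    child_years = total_follow_up_days / 365.25
    return 100.0 * n_events / child_years
```

For example, 744 bystander exposures observed over 100 child-years of accumulated follow-up yields a rate of 744 per 100 child-years, matching the scale of the figures reported above.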


Subject(s)
Anti-Bacterial Agents , Drug Resistance, Bacterial , Enterobacteriaceae , Environmental Exposure , Anti-Bacterial Agents/pharmacology , Anti-Bacterial Agents/therapeutic use , Child, Preschool , Diarrhea/drug therapy , Diarrhea/microbiology , Drug Resistance, Bacterial/drug effects , Enterobacteriaceae/drug effects , Enterobacteriaceae/physiology , Enterobacteriaceae Infections/drug therapy , Enterobacteriaceae Infections/microbiology , Enterobacteriaceae Infections/transmission , Humans , Infant
8.
Ann Surg ; 276(5): e347-e352, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35946794

ABSTRACT

OBJECTIVE: While errors can harm patients, they remain poorly studied. This study characterized errors in the care of surgical patients and examined the association of errors with morbidity and mortality. BACKGROUND: Errors have been reported to cause <10% or >60% of adverse events. Such discordant results underscore the need for further exploration of the relationship between error and adverse events. METHODS: Patients with operations performed at a single institution and abstracted into the American College of Surgeons National Surgical Quality Improvement Program from January 1, 2018, to December 31, 2018 were examined. This matched case-control study comprised cases who experienced a postoperative morbidity or mortality. Controls included patients without morbidity or mortality, matched 2:1 using age (±10 years), sex, and Current Procedural Terminology (CPT) group. Two faculty surgeons independently reviewed records for each case and control patient to identify diagnostic, technical, judgment, medication, system, or omission errors. A conditional multivariable logistic regression model examined the association between error and morbidity. RESULTS: Of 1899 patients, 170 were defined as cases who experienced a morbidity or mortality. The majority of cases (n=93; 55%) had at least 1 error; of the 329 matched control patients, 112 had at least 1 error (34%). Technical errors occurred most often among both cases (40%) and controls (23%). Logistic regression demonstrated a strong independent relationship between error and morbidity (odds ratio=2.67, 95% confidence interval: 1.64-4.35, P <0.001). CONCLUSION: Errors in surgical care were associated with postoperative morbidity. Reducing errors requires measurement of errors.
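For intuition, a crude (unmatched) odds ratio with a Wald confidence interval can be computed directly from the counts reported above. Note that the paper's headline estimate (OR 2.67) comes from a conditional model respecting the 2:1 matching, so this simpler calculation only approximates it:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with 95% Wald CI from a 2x2 table:
    a/b = with/without error among cases, c/d = with/without error
    among controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

With the reported counts (93 cases with at least one error and 77 without; 112 controls with and 217 without), the crude OR is about 2.34, reasonably close to the conditional estimate of 2.67.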


Subject(s)
Postoperative Complications , Quality Improvement , Case-Control Studies , Humans , Morbidity , Odds Ratio , Postoperative Complications/etiology , Risk Factors
9.
Front Neurol ; 13: 878282, 2022.
Article in English | MEDLINE | ID: mdl-35847210

ABSTRACT

Background: Current EMS stroke screening tools facilitate early detection and triage, but the tools' accuracy and reliability are limited and highly variable. An automated stroke screening tool could improve stroke outcomes by facilitating more accurate prehospital diagnosis and delivery. We hypothesize that a machine learning algorithm using video analysis can detect common signs of stroke. As a proof-of-concept study, we trained a computer algorithm to detect presence and laterality of facial weakness in publicly available videos with comparable accuracy, sensitivity, and specificity to paramedics. Methods and Results: We curated videos of people with unilateral facial weakness (n = 93) and with a normal smile (n = 96) from publicly available web-based sources. Three board-certified vascular neurologists categorized the videos according to the presence or absence of weakness and laterality. Three paramedics independently analyzed each video with a mean accuracy, sensitivity, and specificity of 92.6% [95% CI 90.1-94.7%], 87.8% [95% CI 83.9-91.7%], and 99.3% [95% CI 98.2-100%]. Using a 5-fold cross-validation scheme, we trained a computer vision algorithm to analyze the same videos, producing an accuracy, sensitivity, and specificity of 88.9% [95% CI 83.5-93%], 90.3% [95% CI 82.4-95.5%], and 87.5% [95% CI 79.2-93.4%]. Conclusions: These preliminary results suggest that a machine learning algorithm using computer vision analysis can detect unilateral facial weakness in pre-recorded videos with an accuracy and sensitivity comparable to trained paramedics. Further research is warranted to pursue the concept of augmented facial weakness detection and external validation of this algorithm in independent data sets and prospective patient encounters.
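The accuracy, sensitivity, and specificity figures reported above all reduce to confusion-matrix counts. A minimal sketch of those metrics (not the authors' evaluation code):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for binary labels
    (1 = facial weakness present, 0 = normal smile)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return accuracy, sensitivity, specificity
```

In a 5-fold cross-validation scheme, these metrics are computed on each held-out fold and then aggregated, which is how the study's confidence intervals would be obtained.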

10.
Appl Ergon ; 102: 103743, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35313260

ABSTRACT

Heat stress is associated with workplace injuries, likely through a combination of fatigue, reduced cognitive function, and thermal discomfort. The purpose of this study was to evaluate four cognitive tasks for sensitivity to heat stress. Eight participants performed treadmill exercise followed by assessments of serial reaction time (RT), Stroop effect, verbal delayed memory, and continuous performance working memory in an environmental chamber. A control (21.1 °C) trial, and "Hot 1" and "Hot 2" (both 37.8 °C) trials were run sequentially on two separate days to evaluate the four cognitive tasks. Heat strain (comparing Hot 1 and Hot 2 with the control trial) resulted in impairments in the serial RT test response and Stroop accuracy. Delayed memory was impacted only in the Hot 2 trial compared with the control trial. Given the demonstrated impact of heat on cognitive processes relevant to workers' real-world functioning in the workplace, understanding how to assess and monitor vigilant attention in the workplace is essential.


Subject(s)
Heat Stress Disorders , Hot Temperature , Cognition/physiology , Exercise , Heat Stress Disorders/etiology , Humans , Memory, Short-Term
11.
J Healthc Qual ; 44(2): 78-87, 2022.
Article in English | MEDLINE | ID: mdl-34469925

ABSTRACT

BACKGROUND AND PURPOSE: The Medicare Value-Based Purchasing (VBP) program established performance-based financial incentives for hospitals. We hypothesized that total performance scores (TPS) would vary by hospital type. METHODS: Value-Based Purchasing reports were collected from 2015 to 2017 and merged with the Centers for Medicare and Medicaid Services (CMS) Impact File data. A total of 3,005 hospitals were grouped into physician-owned surgical hospitals (POSH), accountable care organizations (ACO), Kaiser, Vizient, and General hospitals. Longitudinal linear mixed-effects models compared temporal differences of TPS and secondary composite outcome, process, patient satisfaction, safety, and cost efficiency measures between hospital types. RESULTS: Total performance scores decreased across all hospital types (p < .001). Physician-owned surgical hospitals had the highest TPS (59.9), followed by Kaiser (49.2), ACO (36.7), General (34.8), and Vizient (30.7) (p < .001). Hospital types differed significantly in size, geography, mean case-mix index, Medicare patient discharges, percent Medicare days to inpatient days, Disproportionate Share Hospital payments, and uncompensated care per claim. Scores improved in 84% of POSH and 14.6% of Kaiser hospitals using score reallocations. CONCLUSION: In comparison with General hospitals, the TPS was higher for POSH and Kaiser and lower for Vizient in part due to weighting reallocation and individual domain scores. IMPLICATIONS: Centers for Medicare and Medicaid Services scoring system changes have not addressed the methodological biases favoring certain hospital types.


Subject(s)
Accountable Care Organizations , Value-Based Purchasing , Aged , Centers for Medicare and Medicaid Services, U.S. , Hospitals , Humans , Medicare , United States
12.
Exp Clin Psychopharmacol ; 30(2): 141-150, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33119385

ABSTRACT

Alcohol use is common among military personnel. However, alcohol use and related problems are challenging to measure because military personnel do not have the same level of confidentiality as civilians and can face sanctions for reporting illegal behavior (e.g., underage drinking) or for drinking during prohibited times (e.g., during basic training). The current study aimed to determine whether the alcohol purchase task (APT), which has previously been associated with alcohol use and alcohol-related problems in civilian populations, is a valid measure of alcohol-related risk in the military when asking about alcohol consumption is less feasible. Participants were 26,231 Air Force airmen who completed surveys including questions about sensation seeking, alcohol expectancies, perception of peer drinking, intent to drink, and family history of alcohol misuse, which are known predictors of alcohol use, and the APT, from which demand indices of intensity and Omax were derived. Individuals who were single, male, White, and had a high school diploma/GED had higher intensity and Omax scores, and non-Hispanic individuals had higher intensity scores. Age was negatively correlated with intensity and Omax. Regressions were used to determine if intensity and Omax were associated with known predictors of alcohol use and risk. Intensity and Omax showed significant but small associations with all included predictors of alcohol consumption and alcohol risk. Effect sizes were larger for individuals ages 21+ compared to individuals under 21. Thus, this study provides initial support for the validity of the APT as an index of alcohol-related risk among military personnel. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
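The two APT demand indices named here have standard definitions: intensity is reported consumption when drinks are free, and Omax is the maximum expenditure (price x quantity) across the price ladder. A hypothetical sketch of their derivation from raw responses:

```python
def apt_demand_indices(prices, quantities):
    """Derive intensity and Omax from alcohol purchase task responses.
    `prices` and `quantities` are parallel lists: the hypothetical price
    per drink and the number of drinks the respondent says they would buy."""
    # Intensity: consumption at price 0 (falls back to the lowest price asked).
    intensity = quantities[prices.index(0)] if 0 in prices else quantities[0]
    # Omax: peak expenditure across the price ladder.
    omax = max(p * q for p, q in zip(prices, quantities))
    return intensity, omax
```

For a respondent who would drink 10 free drinks but only 6 at $2 each, intensity is 10 while Omax reflects the $12 peak expenditure.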


Subject(s)
Alcohol-Related Disorders , Military Personnel , Adult , Alcohol Drinking/epidemiology , Consumer Behavior , Ethanol , Humans , Male , United States/epidemiology , Young Adult
13.
Ann Surg ; 276(6): e698-e705, 2022 12 01.
Article in English | MEDLINE | ID: mdl-33156066

ABSTRACT

OBJECTIVE: Our objective was to examine the associations between early discharge and readmission after major abdominal operations. BACKGROUND: Advances in patient care have resulted in earlier patient discharge after complex abdominal operations. Whether early discharge is associated with patient readmissions remains controversial. METHODS: Patients who had colorectal, liver, and pancreas operations abstracted in 2011-2017 American College of Surgeons National Surgical Quality Improvement Program Participant Use Data Files were included. Patient readmission was stratified by 6 operative groups. Patients who were discharged before the median discharge date within each operative group were categorized as an early discharge. Analyses tested associations between early discharge and likelihood of 30-day postoperative unplanned readmission. RESULTS: A total of 364,609 patients with major abdominal operations were included. Individual patient groups and corresponding median day of discharge were: laparoscopic colectomy (n = 152,575; median = 4), open colectomy (n = 137,462; median = 7), laparoscopic proctectomy (n = 12,238; median = 5), open proctectomy (n = 24,925; median = 6), major hepatectomy (n = 9,805; median = 6), and pancreatoduodenectomy (n = 27,604; median = 8). Early discharge was not associated with an increase in proportion of readmissions in any operative group. Early discharge was associated with a decrease in average proportion of patient readmissions compared to patients discharged on the median date in each of the operative groups: laparoscopic colectomy 6% versus 8%, open colectomy 11% versus 14%, laparoscopic proctectomy 13% versus 16%, open proctectomy 13% versus 17%, major hepatectomy 8% versus 12%, pancreatoduodenectomy 16% versus 20% (all P ≤ 0.02). Serious morbidity composite was significantly lower in patients who were discharged early than those who were not in each operative group (all P < 0.001).
CONCLUSIONS: Early discharge in selected patients after major abdominal operations is associated with lower, and not higher, rate of 30-day unplanned readmission.


Subject(s)
Patient Readmission , Proctectomy , Humans , Patient Discharge , Risk Factors , Colectomy/adverse effects , Postoperative Complications/epidemiology , Retrospective Studies
14.
Eval Health Prof ; 45(4): 354-361, 2022 12.
Article in English | MEDLINE | ID: mdl-34308666

ABSTRACT

Self-generated identification codes (SGICs) are strings of information based on stable participant characteristics. They are often used in longitudinal research to match data between time points while protecting participant anonymity. However, the use of SGICs with military personnel has been infrequent, even though military personnel do not have the same privacy protections as civilians. The current paper reports results from two studies that tested the feasibility, reliability, and validity of using an SGIC to collect sensitive longitudinal data among military personnel. In study one, a team of 105 participants was tracked three times over a period of 12 weeks. The 10-item SGIC produced optimal matching over the 12 weeks. In study two, 1,844 participants were randomly assigned to an SGIC group or an anonymous control group, and then were asked to provide information about their alcohol use. Although match rates declined over time, there were no observed differences between study groups in participants' beliefs about the use of an SGIC. However, differences were identified in reported alcohol use behaviors between the groups, with controls reporting significantly more drinks per week and higher AUDIT-10 scores. While these findings raise potential concerns about using SGICs for epidemiological assessments of highly sensitive problem behaviors, these codes may still be useful in determining group differences in behavior change in randomized studies.


Subject(s)
Military Personnel , Humans , Feasibility Studies , Reproducibility of Results , Alcohol Drinking
15.
Anesthesiology ; 136(1): 104-114, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34724550

ABSTRACT

BACKGROUND: Central airway occlusion is a feared complication of general anesthesia in patients with mediastinal masses. Maintenance of spontaneous ventilation and avoiding neuromuscular blockade are recommended to reduce this risk. Physiologic arguments supporting these recommendations are controversial and direct evidence is lacking. The authors hypothesized that, in adult patients with moderate to severe mediastinal mass-mediated tracheobronchial compression, anesthetic interventions including positive pressure ventilation and neuromuscular blockade could be instituted without compromising central airway patency. METHODS: Seventeen adult patients with large mediastinal masses requiring general anesthesia underwent awake intubation followed by continuous video bronchoscopy recordings of the compromised portion of the airway during staged induction. Assessments of changes in anterior-posterior airway diameter relative to baseline (awake, spontaneous ventilation) were performed using the following patency scores: unchanged = 0; 25 to 50% larger = +1; more than 50% larger = +2; 25 to 50% smaller = -1; more than 50% smaller = -2. Assessments were made by seven experienced bronchoscopists in side-by-side blinded and scrambled comparisons between (1) baseline awake, spontaneous breathing; (2) anesthetized with spontaneous ventilation; (3) anesthetized with positive pressure ventilation; and (4) anesthetized with positive pressure ventilation and neuromuscular blockade. Tidal volumes, respiratory rate, and inspiratory/expiratory ratio were similar between phases. RESULTS: No significant change from baseline was observed in the mean airway patency scores after the induction of general anesthesia (0 [95% CI, 0 to 0]; P = 0.953). The mean airway patency score increased with the addition of positive pressure ventilation (0 [95% CI, 0 to 1]; P = 0.024) and neuromuscular blockade (1 [95% CI, 0 to 1]; P < 0.001). 
No patient suffered airway collapse or difficult ventilation during any anesthetic phase. CONCLUSIONS: These observations suggest a need to reassess prevailing assumptions regarding positive pressure ventilation and/or paralysis and mediastinal mass-mediated airway collapse, but do not prove that conventional (nonstaged) inductions are safe for such patients.


Subject(s)
Airway Obstruction/diagnostic imaging; Airway Obstruction/surgery; Anesthesia, General/methods; Bronchoscopy/methods; Mediastinal Neoplasms/diagnostic imaging; Mediastinal Neoplasms/surgery; Adult; Aged; Female; Humans; Male; Middle Aged; Prospective Studies; Video-Assisted Techniques and Procedures
16.
BJA Open ; 4, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36687665

ABSTRACT

Background: High airway driving pressure is associated with adverse outcomes in critically ill patients receiving mechanical ventilation, but large multicentre studies investigating airway driving pressure during major surgery are lacking. We hypothesised that increased driving pressure is associated with postoperative pulmonary complications in patients undergoing major abdominal surgery. Methods: In this preregistered multicentre retrospective observational cohort study, the authors reviewed major abdominal surgical procedures in 11 hospitals from 2004 to 2018. The primary outcome was a composite of postoperative pulmonary complications, defined as postoperative pneumonia, unplanned tracheal intubation, or prolonged mechanical ventilation for more than 48 h. Associations between intraoperative dynamic driving pressure and outcomes, adjusted for patient and procedural factors, were evaluated. Results: Among 14 218 qualifying cases, 389 (2.7%) experienced postoperative pulmonary complications. After adjustment, the mean dynamic driving pressure was associated with postoperative pulmonary complications (adjusted odds ratio for every 1 cm H2O increase: 1.04; 95% confidence interval [CI], 1.02-1.06; P<0.001). Neither tidal volume nor PEEP was associated with postoperative pulmonary complications. Increased BMI, shorter height, and female sex were predictors of higher dynamic driving pressure (β=0.35, 95% CI 0.32-0.39, P<0.001; β=-0.01, 95% CI -0.02 to 0.00, P=0.005; and β=0.74, 95% CI 0.63-0.86, P<0.001, respectively). Conclusions: Dynamic airway driving pressure, but not tidal volume or PEEP, is associated with postoperative pulmonary complications in models controlling for a large number of risk predictors and covariates. Such models are capable of risk prediction applicable to individual patients.
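The per-unit adjusted odds ratio reported above (1.04 per 1 cm H2O) compounds multiplicatively across larger pressure differences. The arithmetic can be sketched as follows; applying it to the cohort's 2.7% overall complication rate as a "baseline risk" is an illustrative assumption, not the authors' adjusted model:

```python
def compounded_or(or_per_unit: float, delta_units: float) -> float:
    """Compound a per-unit odds ratio over a change of `delta_units` units."""
    return or_per_unit ** delta_units

def risk_after_or(baseline_risk: float, odds_ratio: float) -> float:
    """Apply an odds ratio to a baseline probability; return the new probability."""
    odds = baseline_risk / (1.0 - baseline_risk)
    new_odds = odds * odds_ratio
    return new_odds / (1.0 + new_odds)

# A 10 cm H2O higher mean dynamic driving pressure at OR 1.04 per cm H2O:
or_10 = compounded_or(1.04, 10)        # ~1.48
# Applied to the cohort's 2.7% complication rate (illustrative baseline only):
risk_10 = risk_after_or(0.027, or_10)  # ~3.9%
```

Note that odds ratios apply to odds, not directly to probabilities; the two-step conversion above avoids the common shortcut of multiplying the risk itself by the OR.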

17.
BMJ Glob Health ; 7(9), 2022 Sep.
Article in English | MEDLINE | ID: mdl-36660904

ABSTRACT

INTRODUCTION: Diarrhoea remains a leading cause of child morbidity and mortality. Systematically collected and analysed data on the aetiology of hospitalised diarrhoea in low-income and middle-income countries are needed to prioritise interventions. METHODS: We established the Global Pediatric Diarrhea Surveillance network, in which children under 5 years hospitalised with diarrhoea were enrolled at 33 sentinel surveillance hospitals in 28 low-income and middle-income countries. Randomly selected stool specimens were tested by quantitative PCR for 16 causes of diarrhoea. We estimated pathogen-specific attributable burdens of diarrhoeal hospitalisations and deaths. We incorporated country-level incidence to estimate the number of pathogen-specific deaths on a global scale. RESULTS: During 2017-2018, 29 502 diarrhoea hospitalisations were enrolled, of which 5465 were randomly selected and tested. Rotavirus was the leading cause of diarrhoea requiring hospitalisation (attributable fraction (AF) 33.3%; 95% CI 27.7 to 40.3), followed by Shigella (9.7%; 95% CI 7.7 to 11.6), norovirus (6.5%; 95% CI 5.4 to 7.6) and adenovirus 40/41 (5.5%; 95% CI 4.4 to 6.7). Rotavirus was the leading cause of hospitalised diarrhoea in all regions except the Americas, where the leading aetiologies were Shigella (19.2%; 95% CI 11.4 to 28.1) and norovirus (22.2%; 95% CI 17.5 to 27.9) in Central and South America, respectively. The proportion of hospitalisations attributable to rotavirus was approximately 50% lower in sites that had introduced rotavirus vaccine (AF 20.8%; 95% CI 18.0 to 24.1) compared with sites that had not (42.1%; 95% CI 33.2 to 53.4). Globally, we estimated 208 009 annual rotavirus-attributable deaths (95% CI 169 561 to 259 216), 62 853 Shigella-attributable deaths (95% CI 48 656 to 78 805), 36 922 adenovirus 40/41-attributable deaths (95% CI 28 469 to 46 672) and 35 914 norovirus-attributable deaths (95% CI 27 258 to 46 516). 
CONCLUSIONS: Despite the substantial impact of rotavirus vaccine introduction, rotavirus remained the leading cause of paediatric diarrhoea hospitalisations. Improving the efficacy and coverage of rotavirus vaccination and prioritising interventions against Shigella, norovirus and adenovirus could further reduce diarrhoea morbidity and mortality.
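Applying the reported point-estimate attributable fractions directly to the enrolled hospitalisation count gives a back-of-envelope sense of scale. This is only an illustration; the study's formal burden estimates used quantitative PCR-based attribution and country-level incidence, not this simple multiplication:

```python
hospitalisations = 29_502  # enrolled diarrhoea hospitalisations, 2017-2018
attributable_fraction = {  # point estimates reported above
    "rotavirus": 0.333,
    "Shigella": 0.097,
    "norovirus": 0.065,
    "adenovirus 40/41": 0.055,
}

# Rough pathogen-attributable hospitalisation counts among those enrolled
attributable_cases = {
    pathogen: round(hospitalisations * af)
    for pathogen, af in attributable_fraction.items()
}

# Relative reduction in rotavirus AF with vaccine introduction (20.8% vs 42.1%):
vaccine_reduction = 1 - 0.208 / 0.421  # ~0.51, i.e. "approximately 50% lower"
```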


Subject(s)
Rotavirus Vaccines; Humans; Child; Child, Preschool; Incidence; Developing Countries; Diarrhea/epidemiology; Diarrhea/prevention & control; Hospitalization
18.
Mil Med ; 2021 Dec 04.
Article in English | MEDLINE | ID: mdl-34865112

ABSTRACT

BACKGROUND: Alcohol misuse poses significant public health concerns in the U.S. Military. An Alcohol Misconduct Prevention Program (AMPP), which includes a brief alcohol intervention (BAI) session, plus random breathalyzer program, has been shown to reduce alcohol-related incidents (ARIs) among Airmen undergoing training. PURPOSE: The current study sought to examine whether a booster BAI administered at the end of Airmen's training reduced ARIs out to a 1-year follow-up. METHODS: Participants were 26,231 U.S. Air Force Technical Trainees recruited between March 2016 and July 2018. Participants were cluster randomized by cohort to two conditions: AMPP + BAI Booster or AMPP + Bystander Intervention. The primary analysis was a comparison of the interventions' efficacies in preventing Article 15 ARIs at a 1-year follow-up, conducted using a generalized estimating equations logistic regression model controlling for covariates. RESULTS: There was no significant difference by condition in Article 15 ARIs at the 1-year follow-up (P = .912). CONCLUSIONS: Findings suggest that a booster may not be necessary to produce maximum effects beyond the initial AMPP intervention. It is also possible that alcohol behaviors changed as a result of the intervention but were not captured by our outcome measures. Future research should consider alternative outcomes or participant-tracking measures to determine whether a different or more intensive BAI booster is effective. The majority of Article 15 ARIs were for underage drinking; therefore, developing an intervention focused on this problem behavior could lead to large reductions in training costs in the military.
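The allocation scheme above randomizes whole training cohorts (clusters) rather than individuals. A minimal sketch of that design is below; the function name and allocation mechanism are hypothetical, since the abstract does not specify how cohorts were assigned:

```python
import random

ARMS = ("AMPP + BAI Booster", "AMPP + Bystander Intervention")

def cluster_randomize(cohorts, arms=ARMS, seed=0):
    """Assign whole cohorts (clusters) to study arms by shuffling the cohort
    list and alternating arm assignment. Every trainee in a cohort receives
    the same condition."""
    rng = random.Random(seed)
    shuffled = list(cohorts)
    rng.shuffle(shuffled)
    return {cohort: arms[i % len(arms)] for i, cohort in enumerate(shuffled)}
```

Because everyone in a cohort shares a condition, outcomes within a cohort are correlated, which is why the primary analysis used generalized estimating equations rather than an ordinary logistic regression that assumes independent observations.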

19.
Health Serv Outcomes Res Methodol ; 21(3): 324-338, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34824558

ABSTRACT

For patients with diabetes, annual preventive care is essential to reduce the risk of complications. Local healthcare resources affect the utilization of diabetes preventive care. Our objectives were to evaluate the relative efficiency of counties in providing diabetes preventive care and explore potential to improve efficiencies. The study setting is public and private healthcare providers in US counties with available data. County-level demographics were extracted from the Area Health Resources File using data from 2010 to 2013, and individual-level information on diabetes preventive service use was obtained from the 2010 Behavioral Risk Factor Surveillance System. 1112 US counties were analyzed. Cluster analysis was used to place counties into three similar groups in terms of economic wellbeing and population characteristics. Group 1 consisted of metropolitan counties with prosperous or comfortable economic levels. Group 2 mostly consisted of non-metropolitan areas between distress and mid-tier levels, while Group 3 consisted mostly of prosperous or comfortable counties in metropolitan areas. We used data envelopment analysis to assess efficiencies within each group. The majority of counties had modest efficiency in providing diabetes preventive care; 36 counties (57.1%), 345 counties (61.1%), and 263 counties (54.3%) were inefficient (efficiency scores < 1) in Group 1, Group 2, and Group 3, respectively. For inefficient counties, foot and eye exams were often identified as sources of inefficiency. Available health professionals in some counties were not fully utilized to provide diabetes preventive care. Identifying benchmarking targets from counties with similar resources can help counties and policy makers develop actionable strategies to improve performance.
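The "efficiency score < 1 means inefficient" convention above comes from data envelopment analysis, where each unit is scored against the best-performing frontier among its peers. Real DEA handles multiple inputs and outputs via linear programming; the sketch below is only the single-input, single-output special case, with hypothetical county data:

```python
def ratio_efficiency(counties):
    """Single-input, single-output efficiency: each county's output-per-input
    ratio divided by the best ratio observed, so the frontier county scores
    1.0 and all others score < 1 ('inefficient')."""
    ratios = {name: output / inputs for name, (inputs, output) in counties.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical data: (health professionals per 10,000 residents,
#                     % of patients with diabetes receiving recommended exams)
scores = ratio_efficiency({
    "County A": (10, 80),
    "County B": (10, 60),
    "County C": (5, 50),
})
```

Here County C reaches the frontier by producing more preventive care per available professional, even though County A delivers more care in absolute terms; that is the sense in which available health professionals can be "not fully utilized".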

20.
Traffic Inj Prev ; 22(sup1): S74-S81, 2021.
Article in English | MEDLINE | ID: mdl-34672889

ABSTRACT

OBJECTIVE: Transporting severely injured pediatric patients to a trauma center has been shown to decrease mortality. A decision support tool to assist emergency medical services (EMS) providers with trauma triage would ideally be as parsimonious as possible while remaining highly accurate. The objective of this study was to determine the minimum set of predictors required to accurately predict severe injury in pediatric patients. METHODS: Crash data and patient injuries were obtained from the NASS and CISS databases. A baseline multivariable logistic model was developed to predict severe injury in pediatric patients using the following predictors: age, sex, seat row, restraint use, ejection, entrapment, posted speed limit, any airbag deployment, principal direction of force (PDOF), change in velocity (delta-V), single vs. multiple collisions, and non-rollover vs. rollover. The outcomes of interest were injury severity score (ISS) ≥16 and the Target Injury List (TIL). Accuracy was measured by the cross-validation mean of the area under the receiver operating characteristic (ROC) curve (AUC). We used Bayesian Model Averaging (BMA) based on all subsets regression to determine the importance of each variable separately for each outcome. The AUC of the highest performing model for each number of variables was compared to the baseline model to assess for a statistically significant difference (p < 0.05). A reduced variable set model was derived using this information. RESULTS: The baseline models performed well (ISS ≥ 16: AUC 0.91 [95% CI: 0.86-0.95], TIL: AUC 0.90 [95% CI: 0.86-0.94]). Using BMA, the rank of the importance of the predictors was identical for both ISS ≥ 16 and TIL. There was no statistically significant decrease in accuracy until the models were reduced to fewer than five and six variables for predicting ISS ≥ 16 and TIL, respectively. 
A reduced variable set model developed using the top five variables (delta-V, entrapment, ejection, restraint use, and near-side collision) to predict ISS ≥ 16 had an AUC of 0.90 [95% CI: 0.84-0.96]. Among the models that did not include delta-V, the highest AUC was 0.82 [95% CI: 0.77-0.87]. CONCLUSIONS: A succinct logistic regression model can accurately identify severely injured pediatric patients and could be used for prehospital trauma triage. However, there remains a critical need to obtain delta-V in real time.
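The AUC used above to compare full and reduced-variable models has a simple rank-based definition: the probability that a randomly chosen severely injured case is scored higher than a randomly chosen non-severe case, with ties counting half. A minimal sketch of that metric (the study's fitted models and cross-validation scheme are not reproduced here):

```python
def rank_auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs in which the
    positive case receives the higher score; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A model that ranks every severe injury above every non-severe one scores 1.0; one no better than chance scores 0.5, which is why AUCs of 0.90-0.91 indicate strong discrimination.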


Subject(s)
Accidents, Traffic; Wounds and Injuries; Bayes Theorem; Child; Humans; Injury Severity Score; Motor Vehicles; Trauma Centers