ABSTRACT
INTRODUCTION: This study examines the experience of non-Hispanic Black Americans (hereinafter referred to as Black persons) and non-Hispanic White Americans (hereinafter referred to as White persons) with regard to the incidence (i.e., number of persons diagnosed with HIV), prevalence (i.e., number of persons living with HIV), and mortality rates of persons with HIV in the United States in 2019. With regard to mortality rates, this study examines the mortality rate of all Black persons and White persons with HIV in 2019 as well as the mortality rate of hospitalized Black persons and White persons with HIV in 2019. METHODS: Data on the racial characteristics of all persons in the United States in 2019 were obtained from the United States Census Bureau, and data on the racial characteristics of all persons with HIV in the United States were obtained from HIV Surveillance Reports produced by the Centers for Disease Control and Prevention (CDC). In addition, data on all hospital patients in seven states (California, Florida, Michigan, New Jersey, New York, South Carolina and Wisconsin) in 2019 were obtained from the Agency for Healthcare Research and Quality (AHRQ) Hospital Cost and Utilization Project (HCUP) State Inpatient Database (SID). These seven states included 44 percent of all persons living with HIV in the United States in 2019. RESULTS: This study found that Black persons were more likely to be diagnosed with HIV, live with HIV, and die with HIV than White persons in the United States. This is illustrated by the fact that in 2019 Black persons comprised 13.4 percent of the population, yet they comprised 42.1 percent of persons diagnosed with HIV, 40.4 percent of persons living with HIV, and 42.9 percent of persons who died with HIV. By comparison, in 2019 White persons comprised 76.3 percent of the population, yet they comprised 24.8 percent of persons diagnosed with HIV, 29.1 percent of persons living with HIV, and 31.8 percent of persons who died with HIV. 
Nevertheless, this study did not find a statistically significant difference between the in-hospital mortality rates of Black and White persons in seven states in 2019. CONCLUSIONS: The burden of HIV was considerably greater on Black persons than White persons in the United States in 2019.
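The population-share comparisons above can be summarized as representation ratios (share of an HIV outcome divided by share of the population); a short illustrative calculation using only the percentages reported in this abstract:

```python
# Representation ratios from the percentages reported above:
# (share of HIV outcome) / (share of US population), by race, 2019.
def representation_ratio(outcome_pct: float, population_pct: float) -> float:
    """Ratio > 1 indicates overrepresentation relative to population share."""
    return outcome_pct / population_pct

black_diagnosis = representation_ratio(42.1, 13.4)   # ~3.14
white_diagnosis = representation_ratio(24.8, 76.3)   # ~0.33
black_mortality = representation_ratio(42.9, 13.4)   # ~3.20
white_mortality = representation_ratio(31.8, 76.3)   # ~0.42

print(round(black_diagnosis, 2), round(white_diagnosis, 2))  # 3.14 0.33
```

Black persons were thus diagnosed with HIV at roughly three times their population share, while White persons were diagnosed at roughly a third of theirs.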
ABSTRACT
Patients with mild cognitive impairment (MCI) are at high risk of developing dementia; however, early identification and active intervention could potentially reduce morbidity and the incidence of dementia. Near-infrared spectroscopy (NIRS) has been proposed as a noninvasive modality for detecting oxygenation changes in the time-varying hemodynamics of the prefrontal cortex. This study sought to provide an effective method for detecting patients with mild cognitive impairment by using NIRS during the Wisconsin Card Sorting Test (WCST) to evaluate changes in blood oxygenation. The results revealed that groups with lower MMSE grades showed larger increases in HHb concentration during a modified WCST. The increase in HbO2 concentration in the stroke group was smaller than that in the normal group, owing to weak cerebrovascular reactivity.
ABSTRACT
Recent research indicates that early detection of breast cancer (BC) is critical for achieving favorable treatment outcomes and reducing the mortality rate associated with the disease. Because balanced datasets for diagnosing the disease are difficult to obtain, many researchers have relied on data augmentation techniques, producing datasets of varying quality and results. The dataset used in this study was crafted using SHapley Additive exPlanations (SHAP)-based augmentation and random augmentation (RA) to address imbalanced data. This was carried out on the Wisconsin BC dataset, and the effectiveness of the approach for diagnosing BC was checked using six machine-learning algorithms. RA synthetically generated parts of the dataset, while SHAP helped assess the quality of the attributes that were selected and used to train the models. Our analysis shows that model performance generally increased by more than 3% for most models when using the dataset obtained by integrating SHAP and RA. Additionally, after diagnosis, it is important to focus on providing quality care to ensure the best possible outcomes for patients. Proper management of the disease state is crucial to reduce recurrence and other associated complications. Thus, the interpretability provided by SHAP informs the management strategies in this study, with a focus on the quality and timeliness of the care given to the patient.
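The abstract does not spell out the random-augmentation procedure; one common way to synthetically generate minority-class records is SMOTE-style interpolation between existing samples. A minimal sketch under that assumption (the feature values and counts below are hypothetical, not the Wisconsin BC data):

```python
import random

def augment_minority(samples, n_new, seed=42):
    """Generate n_new synthetic points by interpolating between random
    pairs of minority-class samples (each sample is a list of floats).
    A SMOTE-style sketch, not necessarily the paper's exact procedure."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)          # pick two distinct samples
        t = rng.random()                       # interpolation weight in [0, 1]
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

minority = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]]  # hypothetical minority class
new_points = augment_minority(minority, n_new=5)
print(len(new_points))  # 5
```

Each synthetic point lies on the segment between two real minority samples, so augmented features stay inside the observed range.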
ABSTRACT
BACKGROUND: Tapering long-term opioid therapy is an increasingly common practice, yet rapid opioid dose reductions may increase the risk of overdose. The objective of this study was to compare overdose risk following opioid dose reduction rates of ≤10%, 11% to 20%, 21% to 30%, and >30% per month to stable dosing. METHODS: We conducted a retrospective cohort study in three health systems in Colorado and Wisconsin. Participants were patients ≥18 years of age prescribed long-term opioid therapy between January 1, 2006, and June 30, 2019. Five opioid dosing patterns and drug overdoses (fatal and nonfatal) were identified using electronic health records, pharmacy records, and the National Death Index. Cox proportional hazard regression was conducted on a propensity score-weighted cohort to estimate adjusted hazard ratios (aHRs) for follow-up periods of 1, 3, 6, 9, and 12 months after a dose reduction. RESULTS: In a cohort of 17 540 patients receiving long-term opioid therapy, 42.7% of patients experienced a dose reduction. Relative to stable dosing, a dose reduction rate of >30% was associated with an increased risk of overdose and the aHR estimates decreased as the follow-up increased; the aHRs for the 1-, 6- and 12-month follow-ups were 5.33 (95% CI, 1.98-14.34), 1.81 (95% CI, 1.08-3.03), and 1.49 (95% CI, 0.97-2.27), respectively. The slower tapering rates were not associated with overdose risk. CONCLUSIONS: Patients receiving long-term opioid therapy exposed to dose reduction rates of >30% per month had increased overdose risk relative to patients exposed to stable dosing. Results support the use of slow dose reductions to minimize the risk of overdose.
ABSTRACT
BACKGROUND: Prescription drug monitoring programs (PDMPs) are state-based surveillance tools used to track controlled substances dispensed to patients and to identify patients at risk of misuse. Starting April 2017, Wisconsin required all prescribers to access the PDMP and review patient information before issuing a controlled substance prescription order for more than a 3-day supply. A primary goal of PDMP use mandates is to reduce avoidable prescribing and mitigate opioid-related mortality and morbidity. Current literature has not evaluated whether there is a time point post-policy implementation at which the trend in opioid dispensing changes, reflecting normalization or maintenance of opioid prescribing. OBJECTIVE: We sought to evaluate the impact of the PDMP use mandate on trends in opioid prescriptions dispensed and to test the hypothesis that a change or inflection in opioid prescriptions dispensed occurred post-mandate implementation. METHODS: An Interrupted Time Series Analysis (ITSA) design was used to examine whether the level (immediate impact) and trend in opioid prescribing changed significantly after the PDMP use mandate was implemented. We used a novel Change Point Analysis (CPA) approach to test the hypothesis, i.e., to identify if and when a change or inflection in the opioid dispensing trend occurred after implementation of the PDMP use mandate. RESULTS: ITSA model results showed a significant drop in opioid prescriptions dispensed (p < 0.05) immediately after mandate implementation (i.e., April 2017). The CPA identified a significant inflection in opioid prescriptions dispensed starting January 2019 (21 months post-policy implementation). An ITSA model using the inflection point as an interruption showed that the trend in opioid prescriptions dispensed became flatter after the inflection point, suggesting normalization.
CONCLUSION: Using a novel CPA approach, the findings showed an inflection in the trend in opioid prescriptions dispensed after the PDMP use mandate was implemented, implying that most avoidable prescribing had likely been curtailed. The results suggest that the patient information presumably accessed from the WI PDMP interface was useful in helping prescribers make informed clinical decisions about opioid prescribing.
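The ITSA design described above fits a baseline level and trend plus a level change and trend change at the interruption. A minimal segmented-regression sketch on synthetic monthly counts (not the Wisconsin PDMP data):

```python
import numpy as np

# Segmented-regression sketch of an ITSA: level + trend before the policy,
# plus a level change and a trend change afterward. Synthetic data only.
months = np.arange(36)
policy_month = 18
post = (months >= policy_month).astype(float)
time_since = post * (months - policy_month)

# True process: baseline 100 scripts/month rising 0.5/month,
# immediate drop of 10 at the policy, then slope change of -0.8/month.
y = 100 + 0.5 * months - 10 * post - 0.8 * time_since

# Design matrix: intercept, pre-trend, level change, trend change.
X = np.column_stack([np.ones_like(months, dtype=float), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, trend_change = beta[2], beta[3]
print(round(level_change, 2), round(trend_change, 2))  # -10.0 -0.8
```

With noiseless data the least-squares fit recovers the level change (immediate impact) and trend change exactly; the real analysis adds noise, seasonality controls, and inference on the coefficients.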
ABSTRACT
In almost every state, courts can jail those who fail to pay fines, fees, and other court debts, even those resulting from traffic or other non-criminal violations. While debtors' prisons for private debts have been widely illegal in the United States for more than 150 years, the effect of courts aggressively pursuing unpaid fines and fees is that many Americans are nevertheless jailed for unpaid debts. However, heterogeneous, incomplete, and siloed records have made it difficult to understand the scope of debt imprisonment practices. We culled data from millions of records collected through hundreds of public records requests to county jails to produce a first-of-its-kind dataset documenting imprisonment for court debts in three U.S. states. Using these data, we present novel order-of-magnitude estimates of the prevalence of debt imprisonment, finding that between 2005 and 2018, around 38,000 residents of Texas and around 8,000 residents of Wisconsin were jailed each year for failure to pay (FTP), with the median individual spending one day in jail in both Texas and Wisconsin. Drawing on additional data on FTP warrants from Oklahoma, we also find that unpaid fines and fees leading to debt imprisonment most commonly come from traffic offenses, for which a typical Oklahoma court debtor owes around $250, or $500 if a warrant was issued for their arrest.
Subject(s)
Jails , Prisons , Humans , Fees and Charges , Law Enforcement , Memory Disorders
ABSTRACT
Executive functions (EFs) may be associated with the emergence of non-suicidal self-injury (NSSI) through their role as behavior controllers. EFs include three core cognitive processes: inhibitory control, working memory, and cognitive flexibility (i.e., the ability to selectively alter cognitive strategies to generate appropriate behavior in a changing environment). This study aimed to systematically explore the three core EFs in depressed adolescents with NSSI. The data were obtained from the baseline data of the Chinese Adolescent Depression Cohort. The adolescents underwent cognitive assessments to yield domain-specific EF scores using the Digit Span Backward test (DSB), the Stroop Color-Word Interference Test, color-word condition (Stroop-CW), and the Wisconsin Card Sorting Test (WCST). Significant differences in WCST scores were found between the NSSI group and the non-NSSI group. NSSI frequency was moderately positively correlated with total errors and negatively correlated with the number of categories completed. The number of categories completed in the "≥200" NSSI frequency group was significantly lower than that in the "≤10" NSSI group. The current findings suggest that depressed adolescents who had engaged in NSSI have poorer cognitive flexibility than adolescents without NSSI, and that as the frequency of NSSI increases, cognitive flexibility may worsen. These results provide evidence of a connection between executive dysfunction and NSSI in depressed adolescents.
ABSTRACT
Respiratory disease is an ongoing challenge for calves in the dairy sector, with a relatively high prevalence and a substantial impact on welfare and economics. Scoring protocols for detecting respiratory disease must be easy to implement, consistent between observers, and fast to use in daily management. This study was conducted on one Danish dairy farm from September 2020 through January 2021. The study included 126 heifer calves enrolled at 17 to 24 d of age. All calves were observed every second day for a period of 46 d. At each visit, all calves were scored with a new Visual Analog Scale (VAS) and the Wisconsin Calf Health Scoring Chart (WCHSC). We calculated agreement between the 2 scoring systems based on the conditional probability of scoring higher or lower than a cut-off in the VAS compared with a specified cut-off in the WCHSC, used as the reference test. A generalized mixed-effects regression model was used to estimate the prevalence of respiratory disease and the overall agreement between the 2 scoring systems. The overall agreement between the VAS and WCHSC was 89.6%. The second part of the study assessed inter-observer reliability between 2 experienced observers and between an experienced observer and veterinary students. Inter-observer reliability, calculated by the intra-class correlation coefficient, was 0.58 between experienced observers and 0.34 between an experienced observer and veterinary students, indicating moderate to poor reliability between observers. It was possible to use the VAS as an alternative clinical scoring method that primarily focuses on the general condition of the individual calf rather than on specific categories of clinical signs. Our study setup lacked a comparison with other diagnostic tools, i.e., thoracic ultrasound, to confirm the findings, which should be considered in future studies exploring the VAS as a screening tool for detecting respiratory disease in dairy calves.
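The cut-off-based agreement described above can be sketched as the fraction of calves that both scoring systems classify the same way once each score is dichotomized at its own cut-off. The scores and cut-offs below are hypothetical, not the study's:

```python
def cutoff_agreement(scores_a, scores_b, cut_a, cut_b):
    """Fraction of calves classified identically by two scoring systems,
    each dichotomized at its own cut-off (score >= cut means 'case').
    A sketch of the agreement calculation described above; the actual
    VAS/WCHSC cut-offs are assumptions here."""
    same = sum((a >= cut_a) == (b >= cut_b)
               for a, b in zip(scores_a, scores_b))
    return same / len(scores_a)

vas = [10, 55, 80, 20, 65, 5]    # hypothetical VAS scores (0-100 scale)
wchsc = [1, 5, 6, 2, 3, 0]       # hypothetical WCHSC totals
print(cutoff_agreement(vas, wchsc, cut_a=50, cut_b=5))
```

In the study this raw agreement is then modeled with a generalized mixed-effects regression to account for repeated observations of the same calves.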
ABSTRACT
BACKGROUND: The severity criteria for eating disorders (EDs) proposed in the DSM-5 have been established without sufficient empirical support. Drive for thinness (DT) and duration of illness have been proposed as two alternative severity measures; however, their empirical evidence is also limited. To date, no research has assessed the validity of current eating disorder (ED) severity criteria with regard to cognitive flexibility. Cognitive flexibility is often impaired in EDs, making it a possible severity marker. The current study assessed for the first time (1) whether the severity indices for EDs proposed in the DSM-5 were associated with deficits in cognitive flexibility and (2) whether drive for thinness and illness duration acted as alternative, more meaningful severity indices for deficits in cognitive flexibility. METHODS: Participants were 161 patients diagnosed with an ED, who were categorized according to DSM-5 severity categories, DT, and duration of illness. The discriminative capacity of each classification was assessed for cognitive flexibility measured by the Wisconsin Card Sorting Test (WCST). RESULTS: The findings for the DSM-5 classification were as follows: (a) in the anorexia nervosa (AN) group, patients with moderate severity showed better WCST scores than patients with mild and severe/extreme severity, and also showed a lower percentage of cognitive flexibility deficits than the other two severity categories; (b) in the binge spectrum disorders (BSD) group, patients with mild severity showed a higher percentage of cognitive flexibility deficits than the moderate and severe/extreme categories. When assessing the alternative severity index of DT, no differences in cognitive flexibility were found in any of the groups.
Regarding illness duration, in the AN group the task performance of patients with longer illness duration was worse than that of the short-duration group and, in the BSD group, patients with longer duration also showed more deficits in cognitive flexibility than patients with shorter duration of illness. CONCLUSIONS: Our findings point out the limitations of the DSM-5 severity criteria in categorizing cognitive flexibility in EDs and support illness duration as an alternative severity approach for EDs.
ABSTRACT
We aimed to evaluate the impact of surgical treatment for urinary stones on perioperative health-related quality of life (HRQOL) using the Japanese Wisconsin Stone Quality of Life questionnaire (J-WISQOL), an HRQOL measure designed for patients with urinary stones. This study prospectively enrolled 123 patients with urinary stones who visited three academic hospitals for stone treatment. The participants completed the J-WISQOL within 4 weeks before and after the urinary stone treatment. Treatments included shock wave lithotripsy (SWL), ureteroscopy lithotripsy, and endoscopic combined intrarenal surgery. J-WISQOL was assessed for age, stone size and location, type of treatment, stone-free status, postoperative ureteral stent placement, hospital stay, and complications in all patients. Patients with stones in the ureter had significantly greater social impact D1 and disease impact D3 than those with stones in the kidney. In a comparison of pre- and postoperative J-WISQOL, patients without postoperative ureteral stent placement scored significantly higher on social impact D1 and disease impact D3. Patients with shorter hospital stays had significantly higher social impact D1 and disease impact D3 (p < 0.001) than those with longer hospital stays. SWL significantly improved the total score, social impact D1, and disease impact D3 compared with other treatments. Perioperative HRQOL in patients with urinary stones is particularly affected by the type of treatment, ureteral stent placement, and hospital stay, which should be considered in surgical selection and patient decision-making.
Subject(s)
Quality of Life , Urinary Calculi , Urolithiasis , Humans , Urinary Calculi/surgery , Urolithiasis/surgery , Surveys and Questionnaires
ABSTRACT
This study investigates the impact of racial residential segregation on COVID-19 mortality during the first year of the US epidemic. Data come from the Centers for Disease Control and Prevention (CDC) and from the Robert Wood Johnson Foundation's and the University of Wisconsin's joint county health rankings project. The sample includes records for 8,670,781 individuals in 1,488 counties. Using hierarchical logistic regression models, we regressed COVID-19 deaths on individual- and county-level predictors. We found that as racial residential segregation increased, mortality rates increased. Controlling for segregation, Blacks and Asians had a greater risk of mortality, while Hispanics and other racial groups had a lower risk of mortality, compared to Whites. The impact of racial residential segregation on COVID-19 mortality did not vary by racial group.
ABSTRACT
Research on neighborhoods and health typically measures neighborhood context at a single point in time. However, neighborhood exposures accumulate over the life course, influenced by both residential mobility and neighborhood change, with potential implications for estimating the impact of neighborhoods on health. Commercial databases offer fine-grained longitudinal residential address data that can enrich life course spatial epidemiology research, and validated methods for reconstructing residential histories from these databases are needed. Our study draws on unique data from a geographically diverse, population-based representative sample of adult Wisconsin residents and LexisNexis® Accurint®, a commercial personal-profile database, to develop a systematic and reliable methodology for constructing individual residential histories. Our analysis demonstrates that creating residential histories across diverse geographical contexts is feasible, and highlights differences in the information obtained from available residential histories by age, education, race/ethnicity, and rural/urban/suburban residency. Researchers should consider potential address data availability and information biases favoring socioeconomically advantaged individuals and their implications for studying health inequalities. Despite these limitations, LexisNexis data can generate varied residential exposure metrics and be linked to contextual data to enrich research into the contextual determinants of health at varied geographic scales.
ABSTRACT
Introduction: Metabolomics technology facilitates studying associations between small molecules and disease processes. Correlating metabolites in cerebrospinal fluid (CSF) with Alzheimer's disease (AD) CSF biomarkers may elucidate additional changes that are associated with early AD pathology and enhance our knowledge of the disease. Methods: The relative abundance of untargeted metabolites was assessed in 161 individuals from the Wisconsin Registry for Alzheimer's Prevention. A metabolome-wide association study (MWAS) was conducted between 269 CSF metabolites and protein biomarkers reflecting brain amyloidosis, tau pathology, neuronal and synaptic degeneration, and astrocyte or microglial activation and neuroinflammation. Linear mixed-effects regression analyses were performed with random intercepts for sample relatedness and repeated measurements and fixed effects for age, sex, and years of education. Metabolome-wide significance was determined by a false discovery rate threshold of 0.05. The significant metabolites were replicated in 154 independent individuals from the Wisconsin Alzheimer's Disease Research Center. Mendelian randomization was performed using genome-wide significant single nucleotide polymorphisms from a CSF metabolite genome-wide association study. Results: MWAS results showed several significantly associated metabolites for all the biomarkers except Aβ42/40 and IL-6. Genetic variants associated with metabolites and Mendelian randomization analysis provided evidence for a causal association of metabolites with soluble triggering receptor expressed on myeloid cells 2 (sTREM2), amyloid β (Aβ40), α-synuclein, total tau, phosphorylated tau, and neurogranin; for example, palmitoyl sphingomyelin (d18:1/16:0) for sTREM2, and erythritol for Aβ40 and α-synuclein. Discussion: This study provides evidence that CSF metabolites are associated with AD-related pathology, and many of these associations may be causal.
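Metabolome-wide significance at an FDR threshold of 0.05 is typically implemented with the Benjamini-Hochberg step-up procedure (whether the authors used exactly this method is an assumption); a minimal sketch with illustrative p-values:

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at FDR level q using the
    Benjamini-Hochberg step-up procedure: sort p-values ascending and
    find the largest rank k with p_(k) <= (k / m) * q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])

# Illustrative p-values, not the study's metabolite results.
pvals = [0.010, 0.200, 0.015, 0.040, 0.020]
print(benjamini_hochberg(pvals))  # indices of 'significant' tests
```

Note the step-up behavior: a borderline p-value (0.040 here) can be rejected because enough smaller p-values precede it in the sorted list.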
ABSTRACT
BACKGROUND: No-show appointments pose a significant challenge for healthcare providers, particularly in rural areas. In this study, we developed an evidence-based predictive model for patient no-shows at the Marshfield Clinic Health System (MCHS) rural provider network in Wisconsin, with the aim of improving overbooking approaches in outpatient settings and reducing the negative impact of no-shows in our underserved rural patient populations. METHODS: Retrospective data (2021) were obtained from the MCHS scheduling system, comprising 1,260,083 appointments from 263,464 patients together with their demographic, appointment, and insurance information. We used descriptive statistics to associate variables with show or no-show status; logistic regression and random forests were utilized, and eXtreme Gradient Boosting (XGBoost) was chosen to develop the final model, determine cut-offs, and evaluate performance. We also used the model to predict future no-shows for appointments from 2022 onwards. RESULTS: The no-show rate was 6.0% (5.98% before rounding) in both the train and test datasets. Appointments scheduled further in advance (>60 days of lead time) had a higher no-show rate (7.7%). Appointments for patients aged 21-30 had the highest no-show rate (11.8%), and those for patients over 60 years of age had the lowest (2.9%). The model predictions yielded an Area Under the Curve (AUC) of 0.84 for the train set and 0.83 for the test set. With the cut-off set to 0.4, the sensitivity was 0.71 and the positive predictive value was 0.18. Model results were used to recommend 1 overbook for every 6 at-risk appointments per provider per day. CONCLUSIONS: Our findings demonstrate the feasibility of developing a predictive model based on administrative data from a predominantly rural healthcare system. Our new model distinguished between show and no-show appointments with high performance, and 1 overbook was advised for every 6 at-risk appointments.
This data-driven approach to mitigating the impact of no-shows increases treatment availability in rural areas by overbooking appointment slots on days with an elevated risk of no-shows.
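The operating point reported above (cut-off 0.4 with a given sensitivity and PPV) and the 1-per-6 overbooking rule can be sketched as follows; the predicted probabilities and labels below are illustrative, not MCHS data:

```python
def threshold_metrics(probs, labels, cutoff=0.4):
    """Sensitivity and positive predictive value when flagging appointments
    with predicted no-show probability >= cutoff (label 1 = no-show).
    The 0.4 cut-off matches the one reported above; the data do not."""
    tp = sum(p >= cutoff and y == 1 for p, y in zip(probs, labels))
    fn = sum(p < cutoff and y == 1 for p, y in zip(probs, labels))
    fp = sum(p >= cutoff and y == 0 for p, y in zip(probs, labels))
    return tp / (tp + fn), tp / (tp + fp)  # (sensitivity, PPV)

def overbooks_for(at_risk_count, per_overbook=6):
    """One overbooked slot per 6 flagged at-risk appointments, per the
    study's recommendation."""
    return at_risk_count // per_overbook

probs = [0.9, 0.5, 0.3, 0.1, 0.45, 0.2]   # hypothetical model outputs
labels = [1, 0, 1, 0, 1, 0]               # hypothetical observed outcomes
print(threshold_metrics(probs, labels), overbooks_for(13))
```

Lowering the cutoff raises sensitivity (more true no-shows flagged) at the cost of PPV, which is why the study's PPV of 0.18 is low at a 6% base rate.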
Subject(s)
Ambulatory Care Facilities , Outpatients , Humans , Middle Aged , Aged , Retrospective Studies , Health Personnel , Delivery of Health Care
ABSTRACT
Bretziella fagacearum (Bretz) Z.W. deBeer, Marinc., T.A. Duong, & M.J. Wingf., the ascomycete fungus causing oak wilt disease, is considered a virulent threat to North American oak forests, but the influence of the physical environment on this pathosystem remains unclear, particularly at the forest scale. This study explored the influence of terrain and soil factors on B. fagacearum infections, applying discrete and continuous spatial models to investigate the question: besides proximity to other infections, which environmental factors influenced B. fagacearum incidence? Locations of infections were recorded from 586 confirmed B. fagacearum sites, identified from 2004 through 2021 in a 76 km2 area of deep, sandy glacial outwash in Chequamegon-Nicolet National Forest, northern Wisconsin. Public datasets derived from remote sensing were incorporated as covariates, describing terrain elevation (USGS 10-m DEM), soil physical and chemical properties (POLARIS), and forest composition (WiscLand2). Spatial models included Generalized Additive Models (GAMs) and Neyman-Scott Cluster Process Models (CPMs). Results indicated that spatial dependence and the distribution of oak forests were the most important drivers of B. fagacearum distribution in this area, with minor additional influence from elevation, hillshade, and drainage patterns. Comparison between modeling approaches indicated that, at this scale and in this area, the most accurate models were those that included host distribution, spatial dependence, and quantitative terrain and soil descriptions. However, a close approximation could be attained using nonlinear models (GAMs) that incorporated only host distribution and spatial dependence.
ABSTRACT
Wastewater surveillance has been used to assist public health authorities in tracking local transmission of SARS-CoV-2. The usefulness of wastewater surveillance to track community spread of other respiratory pathogens, including influenza virus and respiratory syncytial virus (RSV), is less clear. During the 2022-23 respiratory diseases season, concentrations of influenza A virus and RSV in wastewater samples in three major Wisconsin cities were compared with emergency department (ED) visits associated with these pathogens. In all three cities, higher concentrations of influenza A virus and RSV in wastewater were associated with higher numbers of associated ED visits (Kendall's tau range = 0.50-0.63 for influenza-associated illness and 0.30-0.49 for RSV-associated illness). Detections of both influenza A virus and RSV in wastewater often preceded a rise in associated ED visits for each pathogen, and virus material remained detectable in wastewater for up to 3 months after pathogen-specific ED visits declined. These results demonstrate that wastewater surveillance has the potential to complement conventional methods of influenza and RSV surveillance, detecting viral signals earlier and for a longer duration than do clinical data. Continued use of wastewater surveillance as a supplement to established surveillance systems such as ED visits might improve local understanding and response to seasonal respiratory virus outbreaks.
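Kendall's tau, used above to relate wastewater virus concentrations to ED visits, counts concordant versus discordant pairs of observations. A plain-Python sketch of the tie-free (tau-a) variant on hypothetical weekly data:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs.
    Ignores ties; statistical packages typically report tau-b, which
    corrects for ties, so treat this as an illustrative sketch."""
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

virus_conc = [1.0, 3.0, 2.0, 5.0, 4.0]   # hypothetical weekly wastewater levels
ed_visits = [10, 25, 30, 50, 45]          # hypothetical weekly ED visits
print(round(kendall_tau(virus_conc, ed_visits), 2))  # 0.8
```

Because tau depends only on rank order, it is robust to the skewed, spiky distributions typical of wastewater concentration data.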
Subject(s)
COVID-19 , Influenza A Virus , Influenza, Human , Respiratory Syncytial Virus Infections , Respiratory Syncytial Virus, Human , Humans , Wastewater , Influenza, Human/epidemiology , Incidence , Wastewater-Based Epidemiological Monitoring , Wisconsin/epidemiology , SARS-CoV-2 , Respiratory Syncytial Virus Infections/epidemiology , Emergency Service, Hospital
ABSTRACT
OBJECTIVE: To document physicians' beliefs about abortion safety and the associations between these beliefs and physician support for, referral for, and participation in abortion care. METHODS: In a 2019 survey at the University of Wisconsin School of Medicine and Public Health, we assessed physicians' abortion attitudes, beliefs, and practices (N = 893). We conducted bivariate analyses followed by logistic regression to document relationships between physician beliefs about abortion safety and their support for, referral to, and participation in abortion care. RESULTS: Four in five physicians (78%, n = 690) believed that abortion is very or extremely safe. Medical specialty (Obstetrics-Gynecology vs. other; adjusted odds ratio [aOR] = 10.58, 95% CI: 1.41-79.56), educational exposure to abortion (aOR = 1.43, 95% CI: 1.02-2.01), and religiosity (aOR = 0.59, 95% CI: 0.41-0.85) were associated with physicians' beliefs about the safety of abortion. Providers who believed that abortion was very/extremely safe were more likely to support medication (aOR = 2.99, 95% CI: 1.93-4.65) and procedural abortion (aOR = 3.56, 95% CI: 2.31-5.50) and refer patients for abortion care (aOR = 3.14, 95% CI: 1.90-5.01). CONCLUSION: Although abortions are associated with extremely few adverse events, a sizable portion of surveyed physicians had incorrect perceptions of the safety of abortion. These beliefs were associated with decreased support and referrals for abortion care. Educational exposure to abortion is associated with more accurate assessments of abortion safety, underscoring the importance of training in this area. Considering the current abortion policy landscape, it is imperative for physicians to hold accurate knowledge about abortion so they can provide comprehensive counseling and, when indicated, referrals for safe and legal care.
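The aORs above come from multivariable logistic regression; for intuition, a crude (unadjusted) odds ratio with a Woolf log-scale confidence interval can be computed directly from a 2x2 table. The counts below are hypothetical, not the survey's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table:
       a = exposed with outcome,   b = exposed without outcome,
       c = unexposed with outcome, d = unexposed without outcome.
    Uses the Woolf method: SE of log(OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    An adjusted OR from logistic regression additionally conditions on
    covariates; this sketch shows only the unadjusted calculation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: support for abortion care among physicians who do /
# do not believe abortion is very or extremely safe.
or_, lo, hi = odds_ratio_ci(a=500, b=190, c=100, d=103)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI excluding 1.0, as here, indicates a statistically significant association at the 5% level.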
ABSTRACT
To make informed COVID-19 related decisions, individuals need information about their personal risks and how those risks may vary with specific demographic and health characteristics. The Fight COVID Milwaukee web-based risk assessment tool allows for assessment of COVID-19 mortality risk as a function of personal and neighborhood characteristics. The purpose of this study is to explore public understanding of this risk assessment tool and risk perception through community focus groups. Individuals were recruited from the general adult population in Milwaukee County, Wisconsin, USA, to participate in nine online focus groups where the risk assessment tool was presented for feedback. Three main themes were identified in the focus groups regarding the web-based risk assessment tool: some challenges in accessibility, variable ease of understanding, and personal usefulness but uncertain value for others. This paper explores how members of the community interpret individual risk assessments and life expectancy estimations, and how these vary with age, gender, race/ethnicity, socioeconomic status, and pre-existing comorbidities.
Subject(s)
COVID-19 , Adult , Humans , COVID-19/epidemiology , Focus Groups , Ethnicity , Life Expectancy , Risk Assessment
ABSTRACT
Heat abatement (e.g., soakers, fans) effectively reduces the negative physiological and production effects of heat stress, but no previous studies have documented effective interventions for the reduced lying times observed in response to hot weather. Although likely adaptive for heat dissipation, the reduction in motivated lying behavior may be an animal welfare concern. We evaluated the effect of air speed from fans with variable frequency drives on the heat stress responses of cows in a naturally ventilated freestall barn. Eight groups of lactating Holsteins (16 cows/group) were exposed to 3 treatments in a replicated crossover design: control (fans off, 0.4 ± 0.2 m/s, measured 0.5 m above the stall surface to represent cow resting height) vs. 60% (1.7 ± 0.5 m/s; ≥ 1 m/s in all stalls) and 100% (2.4 ± 0.8 m/s) fan power. Each treatment was applied for 3 d of acclimation and 4 d of data collection. The effects of treatment on daily maximum vaginal temperature (VT) and lying time (LT; both measured with data loggers), respiration rate (RR; recorded from video), unshaved scapular skin temperature (ST), milk yield (MY), and dry matter intake (DMI) were analyzed using linear mixed models. All models included the fixed effect of treatment and a repeated term for treatment day within group of cows, with group as the subject. The models for LT, VT, and RR also included a fixed effect for same-day Temperature Humidity Index (THI; recorded in the pens with data loggers) and the THI × treatment interaction. The models for DMI and MY, using data from the latter 3 d of each treatment period, also included a fixed effect for the previous day's THI and the -1 d THI × treatment interaction. Lying time differed among treatments (100% vs. 60% fan power vs. control: 14.2 vs. 13.9 vs. 13.2 h/d, respectively, SEM = 0.15 h/d), but both fan treatments prevented the reduction in LT observed in the control treatment as THI increased. 
Relative to the control, both fan treatments effectively reduced ST, RR, and VT and increased DMI and MY. In the control, average values were elevated for both RR (68.7 ± 1.5 breaths/min, mean ± SEM, greater than a common benchmark of 60 breaths/min) and VT (39.3 ± 0.05°C) but remained in the normal range in both fan treatments (54.2 vs. 50.7 breaths/min in the 60% vs. 100% fan power treatments; 39.0°C in both fan treatments). Both fan treatments resulted in greater overall MY (42.6 vs. 43.0 ± 0.4 kg/d in the 60% vs. 100% fan power treatments) relative to the control (41.0 kg/d) and similarly avoided the reduction in MY when -1 d THI increased. Compared with natural ventilation alone, fans delivering air speeds of at least 1 m/s at cow resting height were effective not only for reducing thermoregulatory responses, but also for maintaining lying time, DMI, and milk yield in heat stress conditions. This is the first study to demonstrate an intervention to improve animal welfare by maintaining lying times during periods of heat stress.
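The study indexes heat load with the Temperature Humidity Index (THI) recorded by in-pen data loggers, but does not state which THI equation was applied; one formula widely used in dairy heat-stress research (an assumption here) is:

```python
def thi(temp_c: float, rh_pct: float) -> float:
    """Temperature-Humidity Index via a formula commonly used in dairy
    heat-stress research (the specific equation behind the study's
    logger readings is not stated, so this choice is an assumption):
        THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26)
    with T in degrees Celsius and RH in percent relative humidity."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_pct) * (1.8 * temp_c - 26)

# At 25 degrees C and 50% relative humidity, THI is about 71.8, near
# thresholds often cited for the onset of mild heat stress in lactating cows.
print(round(thi(25.0, 50.0), 1))  # 71.8
```

Because the formula weights dry-bulb temperature more heavily at low humidity, the same THI can arise from hot-dry and warm-humid conditions, which is why air speed at the cow matters independently of THI.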