ABSTRACT
The uncertainty and unknowability of emerging infectious diseases have caused many major public health and security incidents in recent years. As a newly described tick-borne pathogen, Dabieshan tick virus (DBTV) necessitates systematic epidemiological and spatial distribution analysis. In this study, tick samples collected from Liaoning Province were used to evaluate the distribution of DBTV in ticks. Outbreak points of DBTV and occurrence records of the vector Haemaphysalis longicornis in China were collected and used to build a prediction model combining an ecological niche model with environmental factors. We found that H. longicornis and DBTV are widely distributed in Liaoning Province. The risk analysis showed that the eastern provinces of China are at high risk of DBTV, and that this risk is strongly influenced by elevation, land cover, and meteorological factors. The geographical area predicted to be at risk is considerably larger than the areas where positives have been detected, indicating that etiological surveillance remains seriously insufficient. This study provides molecular and epidemiological evidence on the etiological ecology of DBTV. The predicted high-risk areas highlight the current gaps in monitoring and risk evaluation and the need for future surveillance and control work.
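For illustration, a minimal Python sketch of the kind of niche-model-based risk prediction described above, fitting a presence/background classifier on environmental covariates; the study's actual algorithm and predictors are not specified here, so all data, covariates and parameter values below are placeholders (a dedicated niche model such as MaxEnt would typically be used instead of plain logistic regression):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical grid-cell table: each row is a location, columns are environmental
# covariates (e.g., elevation, land cover, temperature, precipitation).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # stand-in environmental predictors
y = rng.integers(0, 2, size=1000)       # 1 = DBTV/vector occurrence, 0 = background point

model = LogisticRegression(max_iter=1000).fit(X, y)
risk_surface = model.predict_proba(X)[:, 1]   # relative risk per grid cell
print("Training AUC:", round(roc_auc_score(y, risk_surface), 3))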
Subject(s)
Tick-Borne Diseases, Ticks, Animals, Humans, Haemaphysalis longicornis, Tick-Borne Diseases/epidemiology, China/epidemiology
ABSTRACT
OBJECTIVE: Arteriovenous fistula (AVF) for hemodialysis access is traditionally considered superior to grafts owing to infection resistance and purported improved patency. However, challenges to AVF maturation and limited patient survival may reduce the benefits of AVF. The objective of this study was to identify factors associated with risk of an AVF requiring revision before maturation and/or mortality within 2 years of creation. METHODS: We performed a retrospective review of 250 AVFs created between May 2017 and November 2020 at a single institution. Maturation was defined as the date the surgeon deemed the AVF ready for use or the patient successfully used the AVF for dialysis. The Risk Analysis Index was used to calculate frailty. The primary outcome was a composite of endovascular/surgical revision to promote maturation and/or mortality within 2 years of AVF creation (REVDEAD); it was considered met if the patient required a revision to promote maturation, died within 2 years of AVF creation, or both. Patients who did not meet the primary outcome are referred to as NOREVDEAD. RESULTS: Survival at 2 years after AVF creation was 82%, and 54 patients (22%) underwent AVF revision. Of those, 31 patients (59%) progressed to AVF maturation. Of the 250 AVFs, 91 (36%) met the primary outcome (REVDEAD) and 159 (64%) did not (NOREVDEAD). There was no difference between the REVDEAD and NOREVDEAD groups in age (P = .18), sex (P = .75), White race (P = .97), Hispanic ethnicity (P = .62), obesity (P = .76), coronary artery disease (P = .07), congestive heart failure (P = .29), diabetes mellitus (P = .78), chronic obstructive pulmonary disease (P = .10), dialysis status (P = .63), hypertension (P = .32), peripheral arterial disease (P = .34), or dysrhythmia (P = .13). There was no difference between the groups in forearm vs upper arm location of the AVF (P = .42) or in vein diameter (P = .58). Forearm access, as opposed to upper arm AVF creation, was associated with a higher rate of revision before maturation (P = .05). More patients in REVDEAD were frail or very frail (60% vs 48%, P = .05). Of the AVFs that matured, maturation took longer in REVDEAD, at 110.0 ± 9.1 days vs 78.8 ± 5.6 days (mean ± standard deviation) (P = .003). Adjusted for vein diameter and forearm vs upper arm location, frailty increased the odds of REVDEAD 1.9-fold (95% confidence interval, 1.1-3.3). CONCLUSIONS: Frail patients who underwent AVF creation were significantly more likely to die within 2 years, with no significant association between frailty and the need for revisions to promote maturation. Forearm AVFs were more likely to require revisions; in frail patients, who have a high likelihood of 2-year mortality, a graft may be more appropriate than an AVF. If an AVF is being considered in a frail patient, upper arm AVFs should be prioritized over forearm AVFs.
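As a hedged illustration of how an adjusted odds ratio such as the one reported above can be obtained, the sketch below fits a logistic regression with the statsmodels package; the dataset and variable names (revdead, frail, vein_diameter, forearm) are hypothetical and do not reproduce the study's data:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table: composite outcome REVDEAD (1/0), frailty (1 = frail/very frail),
# vein diameter (mm), and access site (1 = forearm, 0 = upper arm).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "revdead": rng.integers(0, 2, 250),
    "frail": rng.integers(0, 2, 250),
    "vein_diameter": rng.normal(3.0, 0.6, 250),
    "forearm": rng.integers(0, 2, 250),
})
X = sm.add_constant(df[["frail", "vein_diameter", "forearm"]])
fit = sm.Logit(df["revdead"], X).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals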
Subject(s)
Arteriovenous Shunt, Surgical, Frailty, Kidney Failure, Chronic, Humans, Arteriovenous Shunt, Surgical/adverse effects, Vascular Patency, Treatment Outcome, Veins/surgery, Renal Dialysis, Retrospective Studies
ABSTRACT
OBJECTIVE: To evaluate the impact of age on oncological outcomes in a large contemporary cohort of patients with non-muscle-invasive bladder cancer (NMIBC) treated with adequate Bacillus Calmette-Guérin (BCG). PATIENTS AND METHODS: We performed an Institutional Review Board-approved retrospective study analysing patients with NMIBC treated with adequate BCG at our institution from 2000 to 2020. Adequate BCG was defined, as per United States Food and Drug Administration (FDA) guidelines, as receipt of at least five of six induction BCG instillations with a minimum of two additional doses (of planned maintenance or of re-induction) within a span of 6 months. The study's primary outcome was to determine whether age >70 years was associated with progression to muscle-invasive bladder cancer (MIBC) or distant metastasis. The cumulative incidence method and competing-risk regression analyses were used to investigate the association of advanced age (>70 years) with progression, high-grade (HG) recurrence and cancer-specific mortality (CSM). RESULTS: Overall, data from 632 patients were analysed: 355 patients (56.2%) were aged ≤70 years and 277 (43.8%) were >70 years. Age >70 years did not adversely affect either cumulative incidence of progression or HG recurrence (P = 0.067 and P = 0.644, respectively). On competing-risk regression analyses, age >70 years did not emerge as an independent predictor of progression or HG recurrence (subdistribution hazard ratio [SHR] 1.57, 95% confidence interval [CI] 0.87-2.81, P = 0.134; and SHR 1.05, 95% CI 0.77-1.44, P = 0.749). Not unexpectedly, patients in the older group did have higher overall mortality (P < 0.001) but not higher CSM (P = 0.057). CONCLUSION: Age >70 years was not associated with adverse oncological outcomes in a large contemporary cohort of patients receiving adequate intravesical BCG for NMIBC. BCG should not be withheld from older patients seeking bladder-sparing options.
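The cumulative incidence method referred to above can be reproduced with the Aalen-Johansen estimator; the sketch below assumes the lifelines package and uses an entirely hypothetical toy dataset in which death from other causes competes with progression:

import pandas as pd
from lifelines import AalenJohansenFitter

# Hypothetical follow-up data: time (months) and event code
# (0 = censored, 1 = progression/metastasis, 2 = death from other causes).
df = pd.DataFrame({
    "months": [14, 30, 9, 48, 22, 60, 35, 18],
    "event":  [1,  0,  2, 0,  1,  0,  2,  0],
})
ajf = AalenJohansenFitter()
ajf.fit(df["months"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail(1))   # cumulative incidence of progression at the last time point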
Subject(s)
Non-Muscle Invasive Bladder Neoplasms, Urinary Bladder Neoplasms, Humans, BCG Vaccine/therapeutic use, Retrospective Studies, Administration, Intravesical, Urinary Bladder Neoplasms/pathology, Adjuvants, Immunologic/therapeutic use, Neoplasm Invasiveness, Neoplasm Recurrence, Local/drug therapy, Neoplasm Recurrence, Local/pathology
ABSTRACT
PURPOSE: Frailty is an independent risk factor for adverse postoperative outcomes following intracranial meningioma resection (IMR). The role of the Risk Analysis Index (RAI) in predicting postoperative outcomes following IMR is nascent but may inform preoperative patient selection and surgical planning. METHODS: IMR patients from the Nationwide Inpatient Sample were identified using diagnostic and procedural codes (2019-2020). The relationship between preoperative RAI-measured frailty and primary outcomes (non-home discharge (NHD), in-hospital mortality) and secondary outcomes (extended length of stay (eLOS), complication rates) was assessed via multivariate analyses. The discriminatory accuracy of the RAI for primary outcomes was measured in area under the receiver operating characteristic (AUROC) curve analysis. RESULTS: A total of 23,230 IMR patients (mean age = 59) were identified, with frailty statuses stratified by RAI score: 0-20 "robust" (R)(N = 10,665, 45.9%), 21-30 "normal" (N)(N = 8,895, 38.3%), 31-40 "frail" (F)(N = 2,605, 11.2%), and 41+ "very frail" (VF)(N = 1,065, 4.6%). Rates of NHD (R 11.5%, N 29.7%, F 60.8%, VF 61.5%), in-hospital mortality (R 0.5%, N 1.8%, F 3.8%, VF 7.0%), eLOS (R 13.2%, N 21.5%, F 40.9%, VF 46.0%), and complications (R 7.5%, N 11.6%, F 15.7%, VF 16.0%) significantly increased with increasing frailty thresholds (p < 0.001). The RAI demonstrated strong discrimination for NHD (C-statistic: 0.755) and in-hospital mortality (C-statistic: 0.754) in AUROC curve analysis. CONCLUSION: Increasing RAI-measured frailty is significantly associated with increased complication rates, eLOS, NHD, and in-hospital mortality following IMR. The RAI demonstrates strong discrimination for predicting NHD and in-hospital mortality following IMR, and may aid in preoperative risk stratification.
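A minimal pandas sketch of the frailty stratification used above, binning a hypothetical RAI score with the same thresholds and tabulating non-home discharge rates per stratum; the data are simulated for illustration only:

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({"rai": rng.integers(0, 60, 2000)})
df["nhd"] = rng.binomial(1, 1 / (1 + np.exp(-(df["rai"] - 32) / 5)))  # toy outcome model

bins = [0, 20, 30, 40, np.inf]
labels = ["robust (0-20)", "normal (21-30)", "frail (31-40)", "very frail (41+)"]
df["frailty"] = pd.cut(df["rai"], bins=bins, labels=labels, include_lowest=True)
print(df.groupby("frailty", observed=True)["nhd"].mean().round(3))  # NHD rate per stratum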
Subject(s)
Frailty, Hospital Mortality, Meningeal Neoplasms, Meningioma, Patient Discharge, Humans, Female, Male, Middle Aged, Patient Discharge/statistics & numerical data, Frailty/complications, Frailty/mortality, Meningioma/surgery, Meningioma/mortality, Meningeal Neoplasms/surgery, Meningeal Neoplasms/mortality, Aged, Risk Assessment/methods, Postoperative Complications/mortality, Postoperative Complications/epidemiology, Risk Factors, Neurosurgical Procedures/mortality, Neurosurgical Procedures/adverse effects, Prognosis, Adult, Retrospective Studies
ABSTRACT
BACKGROUND: Graft loss increases the risk of patient death after simultaneous pancreas-kidney (SPK) transplantation. The relative risk associated with each type of graft failure is complex due to the influence of several competing events. METHODS: This retrospective, single-center study compared 4-year patient survival according to graft status using Kaplan-Meier (KM) and competing risk analysis (CRA). Patient survival was also assessed across five eras (Era 1: 2001-2003; Era 2: 2004-2006; Era 3: 2007-2009; Era 4: 2010-2012; Era 5: 2012-2015). RESULTS: Between 2000 and 2015, 432 SPK transplants were performed. Using KM, patient survival was 86.5% for patients without graft loss (n = 333), 93.4% for patients with pancreas graft loss (n = 46), 43.7% for patients with kidney graft loss (n = 16), and 25.4% for patients with both pancreas and kidney graft loss (n = 37). Patient survival was underestimated by the KM method compared with CRA in patients with both pancreas and kidney graft loss (25.4% vs. 36.2%). Induction with lymphocyte-depleting antibodies was associated with an 81% reduced risk of death (HR 0.19, 95% CI 0.38-0.98, p = .0048), while delayed kidney function (HR 2.94, 95% CI 1.09-7.95, p = .033) and surgical complications (HR 2.94, 95% CI 1.22-7.08, p = .016) were associated with a higher risk of death. Four-year patient survival increased from Era 1 to Era 5 (79% vs. 87.9%, p = .047). CONCLUSION: In this cohort, kidney graft loss, with or without pancreas graft loss, was associated with higher mortality after SPK transplantation. Compared with CRA, the KM model underestimated survival only among patients with both pancreas and kidney graft losses. Patient survival increased over time.
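The gap between KM and CRA estimates arises because KM treats competing events as censoring; the toy calculation below (not the study's data) shows how the naive 1 - KM figure overstates the cumulative incidence of an event of interest relative to the Aalen-Johansen estimate used in competing risk analysis:

# Toy competing-risks data: (time, event) with 0 = censored, 1 = event of interest,
# 2 = competing event; event times are assumed unique.
data = [(1, 1), (2, 2), (3, 1), (4, 0)]

n_at_risk = len(data)
km_surv = 1.0        # KM "survival", treating competing events as censored
overall_surv = 1.0   # event-free survival used by the Aalen-Johansen estimator
cif = 0.0            # Aalen-Johansen cumulative incidence of event 1

for time, event in sorted(data):
    d1 = 1 if event == 1 else 0
    d_any = 1 if event in (1, 2) else 0
    cif += overall_surv * d1 / n_at_risk
    km_surv *= 1 - d1 / n_at_risk
    overall_surv *= 1 - d_any / n_at_risk
    n_at_risk -= 1

print("1 - KM estimate:", round(1 - km_surv, 3))   # 0.625, biased upward
print("Aalen-Johansen CIF:", round(cif, 3))        # 0.5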
Subject(s)
Diabetes Mellitus, Type 1, Kidney Transplantation, Pancreas Transplantation, Humans, Diabetes Mellitus, Type 1/surgery, Retrospective Studies, Pancreas Transplantation/methods, Risk Assessment, Pancreas, Graft Survival
ABSTRACT
AIM: This study aimed to examine how malnutrition, as reflected by the Geriatric Nutritional Risk Index (GNRI), is associated with colorectal cancer (CRC) recurrence and cause of death. METHODS: Consecutive stage I-III CRC patients (n = 601) were divided into two groups using GNRI 98 as the cutoff. The relationship of GNRI with overall survival (OS) and recurrence-free survival (RFS) was evaluated, followed by competing risk analysis to determine prognostic factors of non-CRC-related death, and hazard function analysis to examine changes in the risk of recurrence and death. RESULTS: Median body mass index was lower in the low GNRI group than in the high GNRI group (19.8 vs. 23.5; p < 0.001). After adjusting for known prognostic factors, a low GNRI was independently associated with reduced OS/RFS, and was a significant predictor of non-CRC-related death. The risk of recurrence was higher and peaked earlier in the low GNRI group than in the high GNRI group, although after 3 years, both groups had a similar risk. Meanwhile, the low GNRI group had a higher risk of non-CRC-related death over the course of 5 years. CONCLUSION: It is important to consider preoperative nutritional status along with the cancer stage when developing strategies to improve outcomes for CRC patients.
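For readers unfamiliar with the index, the GNRI is commonly computed from serum albumin and the ratio of actual to ideal body weight; the sketch below uses the widely cited formula and hypothetical input values, with the same cutoff of 98 used above:

def gnri(albumin_g_per_l, weight_kg, ideal_weight_kg):
    # Commonly cited formula; the weight/ideal-weight ratio is capped at 1.
    ratio = min(weight_kg / ideal_weight_kg, 1.0)
    return 1.489 * albumin_g_per_l + 41.7 * ratio

score = gnri(albumin_g_per_l=35, weight_kg=48, ideal_weight_kg=55)
print(round(score, 1), "-> low GNRI group" if score < 98 else "-> high GNRI group")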
Subject(s)
Colorectal Neoplasms, Malnutrition, Humans, Aged, Nutrition Assessment, Risk Factors, Malnutrition/complications, Nutritional Status, Colorectal Neoplasms/surgery, Geriatric Assessment, Prognosis, Retrospective Studies
ABSTRACT
OBJECTIVE: The aim was to describe the baseline characteristics of French patients referred with acute limb ischaemia (ALI), and their clinical management and outcomes (death, amputation). METHODS: This retrospective observational cohort study used the French National Health Data System. All adults hospitalised for ALI who underwent revascularisation with an endovascular or open surgical approach between 1 January 2015 and 31 December 2020 were included and followed up until death or the end of the study (31 December 2021). A one year look back period was used to capture patients' medical history. The risks of death and of major and minor amputation were described using Kaplan-Meier and Aalen-Johansen estimators. A Cox model was used to estimate the adjusted association between groups and the risk of death, and Fine-Gray models were used for the risk of amputation, accounting for the competing risk of death. RESULTS: Overall, 51 390 patients (median age 70 years, 69% male) were included, with a median follow up of 2.7 years: 39 411 (76.7%) were treated with an open approach and 11 979 (23.3%) with a percutaneous endovascular approach. The preferred revascularisation approach varied between French regions. One year overall survival was 78.0% and 85.2% in the surgery and endovascular groups, respectively. The surgery group had a higher risk of death (hazard ratio [HR] 1.17, 95% CI 1.12 - 1.21), a higher risk of major amputation (sub-distribution HR 1.20, 95% CI 1.10 - 1.30), and a lower risk of minor amputation (sub-distribution HR 0.66, 95% CI 0.60 - 0.71) than the endovascular group. Diabetes and dialysis increased the risk of major amputation by 52% and 78%, respectively. Subsequent ALI was the third most common cause of hospital re-admission within one year. CONCLUSION: ALI remains a condition with a high risk of death and amputation. Individual risk factors and ALI severity need to be considered when choosing between approaches. Continued prevention efforts, improved management, and access to the most suitable approach are necessary.
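A hedged sketch of the Cox model for death described above, assuming the lifelines package; the rows, covariates and coding (open_surgery, diabetes) are hypothetical, and the Fine-Gray amputation models would require a dedicated competing risks implementation:

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: follow-up time (years), death indicator, and covariates.
df = pd.DataFrame({
    "time":         [0.5, 2.1, 3.0, 1.2, 4.0, 2.7, 0.9, 3.5],
    "death":        [1,   0,   1,   0,   0,   1,   1,   0],
    "open_surgery": [1,   0,   1,   1,   0,   0,   1,   0],
    "diabetes":     [0,   1,   1,   0,   0,   1,   1,   0],
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary)   # hazard ratios appear in the exp(coef) column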
ABSTRACT
PURPOSE: To identify the key infection processes and risk factors in the computed tomography (CT) examination process within the standard prevention and control measures for the COVID-19 epidemic, with the aim of mitigating cross-infection in the hospital. METHOD: The case hospital assembled a team of 30 experts specialized in CT examination. Based on the CT examination process, potential failure modes were assessed in terms of severity (S), occurrence probability (O), and detectability (D), and were then combined with corresponding risk prevention measures. Finally, key infection processes and risk factors were identified according to the risk priority number (RPN) and expert analysis. RESULTS: Through the application of the RPN and further analysis, four key potential infection processes were identified: "CT request form (A1)," "during the scan of the CT patient (B2)," "CT room and objects disposal (C2)," and "medical waste (garbage) disposal (C3)." In addition, eight key risk factors were identified: "cleaning personnel do not wear masks normatively (C32)," "nurse does not select the vein well, resulting in extravasation of the peripheral vein for enhanced CT (B25)," "patient cannot find the CT room (A13)," "patient has obtained a CT request form but does not know the procedure (A12)," "patient is too unwell to continue with the CT scan (B24)," "auxiliary staff (or technician) does not have a good grasp of the sterilization and disinfection standards (C21)," "auxiliary staff (or technician) does not sterilize the CT machine thoroughly (C22)," and "cleaning personnel lack knowledge of COVID-19 prevention and control (C33)." CONCLUSION: Hospitals can publicize precautions regarding CT examination through various channels, reducing the incidence of CT examination failures. Hospital cleaning services are usually outsourced, and the educational background of the staff employed in these services is generally limited; during training and communication it is therefore particularly important to provide training programs aligned with their level of understanding. The model developed in this study effectively identifies the key infection prevention processes and critical risk factors, enhancing the safety of medical staff and patients. This has significant research implications for potential future epidemics of major infectious diseases.
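A worked example of the risk priority number calculation underlying the analysis above; the severity, occurrence and detectability scores below are invented for illustration and are not the panel's actual ratings:

# RPN = Severity x Occurrence x Detectability, each typically rated on a 1-10 scale.
failure_modes = {
    "C32 cleaning personnel do not wear masks normatively": (7, 6, 5),
    "A13 patient cannot find the CT room":                  (4, 7, 3),
}
for name, (s, o, d) in failure_modes.items():
    print(f"{name}: RPN = {s * o * d}")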
Subject(s)
COVID-19, Cross Infection, Humans, Cross Infection/prevention & control, Risk Factors, Tomography, X-Ray Computed, Tomography
ABSTRACT
Concerns over the ecological impacts of urban road runoff have increased, partly due to recent research into the harmful impacts of tire particles and their chemical leachates. This study aimed to help the community of researchers, regulators and policy advisers in scoping out the priority areas for further study. To improve our understanding of these issues, an interdisciplinary, international network of experts (United Kingdom, Norway, United States, Australia, South Korea, Bangladesh, Finland, Austria, China and Canada) was formed. We synthesised the current state of knowledge and highlighted priority research areas for tire particles (in their different forms) and their leachates. Ten priority research questions of high importance were identified under four themes (environmental presence and detection; chemicals of concern; biotic impacts; mitigation and regulation). The priority research questions include improving understanding of the fate and transport of these contaminants, better aligning toxicity studies, and obtaining a holistic understanding of the impacts and risks they pose across different ecosystem services. These issues have to be addressed globally for a sustainable solution. We highlight how the establishment of the intergovernmental science-policy panel on chemicals, waste, and pollution prevention could further address these issues on a global level through coordinated knowledge transfer of car tire research and regulation. We hope that the outputs from this research paper will reduce scientific uncertainty in assessing and managing environmental risks from tire wear particles (TWP) and their leachates and aid any potential future policy and regulatory development.
ABSTRACT
Currently used pesticides (CUPs) were introduced to have lower persistence and bioaccumulation, and lower bioavailability to non-target species. Nevertheless, CUPs still represent a concern for both human health and the environment. India is an important agricultural country experiencing a transition from obsolete organochlorine pesticides to a newer generation of phytosanitary products. As in other developing countries, very little is known about the transfer of CUPs to the human diet in India, where systematic monitoring is not in place. In this study, we analyzed ninety-four CUPs and detected thirty in food products belonging to five types: cereals and pulses, vegetables, fruits, animal-based foods, and water. Samples were taken from markets in Delhi (aggregating food produced all over India) and in the periurban area of Dehradun in northern India (representing food produced locally and through more traditional practices). Overall, chlorpyrifos and chlorpropham were the most frequently detected CUPs, with detection frequencies of 33% and 25%, respectively. Except for vegetables and fruits, the levels of CUPs in all other food types were significantly higher in samples from Delhi (p < 0.05). The exposure dosage of CUPs through different food matrices was calculated, and chlorpropham detected in potatoes had the maximum exposure dosage to humans (2.46 × 10⁻⁶ mg/kg/day). Risk analysis based on the hazard quotient technique indicated that chlorpyrifos in rice (hazard quotient 2.76 × 10⁻²) can be a concern.
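The hazard quotient technique mentioned above divides the estimated exposure dose by a toxicological reference value; the sketch below uses the chlorpropham exposure figure from the abstract together with a purely illustrative reference dose, so the resulting quotient is not a study result:

# Hazard quotient (HQ) = estimated daily exposure / reference value (e.g., ADI or RfD);
# HQ >= 1 is conventionally taken to flag a potential concern.
exposure_mg_kg_day = 2.46e-6        # chlorpropham via potatoes (from the abstract)
reference_dose_mg_kg_day = 5.0e-2   # hypothetical reference value for illustration only
hq = exposure_mg_kg_day / reference_dose_mg_kg_day
print(f"HQ = {hq:.2e}")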
Subject(s)
Chlorpyrifos, Pesticides, Humans, Animals, Pesticides/analysis, Dietary Exposure/analysis, Chlorpropham/analysis, Vegetables, India, Food Contamination/analysis
ABSTRACT
BACKGROUND: Quality measures determine reimbursement rates and penalties in value-based payment models. Frailty impacts these quality metrics across surgical specialties. We compared the discriminatory thresholds of the Risk Analysis Index (RAI), the modified frailty index-5 (mFI-5), and increasing patient age for the outcomes of extended length of stay (eLOS), prolonged LOS within 30 days (pLOS), and protracted LOS (LOS > 30 days). METHODS: Patients ≥18 years old who underwent neurosurgical procedures between 2012 and 2020 were queried from the ACS-NSQIP. We performed receiver operating characteristic (ROC) analysis and multivariable analyses to examine discriminatory thresholds and identify independent associations. RESULTS: There were 411,605 patients included, with a median age of 59 years (IQR, 48-69), 52.2% male patients, and a White majority (75.2%). For eLOS, the RAI C-statistic was 0.653 (95% CI: 0.652-0.655), versus 0.552 (95% CI: 0.550-0.554) for the mFI-5 and 0.573 (95% CI: 0.571-0.575) for increasing patient age. Similar trends were observed for pLOS (RAI: 0.718, mFI-5: 0.568, age: 0.559) and for LOS > 30 (RAI: 0.714, mFI-5: 0.548, age: 0.506). Among patients with major complications, 10.1% had eLOS, 26.5% had pLOS, and 45.5% had LOS > 30. The RAI showed a larger effect for all three outcomes and for major complications in multivariable analyses. CONCLUSION: Increasing frailty was associated with all three key quality metrics (eLOS, pLOS, and LOS > 30) after neurosurgical procedures. The RAI demonstrated a higher discriminatory threshold than both the mFI-5 and increasing patient age. Preoperative frailty screening may improve quality metrics through risk mitigation strategies and better preoperative communication with patients and their families.
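A hedged sketch of the ROC comparison described above, computing a C-statistic for each candidate predictor with scikit-learn; the simulated data are constructed only to illustrate the workflow, not to reproduce the reported values:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1000
rai = rng.integers(0, 60, n)
mfi5 = rng.integers(0, 6, n)
age = rng.integers(18, 90, n)
p = 1 / (1 + np.exp(-(0.08 * rai + 0.2 * mfi5 + 0.01 * age - 4)))  # toy outcome model
elos = rng.binomial(1, p)

for name, predictor in [("RAI", rai), ("mFI-5", mfi5), ("age", age)]:
    print(name, "C-statistic:", round(roc_auc_score(elos, predictor), 3))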
Subject(s)
Frailty, Length of Stay, Neurosurgical Procedures, Humans, Middle Aged, Male, Female, Frailty/diagnosis, Aged, Length of Stay/statistics & numerical data, Risk Assessment, Neurosurgical Procedures/statistics & numerical data, Quality Indicators, Health Care, Retrospective Studies, Adult, Age Factors
ABSTRACT
PURPOSE: The risk of death from cardiovascular disease (CVD) in patients with differentiated thyroid cancer (DTC) treated with radioactive iodine (RAI) after surgery has not been adequately studied. METHODS: Data on DTC patients who received RAI after surgery were retrieved from the Surveillance, Epidemiology, and End Results (SEER) database (2004-2015). Standardized mortality ratio (SMR) analysis was used to evaluate CVD death risk in patients treated with RAI versus the general population. A 1:1 propensity score matching (PSM) was applied to balance inter-group bias, and Pearson's correlation coefficient was used to detect collinearity between variables. The Cox proportional hazards model and a multivariate competing risk model were used to evaluate the impact of RAI on CVD death. Finally, we constructed forest plots to compare differences in factors significantly associated with CVD-related or cancer-related deaths. RESULTS: DTC patients treated with RAI showed a lower SMR for CVD death than the general population (RAI: SMR = 0.66, 95% CI 0.62-0.71, P < 0.05). After PSM, Cox proportional hazards regression demonstrated a decreased risk of CVD death among patients treated with RAI compared with those who were not (HR = 0.76, 95% CI 0.6-0.97, P = 0.029). However, in the competing risk regression analysis, there was no significant difference (adjusted HR = 0.82, 95% CI 0.66-1.01, P = 0.11). The independent risk factors associated with CVD death differed from those associated with cancer-related deaths. CONCLUSION: The difference in CVD death risk between DTC patients treated with RAI and those who were not was not statistically significant. Notably, RAI-treated patients had a decreased risk of CVD death compared with the general population.
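The SMR analysis above compares observed deaths with the number expected from general-population rates; the toy numbers below are hypothetical (chosen so the ratio lands near the reported 0.66) and only illustrate the arithmetic:

# SMR = observed deaths / expected deaths, where expected deaths are obtained by
# applying age-specific general-population CVD death rates to the cohort's person-years.
person_years = {"40-59": 12000, "60-79": 8000}          # cohort follow-up by age band
population_rate = {"40-59": 0.0015, "60-79": 0.0080}    # CVD deaths per person-year
observed_deaths = 54

expected = sum(person_years[band] * population_rate[band] for band in person_years)
print("Expected deaths:", expected, "SMR:", round(observed_deaths / expected, 2))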
Subject(s)
Adenocarcinoma, Cardiovascular Diseases, Thyroid Neoplasms, Humans, Thyroid Neoplasms/epidemiology, Thyroid Neoplasms/radiotherapy, Iodine Radioisotopes/therapeutic use, Adenocarcinoma/surgery, Proportional Hazards Models, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/chemically induced, Thyroidectomy
ABSTRACT
Tetrachlorvinphos (TCVP) is the pesticidal active ingredient found in some flea and tick collars for dogs and cats. Recent studies sponsored by The Hartz Mountain Corporation confirm the safety of TCVP as an active ingredient in pet collars. Based upon data from these new studies and results previously relied upon by the U.S. Environmental Protection Agency, the following conclusions have been made. Torsion study data clearly indicate that approximately 93% of the formulation released from TCVP-containing pet collars is in a liquid phase immediately following activation. Further, and even more relevant to human health risk analysis associated with post-application exposures, in vivo data from dogs wearing TCVP pet collars definitively document that TCVP dust released from the collar is rapidly absorbed into the sebum; the maximum ratio of dust to liquid was 0.023% dust to 99.977% liquid. In vivo fur data provide scientific evidence confirming that the mechanism of dissemination of TCVP from pet collars is as a liquid suspended or dissolved in the animal's sebum, even though it may be released from the collar as a solid. Thus, potential post-application exposure to TCVP, including immediately following collar placement, is almost entirely to a liquid phase. Based upon EPA's refined and conservative "untrimmed" collar risk assessment, post-application incidental oral hand-to-mouth exposure in children aged 1 to <2 years results in margins of exposure significantly greater than the level of concern of 1000, and therefore does not present an unreasonable health risk.
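The margin of exposure reasoning above can be summarized in one line of arithmetic; the point of departure and exposure estimate below are hypothetical placeholders, not values from the EPA assessment:

# Margin of exposure (MOE) = toxicological point of departure (e.g., a NOAEL)
# divided by the estimated exposure dose; MOEs above the level of concern (1000 here)
# are interpreted as not presenting an unreasonable risk.
noael_mg_kg_day = 1.0          # hypothetical point of departure
exposure_mg_kg_day = 2.0e-4    # hypothetical hand-to-mouth exposure estimate
moe = noael_mg_kg_day / exposure_mg_kg_day
print("MOE =", int(moe), "->", "above" if moe > 1000 else "below", "the level of concern (1000)")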
Subject(s)
Cat Diseases, Dog Diseases, Insecticides, United States, Child, Humans, Animals, Cats, Dogs, Child, Preschool, Tetrachlorvinphos/analysis, Insecticides/toxicity, Cat Diseases/prevention & control, Risk Assessment, Dust/analysis
ABSTRACT
BACKGROUND: Food safety stands as a critical public health concern in China. People's perceptions and communication regarding food safety crises significantly impact their emotions and food preferences. The rise of social media has also complicated information sharing and decision-making. Exploring the public's cognitive, emotional, and behavioral responses to food safety risks is crucial for improving food safety practices and public health. METHODS: From fall 2018 to fall 2023, 23 in-depth interviews were conducted using a semi-structured protocol aligned with the Risk Analysis Framework (RAF) components: risk assessment, communication, and management. RESULTS: Our findings showed that the public blamed unethical agricultural practices and food processing for food safety issues. People, dissatisfied with mainstream media, turned to social media to collect food safety information. Many adopted self-protective behaviors, assuming personal responsibility for food safety. CONCLUSION: Findings from this study highlight individuals' concerns about environmental pollution and the use of chemical substances in food safety issues. The results underscore the need for accurate and prompt media coverage, stronger government regulation, industry self-regulation, and targeted consumer education to effectively tackle these challenges.
Subject(s)
Food Safety, Qualitative Research, Humans, China, Female, Adult, Male, Middle Aged, Communication, Health Knowledge, Attitudes, Practice, Social Media, Public Opinion, Young Adult, Risk Assessment
ABSTRACT
PURPOSE: Frailty is an independent risk factor for adverse postoperative outcomes following spine surgery. The ability of the Risk Analysis Index (RAI) to predict adverse outcomes following posterior lumbar interbody fusion (PLIF) has not been studied extensively and may improve preoperative risk stratification. METHODS: Patients undergoing PLIF were queried from Nationwide Inpatient Sample (NIS) (2019-2020). The relationship between RAI-measured preoperative frailty and primary outcomes (mortality, non-home discharge (NHD)) and secondary outcomes (extended length of stay (eLOS), complication rates) was assessed via multivariate analyses. The discriminatory accuracy of the RAI for primary outcomes was measured in area under the receiver operating characteristic (AUROC) curve analysis. RESULTS: A total of 429,380 PLIF patients (mean age = 61y) were identified, with frailty cohorts stratified by standard RAI convention: 0-20 "robust" (R)(38.3%), 21-30 "normal" (N)(54.3%), 31-40 "frail" (F)(6.1%) and 41+ "very frail" (VF)(1.3%). The incidence of primary and secondary outcomes increased as frailty thresholds increased: mortality (R 0.1%, N 0.1%, F 0.4%, VF 1.3%; p < 0.001), NHD (R 6.5%, N 18.1%, F 36.9%, VF 42.0%; p < 0.001), eLOS (R 18.0%, N 21.9%, F 31.6%, VF 43.8%; p < 0.001) and complication rates (R 6.6%, N 8.8%, F 11.1%, VF 12.2%; p < 0.001). The RAI demonstrated acceptable discrimination for NHD (C-statistic: 0.706) and mortality (C-statistic: 0.676) in AUROC curve analysis. CONCLUSION: Increasing RAI-measured frailty is significantly associated with increased NHD, eLOS, complication rates, and mortality following PLIF. The RAI demonstrates acceptable discrimination for predicting NHD and mortality, and may be used to improve frailty-based risk assessment for spine surgeons.
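To make the reported C-statistics concrete, the sketch below computes one from scratch as the proportion of outcome/non-outcome patient pairs in which the patient with the outcome has the higher RAI score; the eight toy patients are invented for illustration:

import itertools

# (RAI score, non-home discharge 1/0) -- illustrative values only.
patients = [(12, 0), (18, 0), (25, 0), (28, 1), (33, 0), (35, 1), (44, 1), (51, 1)]
cases = [p for p in patients if p[1] == 1]
controls = [p for p in patients if p[1] == 0]

concordant = ties = 0
for (case_score, _), (control_score, _) in itertools.product(cases, controls):
    if case_score > control_score:
        concordant += 1
    elif case_score == control_score:
        ties += 1
total_pairs = len(cases) * len(controls)
print("C-statistic:", (concordant + 0.5 * ties) / total_pairs)   # 0.9375 for this toy data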
Subject(s)
Frailty, Lumbar Vertebrae, Patient Discharge, Spinal Fusion, Humans, Spinal Fusion/adverse effects, Spinal Fusion/mortality, Spinal Fusion/methods, Male, Middle Aged, Female, Aged, Lumbar Vertebrae/surgery, Patient Discharge/statistics & numerical data, Risk Assessment/methods, Frailty/epidemiology, Postoperative Complications/epidemiology, Postoperative Complications/mortality, Risk Factors, Adult, United States/epidemiology, Inpatients/statistics & numerical data, Aged, 80 and over, Length of Stay/statistics & numerical data
ABSTRACT
The Risk Analysis Quality Test Release 1.0 (RAQT1.0) was developed as a framework to encourage mutual understanding between technical risk analysts and risk management decision makers of risk assessment quality indicators. The initial version (release 1.0) was published by the Society for Risk Analysis (SRA) in 2020 with the intent of learning from early test applications whether the approach was useful and whether changes in approach or contents would be helpful. The results of applications across three diverse fields are reported here. The applications include both retrospective evaluations of past risk assessments and prospective guidance on the design of future risk assessment projects or systems. The fields represented include Quantitative Microbial Risk Assessment, Cultural Property Risk Analysis, and Software Development Cyber Risk Analysis. The RAQT1.0 proved helpful for identifying shortcomings in all applications. Ways in which the RAQT1.0 might be improved are also identified.
ABSTRACT
Artificial intelligence (AI) has seen numerous applications for risk analysis and provides ample opportunities for developing new and improved methods and models for this purpose. In the present article, we conceptualize the use of AI for risk analysis by framing it as an input-algorithm-output process and linking such a setup to three tasks in establishing a risk description: consequence characterization, uncertainty characterization, and knowledge management. We then give an overview of currently used concepts and methods for AI-based risk analysis and outline potential future uses by extrapolating beyond currently produced types of output. We end with a discussion of the limits of automation, both near-term limitations and a more fundamental question related to allowing AI to automatically prescribe risk management decisions. We conclude that there are opportunities for using AI for risk analysis to a greater extent than is commonly the case today; however, critical concerns about proper uncertainty representation and the need for risk-informed rather than risk-based decision-making also lead us to conclude that risk analysis and decision-making processes cannot be fully automated.
ABSTRACT
Risk management requires a balance between knowledge and values. Knowledge consists of justified beliefs and evidence, with evidence including data, assumptions, and models. While quality and integrity of evidence are valued in the sciences, risk science involves uncertainty, which suggests that evidence can be incomplete or imperfect. The use of inappropriate evidence can invalidate risk studies and contribute to misinformation and poor risk management decisions. Additionally, the interpretation of quality and integrity of evidence may vary by the risk study mission, decision-maker values, and stakeholder needs. While risk science has developed standards for risk studies, there remains a lack of clarity for how to demonstrate quality and integrity of evidence, recognizing that evidence can be presented in many formats (e.g., data, ideas, and theories), be leveraged at various stages of a risk study (e.g., hypotheses, analyses, and communication), and involve differing expectations across stakeholders. This study develops and presents a classification system to evaluate quality and integrity of evidence that is based on current risk science guidance, best practices from non-risk disciplines, and lessons learned from recent risk events. The classification system is demonstrated on a cyber-security application. This study will be of interest to risk researchers, risk professionals, and data analysts.
ABSTRACT
The integration of artificial intelligence (AI) systems has ushered in a profound transformation. This transformation is marked by powerful predictive capabilities, a shift toward data-centric decision-making processes, and the enhancement of tools for managing risks. However, the adoption of these AI innovations has sparked controversy due to their unpredictable and opaque nature. This study employs the transactional stress model to empirically investigate how six technological stressors (techno-stressors) impact both techno-eustress (positive stress) and techno-distress (negative stress) experienced by finance professionals and experts. To collect data for this research, an e-survey was distributed to a diverse group of 251 participants from various sources. The findings, particularly the identification and development of techno-accountability as a significant factor, contribute to the risk analysis domain by improving the failure mode and effect analysis framework to better fit the rapidly evolving landscape of AI-driven innovations.
ABSTRACT
The origin of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is contentious. Most studies have focused on a zoonotic origin, but definitive evidence, such as an intermediary animal host, is lacking. We used an established risk analysis tool for differentiating natural and unnatural epidemics, the modified Grunow-Finke assessment tool (mGFT), to study the origin of SARS-CoV-2. The mGFT scores 11 criteria to provide a likelihood of natural or unnatural origin. Using published literature and publicly available sources of information, we applied the mGFT to the origin of SARS-CoV-2. The mGFT scored 41/60 points (68%), with high inter-rater reliability (100%), indicating a greater likelihood of an unnatural than natural origin of SARS-CoV-2. This risk assessment cannot prove the origin of SARS-CoV-2 but shows that the possibility of a laboratory origin cannot be easily dismissed.