Results 1 - 20 of 34
1.
Risk Anal ; 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39380431

ABSTRACT

Secure and reliable power systems are vital for modern societies and economies. While the literature focuses on predicting power outages caused by severe weather events, relatively little work exists on identifying hotspots: locations where outages occur repeatedly and at a higher rate than expected. Reliably identifying hotspots can provide critical input for risk management efforts by power utilities, helping them focus scarce resources on the most problematic portions of their system. In this article, we show how existing work on the Moran's I spatial statistic can be adapted to identify power outage hotspots based on the types and quantities of data available to utilities in practice. The local Moran's I statistic was calculated at the grid-cell level, and a set of criteria was used to determine which grid cells are considered hotspots. The hotspot identification approach used in this article is easy for utilities to apply in practice, and it provides the type of information needed to directly support utility decisions about prioritizing areas of a power system to inspect and potentially reinforce.
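As a rough illustration of the grid-cell approach described above (a minimal sketch with rook 4-neighbor contiguity and a small synthetic outage-count grid, not the authors' exact criteria), the local Moran's I statistic can be computed in a few lines:

```python
import numpy as np

def local_morans_i(grid):
    """Local Moran's I for each cell of a 2-D outage-count grid,
    using rook (4-neighbor) contiguity and row-standardized weights."""
    z = grid - grid.mean()
    denom = (z ** 2).mean()          # second moment of the deviations
    rows, cols = grid.shape
    local_i = np.zeros_like(grid, dtype=float)
    for r in range(rows):
        for c in range(cols):
            nbrs = [z[rr, cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < rows and 0 <= cc < cols]
            lag = np.mean(nbrs)      # row-standardized spatial lag
            local_i[r, c] = z[r, c] * lag / denom
    return local_i

# Synthetic outage counts per grid cell; flag candidate hotspots as
# high-count cells surrounded by other high-count cells (positive local I)
counts = np.array([[0, 1, 0, 0],
                   [1, 9, 8, 0],
                   [0, 7, 9, 1],
                   [0, 0, 1, 0]], dtype=float)
li = local_morans_i(counts)
hotspots = (li > 0) & (counts > counts.mean())
```

In practice the filtering criteria would also involve significance testing of each local statistic, which this sketch omits.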

2.
Risk Anal ; 44(3): 686-704, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37666505

ABSTRACT

A wide variety of weather conditions, from windstorms to prolonged heat events, can substantially impact power systems, posing many risks and inconveniences due to power outages. Accurately estimating the probability distribution of the number of customers without power, using data about the power utility system and environmental and weather conditions, can help utilities restore power more quickly and efficiently. However, the critical shortcoming of current models lies in the difficulty of handling (i) data streams and (ii) model uncertainty due to combining data from various weather events. Accordingly, this article proposes an adaptive ensemble learning algorithm for data streams, which deploys a feature- and performance-based weighting mechanism to adaptively combine outputs from multiple competitive base learners. As a proof of concept, we use a large, real data set of daily customer interruptions to develop the first adaptive all-weather outage prediction model using data streams. We benchmark several approaches to demonstrate the advantage of our approach in offering more accurate probabilistic predictions. The results show that the proposed algorithm reduces the probabilistic prediction error of the base learners by between 4% and 22%, with an average of 8%, which also results in substantially more accurate point predictions. The improvement from our algorithm grows as we replace the base learners with simpler models.
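The performance-based half of such a weighting mechanism can be sketched as follows. This is a simplified stand-in, not the authors' algorithm: each base learner is weighted by the inverse of its exponentially discounted past error on the stream, and the half-life and toy data are invented:

```python
import numpy as np

def adaptive_ensemble(preds, actuals, halflife=10.0):
    """Combine base-learner predictions on a data stream by weighting each
    learner with the inverse of its exponentially discounted past error."""
    n_steps, n_learners = preds.shape
    errors = np.zeros(n_learners)
    decay = 0.5 ** (1.0 / halflife)
    combined = np.zeros(n_steps)
    for t in range(n_steps):
        # Weights favor learners with small discounted error so far
        w = 1.0 / (errors + 1e-9)
        w /= w.sum()
        combined[t] = w @ preds[t]
        # Update discounted absolute errors once the outcome arrives
        errors = decay * errors + np.abs(preds[t] - actuals[t])
    return combined

# Toy stream: learner 0 is accurate, learner 1 carries a constant bias;
# the ensemble quickly shifts its weight toward learner 0
actuals = np.arange(50, dtype=float)
preds = np.column_stack([actuals, actuals + 10.0])
combined = adaptive_ensemble(preds, actuals)
```

The paper's algorithm additionally conditions the weights on event features, which a production version would bolt onto this update rule.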

3.
Risk Anal ; 43(12): 2644-2658, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36958984

ABSTRACT

Data-driven predictive modeling is increasingly being used in risk assessments. While such modeling may provide improved consequence predictions and probability estimates, it also comes with challenges. One is that the modeling and its output do not measure and represent uncertainty due to lack of knowledge, that is, "epistemic uncertainty." In this article, we demonstrate this point by conceptually linking the main elements and output of data-driven predictive models with the main elements of a general risk description, thereby placing data-driven predictive modeling on a risk science foundation. This allows for an evaluation of such modeling with reference to risk science recommendations for what constitutes a complete risk description. The evaluation leads us to conclude that, at a minimum, to cover all elements of a complete risk description, a risk assessment using data-driven predictive modeling needs to be supported by assessments of the uncertainty and risk related to the assumptions underlying the modeling. In response to this need, we discuss an approach for assessing assumptions in data-driven predictive modeling.

4.
Risk Anal ; 43(4): 762-782, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35672878

ABSTRACT

The risks from singular natural hazards such as a hurricane have been extensively investigated in the literature. However, little is understood about how individual and collective responses to repeated hazards change communities and impact their preparation for future events. Individual mitigation actions may drive how a community's resilience evolves under repeated hazards. In this paper, we investigate the effect that learning by homeowners can have on household mitigation decisions and how this influences a region's vulnerability to natural hazards over time, using hurricanes along the east coast of the United States as our case study. To do this, we build an agent-based model (ABM) to simulate homeowners' adaptation to repeated hurricanes and how this affects the vulnerability of the regional housing stock. Through a case study, we explore how different initial beliefs about the hurricane hazard and the memory of recent hurricanes could change a community's vulnerability under both current and potential future hurricane scenarios under climate change. In some future hurricane environments, different initial beliefs can result in large differences in the region's long-term vulnerability to hurricanes. We find that when some homeowners mitigate soon after a hurricane, when their memory of the event is strongest, it can help to substantially decrease the vulnerability of a community.
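A toy version of such an ABM, assuming a simple belief-update rule with memory decay (all parameters, thresholds, and the update rule itself are illustrative, not taken from the paper), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_homes=1000, years=50, p_hurricane=0.1,
             prior_belief=0.02, memory_decay=0.8, threshold=0.05):
    """Toy ABM: each homeowner holds a perceived annual hurricane
    probability that jumps after an event and decays back toward the
    prior as memory fades; a home mitigates (permanently) once its
    owner's perception exceeds a threshold."""
    belief = np.full(n_homes, prior_belief)
    mitigated = np.zeros(n_homes, dtype=bool)
    for _ in range(years):
        if rng.random() < p_hurricane:
            # Fresh memory: beliefs spike, heterogeneously across agents
            belief = np.minimum(1.0, belief + rng.uniform(0.0, 0.2, n_homes))
        else:
            # Memory fades back toward the prior belief
            belief = prior_belief + memory_decay * (belief - prior_belief)
        mitigated |= belief > threshold
    # Regional vulnerability: share of the housing stock still unmitigated
    return 1.0 - mitigated.mean()

vulnerability = simulate()
```

Running `simulate` across different `prior_belief` and `memory_decay` values mimics the paper's comparison of initial-belief and memory scenarios.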

5.
Surgery ; 170(5): 1561-1567, 2021 11.
Article in English | MEDLINE | ID: mdl-34183178

ABSTRACT

BACKGROUND: Optimizing organ yield (number of organs transplanted per donor) is a potentially modifiable way to increase the number of organs available for transplant. Models to predict the expected deceased donor organ yield have been developed based on ordinary least squares regression and logistic regression. However, alternative modeling methodologies incorporating machine learning may have superior performance compared with conventional approaches. METHODS: We evaluated the predictive accuracy of 14 machine learning models for predicting overall organ yield in a cross-validation procedure. The models were parameterized using data from the Organ Procurement and Transplantation Network database from 2000 to 2018. The inclusion criteria for the study were adult deceased donors between 18 and 84 years of age who had at least 1 organ procured for transplantation. RESULTS: A total of 89,520 donors met the inclusion criteria. Their mean (standard deviation) age was 44 (15) years, and approximately 58% were male. Our cross-validation analysis showed that a tree-based gradient boosting model outperformed the remaining 13 models. Compared with the currently used prediction models, the gradient boosting model improves prediction accuracy by reducing the mean absolute error by between 3 and 11 organs per 100 donors. CONCLUSION: Our analysis demonstrated that the gradient boosting methodology had the best performance in predicting overall deceased donor organ yield and can potentially serve as an aid to assess organ procurement organization performance.
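The cross-validated comparison of a gradient boosting model against a linear baseline can be sketched as below. The data are synthetic stand-ins for the OPTN donor covariates, and scikit-learn defaults substitute for the paper's tuned models:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for donor covariates and an organ-yield outcome with
# a nonlinearity, a threshold effect, and an interaction a linear model misses
X = rng.normal(size=(2000, 6))
signal = 3.5 + 1.2 * np.tanh(X[:, 0]) - 0.8 * (X[:, 1] > 0) + X[:, 2] * X[:, 3]
y = np.clip(np.round(signal + rng.normal(scale=0.5, size=2000)), 0, 8)

def cv_mae(model):
    # 5-fold cross-validated mean absolute error, mirroring the paper's design
    return -cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_absolute_error").mean()

mae_linear = cv_mae(LinearRegression())
mae_gbm = cv_mae(GradientBoostingRegressor(random_state=0))
```

The gap `mae_linear - mae_gbm`, rescaled per 100 donors, is the kind of accuracy improvement the abstract reports.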


Subjects
Machine Learning , Models, Statistical , Tissue and Organ Harvesting , Tissue and Organ Procurement , Adult , Female , Humans , Male , Middle Aged
6.
PLoS Comput Biol ; 17(2): e1008713, 2021 02.
Article in English | MEDLINE | ID: mdl-33556077

ABSTRACT

There is an emerging consensus that achieving global tuberculosis control targets will require more proactive case-finding approaches than are currently used in high-incidence settings. Household contact tracing (HHCT), in which the households of newly diagnosed cases are actively screened for additional infected individuals, is a potentially efficient approach to finding new cases of tuberculosis. However, randomized trials assessing the population-level effects of such interventions in settings with sustained community transmission have shown mixed results. One potential explanation is that household transmission is responsible for a variable proportion of population-level tuberculosis burden across settings. For example, transmission is more likely to occur in households in settings with a lower tuberculosis burden and where individuals mix preferentially in local areas, compared with settings with a higher disease burden and more dispersed mixing. To better understand the relationship between endemic incidence levels, social mixing, and the impact of HHCT, we developed a spatially explicit model of coupled household and community transmission. We found that the impact of HHCT was robust across settings of varied incidence and community contact patterns. In contrast, we found that the effects of community contact tracing interventions were sensitive to community contact patterns. Our results suggest that the protective benefits of HHCT are robust and likely to be maintained across epidemiological settings.


Subjects
Contact Tracing , Tuberculosis/metabolism , Tuberculosis/transmission , Algorithms , Computer Simulation , Disease Progression , Family Characteristics , Global Health , Humans , Incidence , Probability , Public Health Informatics , Randomized Controlled Trials as Topic , Risk Factors , Tuberculosis/epidemiology
7.
Risk Anal ; 41(7): 1087-1092, 2021 07.
Article in English | MEDLINE | ID: mdl-29944738

ABSTRACT

Many of the most complicated and pressing problems in hazards research require the integration of numerous disciplines. The lack of a common knowledge base, however, often prohibits clear communication and interaction among interdisciplinary researchers, sometimes leading to unsuccessful outcomes. Drawing on experience with several projects and collective expertise that spans multiple disciplines, the authors argue that a promising way to enhance participation and enable communication is to have a common model, or boundary object, that can integrate knowledge from different disciplines. The result is that researchers from different disciplines who use different research methods and approaches can work together toward a shared goal. This article offers four requirements for boundary objects that may enhance hazards research. Based on these requirements, agent-based models have the necessary characteristics to be a boundary object. The article concludes by examining both the value of and the challenges from using agent-based models as the boundary object in interdisciplinary projects.

8.
Risk Anal ; 40(8): 1538-1553, 2020 08.
Article in English | MEDLINE | ID: mdl-32402139

ABSTRACT

We urgently need to put the concept of resilience into practice if we are to prepare our communities for climate change and exacerbated natural hazards. Yet, despite the extensive discussion surrounding community resilience, operationalizing the concept remains challenging. The dominant approaches for assessing resilience focus on either evaluating community characteristics or infrastructure functionality. While both remain useful, they have several limitations to their ability to provide actionable insight. More importantly, the current conceptualizations do not consider essential services or how access is impaired by hazards. We argue that people need access to services such as food, education, health care, and cultural amenities, in addition to water, power, sanitation, and communications, to get back some semblance of normal life. Providing equitable access to these types of services and quickly restoring that access following a disruption are paramount to community resilience. We propose a new conceptualization of community resilience that is based on access to essential services. This reframing of resilience facilitates a new measure of resilience that is spatially explicit and operational. Using two illustrative examples from the impacts of Hurricanes Florence and Michael, we demonstrate how decisionmakers and planners can use this framework to visualize the effect of a hazard and quantify resilience-enhancing interventions. This "equitable access to essentials" approach to community resilience integrates with spatial planning, and will enable communities not only to "bounce back" from a disruption, but to "bound forward" and improve the resilience and quality of life for all residents.
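A minimal, spatially explicit version of this access-based measure, assuming planar coordinates, a single service type, and a fixed access distance (all illustrative simplifications of the framework), could be:

```python
import numpy as np

def access_fraction(homes, services, open_mask, max_dist=5.0):
    """Fraction of homes with at least one *open* essential service
    within max_dist (Euclidean distance on planar coordinates)."""
    open_sites = services[open_mask]
    if len(open_sites) == 0:
        return 0.0
    # Pairwise distances: homes x open service locations
    d = np.linalg.norm(homes[:, None, :] - open_sites[None, :, :], axis=2)
    return float((d.min(axis=1) <= max_dist).mean())

rng = np.random.default_rng(1)
homes = rng.uniform(0, 20, size=(500, 2))
clinics = rng.uniform(0, 20, size=(12, 2))

# Pre-disruption: every clinic is open
before = access_fraction(homes, clinics, np.ones(12, dtype=bool))
# A storm knocks out a random subset of clinics; access drops accordingly
after = access_fraction(homes, clinics, rng.random(12) < 0.5)
```

Tracking `after` over the restoration timeline, per service type and per neighborhood, gives the spatially explicit resilience curve the abstract describes.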

10.
Risk Anal ; 38(6): 1258-1278, 2018 06.
Article in English | MEDLINE | ID: mdl-29148087

ABSTRACT

Although individual behavior plays a major role in community flood risk, traditional flood risk models generally do not capture information on how community policies and individual decisions impact the evolution of flood risk over time. The purpose of this study is to improve the understanding of the temporal aspects of flood risk through a combined analysis of the behavioral, engineering, and physical hazard aspects of flood risk. Additionally, the study aims to develop a new modeling approach for integrating behavior, policy, flood hazards, and engineering interventions. An agent-based model (ABM) is used to analyze the influence of flood protection measures, individual behavior, and the occurrence of floods and near-miss flood events on community flood risk. The ABM focuses on the following decisions and behaviors: dissemination of flood management information, installation of community flood protection, elevation of household mechanical equipment, and elevation of homes. The approach is place based, with a case study area in Fargo, North Dakota, but is focused on generalizable insights. Generally, community mitigation results in reduced future damage, and individual action, including mitigation and movement into and out of high-risk areas, can have a significant influence on community flood risk. The results of this study provide useful insights into the interplay between individual and community actions and how it affects the evolution of flood risk. This study lends insight into priorities for future work, including the development of more in-depth behavioral and decision rules at the individual and community level.

11.
Risk Anal ; 38(12): 2722-2737, 2018 12.
Article in English | MEDLINE | ID: mdl-27779791

ABSTRACT

Tropical cyclones can significantly damage the electrical power system, so an accurate spatiotemporal forecast of outages prior to landfall can help utilities to optimize the power restoration process. The purpose of this article is to enhance the predictive accuracy of the Spatially Generalized Hurricane Outage Prediction Model (SGHOPM) developed by Guikema et al. (2014). In this version of the SGHOPM, we introduce a new two-step prediction procedure and increase the number of predictor variables. The first model step predicts whether or not outages will occur in each location and the second step predicts the number of outages. The SGHOPM environmental variables of Guikema et al. (2014) were limited to the wind characteristics (speed and duration of strong winds) of the tropical cyclones. This version of the model adds elevation, land cover, soil, precipitation, and vegetation characteristics in each location. Our results demonstrate that the use of a new two-step outage prediction model and the inclusion of these additional environmental variables increase the overall accuracy of the SGHOPM by approximately 17%.

12.
PLoS One ; 12(9): e0182719, 2017.
Article in English | MEDLINE | ID: mdl-28953893

ABSTRACT

The decisions that individuals make when recovering from and adapting to repeated hazards affect a region's vulnerability to future hazards. As such, community vulnerability is not a static property but rather a dynamic property dependent on behavioral responses to repeated hazards and damage. This paper is the first of its kind to build a framework that addresses the complex interactions among repeated hazards, regional damage, mitigation decisions, and community vulnerability. The framework enables researchers and regional planners to visualize and quantify how a community could evolve over time in response to repeated hazards under various behavioral scenarios. An illustrative example using parcel-level data from Anne Arundel County, Maryland, a county that experiences fairly frequent hurricanes, is presented to illustrate the methodology and to demonstrate how the interplay between individual choices and regional vulnerability is affected by the region's hurricane experience.


Subjects
Cyclonic Storms , Humans , Maryland , Models, Theoretical
13.
Risk Anal ; 37(10): 1879-1897, 2017 10.
Article in English | MEDLINE | ID: mdl-28032648

ABSTRACT

Recently, the concept of black swans has gained increased attention in the fields of risk assessment and risk management. Different types of black swans have been suggested, distinguishing between unknown unknowns (nothing in the past can convincingly point to their occurrence), unknown knowns (known to some, but not to the relevant analysts), and known knowns whose probability of occurrence is judged to be negligible. Traditional risk assessments have been questioned, as their standard probabilistic methods may not be capable of predicting or even identifying these rare and extreme events, thus creating a source of possible black swans. In this article, we show how a simulation model can be used to identify previously unknown, potentially extreme events that, if not identified and treated, could occur as black swans. We show that by manipulating a verified and validated model used to predict the impacts of hazards on a system of interest, we can identify hazard conditions not previously experienced that could lead to impacts much larger than any previous level of impact. This makes these potential black swan events known and allows risk managers to consider them more fully. We demonstrate this method using a model developed to evaluate the effect of hurricanes on energy systems in the United States; we identify hurricanes with potentially extreme impacts, storms well beyond what the historic record suggests is possible in terms of impacts.
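The search procedure amounts to sweeping a validated model over hazard conditions beyond the historic envelope and flagging impacts far above anything observed. The sketch below uses a stand-in impact model and invented numbers; the real model and thresholds would be far more detailed:

```python
import numpy as np

def impact_model(wind_speed, surge, duration):
    """Stand-in for a verified, validated hazard-impact model (e.g.,
    predicted customer outages); a placeholder, not the paper's model."""
    return 1e4 * (wind_speed / 50.0) ** 3 * (1 + 0.4 * surge) * duration

# Worst impact within the historic envelope of observed storms
historic_max = impact_model(wind_speed=55, surge=4, duration=12)

# Exploration grid deliberately extending beyond observed conditions
winds = np.linspace(30, 85, 12)       # beyond the observed 55 m/s
surges = np.linspace(0, 9, 10)        # beyond the observed 4 m
durations = np.linspace(6, 36, 7)     # hours of damaging conditions

candidates = []
for w in winds:
    for s in surges:
        for d in durations:
            impact = impact_model(w, s, d)
            if impact > 2 * historic_max:   # "much larger than any previous"
                candidates.append((w, s, d, impact))
```

Each entry in `candidates` is a previously unexperienced hazard condition made known to risk managers, which is the point of the method.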

14.
JCO Clin Cancer Inform ; 1: 1-8, 2017 11.
Article in English | MEDLINE | ID: mdl-30657400

ABSTRACT

PURPOSE: Patients scheduled for outpatient infusion sometimes may be deferred for treatment after arriving for their appointment. This can be the result of a secondary illness, not meeting required bloodwork counts, or other medical complications. The ability to generate high-quality predictions of patient deferrals can be highly valuable in managing clinical operations, such as scheduling patients, determining which drugs to make before patients arrive, and establishing the proper staffing for a given day. METHODS: In collaboration with the University of Michigan Comprehensive Cancer Center, we have developed a predictive model that uses patient-specific data to estimate the probability that a patient will defer or not show for treatment on a given day. This model incorporates demographic, treatment protocol, and prior appointment history data. We tested a wide range of predictive models, including logistic regression, tree-based methods, neural networks, and various ensemble models. We then compared the performance of these models, evaluating both their prediction error and their complexity. RESULTS: We tested multiple classification models to determine which would best predict whether a patient will defer or not show for treatment on a given day. We found that a Bayesian additive regression tree model performs best with the University of Michigan Comprehensive Cancer Center data on the basis of out-of-sample area under the curve, Brier score, and F1 score. We emphasize that similar statistical procedures must be followed to reach a final model in alternative settings. CONCLUSION: This article introduces a wide variety of statistical models, and a selection process among them, for predicting patient deferrals in a specific clinical environment. With proper implementation, these models will enable clinicians and clinical managers to achieve the in-practice benefits of deferral predictions.
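A sketch of such a model comparison on synthetic data follows. A gradient-boosted classifier stands in for the BART model used in the paper, and the features are invented placeholders for the demographic and appointment-history data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for patient features; 1 = deferred / no-show
X = rng.normal(size=(3000, 5))
logit = -2.0 + 1.5 * X[:, 0] - X[:, 1] * X[:, 2]
y = (rng.random(3000) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Compare candidate models on held-out AUC and Brier score, as in the paper
scores = {}
for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("boosted trees", GradientBoostingClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    scores[name] = {"auc": roc_auc_score(y_te, p),
                    "brier": brier_score_loss(y_te, p)}
```

Out-of-sample cross-validation of these metrics, over the full model menu, is the selection procedure the abstract summarizes.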


Subjects
Ambulatory Care/statistics & numerical data , Neoplasms/epidemiology , Outpatients , Academic Medical Centers , Algorithms , Appointments and Schedules , Humans , Models, Statistical , Neoplasms/drug therapy , Reproducibility of Results
15.
PLoS One ; 11(8): e0158375, 2016.
Article in English | MEDLINE | ID: mdl-27508461

ABSTRACT

The Pacific coast of the Tohoku region of Japan experiences repeated tsunamis, with the most recent events having occurred in 1896, 1933, 1960, and 2011. These events have caused large loss of life and damage throughout the coastal region. There is uncertainty about the degree to which seawalls reduce deaths and building damage during tsunamis in Japan. On the one hand they provide physical protection against tsunamis as long as they are not overtopped and do not fail. On the other hand, the presence of a seawall may induce a false sense of security, encouraging additional development behind the seawall and reducing evacuation rates during an event. We analyze municipality-level and sub-municipality-level data on the impacts of the 1896, 1933, 1960, and 2011 tsunamis, finding that seawalls larger than 5 m in height generally have served a protective role in these past events, reducing both death rates and the damage rates of residential buildings. However, seawalls smaller than 5 m in height appear to have encouraged development in vulnerable areas and exacerbated damage. We also find that the extent of flooding is a critical factor in estimating both death rates and building damage rates, suggesting that additional measures, such as multiple lines of defense and elevating topography, may have significant benefits in reducing the impacts of tsunamis. Moreover, the area of coastal forests was found to be inversely related to death and destruction rates, indicating that forests either mitigated the impacts of these tsunamis, or displaced development that would otherwise have been damaged.


Subjects
Disasters/prevention & control , Models, Theoretical , Tsunamis , Disasters/statistics & numerical data , Forests , Humans , Japan , Support Vector Machine
16.
Risk Anal ; 36(4): 645, 2016 Apr.
Article in English | MEDLINE | ID: mdl-27093337
17.
Risk Anal ; 36(12): 2298-2312, 2016 12.
Article in English | MEDLINE | ID: mdl-26890212

ABSTRACT

There is increasing concern over deep uncertainty in the risk analysis field as probabilistic models of uncertainty cannot always be confidently determined or agreed upon for many of our most pressing contemporary risk challenges. This is particularly true in the climate change adaptation field, and has prompted the development of a number of frameworks aiming to characterize system vulnerabilities and identify robust alternatives. One such methodology is robust decision making (RDM), which uses simulation models to assess how strategies perform over many plausible conditions and then identifies and characterizes those where the strategy fails in a process termed scenario discovery. While many of the problems to which RDM has been applied are characterized by multiple objectives, research to date has provided little insight into how treatment of multiple criteria impacts the failure scenarios identified. In this research, we compare different methods for incorporating multiple objectives into the scenario discovery process to evaluate how they impact the resulting failure scenarios. We use the Lake Tana basin in Ethiopia as a case study, where climatic and environmental uncertainties could impact multiple planned water infrastructure projects, and find that failure scenarios may vary depending on the method used to aggregate multiple criteria. Common methods used to convert multiple attributes into a single utility score can obscure connections between failure scenarios and system performance, limiting the information provided to support decision making. Applying scenario discovery over each performance metric separately provides more nuanced information regarding the relative sensitivity of the objectives to different uncertain parameters, leading to clearer insights on measures that could be taken to improve system robustness and areas where additional research might prove useful.

18.
Risk Anal ; 36(10): 1936-1947, 2016 10.
Article in English | MEDLINE | ID: mdl-26854751

ABSTRACT

In August 2012, Hurricane Isaac, a Category 1 hurricane at landfall, caused extensive power outages in Louisiana. The storm brought high winds, storm surge, and flooding to Louisiana, and power outages were widespread and prolonged. Hourly power outage data for the state of Louisiana were collected during the storm and analyzed. This analysis included correlation of hourly power outage figures by zip code with storm conditions including wind, rainfall, and storm surge using a nonparametric ensemble data mining approach. Results were analyzed to understand how correlation of power outages with storm conditions differed geographically within the state. This analysis provided insight on how rainfall and storm surge, along with wind, contribute to power outages in hurricanes. By conducting a longitudinal study of outages at the zip code level, we were able to gain insight into the causal drivers of power outages during hurricanes. Our analysis showed that the statistical importance of storm characteristic covariates to power outages varies geographically. For Hurricane Isaac, wind speed, precipitation, and previous outages generally had high importance, whereas storm surge had lower importance, even in zip codes that experienced significant surge. The results of this analysis can inform the development of power outage forecasting models, which often focus strictly on wind-related covariates. Our study of Hurricane Isaac indicates that inclusion of other covariates, particularly precipitation, may improve model accuracy and robustness across a range of storm conditions and geography.
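A tree-ensemble variable-importance analysis of this kind can be sketched as below, with synthetic records standing in for the zip-code-level Hurricane Isaac data (the coefficients driving the synthetic outages are invented to echo the abstract's finding that surge mattered less than wind, rain, and prior outages):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic zip-code-hour records: wind, rain, surge, previous outages
n = 4000
wind = rng.gamma(2.0, 10.0, n)
rain = rng.gamma(2.0, 5.0, n)
surge = rng.gamma(1.5, 1.0, n)
prev = rng.poisson(3.0, n).astype(float)
# Outages driven mostly by wind, rain, and persistence; surge contributes little
outages = 0.5 * wind + 0.8 * rain + 0.1 * surge + 1.5 * prev + rng.normal(0, 2, n)

X = np.column_stack([wind, rain, surge, prev])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, outages)
importance = dict(zip(["wind", "rain", "surge", "prev_outages"],
                      rf.feature_importances_))
```

Fitting such a model separately per region, as the study does per zip code, is what reveals the geographic variation in covariate importance.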

19.
Risk Anal ; 36(10): 1844-1854, 2016 10.
Article in English | MEDLINE | ID: mdl-25976848

ABSTRACT

Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix.
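The article's examples use MATLAB and R; the same embarrassingly parallel pattern can be sketched in Python. A thread pool is used here for portability of the snippet, while CPU-bound simulation code would typically use `ProcessPoolExecutor` or joblib, which expose the same map-style interface:

```python
import concurrent.futures as cf
import random
import statistics

def run_replication(seed):
    """One independent Monte Carlo replication of a toy simulation:
    estimate mean system damage under random hazard draws."""
    rng = random.Random(seed)
    draws = [max(0.0, rng.gauss(50, 15)) for _ in range(10_000)]
    return statistics.fmean(d ** 1.2 for d in draws)

# Embarrassingly parallel: replications share no state, only a seed each
seeds = range(16)

# Serial baseline
serial = [run_replication(s) for s in seeds]

# Parallel map over the same seeds; swapping ThreadPoolExecutor for
# ProcessPoolExecutor parallelizes CPU-bound work across cores
with cf.ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(run_replication, seeds))
```

Because each replication is seeded independently, the parallel and serial runs produce identical results, which is the correctness check the tutorial's pattern relies on.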

20.
Risk Anal ; 36(1): 4-15, 2016 Jan.
Article in English | MEDLINE | ID: mdl-25976848

ABSTRACT

The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events.
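For concreteness, the two most widely used distribution reliability indices at issue here, SAIFI and SAIDI (IEEE Std 1366), reduce to simple ratios over a year's outage events; the event data below are invented:

```python
def saifi(interruptions, total_customers):
    """SAIFI: average number of sustained interruptions per customer
    served (IEEE 1366). `interruptions` holds (customers_affected,
    minutes) tuples, one per outage event."""
    return sum(c for c, _ in interruptions) / total_customers

def saidi(interruptions, total_customers):
    """SAIDI: average interruption duration (minutes) per customer served."""
    return sum(c * m for c, m in interruptions) / total_customers

# One year of outage events for a small utility: (customers, minutes)
events = [(1200, 90), (300, 45), (5000, 30)]
total = 20_000

annual_saifi = saifi(events, total)   # interruptions per customer-year
annual_saidi = saidi(events, total)   # minutes per customer-year
```

The regulatory debate the abstract describes turns partly on whether events like these are counted at all: "major event days" from large-scale hazards are often excluded from the reported indices.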
