Results 1 - 6 of 6
1.
Sci Data ; 9(1): 710, 2022 11 18.
Article in English | MEDLINE | ID: mdl-36400781

ABSTRACT

The protracted nature of the 2016-2017 central Italy seismic sequence, with multiple damaging earthquakes spaced over months, presented serious challenges for the duty seismologists and emergency managers as they assimilated the growing sequence to advise the local population. Uncertainty concerning where and when it was safe to occupy vulnerable structures highlighted the need for timely delivery of scientifically based understanding of the evolving hazard and risk. Seismic hazard assessment during complex sequences depends critically on up-to-date earthquake catalogues-i.e., data on locations, magnitudes, and activity of earthquakes-to characterize the ongoing seismicity and fuel earthquake forecasting models. Here we document six earthquake catalogues of this sequence that were developed using a variety of methods. The catalogues possess different levels of resolution and completeness resulting from progressive enhancements in data availability, detection sensitivity, and hypocentral location accuracy. The catalogues were built with procedures ranging from real-time processing to advanced machine-learning workflows, and they highlight both the promise and the challenges of implementing advanced workflows in an operational environment.

2.
Nat Commun ; 13(1): 5087, 2022 08 29.
Article in English | MEDLINE | ID: mdl-36038553

ABSTRACT

The Magnitude-Frequency-Distribution (MFD) of earthquakes is typically modeled with the (tapered) Gutenberg-Richter relation. The main parameter of this relation, the b-value, controls the relative rate of small and large earthquakes. Resolving spatiotemporal variations of the b-value is critical to understanding the earthquake occurrence process and improving earthquake forecasting. However, this variation is not well understood. Here we present remarkable MFD variability during the complex 2016/17 central Italy sequence using a high-resolution earthquake catalog. Isolating seismically active volumes ('clusters') reveals that the MFD differed in nearby clusters, varied or remained constant in time depending on the cluster, and increased in b-value in the cluster where the largest earthquake eventually occurred. These findings suggest that the fault system's heterogeneity and complexity influence the MFD. Our findings raise the question "b-value of what?": interpreting and using MFD variability needs a spatiotemporal scale that is physically meaningful, like the one proposed here.


Subject(s)
Earthquakes, Forecasting, Italy
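The abstract above centers on the b-value of the Gutenberg-Richter relation, which sets the relative rate of small versus large earthquakes. The paper's own catalog and clustering method are not reproduced here; as a minimal illustration of how a b-value is typically estimated, the sketch below applies the standard Aki maximum-likelihood estimator (with Utsu's bin-width correction) to a synthetic catalog whose true b-value is 1.0. The completeness magnitude `mc`, bin width `dm`, and the synthetic catalog are all assumptions for the demo, not values from the paper.

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's bin correction.

    mags : magnitudes; events below the completeness magnitude mc
           are discarded. dm is the magnitude bin width (0 for
           continuous magnitudes).
    """
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic Gutenberg-Richter catalog: magnitudes above mc are
# exponentially distributed with rate b * ln(10); here true b = 1.0.
rng = np.random.default_rng(0)
mc = 2.0
mags = mc + rng.exponential(scale=1.0 / np.log(10), size=50_000)
print(b_value_mle(mags, mc, dm=0.0))  # close to 1.0
```

Resolving the spatiotemporal b-value variations the paper discusses would further require windowing this estimator over the clusters it isolates.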
3.
Phys Rev E ; 99(4-1): 042210, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31108655

ABSTRACT

Earthquakes are one of the most devastating natural disasters that plague society. Skilled, reliable earthquake forecasting remains the ultimate goal for seismologists. Using the detrended fluctuation analysis (DFA) and conditional probability (CP) methods, we find that memory exists not only in interoccurrence seismic records but also in released energy as well as in the series of the number of events per unit time. Analysis of a standard epidemic-type aftershock sequences (ETAS) earthquake model indicates that the empirically observed earthquake memory can be reproduced only for a narrow range of the model's parameters. This finding therefore provides tight constraints on the model's parameters and can serve as a testbed for existing earthquake forecasting models. Furthermore, we show that by implementing DFA and CP results, the ETAS model can significantly improve the short-term forecasting rate for the real (Italian) earthquake catalog.
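The detrended fluctuation analysis (DFA) named in this abstract quantifies long-range memory in a series via the scaling of the detrended fluctuation function, F(n) ~ n^alpha, where alpha ≈ 0.5 indicates no memory and alpha > 0.5 indicates persistent correlations. The paper's catalogs and parameter choices are not available here; the sketch below is a generic order-1 DFA applied to white noise, with the window scales chosen arbitrarily for the demo.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Order-1 detrended fluctuation analysis.

    Returns the scaling exponent alpha from F(n) ~ n^alpha:
    alpha ~ 0.5 for uncorrelated noise, alpha > 0.5 for
    long-range persistence.
    """
    y = np.cumsum(x - np.mean(x))  # integrated profile
    flucts = []
    for n in scales:
        nseg = len(y) // n
        segs = y[:nseg * n].reshape(nseg, n)
        t = np.arange(n)
        # least-squares linear detrend within each window, then RMS
        sq = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
              for s in segs]
        flucts.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(1)
white = rng.standard_normal(10_000)
scales = np.array([16, 32, 64, 128, 256])
print(dfa_exponent(white, scales))  # near 0.5 for memoryless input
```

Applied to interoccurrence times or released-energy series from a real catalog, an exponent well above 0.5 would reflect the kind of memory the paper reports.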

5.
Sci Adv ; 3(9): e1701239, 2017 09.
Article in English | MEDLINE | ID: mdl-28924610

ABSTRACT

Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016-2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences.
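The Omori power law referenced above describes how aftershock rates decay with time since a mainshock; in its modified (Omori-Utsu) form the rate is n(t) = K / (t + c)^p. As a small numerical illustration, assuming hypothetical parameter values (K, c, p below are not from the paper), the sketch computes the rate and the expected aftershock count over an interval by integrating it in closed form:

```python
import numpy as np

def omori_rate(t, K=100.0, c=0.05, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (t + c)^p."""
    return K / (t + c) ** p

def expected_count(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected aftershocks in [t1, t2] (days), closed form for p != 1."""
    antideriv = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)
    return K * (antideriv(t2) - antideriv(t1))

# Expected counts: first week after the mainshock vs. the second week.
print(expected_count(0, 7), expected_count(7, 14))
```

The sharp drop from the first to the second week is the typical-aftershock behavior; the paper's point is that the 2016-2017 Amatrice-Norcia sequence deviated significantly from this decay, yet the operational forecasting system still produced reliable forecasts.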

6.
Proc Natl Acad Sci U S A ; 111(33): 11973-8, 2014 Aug 19.
Article in English | MEDLINE | ID: mdl-25097265

ABSTRACT

Probabilistic forecasting models describe the aleatory variability of natural systems as well as our epistemic uncertainty about how the systems work. Testing a model against observations exposes ontological errors in the representation of a system and its uncertainties. We clarify several conceptual issues regarding the testing of probabilistic forecasting models for ontological errors: the ambiguity of the aleatory/epistemic dichotomy, the quantification of uncertainties as degrees of belief, the interplay between Bayesian and frequentist methods, and the scientific pathway for capturing predictability. We show that testability of the ontological null hypothesis derives from an experimental concept, external to the model, that identifies collections of data, observed and not yet observed, that are judged to be exchangeable when conditioned on a set of explanatory variables. These conditional exchangeability judgments specify observations with well-defined frequencies. Any model predicting these behaviors can thus be tested for ontological error by frequentist methods; e.g., using P values. In the forecasting problem, prior predictive model checking, rather than posterior predictive checking, is desirable because it provides more severe tests. We illustrate experimental concepts using examples from probabilistic seismic hazard analysis. Severe testing of a model under an appropriate set of experimental concepts is the key to model validation, in which we seek to know whether a model replicates the data-generating process well enough to be sufficiently reliable for some useful purpose, such as long-term seismic forecasting. Pessimistic views of system predictability fail to recognize the power of this methodology in separating predictable behaviors from those that are not.
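The prior predictive checking advocated in this abstract can be sketched numerically: simulate observations from the full forecasting model, epistemic (parameter) uncertainty included, and ask how extreme the actual observation is, yielding a frequentist P value for the ontological null hypothesis. The model and numbers below are entirely hypothetical, chosen only to show the mechanics (a Poisson count model whose rate carries Gamma-distributed uncertainty):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forecast: annual earthquake count is Poisson with an
# uncertain rate ~ Gamma(shape=20, scale=0.5), i.e. mean rate 10/yr.
# Prior predictive check: simulate counts from the full model and
# compute the one-sided tail probability of the observed count.
rates = rng.gamma(shape=20.0, scale=0.5, size=100_000)
simulated_counts = rng.poisson(rates)

observed = 18
p_value = np.mean(simulated_counts >= observed)
print(p_value)  # small p -> evidence of ontological error
```

A very small P value would indicate the observation is implausible under the model as a whole, which is the sense in which the paper uses frequentist methods to severely test a Bayesian forecast.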
