Results 1 - 20 of 25
1.
Comput Stat Data Anal ; 180: 107616, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36128441

ABSTRACT

Checking models of the ongoing Coronavirus Disease 2019 (COVID-19) pandemic is an important issue. Some well-known ordinary differential equation (ODE) models, such as the SIR and SEIR models, have been used to describe and predict the epidemic trend; in many cases, however, only part of the equations can be observed. A test is suggested to check possibly partially observed ODE models with a fixed design sampling scheme. The asymptotic properties of the test under the null, global, and local alternative hypotheses are presented. Two new propositions about U-statistics with varying kernels based on independent but non-identical data are derived as essential tools. Simulation studies are conducted to examine the performance of the test. Applying the proposed test to the available public data suggests that the SEIR model may not be appropriate for modeling COVID-19 infective cases in certain periods in Japan and Algeria, respectively.
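
For reference, a standard SEIR specification (the exact parameterization tested in the paper may differ) is

$$\frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad \frac{dE}{dt} = \frac{\beta S I}{N} - \sigma E, \qquad \frac{dI}{dt} = \sigma E - \gamma I, \qquad \frac{dR}{dt} = \gamma I,$$

where $\beta$ is the transmission rate, $\sigma$ the rate of progression from exposed to infectious, $\gamma$ the recovery rate, and $N = S + E + I + R$. "Partially observed" means that only some components, typically the infective counts $I$, are available, which is the situation the proposed test is designed to accommodate.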

2.
Pharm Stat ; 22(6): 1046-1061, 2023.
Article in English | MEDLINE | ID: mdl-37519010

ABSTRACT

Pre-specification of the primary analysis model is a prerequisite for controlling the family-wise type-I-error (T1E) rate at the intended level in confirmatory clinical trials. However, mixed models for repeated measures (MMRM) have been shown to be poorly specified in study protocols. The magnitude of the resulting T1E rate inflation is still unknown. This investigation aims to quantify the magnitude of the T1E rate inflation depending on the type and number of unspecified model items as well as different trial characteristics. We simulated a randomized, double-blind, parallel-group, phase III clinical trial under the assumption that there is no treatment effect at any time point. The simulated data were analysed using different clusters, each including several MMRMs that are compatible with the imprecise pre-specification of the MMRM. T1E rates for each cluster were estimated. A significant T1E rate inflation could be shown for ambiguous model specifications, with a maximum T1E rate of 7.6% [7.1%; 8.1%]. The results show that the magnitude of the T1E rate inflation depends on the type and number of unspecified model items as well as the sample size and allocation ratio. The imprecise specification of nuisance parameters may not lead to a significant T1E rate inflation. However, the results of this simulation study, if anything, underestimate the true T1E rate inflation. In conclusion, imprecise MMRM specifications may lead to a substantial inflation of the T1E rate and can damage the ability to generate confirmatory evidence in pivotal clinical trials.
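
The mechanism behind the inflation can be illustrated with a toy Monte Carlo (a deliberately simplified stand-in for the paper's MMRM clusters, with hypothetical sample sizes): when a vague pre-specification leaves several compatible analyses, reporting whichever one is significant drives the T1E rate above the nominal 5%.

```python
import numpy as np
from scipy import stats

# Simulate under the null: two analyses both "compatible" with a vague
# pre-specification (post-value t-test vs. change-from-baseline t-test).
rng = np.random.default_rng(1)
n, reps, alpha = 100, 5000, 0.05
hits = 0
for _ in range(reps):
    arm = np.repeat([0, 1], n // 2)
    base = rng.normal(size=n)
    post = 0.6 * base + rng.normal(size=n)   # no treatment effect anywhere
    p_post = stats.ttest_ind(post[arm == 1], post[arm == 0]).pvalue
    change = post - base
    p_chg = stats.ttest_ind(change[arm == 1], change[arm == 0]).pvalue
    hits += min(p_post, p_chg) < alpha       # report whichever "works"
print(f"Empirical T1E rate: {hits / reps:.3f}  (nominal: {alpha})")
```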


Subjects
Statistical Models , Research Design , Humans , Sample Size , Computer Simulation
3.
Environ Res ; 187: 109638, 2020 08.
Article in English | MEDLINE | ID: mdl-32450424

ABSTRACT

Recent advances in understanding of biological mechanisms and adverse outcome pathways for many exposure-related diseases show that certain common mechanisms involve thresholds and nonlinearities in biological exposure concentration-response (C-R) functions. These range from ultrasensitive molecular switches in signaling pathways, to assembly and activation of inflammasomes, to rupture of lysosomes and pyroptosis of cells. Realistic dose-response modeling and risk analysis must confront the reality of nonlinear C-R functions. This paper reviews several challenges for traditional statistical regression modeling of C-R functions with thresholds and nonlinearities, together with methods for overcoming them. Statistically significantly positive exposure-response regression coefficients can arise from many non-causal sources such as model specification errors, incompletely controlled confounding, exposure estimation errors, attribution of interactions to factors, associations among explanatory variables, or coincident historical trends. If so, the unadjusted regression coefficients do not necessarily predict how or whether reducing exposure would reduce risk. We discuss statistical options for controlling for such threats, and advocate causal Bayesian networks and dynamic simulation models as potentially valuable complements to nonparametric regression modeling for assessing causally interpretable nonlinear C-R functions and understanding how time patterns of exposures affect risk. We conclude that these approaches are promising for extending the great advances made in statistical C-R modeling methods in recent decades to clarify how to design regulations that are more causally effective in protecting human health.
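
A stylized example of the threshold nonlinearity at issue (an illustrative form, not one taken from the paper) is the hockey-stick C-R function

$$r(c) = r_0 + \beta \, \max(0,\, c - \tau),$$

with background risk $r_0$, threshold $\tau$, and slope $\beta$. Below $\tau$ the response is flat, so a linear no-threshold regression fitted to data generated this way can yield a positive coefficient even over exposure ranges that carry no incremental risk.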


Subjects
Air Pollution , Bayes Theorem , Environmental Exposure/analysis , Humans , Regression Analysis , Risk
4.
Mem Cognit ; 48(1): 69-82, 2020 01.
Article in English | MEDLINE | ID: mdl-31372846

ABSTRACT

In this study we examined the different functions of text and pictures during text-picture integration in multimedia learning. In Study 1, 144 secondary school students (age = 11 to 14 years; 72 females, 72 males) received six text-picture units under two conditions. In the delayed-question condition, students first read the units without a specific question (no-question phase), to stimulate initial coherence-oriented mental model construction. Afterward the question was presented (question-answering phase), to stimulate task-adaptive mental model specification. In the preposed-question condition, students received a specific question from the beginning, stimulating both kinds of processing. Analyses of the participants' eye movement patterns confirmed the assumption that students allocated a higher percentage of available resources to text processing during the initial mental model construction than during adaptive model specification. Conversely, students allocated a higher percentage of available resources to picture processing during adaptive mental model specification than during the initial mental model construction. In Study 2 (N = 12, age = 12 to 16; seven females, five males), we ruled out that these findings were due to the effect of rereading, by implementing a no-question phase either once or twice. To sum up, texts seem to provide more explicit conceptual guidance in mental model construction than pictures do, whereas pictures support mental model adaptation more than text does, by providing flexible access to specific information for task-oriented updates.


Subjects
Psychological Adaptation/physiology , Psychological Models , Multimedia , Visual Pattern Recognition/physiology , Reading , Thinking/physiology , Adolescent , Child , Eye Movement Measurements , Female , Humans , Male
5.
Stat Med ; 38(17): 3168-3183, 2019 07 30.
Article in English | MEDLINE | ID: mdl-30856294

ABSTRACT

Marginal structural models (MSMs) allow estimation of the causal effect of a time-varying exposure on an outcome in the presence of time-dependent confounding. The parameters of MSMs can be estimated with an inverse-probability-of-treatment weighting estimator under certain assumptions. One of these assumptions is that the proposed causal model relating the outcome to exposure history is correctly specified. In practice, however, the true model is unknown. We propose a test that employs the observed data to attempt to validate the assumption that the model is correctly specified. The performance of the proposed test is investigated in a simulation study. We illustrate our approach by estimating the effect of repeated exposure to psychosocial stressors at work on ambulatory blood pressure in a large cohort of white-collar workers in Québec City, Canada. Code examples in SAS and R are provided to facilitate the implementation of the test.
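
The paper supplies SAS and R code; purely as an illustration of the underlying estimator, here is a minimal Python sketch of stabilized inverse-probability-of-treatment weighting at a single time point, on simulated (hypothetical) data with no true exposure effect.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical point-exposure data: L confounds exposure A; A has no effect on Y.
rng = np.random.default_rng(0)
n = 5000
L = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-L)))
Y = 0.5 * L + rng.normal(size=n)

# Stabilized weights: P(A = a) / P(A = a | L)
denom = sm.Logit(A, sm.add_constant(L)).fit(disp=0).predict(sm.add_constant(L))
sw = np.where(A == 1, A.mean() / denom, (1 - A.mean()) / (1 - denom))

# Weighted regression estimates the MSM E[Y^a] = b0 + b1 * a
msm = sm.WLS(Y, sm.add_constant(A), weights=sw).fit()
print(msm.params)  # b1 should be near 0, the true causal effect
```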


Subjects
Ambulatory Blood Pressure Monitoring , Statistical Models , Occupational Health , Causality , Computer Simulation , Humans , Quebec
6.
Pharm Stat ; 18(6): 636-644, 2019 11.
Article in English | MEDLINE | ID: mdl-31267673

ABSTRACT

In confirmatory clinical trials, the prespecification of the primary analysis model is a universally accepted scientific principle to allow strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model is defined unambiguously. This requirement applies to mixed models for longitudinal data handling missing data implicitly. To evaluate the compliance with the EMA guideline, we evaluated the model specifications in those clinical study protocols from development phases II and III submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act, which planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modeling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are given by less than half the study protocols. Subgroup analyses indicate that these findings are universal and not specific to clinical trial phases or size of company. Altogether, our results show that guideline compliance is to various degrees poor and consequently, strict type I error rate control at the intended level is not guaranteed.


Subjects
Clinical Trials as Topic/legislation & jurisprudence , Longitudinal Studies , Statistical Models , Research Design , Clinical Trials as Topic/methods , Statistical Data Interpretation , Europe , Humans
7.
J Dairy Sci ; 101(7): 5679-5701, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29729923

ABSTRACT

Reproducible results define the very core of scientific integrity in modern research. Yet, legitimate concerns have been raised about the reproducibility of research findings, with important implications for the advancement of science and for public support. With statistical practice increasingly becoming an essential component of research efforts across the sciences, this review article highlights the compelling role of statistics in ensuring that research findings in the animal sciences are reproducible-in other words, able to withstand close interrogation and independent validation. Statistics set a formal framework and a practical toolbox that, when properly implemented, can recover signal from noisy data. Yet, misconceptions and misuse of statistics are recognized as top contributing factors to the reproducibility crisis. In this article, we revisit foundational statistical concepts relevant to reproducible research in the context of the animal sciences, raise awareness on common statistical misuse undermining it, and outline recommendations for statistical practice. Specifically, we emphasize a keen understanding of the data generation process throughout the research endeavor, from thoughtful experimental design and randomization, through rigorous data analysis and inference, to careful wording in communicating research results to peer scientists and society in general. We provide a detailed discussion of core concepts in experimental design, including data architecture, experimental replication, and subsampling, and elaborate on practical implications for proper elicitation of the scope of reach of research findings. For data analysis, we emphasize proper implementation of mixed models, in terms of both distributional assumptions and specification of fixed and random effects to explicitly recognize multilevel data architecture. This is critical to ensure that experimental error for treatments of interest is properly recognized and inference is correctly calibrated. Inferential misinterpretations associated with use of P-values, both significant and not, are clarified, and problems associated with error inflation due to multiple comparisons and selective reporting are illustrated. Overall, we advocate for a responsible practice of statistics in the animal sciences, with an emphasis on continuing quantitative education and interdisciplinary collaboration between animal scientists and statisticians to maximize reproducibility of research findings.
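
As one concrete instance of the multilevel point (a hedged sketch with hypothetical pen/cow data, not an example from the article): when a diet is applied to pens and cows are subsampled within pens, the pen, not the cow, is the experimental unit, and a random pen intercept calibrates the error term accordingly.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical design: 12 pens, 2 diets randomized to pens, 8 cows per pen.
rng = np.random.default_rng(2)
pens, cows = 12, 8
pen = np.repeat(np.arange(pens), cows)
diet = np.repeat(np.tile([0, 1], pens // 2), cows)
pen_noise = rng.normal(0, 1.0, pens)[pen]   # pen-level experimental error
y = 30 + 0 * diet + pen_noise + rng.normal(0, 2.0, pens * cows)
df = pd.DataFrame({"y": y, "diet": diet, "pen": pen})

# The random pen intercept tests diet against pen-to-pen variation,
# not against the (much larger) number of subsampled cows.
fit = smf.mixedlm("y ~ diet", df, groups=df["pen"]).fit()
print(fit.summary())
```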


Subjects
Laboratory Animal Science/standards , Reproducibility of Results , Animals , Biometry , Laboratory Animal Science/statistics & numerical data , Research Design
8.
Value Health ; 18(8): 1138-51, 2015 12.
Article in English | MEDLINE | ID: mdl-26686801

ABSTRACT

OBJECTIVES: To systematically review the choice of comparator strategies in cost-effectiveness analyses (CEAs) of human papillomavirus testing in cervical screening. METHODS: The PubMed, Web of Knowledge, and Scopus databases were searched to identify eligible model-based CEAs of cervical screening programs using human papillomavirus testing. The eligible CEAs were reviewed to investigate what screening strategies were chosen for analysis and how this choice might have influenced estimates of the incremental cost-effectiveness ratio (ICER). Selected examples from the reviewed studies are presented to illustrate how the omission of relevant comparators might influence estimates of screening cost-effectiveness. RESULTS: The search identified 30 eligible CEAs. The omission of relevant comparator strategies appears likely in 18 studies. The ICER estimates in these cases are probably lower than would be estimated had more comparators been included. Five of the 30 studies restricted relevant comparator strategies to sensitivity analyses or other subanalyses not part of the principal base-case analysis. Such exclusion of relevant strategies from the base-case analysis can result in cost-ineffective strategies being identified as cost-effective. CONCLUSIONS: Many of the CEAs reviewed appear to include insufficient comparator strategies. In particular, they omit strategies with relatively long screening intervals. Omitting relevant comparators matters particularly if it leads to the underestimation of ICERs for strategies around the cost-effectiveness threshold because these strategies are the most policy relevant from the CEA perspective. Consequently, such CEAs may not be providing the best possible policy guidance and lead to the mistaken adoption of cost-ineffective screening strategies.
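
The arithmetic behind this concern is easy to show. A minimal sketch with hypothetical cost/QALY figures: dropping an intermediate long-interval comparator makes the intensive strategy's ICER look far smaller than it is.

```python
# Hypothetical (name, cost, QALYs) per woman for three screening strategies.
full = [("none", 0.0, 20.00), ("HPV/10y", 120.0, 20.08), ("HPV/3y", 400.0, 20.10)]

def icers(strategies):
    """ICER of each strategy vs. the next-less-costly comparator."""
    return [(s1[0], s0[0], (s1[1] - s0[1]) / (s1[2] - s0[2]))
            for s0, s1 in zip(strategies, strategies[1:])]

print(icers(full))                 # HPV/3y vs HPV/10y: 14,000 per QALY
print(icers([full[0], full[2]]))   # comparator omitted: 4,000 per QALY
```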


Subjects
Early Detection of Cancer/economics , Early Detection of Cancer/methods , Papillomavirus Infections/diagnosis , Research Design , Cost-Benefit Analysis , Cytological Techniques , Viral DNA , Female , Humans , Quality-Adjusted Life Years
9.
Stata J ; 15(3): 833-844, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26973439

ABSTRACT

Stata's mi commands provide powerful tools to conduct multiple imputation in the presence of ignorable missing data. In this article, I present Stata code to extend the capabilities of the mi commands to address two areas of statistical inference where results are not easily aggregated across imputed datasets. First, mi commands are restricted to covariate selection. I show how to address model fit to correctly specify a model. Second, the mi commands readily aggregate model-based standard errors. I show how standard errors can be bootstrapped for situations where model assumptions may not be met. I illustrate model specification and bootstrapping on frequency counts for the number of times that alcohol was consumed in data with missing observations from a behavioral intervention.
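
The aggregation step at issue is Rubin's rules. A minimal, Stata-independent Python sketch (with made-up numbers) shows how point estimates and variances are pooled across M imputed datasets.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Rubin's rules: pool a scalar estimate across M imputed datasets."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                     # pooled point estimate
    within = u.mean()                   # within-imputation variance
    between = q.var(ddof=1)             # between-imputation variance
    total = within + (1 + 1 / m) * between
    return qbar, total

# Hypothetical estimates/variances from M = 5 imputed datasets
print(pool_rubin([0.42, 0.47, 0.40, 0.45, 0.44],
                 [0.010, 0.011, 0.009, 0.010, 0.010]))
```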

10.
Sci Prog ; 107(1): 368504231223625, 2024.
Article in English | MEDLINE | ID: mdl-38312028

ABSTRACT

Efficient resource use requires substantial evaluation and re-evaluation of production processes. Since not all production details can be properly and promptly monitored and adjusted, improving resource allocation has long been a critical issue in commodity production, technique selection, and sustainable development. Advances in artificial intelligence (AI) offer a possible remedy. For example, with deep learning techniques and extensive data analysis, much of the previously unincorporated or unknown information associated with a firm's activities can be appropriately reflected and quantified. Once firms act on such a report, a more efficient and economically friendly production strategy can be adopted. The central theme of this special collection is to invite studies on how the design and application of AI benefit not only computer science and information engineering but also interdisciplinary fields, including renewable energy development, environmental protection, and economic analysis. Fourteen papers are published in this special collection.

11.
Econom Stat ; 30: 124-132, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38919505

ABSTRACT

Instrumental variable approaches have been demonstrated to be effective for semiparametrically modeling the propensity function when analyzing data that may be missing not at random. A model specification test is considered for a class of parsimonious semiparametric propensity models. The test is constructed by assessing over-identification, so as to detect possible incompatibility in the moment conditions when the model and/or the instrumental variables are misspecified. Validity of the test under the null hypothesis is established, and its power is studied when the model is misspecified. A data analysis and simulations are presented to demonstrate the effectiveness of our methods.
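
The generic logic behind such over-identification tests (the paper's exact statistic may differ) is the GMM J-test: with $r$ moment conditions and $p < r$ parameters,

$$J_n = n\,\bar g_n(\hat\theta)^{\top}\,\hat\Omega^{-1}\,\bar g_n(\hat\theta) \ \xrightarrow{d}\ \chi^2_{r-p}$$

under correct specification, where $\bar g_n$ stacks the sample moment conditions and $\hat\Omega$ estimates their covariance. A large $J_n$ signals that the propensity model and/or the instruments are incompatible with the data.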

12.
Heliyon ; 10(3): e25095, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38317955

ABSTRACT

The knowledge-intensive service industry has become an important part of the social economy: it promotes both innovation in the modern service industry and the upgrading and transformation of manufacturing. The agglomeration of knowledge-intensive services is both an inevitable result of economic growth and a precondition for sustained growth. Moreover, as China's national economy shifts from an industrialized economy to a service-oriented, knowledge-based one, the importance of the knowledge-intensive service industry grows by the day. This paper constructs a theoretical framework for the influence of population agglomeration on the economic development of urban agglomerations, arguing that population agglomeration promotes that development through the upgrading of industrial structure and the promotion of human capital. Future work should focus on the similarities and differences between population agglomeration and economic development across urban agglomerations in central China, explore the factors behind the economic development of urban agglomerations, and provide reasonable suggestions for the government in formulating economic policies. Taking the innovative perspective of population agglomeration, the paper studies the path through which agglomeration acts on regional economic development, improving on prior work in both research object and research method. On this basis, panel data for the Yangtze River Delta from 2012 to 2021 are selected as the research sample, with the location entropy index as the explanatory variable and regional gross domestic product (GDP) as the explained variable, taking into account both the soundness of the economic theory and the significance of the econometric tests. Regression analysis of the Yangtze River Delta urban agglomeration confirms the importance of knowledge-intensive service industry agglomeration to regional economic development. Finally, building on that contribution, the development of the knowledge-intensive service industry and the promotion of industrial agglomeration are discussed.
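
The location entropy (location quotient) index used as the explanatory variable is conventionally defined as (the paper's exact variant may differ)

$$LQ_{ij} = \frac{e_{ij} / e_j}{E_i / E},$$

where $e_{ij}$ is employment of industry $i$ in region $j$, $e_j$ is total employment in region $j$, $E_i$ is national employment in industry $i$, and $E$ is national total employment; $LQ_{ij} > 1$ indicates that industry $i$ is relatively agglomerated in region $j$.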

13.
Educ Psychol Meas ; 84(1): 40-61, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38250510

ABSTRACT

Metaheuristics are optimization algorithms that efficiently solve a variety of complex combinatorial problems. In psychological research, metaheuristics have been applied in short-scale construction and model specification search. In the present study, we propose a bee swarm optimization (BSO) algorithm to explore the structure underlying a psychological measurement instrument. The algorithm assigns items to an unknown number of nested factors in a confirmatory bifactor model, while simultaneously selecting items for the final scale. To achieve this, the algorithm follows the biological template of bees' foraging behavior: scout bees explore new food sources, whereas onlooker bees search in the vicinity of previously explored, promising food sources. Analogously, scout bees in BSO introduce major changes to a model specification (e.g., adding or removing a specific factor), whereas onlooker bees only make minor changes (e.g., adding an item to a factor or swapping items between specific factors). Through this division of labor in an artificial bee colony, the algorithm aims to strike a balance between two opposing strategies: diversification (or exploration) versus intensification (or exploitation). We demonstrate the usefulness of the algorithm for finding the underlying structure in two empirical data sets (Holzinger-Swineford and short dark triad questionnaire, SDQ3). Furthermore, we illustrate the influence of relevant hyperparameters such as the number of bees in the hive, the ratio of scouts to onlookers, and the number of top solutions to be followed. Finally, useful applications of the new algorithm are discussed, as well as limitations and possible future research opportunities.
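
A toy sketch of the scout/onlooker division of labor on a one-dimensional objective (purely illustrative; the actual algorithm searches over bifactor specifications and item assignments):

```python
import random

def fitness(x):                 # stand-in for a model-fit criterion (e.g., -BIC)
    return -(x - 3.0) ** 2

def scout():                    # major change: a fresh random solution
    return random.uniform(-10, 10)

def onlooker(x):                # minor change: perturb a promising solution
    return x + random.gauss(0, 0.3)

hive = [scout() for _ in range(20)]
for _ in range(200):
    hive.sort(key=fitness, reverse=True)
    elite = hive[:5]            # top solutions to be followed
    hive = ([onlooker(random.choice(elite)) for _ in range(15)]  # intensification
            + [scout() for _ in range(5)])                       # diversification
print(round(max(hive, key=fitness), 2))   # approaches the optimum at x = 3
```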

14.
Biometrika ; 107(4): 907-917, 2020 Dec.
Article in English | MEDLINE | ID: mdl-34176951

ABSTRACT

Composite likelihood functions are often used for inference in applications where the data have a complex structure. While inference based on the composite likelihood can be more robust than inference based on the full likelihood, the inference is not valid if the associated conditional or marginal models are misspecified. In this paper, we propose a general class of specification tests for composite likelihood inference. The test statistics are motivated by the fact that the second Bartlett identity holds for each component of the composite likelihood function when these components are correctly specified. We construct the test statistics based on the discrepancy between the so-called composite information matrix and the sensitivity matrix. As an illustration, we study three important cases of the proposed tests and establish their limiting distributions under both null and local alternative hypotheses. Finally, we evaluate the finite-sample performance of the proposed tests in several examples.
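
The identity the tests exploit: for each component log-likelihood $\ell_k$ that is correctly specified,

$$\mathrm{E}\{\nabla_\theta^2\,\ell_k(\theta_0)\} + \mathrm{E}\{\nabla_\theta\,\ell_k(\theta_0)\,\nabla_\theta\,\ell_k(\theta_0)^{\top}\} = 0,$$

so the sensitivity matrix (expected negative Hessian) equals the variability matrix (score covariance), and a test statistic can be built on the discrepancy between sample estimates of the two.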

15.
Philos Trans R Soc Lond B Biol Sci ; 375(1797): 20190355, 2020 04 27.
Article in English | MEDLINE | ID: mdl-32146887

ABSTRACT

In this paper, I will argue that the generality of the Price equation comes at a cost: the terms in it become meaningless. There are simple linear models that can be written in a Price equation-like form, and for those the terms have a meaningful interpretation. There are also models for which that is not the case; in general, when no assumptions on the shape of the fitness function are made and all possible models are allowed for, the regression coefficients in the Price equation do not allow for a meaningful interpretation. The failure to recognize that the Price equation, although general, only has a meaningful interpretation under restrictive assumptions has done real damage to the field of social evolution, as will be illustrated by looking at an application of the Price equation to group selection. This article is part of the theme issue 'Fifty years of the Price equation'.
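
For reference, the Price equation in its usual form is

$$\bar w\,\Delta\bar z = \operatorname{Cov}(w_i, z_i) + \operatorname{E}(w_i\,\Delta z_i),$$

where $w_i$ is the fitness of entity $i$, $z_i$ its trait value, and $\Delta z_i$ the change in trait value between $i$ and its offspring. The argument concerns when the covariance term, read as a regression of fitness on trait, supports a meaningful interpretation.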


Subjects
Biological Evolution , Population Genetics/methods , Genetic Models , Genetic Selection
16.
Assessment ; 27(8): 1731-1747, 2020 12.
Article in English | MEDLINE | ID: mdl-30873844

ABSTRACT

The researchers examined the factor structure and model specification of the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0) with confirmatory tetrad analysis (CTA) using partial least squares structural equation modeling (PLS-SEM), in a sample of adult clients (N = 298) receiving individual therapy at a university-based counseling research center. The CTA and PLS-SEM results identified the formative nature of the WHODAS 2.0 subscale scores, supporting an alternative measurement model of the WHODAS 2.0 scores as a second-order formative-formative model.
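
The quantity CTA examines: for any four indicators of a single reflective factor, population tetrads such as

$$\tau_{1234} = \sigma_{12}\,\sigma_{34} - \sigma_{13}\,\sigma_{24}$$

must vanish, because each covariance factorizes through the common factor loadings. Systematically non-vanishing tetrads therefore count as evidence against a reflective specification and in favor of a formative one, which is the pattern reported here.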


Subjects
Disability Evaluation , Persons with Disabilities , Adult , Humans , Latent Class Analysis , Least-Squares Analysis , Psychometrics , Reproducibility of Results , World Health Organization
17.
Glob Epidemiol ; 2: 100033, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32905083

ABSTRACT

In the first half of 2020, much excitement in news media and some peer reviewed scientific articles was generated by the discovery that fine particulate matter (PM2.5) concentrations and COVID-19 mortality rates are statistically significantly positively associated in some regression models. This article points out that they are non-significantly negatively associated in other regression models, once omitted confounders (such as latitude and longitude) are included. More importantly, positive regression coefficients can and do arise when (generalized) linear regression models are applied to data with strong nonlinearities, including data on PM2.5, population density, and COVID-19 mortality rates, due to model specification errors. In general, statistical modeling accompanied by judgments about causal interpretations of statistical associations and regression coefficients - the current weight-of-evidence (WoE) approach favored in much current regulatory risk analysis for air pollutants - is not a valid basis for determining whether or to what extent risk of harm to human health would be reduced by reducing exposure. The traditional scientific method based on testing predictive generalizations against data remains a more reliable paradigm for risk analysis and risk management.
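
A minimal simulation (entirely hypothetical data) reproduces the mechanism: a threshold-like dependence of mortality on population density, with exposure tracking density, yields a significantly positive PM2.5 coefficient in a linear model even though exposure has no effect at all.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
density = rng.lognormal(0.0, 1.0, n)
pm25 = 5 + 2 * np.log(density) + rng.normal(0, 1, n)   # exposure tracks density
# Mortality jumps above a density threshold; PM2.5 itself has no effect.
mort = np.where(density > 3, 1.0, 0.2) + rng.normal(0, 0.1, n)

fit = sm.OLS(mort, sm.add_constant(pm25)).fit()
print(fit.params[1], fit.pvalues[1])   # positive, "significant" slope
```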

18.
Front Microbiol ; 10: 1022, 2019.
Article in English | MEDLINE | ID: mdl-31178829

ABSTRACT

13C metabolic flux analysis (MFA) is the method of choice when a detailed inference of intracellular metabolic fluxes in living organisms under metabolic quasi-steady-state conditions is desired. Under continuous development for two decades, the technology has made major contributions to the quantitative characterization of organisms in all fields of biotechnology and health-related research. 13C MFA, however, stands out from other "-omics sciences" in that it requires not only experimental-analytical data, but also mathematical models and a computational toolset to infer the quantities of interest, i.e., the metabolic fluxes. At present, these models cannot be conveniently exchanged between different labs. Here, we present the implementation-independent model description language FluxML for specifying 13C MFA models. The core of FluxML captures the metabolic reaction network together with atom mappings, constraints on the model parameters, and the wealth of data configurations. In particular, we describe the governing design processes that shaped the FluxML language. We demonstrate the utility of FluxML in representing many contemporary experimental-analytical requirements in the field of 13C MFA. The major aim of FluxML is to offer a sound, open, and future-proof language to unambiguously express and conserve all the necessary information for model re-use, exchange, and comparison. Along with FluxML, several powerful computational tools are supplied for easy handling, but also to maintain a maximum of flexibility. Altogether, the FluxML collection is an "all-around carefree package" for 13C MFA modelers. We believe that FluxML improves scientific productivity as well as transparency, and thereby contributes to the efficiency and reproducibility of computational modeling efforts in the field of 13C MFA.

19.
Front Microbiol ; 10: 1734, 2019.
Article in English | MEDLINE | ID: mdl-31417525

ABSTRACT

[This corrects the article DOI: 10.3389/fmicb.2019.01022.].

20.
Front Neuroinform ; 12: 80, 2018.
Article in English | MEDLINE | ID: mdl-30483089

ABSTRACT

The Si elegans platform targets the complete virtualization of the nematode Caenorhabditis elegans and its environment. This paper presents a suite of unified web-based Graphical User Interfaces (GUIs) as the main user interaction point and discusses their underlying technologies and methods. The user-friendly features of this tool suite enable users to graphically create neuron and network models and behavioral experiments, without requiring knowledge of domain-specific computer-science tools. The framework furthermore allows the graphical visualization of all simulation results using a worm locomotion and neural activity viewer. Models, experiment definitions, and results can be exported in a machine-readable format, thereby facilitating reproducible and cross-platform execution of in silico C. elegans experiments in other simulation environments. This is made possible by a novel XML-based behavioral experiment definition encoding format, a NeuroML XML-based model generation and network configuration description language, and their associated GUIs. User survey data confirm the platform's usability and functionality, and provide insights into future directions for web-based simulation GUIs of C. elegans and other living organisms. The tool suite is available online to the scientific community, and its source code has been made available.
