Results 1 - 20 of 46
1.
Stat Med ; 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39278641

ABSTRACT

Trivariate joint modeling of longitudinal count data, recurrent events, and a terminal event for family data has attracted increasing interest in medical studies. For example, families with Lynch syndrome (LS) are at high risk of developing colorectal cancer (CRC), where the number of polyps and the frequency of colonoscopy screening visits are highly associated with the risk of CRC among individuals and families. To assess how screening visits influence polyp detection, which in turn influences time to CRC, we propose a clustered trivariate joint model. The proposed model accommodates longitudinal count data that are zero-inflated and over-dispersed and invokes individual-specific and family-specific random effects to account for dependence among individuals and families. We formulate the proposed model as a latent Gaussian model so that Bayesian estimation can be carried out with the integrated nested Laplace approximation algorithm, and we evaluate its performance using simulation studies. Our trivariate joint model is applied to a series of 18 families from Newfoundland, with the occurrence of CRC taken as the terminal event, the colonoscopy screening visits as recurrent events, and the number of polyps detected at each visit as zero-inflated count data with overdispersion. We show that our trivariate model fits better than alternative bivariate models and that cluster effects should not be ignored when analyzing family data. Finally, the proposed model enables us to quantify heterogeneity across families and individuals in polyp detection and CRC risk, thus helping to identify individuals and families who would benefit from more intensive screening visits.

2.
Stat Med ; 43(3): 578-605, 2024 02 10.
Article in English | MEDLINE | ID: mdl-38213277

ABSTRACT

Research on dynamic treatment regimes has attracted extensive interest. Many methods have been proposed in the literature, which, however, are vulnerable to misclassification in covariates. In particular, although Q-learning has received considerable attention, its applicability to data with misclassified covariates is unclear. In this article, we investigate how ignoring misclassification in binary covariates can impact the determination of optimal decision rules in randomized treatment settings, and we demonstrate its deleterious effects on Q-learning through empirical studies. We present two correction methods to address misclassification effects on Q-learning. Numerical studies reveal that misclassification in covariates induces non-negligible estimation bias and that the correction methods successfully ameliorate bias in parameter estimation.
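For readers unfamiliar with Q-learning in this single-stage, randomized setting, the sketch below illustrates the basic estimator the abstract builds on: regress the outcome on the covariate, the treatment, and their interaction, then assign whichever treatment maximizes the fitted Q-function. The data, variable names, and coefficient values are illustrative assumptions, and the authors' two correction methods for misclassified covariates are not reproduced.

```python
# Minimal sketch of single-stage Q-learning with a binary covariate X and
# randomized treatment A in {0, 1}; variable names are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.binomial(1, 0.4, n)            # true binary covariate
A = rng.binomial(1, 0.5, n)            # randomized treatment
Y = 1.0 + 0.5 * X + A * (1.0 - 2.0 * X) + rng.normal(0, 1, n)  # outcome

# Q-learning: regress Y on (X, A, X*A) and pick the treatment that
# maximizes the fitted Q-function for each covariate value.
design = np.column_stack([X, A, X * A])
q_model = LinearRegression().fit(design, Y)

def optimal_rule(x):
    q0 = q_model.predict([[x, 0, 0]])[0]
    q1 = q_model.predict([[x, 1, x]])[0]
    return int(q1 > q0)

print([optimal_rule(x) for x in (0, 1)])   # decision rule by covariate level

# Misclassification: replacing X by a noisy surrogate (e.g. 10% of entries
# flipped) and refitting the same regression illustrates the bias the paper studies.
```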


Subject(s)
Clinical Decision Rules, Machine Learning, Humans
3.
J Appl Stat ; 50(7): 1611-1634, 2023.
Article in English | MEDLINE | ID: mdl-37197758

ABSTRACT

Autoregressive (AR) models are useful in time series analysis. Inferences under such models are distorted in the presence of measurement error, a common feature in applications. In this article, we establish analytical results quantifying the biases in parameter estimation for AR models when measurement error effects are neglected. We consider two measurement error models to describe different data contamination scenarios. We propose an estimating equation approach to estimate the AR model parameters with measurement error effects accounted for, and we further discuss forecasting with the proposed method. Our work is motivated by COVID-19 data, which are error-contaminated for multiple reasons, including asymptomatic cases and varying incubation periods. We implement the proposed method by conducting sensitivity analyses and forecasting the fatality rate of COVID-19 over time for the four most populated provinces in Canada. The results suggest that incorporating or ignoring measurement error effects can yield rather different parameter estimates and forecasts.
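The attenuation effect the abstract refers to can be illustrated with a small simulation: an AR(1) series observed with additive noise yields a naive lag-one coefficient estimate that is biased toward zero. This sketch uses an illustrative parameter setting and does not implement the authors' estimating equation correction.

```python
# Simulation sketch: naive AR(1) estimation with and without additive
# measurement error; the setup and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 5000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()        # true AR(1) process
w = x + rng.normal(0, 1.0, n)                   # error-contaminated observations

def ar1_ols(series):
    # least-squares estimate of the lag-1 coefficient
    y, lag = series[1:], series[:-1]
    return np.sum(lag * y) / np.sum(lag ** 2)

print("true series estimate:", round(ar1_ols(x), 3))   # close to 0.7
print("noisy series estimate:", round(ar1_ols(w), 3))  # attenuated toward 0
```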

4.
PLoS One ; 18(2): e0277878, 2023.
Article in English | MEDLINE | ID: mdl-36827382

ABSTRACT

While the impact of the COVID-19 pandemic has been widely studied, relatively little attention has been paid to the public's sentiment in response to it. In this article, we scrape COVID-19-related tweets from the microblogging platform Twitter and examine tweets from February 24, 2020 to October 14, 2020 in four Canadian cities (Toronto, Montreal, Vancouver, and Calgary) and four U.S. cities (New York, Los Angeles, Chicago, and Seattle). Applying the RoBERTa, VADER, and NRC approaches, we evaluate sentiment intensity scores and visualize the results over different periods of the pandemic. Sentiment scores for tweets concerning three anti-epidemic measures, "masks", "vaccine", and "lockdown", are computed for comparison. We explore possible causal relationships among variables describing tweet activity and the sentiment scores of COVID-19-related tweets by integrating the echo state network method with convergent cross-mapping. Our analyses show that public sentiment about COVID-19 varies over time and across places and differs with respect to the anti-epidemic measures of "masks", "vaccines", and "lockdown". Evidence of causal relationships is revealed for the examined variables, assuming the suggested model is feasible.
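Of the three scoring approaches named, VADER is the simplest to demonstrate. The sketch below assumes the vaderSentiment package and two placeholder tweets; it does not reproduce the RoBERTa or NRC pipelines or the causal analysis.

```python
# Minimal VADER scoring sketch; the tweets here are placeholders, and the
# RoBERTa and NRC steps of the paper are not reproduced.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

tweets = [
    "Masks are keeping our community safe.",
    "Another lockdown? This is exhausting.",
]
analyzer = SentimentIntensityAnalyzer()
for text in tweets:
    scores = analyzer.polarity_scores(text)   # neg/neu/pos plus compound in [-1, 1]
    print(scores["compound"], text)
```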


Subject(s)
COVID-19, Social Media, Vaccines, Humans, Sentiment Analysis, Pandemics, Canada, Learning
5.
Stat Methods Med Res ; 32(4): 691-711, 2023 04.
Article in English | MEDLINE | ID: mdl-36694932

ABSTRACT

In the framework of causal inference, the inverse probability weighting estimation method and its variants have been commonly employed to estimate the average treatment effect. Such methods, however, are challenged by the presence of irrelevant pre-treatment variables and measurement error. Ignoring these features and naively applying the usual inverse probability weighting procedures typically yields biased inference results. In this article, we develop an inference method for estimating the average treatment effect with these features taken into account. We establish theoretical properties for the resulting estimator and carry out numerical studies to assess its finite sample performance.
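As context, the usual (uncorrected) inverse probability weighting estimator that the abstract takes as a starting point can be written in a few lines. The simulated data and logistic propensity model below are illustrative, and the paper's corrections for irrelevant covariates and measurement error are not included.

```python
# Sketch of the usual inverse probability weighting (IPW) estimator of the
# average treatment effect on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
X = rng.normal(size=(n, 2))                       # pre-treatment covariates
p = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, p)                            # treatment assignment
Y = 2.0 * A + X[:, 0] + rng.normal(size=n)        # outcome; true ATE = 2

ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]   # estimated propensity score
ate_ipw = np.mean(A * Y / ps) - np.mean((1 - A) * Y / (1 - ps))
print(round(ate_ipw, 3))
```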


Subject(s)
Statistical Models, Probability, Causality, Computer Simulation, Propensity Score
6.
Biometrics ; 79(2): 1073-1088, 2023 06.
Article in English | MEDLINE | ID: mdl-35032335

ABSTRACT

Research on complex associations between a gene network and multiple responses has attracted increasing attention. A great challenge in analyzing genetic data is posed by the genetic network, which is typically unknown. Moreover, mismeasurement of responses introduces additional complexity that distorts usual inferential procedures. In this paper, we consider the problem with mixed binary and continuous responses that are subject to mismeasurement and associated with complex structured covariates. We start with the case where the data are precisely measured. We propose a generalized network structured model and develop a two-step inferential procedure: in the first step, we employ a Gaussian graphical model to characterize the covariate network structure, and in the second step, we incorporate the estimated graphical structure of the covariates and develop an estimating equation method. We then extend the development to accommodate mismeasured responses, considering two cases where the information on mismeasurement is either known or estimated from a validation sample. Theoretical results are established and numerical studies are conducted to evaluate the finite sample performance of the proposed methods. We apply the proposed method to analyze the outbred Carworth Farms White mice data arising from a genome-wide association study.
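A minimal sketch of the first step only, estimating the covariate network with a Gaussian graphical model via the graphical lasso, is given below. The simulated covariance structure is an assumption, and the second-step estimating equations and the mismeasurement extension are not shown.

```python
# Sketch of the first step: estimating a covariate network with a Gaussian
# graphical model (graphical lasso) on simulated data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(7)
cov = np.array([[1.0, 0.5, 0.0, 0.0, 0.0],
                [0.5, 1.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.4, 0.0],
                [0.0, 0.0, 0.4, 1.0, 0.0],
                [0.0, 0.0, 0.0, 0.0, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(5), cov=cov, size=500)

gl = GraphicalLassoCV().fit(X)
# Nonzero off-diagonal entries of the precision matrix indicate conditional
# dependence, i.e. edges in the estimated covariate network.
print(np.round(gl.precision_, 2))
```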


Subject(s)
Gene Regulatory Networks, Genome-Wide Association Study, Animals, Mice, Normal Distribution
7.
Biometrics ; 79(2): 1089-1102, 2023 06.
Article in English | MEDLINE | ID: mdl-35261029

ABSTRACT

Zero-inflated count data arise frequently in genomics studies. Analysis of such data is often based on a mixture model that accommodates excess zeros in combination with a Poisson distribution, and various inference methods have been proposed under such a model. Those analysis procedures, however, are challenged by the presence of measurement error in the count responses. In this article, we propose a new measurement error model to describe error-contaminated count data. We show that ignoring measurement error effects in the analysis may generally lead to invalid inference results, and we identify situations where ignoring measurement error still yields consistent estimators. Furthermore, we propose a Bayesian method to address the effects of measurement error under the zero-inflated Poisson model and discuss identifiability issues. We develop a data-augmentation algorithm that is easy to implement. Simulation studies are conducted to evaluate the performance of the proposed method. We apply our method to analyze data arising from a prostate adenocarcinoma genomic study.
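For orientation, the standard zero-inflated Poisson model referred to above can be fit directly when counts are measured without error. The sketch below uses simulated data and statsmodels; the paper's Bayesian measurement error correction and data-augmentation algorithm are not reproduced.

```python
# Sketch: simulate zero-inflated Poisson counts and fit a standard ZIP model
# with statsmodels; the measurement error correction is not included here.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.8 * x)                         # Poisson mean
pi_zero = 0.3                                      # structural zero probability
counts = np.where(rng.random(n) < pi_zero, 0, rng.poisson(mu))

exog = sm.add_constant(x)
zip_fit = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1))).fit(maxiter=200, disp=False)
print(zip_fit.params)   # inflation intercept plus Poisson intercept and slope
```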


Subject(s)
Algorithms, Statistical Models, Male, Humans, Bayes Theorem, Computer Simulation, Poisson Distribution
8.
Can J Stat ; 50(2): 395-416, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35573897

ABSTRACT

The coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has spread stealthily and presented a tremendous threat to the public. It is important to investigate the transmission dynamics of COVID-19 to help understand the impact of the disease on public health and the economy. In this article, we develop a new epidemic model that utilizes a set of ordinary differential equations with unknown parameters to delineate the transmission process of COVID-19. The model accounts for asymptomatic infections as well as the lag between symptom onset and the confirmation date of infection. To reflect the transmission potential of an infected case, we derive the basic reproduction number from the proposed model. Using the daily reported number of confirmed cases, we describe an estimation procedure for the model parameters, which involves adapting the iterated filter-ensemble adjustment Kalman filter (IF-EAKF) algorithm. To illustrate the use of the proposed model, we examine the COVID-19 data from Quebec for the period from 2 April 2020 to 10 May 2020 and carry out sensitivity studies under a variety of assumptions. Simulation studies are used to evaluate the performance of the proposed model under a variety of settings.


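A stripped-down compartmental ODE model conveys the flavor of the approach. The SIR system, parameter values, and R0 = beta/gamma below are illustrative assumptions; the paper's model includes asymptomatic compartments and reporting lags and estimates parameters with the IF-EAKF algorithm, none of which is reproduced here.

```python
# Minimal SIR-type ODE sketch solved with scipy; compartments and parameter
# values are illustrative, not the paper's full model.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.2                      # transmission and removal rates
N = 1e6                                     # population size

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, (0, 160), [N - 10, 10, 0], t_eval=np.linspace(0, 160, 161))
print("basic reproduction number R0 =", beta / gamma)
print("peak infections ~", int(sol.y[1].max()))
```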

9.
J Med Virol ; 94(9): 4156-4169, 2022 09.
Article in English | MEDLINE | ID: mdl-35524338

ABSTRACT

Providing sensible estimates of the mean incubation time for COVID-19 is important yet complex. This study aims to provide synthetic estimates of the mean incubation time of COVID-19 by capitalizing on estimates reported in the literature and exploring different ways to accommodate the heterogeneity of the reported studies. Online databases between January 1, 2020 and May 20, 2021 are first searched to obtain estimates of the mean incubation time of COVID-19, and meta-analyses are then conducted to generate synthetic estimates. Heterogeneity of the studies is examined using Cochran's Q statistic and Higgins and Thompson's I² statistic, and subgroup analyses are conducted using mixed effects models. Publication bias is assessed using the funnel plot and Egger's test. Using all the reported mean incubation estimates for COVID-19, the synthetic mean incubation time is estimated to be 6.43 days with a 95% confidence interval (CI) of [5.90, 6.96]; using the reported mean incubation estimates together with the transformed median incubation estimates, the estimated mean incubation time is 6.07 days with a 95% CI of [5.70, 6.45]. The reported estimates of the mean incubation time of COVID-19 vary considerably for multiple reasons, including heterogeneity and publication bias. To alleviate these issues, we take different angles to provide a sensible estimate of the mean incubation time of COVID-19. Our analyses show that the mean incubation time of COVID-19 between January 1, 2020 and May 20, 2021 ranges from 5.68 to 8.30 days.
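Cochran's Q and the I² statistic mentioned above can be computed directly from study-level estimates and standard errors under inverse-variance weighting. The inputs in this sketch are placeholders, not the estimates collected in the paper.

```python
# Sketch: Cochran's Q and Higgins & Thompson's I^2 from study-level estimates
# and standard errors; the numbers below are made up for illustration.
import numpy as np

est = np.array([5.2, 6.1, 6.8, 5.9, 7.0])      # reported mean incubation times (days)
se = np.array([0.40, 0.55, 0.30, 0.50, 0.45])  # their standard errors

w = 1 / se ** 2                                # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)           # fixed-effect pooled estimate
Q = np.sum(w * (est - pooled) ** 2)            # Cochran's Q
k = len(est)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100         # I^2 as a percentage

print(round(pooled, 2), round(Q, 2), round(I2, 1))
```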


Subject(s)
COVID-19, Humans
10.
BMC Med Res Methodol ; 22(1): 15, 2022 01 14.
Article in English | MEDLINE | ID: mdl-35026998

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic has had a significant impact on public mental health. Current efforts focus on alleviating the impacts of the disease on public health and the economy, while the psychological effects of COVID-19 have been relatively neglected. In this research, we explore a quantitative characterization of the pandemic's impact on public mental health by studying an online survey dataset from the United States. METHODS: The analyses are based on a large-scale online mental health survey in the United States, conducted over 12 consecutive weeks from April 23, 2020 to July 21, 2020. We examine the risk factors that have a significant impact on mental health as well as their estimated effects over time. We employ the multiple imputation by chained equations (MICE) method to handle missing values and use logistic regression with the least absolute shrinkage and selection operator (Lasso) to identify risk factors for mental health. RESULTS: Our analysis shows that the risk factors for an individual experiencing mental health issues include the pandemic situation of the state where the individual resides, age, gender, race, marital status, health conditions, the number of household members, employment status, the level of confidence in future food affordability, availability of health insurance, mortgage status, and whether children are enrolled in school. The effects of most predictors appear to change over time, although the degree varies across risk factors. The effects of risk factors such as state and gender show noticeable change over time, whereas age exhibits seemingly unchanged effects over time. CONCLUSIONS: The results unveil evidence-based findings that identify the groups who are psychologically vulnerable to the COVID-19 pandemic. This study provides helpful evidence for healthcare providers and policymakers taking steps to mitigate the pandemic's effects on public mental health, especially in boosting public health care, improving public confidence in future food conditions, and creating more job opportunities. TRIAL REGISTRATION: This article does not report the results of a health care intervention on human participants.
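A high-level sketch of the analysis pipeline, MICE-style imputation followed by Lasso-penalized logistic regression, is shown below using scikit-learn on simulated placeholder data; the actual survey variables and tuning choices are not those of the study.

```python
# Sketch: chained-equation imputation followed by L1-penalized (Lasso)
# logistic regression; the data and column roles are simulated placeholders.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 1000, 8
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # binary outcome
X[rng.random((n, p)) < 0.1] = np.nan                                # 10% missing entries

X_imp = IterativeImputer(random_state=0).fit_transform(X)           # MICE-style imputation
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso_logit.fit(X_imp, y)
print(lasso_logit.coef_)     # zeroed coefficients flag variables screened out
```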


Subject(s)
COVID-19, Pandemics, Humans, Mental Health, SARS-CoV-2, Schools, United States/epidemiology
11.
Stat Biosci ; 14(1): 175-190, 2022.
Article in English | MEDLINE | ID: mdl-34522235

ABSTRACT

To confine the spread of an infectious disease, setting a sensible quarantine time is crucial, and doing so requires a good understanding of the distribution of the disease's incubation times. For the ongoing COVID-19 pandemic, 14 days is commonly taken as the quarantine time to curb the spread of the virus while balancing the impacts of COVID-19 on diverse aspects of society, including public health, the economy, and humanitarian concerns. However, setting a sensible quarantine time is not trivial and depends on various underlying factors. In this article, we examine the distribution of the COVID-19 incubation time using likelihood-based methods. Our study is carried out on a dataset of 178 COVID-19 cases dated from January 20, 2020 to February 29, 2020, with information on exposure periods and dates of symptom onset collected. To gain a good understanding of possible scenarios, we employ different models to describe COVID-19 incubation times. Our findings suggest that, statistically, the 14-day quarantine time may not be long enough to keep the probability of an early release of infected individuals small. While the study data are not large enough to yield a definitively acceptable quarantine time, and in practice decision-makers may account for other social and economic considerations when setting a practically acceptable quarantine time, our study demonstrates useful methods for determining a reasonable quarantine time from a statistical standpoint. It also reveals some of the complexity involved in fully understanding the COVID-19 incubation time distribution. Supplementary Information: The online version contains supplementary material available at 10.1007/s12561-021-09320-8.
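The quarantine question can be illustrated by fitting a parametric incubation distribution and evaluating its tail beyond 14 days. The sketch below assumes a lognormal model and simulated incubation times, whereas the paper's likelihood construction accounts for interval-censored exposure periods.

```python
# Sketch: fit a lognormal distribution to (simulated) incubation times and
# compute P(incubation > 14 days); the real analysis handles interval-censored
# exposure periods, which this toy version ignores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
incubation = rng.lognormal(mean=np.log(5.5), sigma=0.45, size=178)   # placeholder data

shape, loc, scale = stats.lognorm.fit(incubation, floc=0)
p_over_14 = stats.lognorm.sf(14, shape, loc=loc, scale=scale)
print("estimated P(incubation > 14 days) =", round(p_over_14, 4))
```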

12.
13.
Biometrics ; 78(3): 894-907, 2022 09.
Article in English | MEDLINE | ID: mdl-33881782

ABSTRACT

Data with a huge size present great challenges in modeling, inference, and computation. In handling big data, much attention has been directed to settings with "large p small n", and relatively little work has been done to address problems in which both p and n are large, though data with such a feature have now become more accessible than before, where p represents the number of variables and n stands for the sample size. The big volume of data does not automatically ensure good quality of inferences because a large number of unimportant variables may be collected in the process of gathering informative variables. To carry out valid statistical analysis, it is imperative to screen out noisy variables that have no predictive value for explaining the outcome variable. In this paper, we develop a screening method for handling large-sized survival data, where the sample size n is large and the dimension p of covariates is of non-polynomial order of the sample size n, the so-called NP-dimension. We rigorously establish theoretical results for the proposed method and conduct numerical studies to assess its performance. Our research offers multiple extensions of existing work and enlarges the scope of high-dimensional data analysis. The proposed method capitalizes on the connections among useful regression settings and offers a computationally efficient screening procedure. It can be applied to different situations with large-scale data, including genomic data.


Subject(s)
Genome, Genomics, Proportional Hazards Models, Sample Size
14.
Stat Methods Med Res ; 30(5): 1155-1186, 2021 05.
Article in English | MEDLINE | ID: mdl-33635738

ABSTRACT

Bivariate responses with mixed continuous and binary variables arise commonly in applications such as clinical trials and genetic studies. Statistical methods based on jointly modeling continuous and binary variables are available. However, such methods ignore the effects of response mismeasurement, a ubiquitous feature in applications. It has been well studied that in many settings, ignoring mismeasurement in variables usually results in biased estimation. In this paper, we consider the setting of a bivariate outcome vector containing a continuous component and a binary component, both subject to mismeasurement. We propose estimating equation approaches to handle measurement error in the continuous response and misclassification in the binary response simultaneously. The proposed estimators are consistent and robust to certain model misspecification under regularity conditions. Extensive simulation studies confirm that the proposed methods successfully correct the biases resulting from the errors in variables under various settings. The proposed methods are applied to analyze the outbred Carworth Farms White mice data arising from a genome-wide association study.


Subject(s)
Genome-Wide Association Study, Statistical Models, Animals, Bias, Causality, Computer Simulation, Cost-Benefit Analysis, Mice
15.
PLoS One ; 16(1): e0244536, 2021.
Article in English | MEDLINE | ID: mdl-33465142

ABSTRACT

BACKGROUND: Since March 11, 2020, when the World Health Organization (WHO) declared the COVID-19 pandemic, the number of infected cases, the number of deaths, and the number of affected countries have climbed rapidly. To understand the impact of COVID-19 on public health, many studies have been conducted for various countries. To complement the available work, in this article we examine Canadian COVID-19 data for the period of March 18, 2020 to August 16, 2020 with the aim of forecasting the dynamic trend over a short term. METHOD: We focus on Canadian data and analyze the four provinces with the most severe situations in Canada: Ontario, Alberta, British Columbia, and Quebec. To build predictive models and conduct prediction, we employ three models, smooth transition autoregressive (STAR) models, neural network (NN) models, and susceptible-infected-removed (SIR) models, to fit the time series of confirmed cases in the four provinces separately. For comparison, we also analyze daily infections in two U.S. states, Texas and New York, for the period of March 18, 2020 to August 16, 2020. We emphasize that different models make different assumptions that are essentially difficult to validate; yet invoking different models allows us to examine the data from different angles, thus helping to reveal the underlying trajectory of the development of COVID-19 in Canada. FINDING: The examination of the data from March 18, 2020 to August 11, 2020 shows that the STAR, NN, and SIR models may output different results, though the differences are small in some cases. Prediction over a short term incurs smaller prediction variability than over a long term, as expected. The NN method tends to outperform the other two methods. All the methods forecast an upward trend in all four Canadian provinces for the period of August 12, 2020 to August 23, 2020, though the degree varies from method to method. This research offers model-based insights into the pandemic's evolution in Canada.


Subject(s)
COVID-19/epidemiology, COVID-19/mortality, Canada/epidemiology, Demography/statistics & numerical data, Humans, Statistical Models, Mortality/trends, Neural Networks, Computer
16.
Biometrics ; 77(3): 956-969, 2021 09.
Article in English | MEDLINE | ID: mdl-32687216

ABSTRACT

In survival data analysis, the Cox proportional hazards (PH) model is perhaps the most widely used model for featuring the dependence of survival times on covariates. While many inference methods have been developed under this model or its variants, those models are not adequate for handling data with complex structured covariates. High-dimensional survival data often entail several features: (1) many covariates are inactive in explaining the survival information, (2) active covariates are associated in a network structure, and (3) some covariates are error-contaminated. To handle such survival data, we propose graphical PH measurement error models and develop inferential procedures for the parameters of interest. Our proposed models significantly enlarge the scope of the usual Cox PH model and have great flexibility in characterizing survival data. Theoretical results are established to justify the proposed methods, and numerical studies are conducted to assess their performance.


Subject(s)
Statistical Models, Proportional Hazards Models, Survival Analysis
17.
Stat Med ; 39(26): 3700-3719, 2020 11 20.
Article in English | MEDLINE | ID: mdl-32914420

ABSTRACT

In genetic association studies, mixed effects models have been widely used for detecting pleiotropic effects, which occur when one gene affects multiple phenotypic traits. In particular, bivariate mixed effects models are useful for describing the association of a gene with a continuous trait and a binary trait. However, such models are inadequate for data with response mismeasurement, a feature that is often overlooked. It has been well studied that, in univariate settings, ignoring mismeasurement in variables usually results in biased estimation. In this paper, we consider the setting of a bivariate outcome vector containing a continuous component and a binary component, both subject to mismeasurement. We propose an induced likelihood approach and an EM algorithm to handle measurement error in the continuous response and misclassification in the binary response simultaneously. Simulation studies confirm that the proposed methods successfully remove the bias induced by the response mismeasurement.


Subject(s)
Bias, Genetic Association Studies, Computer Simulation, Likelihood Functions, Phenotype
18.
J Med Virol ; 92(11): 2543-2550, 2020 11.
Article in English | MEDLINE | ID: mdl-32470164

ABSTRACT

Coronavirus disease 2019 (COVID-19) has been found to be caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). However, comprehensive knowledge of COVID-19 remains incomplete and many important features are still unknown. This manuscript conducts a meta-analysis and a sensitivity study to answer the following questions: What is the basic reproduction number? How long is the incubation time of the disease on average? What proportion of infections are asymptomatic? And ultimately, what is the case fatality rate? Our studies estimate the basic reproduction number to be 3.15 with a 95% CI of (2.41-3.90), the average incubation time to be 5.08 days with a 95% CI of (4.77-5.39), the asymptomatic infection rate to be 46% with a 95% CI of (18.48%-73.60%), and the case fatality rate to be 2.72% with a 95% CI of (1.29%-4.16%) when asymptomatic infections are accounted for.


Subject(s)
Asymptomatic Infections/epidemiology, Basic Reproduction Number, COVID-19/mortality, COVID-19/virology, Infectious Disease Incubation Period, SARS-CoV-2/physiology, Humans
19.
Lifetime Data Anal ; 26(3): 421-450, 2020 07.
Article in English | MEDLINE | ID: mdl-31432384

ABSTRACT

It is well established that measurement error has a drastically negative impact on data analysis. It can not only bias parameter estimates but may also cause loss of power for testing relationships between variables. Although survival analysis of error-contaminated data has attracted extensive interest, relatively little attention has been paid to survival data with error-contaminated covariates when the underlying population is characterized by a cured fraction. In this paper, we consider this problem, in which the lifetimes of non-cured individuals are described by the additive hazards model and the measurement error process is described by an additive model. Unlike estimating the relative risk in the proportional hazards model, the additive hazards model allows us to estimate the absolute risk difference associated with the covariates. To allow for model flexibility, we incorporate time-dependent covariates into the model. We develop estimation methods for the two scenarios, without or with measurement error. The proposed methods are evaluated from both theoretical and numerical perspectives. Furthermore, a real-life data application is presented to illustrate the utility of the methodology.


Subject(s)
Proportional Hazards Models, Algorithms, Bias, Computer Simulation, Humans, Survival Analysis
20.
Stat Med ; 39(4): 456-468, 2020 02 20.
Article in English | MEDLINE | ID: mdl-31802532

ABSTRACT

Causal inference has been widely conducted in various fields and many methods have been proposed for different settings. However, for noisy data with both mismeasurement and missing observations, those methods often break down. In this paper, we consider a problem in which binary outcomes are subject to both missingness and misclassification, when interest lies in estimating the average treatment effect (ATE). We examine the asymptotic biases caused by ignoring missingness and/or misclassification and establish the intrinsic connections between missingness effects and misclassification effects on the estimation of the ATE. We develop valid weighted estimation methods to simultaneously correct for missingness and misclassification effects. To provide protection against model misspecification, we further propose a doubly robust correction method which yields consistent estimators when either the treatment model or the outcome model is misspecified. Simulation studies are conducted to assess the performance of the proposed methods, and an application to smoking cessation data illustrates their use.
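For reference, a standard doubly robust (augmented IPW) estimator of the ATE on fully observed, correctly classified simulated binary outcomes looks as follows; the paper's simultaneous corrections for missingness and misclassification are not reproduced here.

```python
# Sketch of the augmented IPW (doubly robust) estimator of the ATE as a risk
# difference, on clean simulated data without missingness or misclassification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 5000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 0.4 * X[:, 1])))
A = rng.binomial(1, p)                                            # treatment
y_prob = 1 / (1 + np.exp(-(-0.5 + 1.0 * A + 0.8 * X[:, 0])))
Y = rng.binomial(1, y_prob)                                       # binary outcome

ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]                    # treatment model
m1 = LogisticRegression().fit(X[A == 1], Y[A == 1]).predict_proba(X)[:, 1]    # outcome model, A = 1
m0 = LogisticRegression().fit(X[A == 0], Y[A == 0]).predict_proba(X)[:, 1]    # outcome model, A = 0

# Doubly robust estimate: consistent if either the treatment or outcome model is correct.
aipw = np.mean(m1 - m0 + A * (Y - m1) / ps - (1 - A) * (Y - m0) / (1 - ps))
print(round(aipw, 3))
```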


Subject(s)
Statistical Models, Theoretical Models, Bias, Causality, Computer Simulation, Humans