Results 1 - 20 of 46
1.
Multivariate Behav Res ; 55(2): 188-210, 2020.
Article in English | MEDLINE | ID: mdl-31179751

ABSTRACT

Complex mediation models, such as a two-mediator sequential model, have become more prevalent in the literature. To test an indirect effect in a two-mediator model, we conducted a large-scale Monte Carlo simulation study of the Type I error, statistical power, and confidence interval coverage rates of 10 frequentist and Bayesian confidence/credible intervals (CIs) for normally and nonnormally distributed data. The simulation included never-studied methods and conditions (e.g., Bayesian CI with flat and weakly informative prior methods, two model-based bootstrap methods, and two nonnormality conditions) as well as understudied methods (e.g., profile-likelihood, Monte Carlo with maximum likelihood standard error [MC-ML] and robust standard error [MC-Robust]). The popular bias-corrected (BC) bootstrap showed inflated Type I error rates and CI under-coverage. We recommend different methods depending on the purpose of the analysis. For testing the null hypothesis of no mediation, we recommend MC-ML, profile-likelihood, and two Bayesian methods. To report a CI, if the data have a multivariate normal distribution, we recommend MC-ML, profile-likelihood, and the two Bayesian methods; otherwise, for multivariate nonnormal data, we recommend the percentile bootstrap. We argue that the best method for testing hypotheses is not necessarily the best method for CI construction, which is consistent with the findings we present.
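The Monte Carlo (MC-ML) interval recommended above can be sketched in a few lines: draw each path coefficient from a normal approximation to its sampling distribution and take percentiles of the simulated products. The estimates and standard errors below are hypothetical, and the sketch is in Python rather than the R tooling typically used for these models.

```python
import random

def monte_carlo_ci(a, se_a, b1, se_b1, b2, se_b2,
                   draws=100_000, level=0.95, seed=0):
    """Monte Carlo CI for the sequential indirect effect a*b1*b2
    (X -> M1 -> M2 -> Y): draw each path coefficient from a normal
    approximation to its sampling distribution (the MC-ML idea) and
    take percentiles of the simulated products.
    """
    rng = random.Random(seed)
    prods = sorted(rng.gauss(a, se_a) * rng.gauss(b1, se_b1)
                   * rng.gauss(b2, se_b2) for _ in range(draws))
    alpha = 1 - level
    lo = prods[int(alpha / 2 * draws)]
    hi = prods[int((1 - alpha / 2) * draws) - 1]
    return lo, hi

# Hypothetical ML path estimates and standard errors:
lo, hi = monte_carlo_ci(a=0.40, se_a=0.10, b1=0.35, se_b1=0.09,
                        b2=0.30, se_b2=0.08)
print(f"95% CI for a*b1*b2: [{lo:.3f}, {hi:.3f}]")
```

Because the product of normals is skewed, the percentile interval is asymmetric around the point estimate, which is exactly why these intervals outperform symmetric Wald intervals for indirect effects.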


Subjects
Behavioral Research/methods, Confidence Intervals, Statistical Models, Multivariate Analysis, Bayes Theorem, Computer Simulation, Humans, Monte Carlo Method
2.
Psychol Sci ; 28(11): 1547-1562, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28902575

ABSTRACT

The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
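The underpowering problem can be illustrated with the usual normal-approximation sample size formula for a two-group comparison. The BUCSS package implements the actual bias- and uncertainty-adjusted methods; the function below is only the naive textbook calculation, for illustration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    """Per-group n for a two-sample comparison of means at effect
    size d (normal approximation to the two-sample t test)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# If the true effect is d = 0.30 but a published (bias-inflated)
# estimate of d = 0.50 is taken at face value:
print(n_per_group(0.50))  # planned n per group
print(n_per_group(0.30))  # n actually needed for 80% power
```

Planning with the inflated estimate yields 63 participants per group, roughly a third of the 175 actually required, leaving the study with about 40% power rather than the intended 80%.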


Subjects
Statistical Data Interpretation, Publication Bias, Sample Size, Uncertainty, Humans
3.
Multivariate Behav Res ; 51(5): 627-648, 2016.
Article in English | MEDLINE | ID: mdl-27712116

ABSTRACT

The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
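The sequential logic described above, plan a pilot sample and then check a stopping rule after each additional observation, can be sketched as follows. The stopping rule here is a simplified normal-theory criterion on the CI half-width for the CV, not the risk function developed in the article, and all target values are arbitrary.

```python
import math, random

def sequential_cv(draw, pilot_n=30, half_width=0.02, z=1.96, seed=1):
    """Sample one observation at a time until an approximate CI for
    the coefficient of variation (CV) is narrower than 2*half_width.

    The stopping rule uses the large-sample normal-theory
    SE(cv) ~= cv * sqrt(1/(2n)) -- a simplified stand-in for the
    risk-function rule developed in the article.
    """
    random.seed(seed)
    xs = [draw() for _ in range(pilot_n)]
    while True:
        n = len(xs)
        mean = sum(xs) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
        cv = sd / mean
        if z * cv * math.sqrt(1 / (2 * n)) <= half_width:
            return n, cv
        xs.append(draw())  # stopping rule not met: one more observation

n, cv = sequential_cv(lambda: random.gauss(100, 15))  # true CV = 0.15
print(n, round(cv, 3))
```

Note that the final sample size is itself random: it depends on the data observed so far, which is the defining feature of a sequential procedure.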


Subjects
Analysis of Variance, Research Design, Algorithms, Computer Simulation, Statistical Data Interpretation, Humans, Statistical Models, Monte Carlo Method, Research/economics, Risk, Software
4.
Multivariate Behav Res ; 51(1): 86-105, 2016.
Article in English | MEDLINE | ID: mdl-26881959

ABSTRACT

To draw valid inferences about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of the hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite increasing attention to confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as the Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the observed indirect effects. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.
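A small simulation makes the omitted-confounder bias concrete. For simplicity this uses a single-level mediation model with illustrative coefficients (the article's analytic results concern the two-level case): a confounder U causes both M and Y, and omitting it inflates the estimated M-to-Y path.

```python
import random

random.seed(42)
n = 50_000   # large n, so estimates sit near their probability limits

# Single-level mediation X -> M -> Y with a confounder U of M and Y;
# all coefficients are illustrative, and the direct X -> Y path is
# omitted to keep the regressions simple.
xs, us, ms, ys = [], [], [], []
for _ in range(n):
    x = random.gauss(0, 1)
    u = random.gauss(0, 1)                      # the omitted confounder
    m = 0.5 * x + 0.6 * u + random.gauss(0, 1)
    y = 0.4 * m + 0.6 * u + random.gauss(0, 1)  # true M -> Y path: 0.4
    xs.append(x); us.append(u); ms.append(m); ys.append(y)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def slope_controlling(m, z, y):
    """OLS slope of y on m controlling for z (mean-zero data)."""
    smm, szz, smz = dot(m, m), dot(z, z), dot(m, z)
    smy, szy = dot(m, y), dot(z, y)
    det = smm * szz - smz * smz
    return (smy * szz - szy * smz) / det

b_full = slope_controlling(ms, us, ys)   # U included: recovers ~0.40
b_omit = dot(ms, ys) / dot(ms, ms)       # U omitted: inflated
print(round(b_full, 2), round(b_omit, 2))
```

With these coefficients the omitted-confounder estimate converges to about 0.62 rather than 0.40, so any indirect effect built from it would be badly overstated.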


Subjects
Epidemiologic Confounding Factors, Statistical Models, Multilevel Analysis/methods, Algorithms, Behavioral Research/methods, Cluster Analysis
5.
Mem Cognit ; 41(7): 1079-95, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23532591

ABSTRACT

Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school.


Subjects
Eye Movements/physiology, Mathematical Concepts, Problem Solving/physiology, Adult, Educational Measurement, Eye Movement Measurements, Humans, Young Adult
6.
Psychol Methods ; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166855

ABSTRACT

Planning an appropriate sample size for a study involves considering several issues. Two important considerations are cost constraints and variability inherent in the population from which data will be sampled. Methodologists have developed sample size planning methods for two or more populations when testing for equivalence or noninferiority/superiority for a linear contrast of population means. Additionally, cost constraints and variance heterogeneity among populations have also been considered. We extend these methods by developing a theory for sequential procedures for testing the equivalence or noninferiority/superiority for a linear contrast of population means under cost constraints, which we prove to effectively utilize the allocated resources. Our method, due to the sequential framework, does not require prespecified values of unknown population variance(s), something that is historically an impediment to designing studies. Importantly, our method does not require an assumption of a specific type of distribution of the data in the relevant population from which the observations are sampled, as we make our developments in a data distribution-free context. We provide an illustrative example to show how the implementation of the proposed approach can be useful in applied research. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
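A fixed-sample version of the hypotheses involved can be sketched with the standard two one-sided tests (TOST) logic for a linear contrast of means; the article's contribution is the sequential, distribution-free extension, which this large-sample z sketch does not attempt. All numbers are hypothetical.

```python
import math
from statistics import NormalDist

def tost_contrast(means, vars_, ns, weights, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of a linear
    contrast of population means to zero within (-delta, +delta).

    Large-sample z version allowing unequal variances -- a
    fixed-sample illustration of the hypotheses the sequential
    procedure targets.
    """
    est = sum(w * m for w, m in zip(weights, means))
    se = math.sqrt(sum(w * w * v / n
                       for w, v, n in zip(weights, vars_, ns)))
    z_lo = (est + delta) / se   # H0: contrast <= -delta
    z_hi = (est - delta) / se   # H0: contrast >= +delta
    crit = NormalDist().inv_cdf(1 - alpha)
    return z_lo > crit and z_hi < -crit  # both rejected => equivalence

# Hypothetical two-group comparison (weights +1/-1), margin 0.5:
equiv = tost_contrast(means=[10.1, 10.0], vars_=[4.0, 4.0],
                      ns=[400, 400], weights=[1, -1], delta=0.5)
print(equiv)
```

With 400 per group the contrast is shown equivalent to zero within the margin; with only 20 per group the same data would fail, which is the resource question the sequential procedure is designed to settle adaptively.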

7.
Psychol Methods ; 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35862114

ABSTRACT

Replication is central to scientific progress. Because of widely reported replication failures, replication has received increased attention in psychology, sociology, education, management, and related fields in recent years. Replication studies have generally been assessed dichotomously, designated either a "success" or "failure" based entirely on the outcome of a null hypothesis significance test (i.e., p < .05 or p > .05, respectively). However, alternative definitions of success depend on researchers' goals for the replication. Previous work on alternative definitions of success has focused on the analysis phase of replication. However, the design of the replication is also important, as emphasized by the adage "an ounce of prevention is better than a pound of cure." One critical component of design often ignored or oversimplified in replication studies is sample size planning; indeed, the details here are crucial. Sample size planning for replication studies should correspond to the method by which success will be evaluated. Researchers have received little guidance, some of it misguided, on sample size planning for replication goals other than the aforementioned dichotomous null hypothesis significance testing approach. In this article, we describe four different replication goals. Then, we formalize sample size planning methods for each of the four goals. This article aims to provide clarity on the procedures for sample size planning for each goal, with examples and syntax provided to show how each procedure can be used in practice. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

8.
Am J Emerg Med ; 28(3): 304-9, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20223387

ABSTRACT

OBJECTIVES: Despite the growing problems of emergency department (ED) crowding, the potential impact on the frequency of medication errors occurring in the ED is uncertain. Using a metric to measure ED crowding in real time (the Emergency Department Work Index, or EDWIN, score), we sought to prospectively measure the correlation between the degree of crowding and the frequency of medication errors occurring in our ED as detected by our ED pharmacists. METHODS: We performed a prospective, observational study in a large, community hospital ED of all patients whose medication orders were evaluated by our ED pharmacists for a 3-month period. Our ED pharmacists review the orders of all patients in the ED critical care section and the Chest Pain unit, and all admitted patients boarding in the ED. We measured the Spearman correlation between average daily EDWIN score and number of medication errors detected and determined the score's predictive performance with receiver operating characteristic (ROC) curves. RESULTS: A total of 283 medication errors were identified by the ED pharmacists over the study period. Errors included giving medications at incorrect doses, frequencies, durations, or routes and giving contraindicated medications. Error frequency showed a positive correlation with daily average EDWIN score (Spearman's rho = 0.33; P = .001). The area under the ROC curve was 0.67 (95% confidence interval, 0.56-0.78) with failure defined as greater than 1 medication error per day. CONCLUSIONS: We identified an increased frequency of medication errors in our ED with increased crowding as measured with a real-time modified EDWIN score.


Subjects
Crowding, Hospital Emergency Service/organization & administration, Medication Errors/statistics & numerical data, Hospital Emergency Service/standards, Humans, Hospital Pharmacy Service, Prospective Studies, ROC Curve
9.
Psychol Methods ; 25(4): 496-515, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32191106

ABSTRACT

Mediation analysis is an important approach for investigating causal pathways. One approach used in mediation analysis is the test of an indirect effect, which measures how an independent variable affects an outcome variable through 1 or more mediators. However, in many situations the proposed tests of indirect effects, including popular confidence interval-based methods, tend to produce poor Type I error rates when mediation does not occur and, more generally, only allow dichotomous decisions of "not significant" or "significant" with regard to the statistical conclusion. To remedy these issues, we propose a new method, a likelihood ratio test (LRT), that uses nonlinear constraints in what we term the model-based constrained optimization (MBCO) procedure. The MBCO procedure (a) offers a more robust Type I error rate than existing methods; (b) provides a p value, which serves as a continuous measure of compatibility of the data with the hypothesized null model (not just a dichotomous reject or fail-to-reject decision rule); (c) allows simple and complex hypotheses about mediation (i.e., 1 or more mediators; different mediational pathways); and (d) allows the mediation model to use observed or latent variables. The MBCO procedure is based on a structural equation modeling framework (even if latent variables are not specified) with specialized fitting routines, namely the use of nonlinear constraints. We advocate using the MBCO procedure to test hypotheses about an indirect effect in addition to reporting a confidence interval to capture uncertainty about the indirect effect because this combination transcends existing methods. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
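For a single-mediator model, the MBCO idea reduces to a particularly simple special case: the constraint a*b = 0 holds exactly when a = 0 or b = 0, so the constrained maximum likelihood is the better of the two restricted submodels. The sketch below exploits that shortcut; it is not the general nonlinear-constraint SEM machinery the article develops, and the simulated coefficients are illustrative.

```python
import math, random

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def center(v):
    mu = sum(v) / len(v)
    return [t - mu for t in v]

def rss_one(x, y):
    """Residual sum of squares of y ~ x (centered data)."""
    b = dot(x, y) / dot(x, x)
    return dot(y, y) - b * dot(x, y)

def rss_two(x1, x2, y):
    """RSS of y ~ x1 + x2 (centered data), 2x2 normal equations."""
    s11, s22, s12 = dot(x1, x1), dot(x2, x2), dot(x1, x2)
    s1y, s2y = dot(x1, y), dot(x2, y)
    det = s11 * s22 - s12 * s12
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    return dot(y, y) - b1 * s1y - b2 * s2y

def mbco_style_lrt(x, m, y):
    """LRT of H0: a*b = 0 for X -> M -> Y. Because the constraint set
    {a*b = 0} is the union of {a = 0} and {b = 0}, the constrained ML
    fit is the better of the two restricted submodels -- a simple
    special case of the MBCO idea, not the general procedure.
    """
    n = len(y)
    x, m, y = center(x), center(m), center(y)
    lr_a = n * math.log(dot(m, m) / rss_one(x, m))          # a = 0 vs free
    lr_b = n * math.log(rss_one(x, y) / rss_two(x, m, y))   # b = 0 vs free
    return min(lr_a, lr_b)                                  # refer to chi2(df=1)

random.seed(3)
n = 2_000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.4 * xi + random.gauss(0, 1) for xi in x]                         # a = 0.4
y = [0.3 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # b = 0.3
stat = mbco_style_lrt(x, m, y)
print(round(stat, 1))   # large value: mediation is present
```

Taking the minimum of the two restricted-model statistics is what gives the test its robust Type I error behavior: the null is rejected only when both a = 0 and b = 0 are untenable.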


Subjects
Statistical Data Interpretation, Statistical Models, Psychology/methods, Humans
10.
Psychol Methods ; 24(1): 20-35, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29863377

ABSTRACT

Clustered data are common in many fields. Some prominent examples of clustering are employees clustered within supervisors, students within classrooms, and clients within therapists. Many methods exist that explicitly consider the dependency introduced by a clustered data structure, but the multitude of available options has resulted in rigid disciplinary preferences. For example, those working in the psychological, organizational behavior, medical, and educational fields generally prefer mixed effects models, whereas those working in economics, behavioral finance, and strategic management generally prefer fixed effects models. However, increasingly interdisciplinary research has caused lines that separate the fields grounded in psychology and those grounded in economics to blur, leading to researchers encountering unfamiliar statistical methods commonly found in other disciplines. Persistent discipline-specific preferences can be particularly problematic because (a) each approach has certain limitations that can restrict the types of research questions that can be appropriately addressed, and (b) analyses based on the statistical modeling decisions common in one discipline can be difficult to understand for researchers trained in alternative disciplines. This can impede cross-disciplinary collaboration and limit the ability of scientists to make appropriate use of research from adjacent fields. This article discusses the differences between mixed effects and fixed effects models for clustered data, reviews each approach, and helps to identify when each approach is optimal. We then discuss the within-between specification, which blends advantageous properties of each framework into a single model. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
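The within-between specification mentioned in the closing sentence can be illustrated by splitting a level-1 predictor into its observed cluster mean and the deviation from that mean, then estimating a slope for each part. The simulation below uses illustrative coefficients and omits the random-intercept estimation a full mixed model would add.

```python
import random

random.seed(5)
J, n_j = 200, 25        # clusters, observations per cluster

# Simulated clustered data in which the within-cluster effect (0.5)
# and the between-cluster effect (2.0) of x on y genuinely differ;
# all coefficients are illustrative.
x_w, x_b, y = [], [], []
for _ in range(J):
    mu_j = random.gauss(0, 1)                  # cluster's x level
    u_j = random.gauss(0, 0.5)                 # random intercept
    xj = [mu_j + random.gauss(0, 1) for _ in range(n_j)]
    xbar = sum(xj) / n_j                       # observed cluster mean
    for xi in xj:
        x_w.append(xi - xbar)                  # "within" part of x
        x_b.append(xbar)                       # "between" part of x
        y.append(0.5 * (xi - xbar) + 2.0 * xbar
                 + u_j + random.gauss(0, 1))

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def center(v):
    mu = sum(v) / len(v)
    return [t - mu for t in v]

# x_w is exactly orthogonal to x_b and to the intercept, so the two
# slopes of the within-between specification can be read off directly.
yc, x_bc = center(y), center(x_b)
b_w = dot(x_w, yc) / dot(x_w, x_w)     # within-cluster effect, ~0.5
b_b = dot(x_bc, yc) / dot(x_bc, x_bc)  # between-cluster effect, ~2.0
print(round(b_w, 2), round(b_b, 2))
```

A model that enters raw x with a single slope would return a blend of 0.5 and 2.0; the within-between decomposition recovers both, which is the property that lets it reconcile the mixed-effects and fixed-effects traditions.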


Subjects
Cluster Analysis, Statistical Models, Multilevel Analysis, Psychology/methods, Humans
11.
J Gen Psychol ; 146(3): 325-338, 2019.
Article in English | MEDLINE | ID: mdl-30905317

ABSTRACT

The Pearson correlation coefficient can be translated to a common language effect size, which shows the probability of obtaining a certain value on one variable, given the value on the other variable. This common language effect size makes the size of a correlation coefficient understandable to laypeople. Three examples are provided to demonstrate the application of the common language effect size in interpreting Pearson correlation coefficients and multiple correlation coefficients.
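One well-known version of this translation (Dunlap's arcsine mapping; the article's exact formulation may differ) converts a Pearson r, under bivariate normality, into the probability that a case above the mean on one variable is also above the mean on the other:

```python
import math

def common_language_r(r):
    """Common language effect size for a Pearson correlation: the
    probability that a case above the mean on one variable is also
    above the mean on the other, assuming bivariate normality
    (Dunlap's arcsine mapping)."""
    return 0.5 + math.asin(r) / math.pi

print(common_language_r(0.0))   # 0.5 -- no better than chance
print(common_language_r(0.5))   # ~0.667
print(common_language_r(1.0))   # 1.0 -- perfect correspondence
```

So r = .50, which can feel abstract, becomes "two out of three people above average on one variable are also above average on the other" -- the kind of statement a layperson can evaluate.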


Subjects
Language, Probability, Humans
12.
Psychol Methods ; 24(4): 492-515, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30829512

ABSTRACT

Correlation coefficients are effect size measures that are widely used in psychology and related disciplines for quantifying the degree of relationship between two variables, where different correlation coefficients describe different types of relationships for different types of data. We develop methods for constructing a sufficiently narrow confidence interval for 3 different population correlation coefficients with a specified upper bound on the confidence interval width (e.g., .10 units) at a specified level of confidence (e.g., 95%). In particular, we develop methods for Pearson's r, Kendall's tau, and Spearman's rho. Our methods solve an important problem because existing methods of study design for correlation coefficients generally require the use of supposed but typically unknowable population values as input parameters. We develop sequential estimation procedures and prove their desirable properties in order to obtain sufficiently narrow confidence intervals for population correlation coefficients without using supposed values of population parameters, doing so in a distribution-free environment. In sequential estimation procedures, supposed values of population parameters are not needed for sample size planning; instead, stopping rules are developed and, once satisfied, provide a rule-based stop to the sampling of additional units. In particular, data in sequential estimation procedures are collected in stages, whereby at each stage the estimated population values are updated and the stopping rule is evaluated. Correspondingly, the final sample size required to obtain a sufficiently narrow confidence interval is not known a priori but is based on the outcome of the study. Additionally, we extend our methods to the squared multiple correlation coefficient under the assumption of multivariate normality. We demonstrate the effectiveness of our sequential procedure using a Monte Carlo simulation study. We provide freely available R code to implement the methods in the MBESS package. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
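The stopping-rule idea can be sketched for Pearson's r using a Fisher z interval: keep adding observations until the observed CI is no wider than the target width. Note this sketch leans on the Fisher approximation and bivariate normality, whereas the article's procedure is distribution-free; the target width and simulated correlation are arbitrary.

```python
import math, random

def fisher_ci(r, n, z=1.96):
    """Approximate CI for a Pearson correlation via Fisher's z."""
    zr = math.atanh(r)
    h = z / math.sqrt(n - 3)
    return math.tanh(zr - h), math.tanh(zr + h)

def sequential_r(draw_pair, pilot_n=30, omega=0.10):
    """Add one (x, y) pair at a time until the CI for r is no wider
    than omega -- an illustrative version of the stopping-rule idea."""
    data = [draw_pair() for _ in range(pilot_n)]
    while True:
        n = len(data)
        xs, ys = zip(*data)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in data)
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        r = sxy / math.sqrt(sxx * syy)
        lo, hi = fisher_ci(r, n)
        if hi - lo <= omega:
            return n, r, (lo, hi)
        data.append(draw_pair())   # CI still too wide: sample again

random.seed(7)
def draw_pair(rho=0.6):
    x = random.gauss(0, 1)
    return x, rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)

n, r, (lo, hi) = sequential_r(draw_pair)
print(n, round(r, 2))
```

The final n is not fixed in advance: stronger observed correlations yield narrower Fisher intervals at a given n, so the procedure stops earlier, exactly the a-priori-unknown sample size the abstract describes.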


Subjects
Data Correlation, Statistics as Topic/methods, Humans
13.
Am J Kidney Dis ; 51(2): 242-54, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18215702

ABSTRACT

BACKGROUND: Assessment of volume state is difficult in hemodialysis patients. Whether continuous blood volume monitoring can improve the assessment of volume state is unclear. STUDY DESIGN: Diagnostic test study. SETTINGS & PARTICIPANTS: Asymptomatic long-term hemodialysis patients (n = 150) in 4 university-affiliated hemodialysis units. INDEX TESTS: Ultrafiltration rate (UFR) divided by postdialysis weight (UFR index), slopes of relative blood volume (RBV), RBV slope corrected for UFR and weight (volume index). REFERENCE TESTS: Dialysis-related symptoms and echocardiographic signs of volume excess and volume depletion, assessed by using inferior vena cava (IVC) diameter after dialysis and its collapse on inspiration. Volume excess was defined as values in the upper third of IVC diameter or lower third of IVC collapse on inspiration. Volume depletion was defined as values in the lower third of IVC diameter or upper third of IVC collapse on inspiration. RESULTS: Mean UFR was 8.3 ± 3.8 (SD) mL/h/kg. Mean RBV slope was -2.32% ± 1.50%/h. Mean volume index was -0.25% ± 0.17%/h/mL/h ultrafiltration/kg. Volume index provided the best fit of observed RBV slopes. Volume index was related to dizziness, the need to decrease UFR, and placement in Trendelenburg position. RBV and volume index, but not UFR index, were related to echocardiographic markers of volume excess and depletion. Areas under the receiver operating characteristic curve to predict volume excess were 0.48 (95% confidence interval [CI], 0.33 to 0.63) for UFR index, 0.71 (95% CI, 0.60 to 0.83) for RBV slope, and 0.73 (95% CI, 0.59 to 0.86) for volume index. Areas under the receiver operating characteristic curve to predict volume depletion were 0.56 (95% CI, 0.38 to 0.74) for UFR index, 0.55 (95% CI, 0.38 to 0.72) for RBV slope, and 0.62 (95% CI, 0.48 to 0.76) for volume index. LIMITATIONS: Dialysis-related symptoms and echocardiographic findings are not validated measures of volume. Our results were not adjusted for demographic or clinical characteristics; performance characteristics of the indices may differ across populations. CONCLUSIONS: Volume index appears to be a novel marker of volume, but requires validation studies, and its utility needs to be tested in clinical trials.


Subjects
Blood Volume Determination, Blood Volume, Hemodiafiltration/adverse effects, Hypovolemia/etiology, Adult, Aged, Epidemiologic Confounding Factors, Echocardiography, Female, Humans, Chronic Kidney Failure/etiology, Chronic Kidney Failure/therapy, Male, Middle Aged, Odds Ratio, Predictive Value of Tests, ROC Curve, Research Design
14.
Am J Nephrol ; 28(5): 792-801, 2008.
Article in English | MEDLINE | ID: mdl-18477842

ABSTRACT

The analysis of change is central to kidney research. In the past 25 years, newer and more sophisticated methods for the analysis of change have been developed; as of yet, however, these newer methods are underutilized in the field of kidney research. Repeated measures ANOVA is the traditional model that is easy to understand and simple to interpret, but it may not be valid in complex real-world situations. In the repeated measures ANOVA context, problems are often encountered with the sphericity assumption, the unit of analysis, missing data, and the lack of consideration of different types of change. Multilevel modeling, a newer and more sophisticated method for the analysis of change, overcomes these limitations and provides a better framework for understanding the true nature of change. The present article provides a primer on the use of multilevel modeling to study change. An example from a clinical study is detailed, and the method for implementation in SAS is provided.


Subjects
Kidney Diseases, Theoretical Models, Analysis of Variance
15.
Am Psychol ; 73(7): 899-917, 2018 10.
Article in English | MEDLINE | ID: mdl-29469579

ABSTRACT

The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals, as well as in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subjects
Big Data, Psychology, Research Design, Humans, Machine Learning
16.
Psychol Methods ; 23(2): 226-243, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28383948

ABSTRACT

Sequential estimation is a well-recognized approach to inference in statistical theory. In sequential estimation, the sample size is not specified at the start of the study; instead, study outcomes are used to evaluate a predefined stopping rule that determines whether sampling should continue or stop. In this article we develop a general theory for a sequential estimation procedure for constructing a narrow confidence interval for a general class of effect sizes with a specified level of confidence (e.g., 95%) and a specified upper bound on the confidence interval width. Our method does not require prespecified, yet usually unknowable, population values of certain parameters for certain types of distributions, thus offering advantages compared to commonly used approaches to sample size planning. Importantly, we make our developments in a distribution-free environment and thus do not make untenable assumptions about the population from which observations are sampled. Our work is thus very general, timely due to the interest in effect sizes, and has wide applicability in the context of estimation of a general class of effect sizes.


Subjects
Biomedical Research/methods, Statistical Models, Research Design, Biomedical Research/standards, Confidence Intervals, Humans, Research Design/standards, Sample Size
17.
Psychol Methods ; 23(2): 244-261, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29172614

ABSTRACT

Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study examining the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by providing a potentially useful method of communicating the magnitude of mediation.


Subjects
Biomedical Research/methods, Statistical Data Interpretation, Statistical Models, Monte Carlo Method, Humans
18.
Am J Nephrol ; 27(5): 488-94, 2007.
Article in English | MEDLINE | ID: mdl-17664865

ABSTRACT

BACKGROUND: Diabetic nephropathy with overt proteinuria often progresses relentlessly to end-stage renal disease (ESRD). MATERIAL AND METHODS: To determine whether impaired glomerular filtration rate (GFR) or its precursor, proteinuria, is more strongly related to multiple domains of health-related quality of life (HRQOL), we measured GFR and proteinuria in 44 patients with type 2 diabetes and overt nephropathy and repeated the measurements after 4 months. 38 patients with ESRD due to diabetic nephropathy served as a control group. We used path analysis to examine the association of baseline proteinuria and GFR with baseline and subsequent HRQOL scales. RESULTS: Compared to patients with ESRD, patients with non-dialysis CKD had a Kidney Disease Burden (KDB) score that was, on a scale from 0 to 100, 19.8 points better (95% CI 6.9-32.8) (p = 0.003). The mental component score (MCS) did not differ, and the physical component score (PCS) was worse in non-dialysis CKD patients by 8.5 points (p < 0.001). Proteinuria at baseline was a predictor of the PCS, MCS, and KDB scores at 4 months, suggesting a lagged effect of proteinuria on HRQOL after controlling for the autoregressive effects. GFR was not shown to have a significant impact on HRQOL. A one log unit increase in proteinuria was associated with a 3.8-point fall in PCS (p = 0.011), a 3.3-point fall in MCS (p = 0.043), and a 10.6-point fall in KDB (p = 0.006). CONCLUSION: In patients with advanced diabetic nephropathy, we found that proteinuria has a lagged and profound effect on multiple domains of HRQOL.


Subjects
Diabetic Nephropathies/complications, Diabetic Nephropathies/physiopathology, Proteinuria/etiology, Quality of Life, Aged, Cost of Illness, Type 2 Diabetes Mellitus, Female, Glomerular Filtration Rate, Health Status, Humans, Male, Middle Aged, Statistical Models
19.
Psychol Methods ; 22(1): 94-113, 2017 03.
Article in English | MEDLINE | ID: mdl-27607545

ABSTRACT

The standardized mean difference is a widely used effect size measure. In this article, we develop a general theory for estimating the population standardized mean difference by minimizing both the mean square error of the estimator and the total sampling cost. Fixed sample size methods, in which the sample size is planned before the start of a study, cannot simultaneously minimize both the mean square error of the estimator and the total sampling cost. To overcome this limitation of the current state of affairs, this article develops a purely sequential sampling procedure, which provides an estimate of the sample size required to achieve a sufficiently accurate estimate with minimum expected sampling cost. Performance of the purely sequential procedure is examined via a simulation study to show that our analytic developments are highly accurate. Additionally, we provide freely available functions in R to implement the algorithm of the purely sequential procedure.


Subjects
Research Design, Sample Size, Algorithms, Humans
20.
Psychol Methods ; 11(4): 363-85, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17154752

ABSTRACT

Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no wider than desired with some specified degree of certainty (e.g., 99% certain the 95% CI will be no wider than omega). The rationale of the AIPE approach to SS planning is given, as is a discussion of the analytic approach to CI formation for the population standardized mean difference. Tables with values of necessary SS are provided. The freely available Methods for the Behavioral, Educational, and Social Sciences (K. Kelley, 2006a) R (R Development Core Team, 2006) software package easily implements the methods discussed.
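The AIPE logic, choosing the smallest n whose expected CI width falls below the target omega, can be sketched with a normal approximation to the CI for the standardized mean difference; the article and the MBESS package use the exact noncentral-t approach, so the answer below is only approximate, and the inputs are hypothetical.

```python
import math
from statistics import NormalDist

def n_per_group_aipe_smd(d, omega, level=0.95):
    """Smallest per-group n so the expected width of the CI for the
    standardized mean difference d is at most omega.

    Uses Var(d) ~= 2/n + d**2/(4n) with a normal critical value -- a
    rough approximation to the noncentral-t method in the article.
    """
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    n = 4
    while 2 * z * math.sqrt(2 / n + d ** 2 / (4 * n)) > omega:
        n += 1
    return n

# Hypothetical planning values: anticipated d = 0.5, target width 0.25
n_req = n_per_group_aipe_smd(d=0.5, omega=0.25)
print(n_req)
```

Note how weakly the answer depends on d itself: the 2/n term dominates the variance, which is why AIPE sample sizes are far less sensitive to the supposed effect size than power-based sample sizes are.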


Subjects
Confidence Intervals, Psychology/methods, Psychology/statistics & numerical data, Humans, Psychological Models