ABSTRACT
Redundancy analysis (RA) is a multivariate method that maximizes the mean variance of a set of criterion variables explained by a small number of redundancy variates (i.e., linear combinations of a set of predictor variables). However, two challenges exist in RA. First, inferential information for the RA estimates might not be readily available. Second, the existing methods addressing the dimensionality problem in RA are limited for various reasons. To aid the applications of RA, we propose a direct covariance structure modeling approach to RA. The proposed approach (1) provides inferential information for the RA estimates, and (2) allows the researcher to use a simple yet practical criterion to address the dimensionality problem in RA. We illustrate our approach with an artificial example, validate some standard error estimates by simulations, and demonstrate our new criterion in a real example. Finally, we conclude with future research topics.
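The abstract above concerns a covariance structure modeling approach; as background, classical redundancy analysis can be computed as a principal component analysis of the criterion values fitted from the predictors. The following is a minimal NumPy sketch of that classical procedure (the data and function names are illustrative, not taken from the article):

```python
import numpy as np

def redundancy_analysis(X, Y, n_components=2):
    """Classical redundancy analysis: regress each criterion in Y on the
    predictors in X, then extract principal components of the fitted values.
    The resulting variates are linear combinations of X that maximize the
    mean variance of Y they explain."""
    Xc = X - X.mean(axis=0)                        # center predictors
    Yc = Y - Y.mean(axis=0)                        # center criteria
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)    # multivariate OLS weights
    Y_hat = Xc @ B                                 # fitted criterion values
    # SVD of the fitted values yields the redundancy variates
    U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]  # redundancy variate scores
    loadings = Vt[:n_components].T                   # criterion loadings
    explained = s**2 / np.sum(Yc**2)  # share of total criterion variance
    return scores, loadings, explained[:n_components]

# Illustrative simulated data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Y = X @ rng.normal(size=(4, 3)) + rng.normal(size=(100, 3))
scores, loadings, explained = redundancy_analysis(X, Y)
```

This sketch provides point estimates only; the standard errors and fit information discussed in the abstract come from the authors' covariance structure modeling approach, which is not reproduced here.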
ABSTRACT
This article presents a hierarchical map of analyses subsumed by canonical correlation and a Shiny application to facilitate the connections between these analyses. Building on the work of other researchers who used canonical correlation analyses to unify analyses in the general linear model, we demonstrate that the hierarchy is not as flat as some have portrayed. While a simpler hierarchy may seem more accessible, it implies a lack of relationship between analyses that may cause confusion when learning the vast majority of univariate and multivariate analyses in the general linear model. Because it is not always intuitive how all the relevant analyses for a given data type can be conducted, we developed the Shiny application canCORRgam to demonstrate the hierarchical path of analyses subsumed by canonical correlation for 15 different models. The canCORRgam application provides emerging researchers with evidence of the transitive properties implied in the map. Our work also promotes meta-analytic thinking and practice, as we provide the tools, formulas, and software to relate test statistics to effect sizes, in addition to transforming relevant test statistics and effect sizes into equivalent test statistics and effect sizes.
Subject(s)
Canonical Correlation Analysis, Software, Multivariate Analysis, Linear Models
ABSTRACT
Background: An abundance of rapidly accumulating scientific evidence presents novel opportunities for researchers and practitioners alike, yet such advantages are often overshadowed by the resource demands associated with finding and aggregating a continually expanding body of scientific information. Data extraction activities associated with evidence synthesis have been described as time-consuming to the point of critically limiting the usefulness of research. Across social science disciplines, the use of automation technologies for timely and accurate knowledge synthesis can enhance research translation value, better inform key policy development, and expand the current understanding of human interactions, organizations, and systems. Ongoing developments surrounding automation are highly concentrated in research for evidence-based medicine, with limited evidence surrounding tools and techniques applied outside of the clinical research community. The goal of the present study is to extend the automation knowledge base by synthesizing current trends in the application of extraction technologies for key data elements of interest to social scientists. Methods: We report the baseline results of a living systematic review of automated data extraction techniques supporting systematic reviews and meta-analyses in the social sciences. This review follows PRISMA standards for reporting systematic reviews. Results: The baseline review of social science research yielded 23 relevant studies. Conclusions: When considering the process of automating systematic review and meta-analysis information extraction, social science research falls short compared to clinical research, which focuses on automatic processing of information related to the PICO framework. With a few exceptions, most tools were either in the infancy stage and not accessible to applied researchers, were domain specific, or required substantial manual coding of articles before automation could occur. Additionally, few solutions considered extracting data from tables, where the key data elements that social and behavioral scientists analyze reside.
Subject(s)
Social Sciences, Social Sciences/methods, Humans, Meta-Analysis as Topic, Automation, Information Storage and Retrieval/methods
ABSTRACT
The recent surge in artificial intelligence (AI) has significantly transformed work dynamics, particularly in human resource development (HRD) and related domains. Scholars, recognizing the significant potential of AI in HRD functions and processes, have contributed to the growing body of literature reviews on AI in HRD and related domains. Despite the valuable insights provided by these individual reviews, the challenge of collectively interpreting them within the HRD domain remains unresolved. This protocol outlines the methodology for an umbrella review that aims to systematically synthesize existing reviews on AI in HRD. The review seeks to address key research questions regarding AI's contributions to HRD functions and processes, as well as the opportunities and threats associated with its implementation, by employing a technology-aided systematic approach. A coding framework will be used to synthesize the contents of the selected systematic reviews, such as their search strategies, data synthesis approaches, and HRD-related findings. The results of this umbrella review are expected to provide insights for HRD scholars and practitioners, promoting continuous improvement in AI-driven HRD initiatives. This protocol was preregistered on the Open Science Framework (https://doi.org/10.17605/OSF.IO/Z8NM6) on May 27, 2024.
Subject(s)
Artificial Intelligence, Humans, Research Design, Systematic Reviews as Topic, Staff Development/methods, Staff Development/trends
ABSTRACT
This is a review of a range of empirical studies that use digital text algorithms to predict and model response patterns from humans to Likert-scale items, using texts only as inputs. The studies show that the statistics used in construct validation are predictable at the sample and individual levels, that this happens across languages and cultures, and that the relationships between variables are often semantic rather than empirical. That is, the relationships among variables are given a priori and evidently computable as such. We explain this by replacing the idea of "nomological networks" with "semantic networks" to designate computable relationships between abstract concepts. Understanding constructs as nodes in semantic networks makes it clear why psychological research has produced a constant average explained variance of 42% since 1956. Together, these findings shed new light on the formidable capability of human minds to operate with fast and intersubjectively similar semantic processing. Our review identifies a categorical error present in much psychological research: measuring representations instead of the purportedly represented. We discuss how this has grave consequences for the empirical truth of research using traditional psychometric methods.
ABSTRACT
Managerial coaching remains a widespread and popular organizational development intervention applied across numerous industries to enhance critical workplace outcomes and employee attitudes, yet no studies to date have evaluated the temporal precedence within these relationships. This study sought to assess the predictive validity of the widely used Employee Perceptions of Supervisor/Line Manager Coaching Behavior Measure managerial coaching scale (CBI), employing a longitudinal design and testing the hypothesized causal relationship framework. Three hypotheses were evaluated using three variables commonly associated with managerial coaching (role clarity, job satisfaction, and organization commitment), with longitudinal data collected over two waves from full-time US employees (n = 313). The study followed a two-wave design, collecting data at two time points to test for longitudinal measurement invariance and three reciprocal cross-lagged models. Results detected statistically significant cross-lagged and reciprocal cross-lagged effects in the role clarity and organization commitment models, highlighting a reciprocal relationship between managerial coaching behaviors and the two variables. However, only the reciprocal cross-lagged effect was statistically significant in the job satisfaction model. Findings support the predictive validity of the CBI scale for role clarity and organization commitment. Moreover, results indicate that employee attitudes influenced managerial coaching behaviors over time across all three models, emphasizing the potential impact of employee attitudes on leadership effectiveness. This study highlights the complex relationships between managerial coaching and workplace outcomes, offering nuanced insights for improved understanding.
ABSTRACT
The concept of employee engagement has garnered considerable attention in acute care hospitals because of the many positive benefits that research has found when clinicians are individually engaged. However, limited, if any, research has examined the effects of engaging all hospital employees (including housekeeping, cafeteria, and admissions staff) in a collective manner and how this may impact patient experience, an important measure of hospital performance. Therefore, this quantitative online survey-based study examines the association between 60 chief executive officers' (CEOs') perceptions of the collective organizational engagement (COE) of all hospital employees and patient experience. A summary measure of the US Hospital Consumer Assessment of Healthcare Providers and Systems survey scores was used to assess patient experience at each of the 60 hospitals represented in the study. A multiple linear regression model was tested using structural equation modeling. The findings of the research suggest that CEOs' perceptions of COE explain a significant amount of variability in patient experience at acute care hospitals. Practical implications for CEOs and other hospital leaders are provided that discuss how COE can be used as an organizational capability to influence organizational performance.
ABSTRACT
This study uses latent semantic analysis (LSA) to explore how prevalent measures of motivation are interpreted across very diverse job types. Building on the Semantic Theory of Survey Response (STSR), we calculate "semantic compliance" as the degree to which an individual's responses follow a semantically predictable pattern. This allows us to examine how context, in the form of job type, influences respondent interpretations of items. In total, 399 respondents from 18 widely different job types (from CEOs through lawyers, priests, and artists to sex workers and professional soldiers) self-rated their work motivation on eight commonly applied scales from research on motivation. A second sample served as an external evaluation panel (n = 30) and rated the 18 job types across eight job characteristics. Independent measures of the job types' salary levels were obtained from national statistics. The findings indicate that while job type predicts motivational score levels significantly, semantic compliance as moderated by job type also predicts motivational score levels, usually at a lesser but significant magnitude. Combined, semantic compliance and job type explained up to 41% of the differences in motivational score levels. The variation in semantic compliance was also significantly related to job characteristics as rated by the external panel, and to national income levels. Our findings indicate that people in different contexts interpret items differently, to a degree that substantially affects their score levels. We discuss how future measurements of motivation may improve by taking semantic compliance and the STSR perspective into consideration.
ABSTRACT
The United States is facing an impending crisis as the number of nurses being educated is not keeping up with the demands of an aging population. Although much effort has gone into bolstering the post-secondary nursing education pipeline, this article postulates that the pipeline begins with career development in K-12 schools. This article provides information that nurses in staff development positions can use to advance the nursing profession through career development.
Subject(s)
Career Choice, Education/organization & administration, Nursing Staff/supply & distribution, Students/psychology, Vocational Guidance/organization & administration, Adolescent, Child, Forecasting, Humans, Models, Educational, Nursing Staff/trends, Personnel Selection, United States
ABSTRACT
The importance of structure coefficients and analogs of regression weights for analysis within the general linear model (GLM) has been well documented. The purpose of this study was to investigate bias in squared structure coefficients in the context of multiple regression and to determine whether a formula that had been shown to correct for bias in squared Pearson correlation coefficients and coefficients of determination could be used to correct for bias in squared regression structure coefficients. Using data from a Monte Carlo simulation, this study found that squared regression structure coefficients corrected with Pratt's formula produced less biased estimates and might be more accurate and stable estimates of population squared regression structure coefficients than estimates with no such corrections. While our findings are in line with prior literature that identified multicollinearity as a predictor of bias in squared regression structure coefficients but not coefficients of determination, the findings from this study are unique in that the level of predictive power, the number of predictors, and sample size were also observed to contribute to bias in squared regression structure coefficients.
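For readers unfamiliar with the correction the abstract refers to, Pratt's approximately unbiased estimator of a squared correlation is commonly written as shown below. This is a sketch based on the formula as it is usually cited in the bias-correction literature, not a reproduction of the study's own code:

```python
def pratt_correction(r2, n):
    """Pratt's approximately unbiased estimator of a population squared
    correlation, given an observed r-squared (r2) and sample size (n).
    It shrinks the observed value to offset small-sample positive bias.
    Formula as commonly cited in the literature (an assumption here)."""
    return 1 - ((n - 3) * (1 - r2) / (n - 2)) * (1 + 2 * (1 - r2) / (n - 2.3))

# Example: an observed r-squared of .25 from n = 30 is shrunk downward
adjusted = pratt_correction(0.25, 30)
```

The shrinkage vanishes as n grows, so the correction matters most in small samples, which is where the positive bias the study investigates is largest.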
ABSTRACT
The validity of inferences drawn from statistical test results depends on how well the data meet the associated assumptions. Yet research (e.g., Hoekstra et al., 2012) indicates that such assumptions are rarely reported in the literature and that some researchers might be unfamiliar with the techniques and remedies that are pertinent to the statistical tests they conduct. This article seeks to support researchers by concisely reviewing key statistical assumptions associated with substantive statistical tests across the general linear model. Additionally, the article reviews techniques for checking statistical assumptions and identifies potential problems, and remedies, when data do not meet the necessary assumptions.
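As a concrete illustration of the kind of assumption checking the abstract describes, the sketch below tests residuals from a simple linear model for normality, homoscedasticity, and independence. It uses standard SciPy tests on simulated data; the specific checks and the median split are our illustrative choices, not the article's procedure:

```python
import numpy as np
from scipy import stats

# Simulated data for a simple linear model (illustrative only)
rng = np.random.default_rng(1)
x = rng.normal(size=80)
y = 2.0 * x + rng.normal(size=80)

fit = stats.linregress(x, y)
resid = y - (fit.intercept + fit.slope * x)

# Normality of residuals: Shapiro-Wilk test
sw_stat, sw_p = stats.shapiro(resid)

# Homoscedasticity: Levene's test on residuals split at the median of x
lo, hi = resid[x <= np.median(x)], resid[x > np.median(x)]
lev_stat, lev_p = stats.levene(lo, hi)

# Independence of errors: Durbin-Watson statistic (values near 2
# suggest no first-order autocorrelation; range is 0 to 4)
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
```

Note that assumption checks apply to the model's residuals, not the raw outcome variable, which is a common point of confusion in applied work.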
ABSTRACT
The purpose of this article is to help researchers avoid common pitfalls associated with reliability, including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it, with a focus on its impact on effect sizes. Second, we review how reliability is assessed, with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data.
ABSTRACT
While multicollinearity may increase the difficulty of interpreting multiple regression (MR) results, it should not cause undue problems for the knowledgeable researcher. In the current paper, we argue that rather than using one technique to investigate regression results, researchers should consider multiple indices to understand the contributions that predictors make not only to a regression model but to each other as well. The techniques for interpreting MR effects include, but are not limited to, correlation coefficients, beta weights, structure coefficients, all-possible-subsets regression, commonality coefficients, dominance weights, and relative importance weights. This article reviews a set of techniques to interpret MR effects, identifies the elements of the data on which the methods focus, and identifies statistical software to support such analyses.
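Two of the indices the abstract names, beta weights and structure coefficients, can be computed directly, and comparing them is precisely what reveals multicollinearity effects (a predictor can have a near-zero beta weight yet a large structure coefficient). A minimal NumPy sketch with illustrative data (the function name and data are our own):

```python
import numpy as np

def mr_indices(X, y):
    """Beta weights (standardized regression weights) and structure
    coefficients (correlations between each predictor and the predicted
    scores y-hat) for a multiple regression model."""
    Xs = (X - X.mean(0)) / X.std(0, ddof=1)   # standardize predictors
    ys = (y - y.mean()) / y.std(ddof=1)       # standardize criterion
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    y_hat = Xs @ beta
    # structure coefficient for predictor j = r(x_j, y_hat)
    rs = np.array([np.corrcoef(Xs[:, j], y_hat)[0, 1]
                   for j in range(X.shape[1])])
    r2 = np.corrcoef(y_hat, ys)[0, 1] ** 2    # model R-squared
    return beta, rs, r2

# Illustrative simulated data: three predictors, one with no true effect
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=200)
beta, rs, r2 = mr_indices(X, y)
```

Beta weights reflect a predictor's contribution holding the others constant, while structure coefficients ignore the other predictors entirely, which is why the article recommends inspecting both.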
ABSTRACT
In the face of multicollinearity, researchers face challenges interpreting canonical correlation analysis (CCA) results. Although standardized function and structure coefficients provide insight into the canonical variates produced, they fall short when researchers want to fully report canonical effects. This article revisits the interpretation of CCA results, providing a tutorial and demonstrating canonical commonality analysis. Commonality analysis fully explains the canonical effects produced by using the variables in a given canonical set to partition the variance of the canonical variates produced from the other canonical set. Conducting canonical commonality analysis without the aid of software is laborious and may be untenable, depending on the number of noteworthy canonical functions and variables in either canonical set. Commonality analysis software is identified for the canonical correlation case, and we demonstrate its use in facilitating model interpretation. Data from Holzinger and Swineford (1939) are employed to test a hypothetical theory that problem-solving skills are predicted by fundamental math ability.
ABSTRACT
Multiple regression is a widely used technique for data analysis in social and behavioral research. The complexity of interpreting such results increases when correlated predictor variables are involved. Commonality analysis provides a method of determining the variance accounted for by respective predictor variables and is especially useful in the presence of correlated predictors. However, computing commonality coefficients is laborious. To make commonality analysis accessible to more researchers, a program was developed to automate the calculation of unique and common elements in commonality analysis, using the statistical package R. The program is described, and a heuristic example using data from the Holzinger and Swineford (1939) study, readily available in the MBESS R package, is presented.
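The program described above is in R; to illustrate the underlying arithmetic, the sketch below partitions the regression R-squared for the simplest two-predictor case into unique and common components via all-possible-subsets regressions. The function names and simulated data are our own illustrative choices (with k predictors the partition grows to 2^k - 1 components, which is why automation is needed):

```python
import numpy as np

def r2(X, y):
    """Squared multiple correlation from an OLS fit of y on X."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ b
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def commonality_two_predictors(x1, x2, y):
    """All-possible-subsets partition for two predictors: each predictor's
    unique effect plus the variance they explain in common."""
    r2_full = r2(np.column_stack([x1, x2]), y)
    r2_1 = r2(x1[:, None], y)
    r2_2 = r2(x2[:, None], y)
    return {"unique_x1": r2_full - r2_2,
            "unique_x2": r2_full - r2_1,
            "common": r2_1 + r2_2 - r2_full,
            "total": r2_full}

# Illustrative simulated data with correlated predictors
rng = np.random.default_rng(3)
x1 = rng.normal(size=100)
x2 = 0.5 * x1 + rng.normal(size=100)
y = x1 + x2 + rng.normal(size=100)
parts = commonality_two_predictors(x1, x2, y)
```

By construction, the unique and common components sum exactly to the full-model R-squared, which is the identity that makes commonality analysis a complete variance partition.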