Results 1 - 20 of 823
1.
Mil Med ; 189(Suppl 3): 456-464, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160876

ABSTRACT

INTRODUCTION: The ongoing conflict in Ukraine following the Russian invasion presents a critical challenge to medical planning in the context of multi-domain battle against a peer adversary deploying conventional weapon systems. The potential escalation of preventable morbidity and mortality, reaching a scale unprecedented since World War II, underscores the paramount importance of effective phases of care from Point of Injury (PoI)/Point of Wounding (PoW) or Point of Exposure (PoE) to Role 1 (R1) and Role 2 (R2) echelons of care. The NATO Vigorous Warrior (VW) Live Exercise (LIVEX) serves as a strategic platform for NATO and its partners, providing an opportunity to challenge operational concepts, experiment with and innovate life-saving systems, and foster best practices across the Alliance. MATERIALS AND METHODS: This study delineates the strategic application of the VW LIVEX platform for the adaptation of the computational simulation software Simulation for the Assessment and Optimization of Medical Disaster Management (SIMEDIS) within the context of Large-Scale Combat Operations (LSCO). The SIMEDIS computer simulator plays a pivotal role by furnishing real-time insights into the evolving injury patterns of patients, employing an all-hazards approach. The simulator facilitates the examination of temporal shifts in medical timelines and the ramifications of resource scarcity for both morbidity and mortality outcomes. The VW LIVEX provides a unique opportunity for systematic validation, evaluating the results of the computer simulator in a realistic setting and identifying gaps for future concepts of operations. RESULTS: We report the process and methodologies to be evaluated at the VW LIVEX in far-forward and retrospective medical support operations. Using the SIMEDIS simulator, we can define battlefield scenarios for varied situations including artillery, drone strikes, and Chemical, Biological, Radiological, Nuclear, and explosive (CBRNe) attacks. Casualty health progression over time depends on each threat. Mortality is computed based on the Tactical Combat Casualty Care (TCCC) concepts of "self-aid"/"buddy-aid," factoring in the application or absence of definitive traumatic hemorrhage control, and on the policy for distributing victims to medical treatment facilities through appropriate Command and Control (C2) ("Scoop and Run" versus "Stay and Play"). The numbers of medical supplies, transport resources, and personnel are set and scalable, with their effect on both morbidity and mortality quantified. The Concept of Medical Operations can be optimized and interoperability enhanced when shared data are provided to C2 for prospective medical planning alongside retrospective data. The SIMEDIS simulator determines best practices of medical management for a myriad of injury types and tactical/operational situations relevant to policy making and battlefield medical planning for LSCO. CONCLUSIONS: The VW LIVEX provides a Concept Development and Experimentation platform for SIMEDIS refinement and conclusive insights into medical planning to reduce preventable morbidity and mortality. Further iterations of similar methodologies at other NATO LIVEXs are crucial for validation, as is information sharing across the Alliance and partners to ensure best practice standards are met.


Subjects
Computer Simulation , Humans , Computer Simulation/trends , Computer Simulation/standards , Computer Simulation/statistics & numerical data , Military Medicine/methods , Ukraine , Warfare/statistics & numerical data
2.
Multivariate Behav Res ; 59(5): 1098-1105, 2024.
Article in English | MEDLINE | ID: mdl-39141406

ABSTRACT

We present the R package galamm, whose goal is to provide common ground between structural equation modeling and mixed-effects models. It supports estimation of models with an arbitrary number of crossed or nested random effects, smoothing splines, mixed response types, factor structures, heteroscedastic residuals, and data missing at random. Implementation using sparse matrix methods and automatic differentiation ensures computational efficiency. Here we briefly present the implemented methodology, give an overview of the package, and provide an example demonstrating its use.
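As a flavor of the package's interface, here is a minimal sketch fitting a random-intercept model to simulated longitudinal data. It assumes galamm's lme4-style formula interface; the factor-structure and smoothing arguments mentioned in the abstract are omitted, and the data are purely illustrative.

```r
library(galamm)

# Simulated longitudinal data: 100 subjects, 5 occasions each
set.seed(1)
dat <- data.frame(id = rep(1:100, each = 5), x = rnorm(500))
dat$y <- 0.5 * dat$x + rep(rnorm(100), each = 5) + rnorm(500)

# Random-intercept model; galamm() accepts lme4-style formulas
mod <- galamm(formula = y ~ x + (1 | id), data = dat)
summary(mod)
```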


Subjects
Models, Statistical , Humans , Latent Class Analysis , Multilevel Analysis/methods , Data Interpretation, Statistical , Computer Simulation/statistics & numerical data , Software , Algorithms
3.
Multivariate Behav Res ; 59(5): 1019-1042, 2024.
Article in English | MEDLINE | ID: mdl-39058418

ABSTRACT

There has been an increasing call to model multivariate time series data with measurement error. The combination of latent factors with a vector autoregressive (VAR) model leads to the dynamic factor model (DFM), in which dynamic relations are derived within factor series, among factors and observed time series, or both. However, a few limitations exist in current DFM representations and estimation: (1) the dynamic component contains either directed or undirected contemporaneous relations, but not both; (2) selecting the optimal model in exploratory DFM is a challenge; (3) the consequences of structural misspecification arising from model selection are barely studied. Our paper advances the DFM with a hybrid VAR representation, using LASSO regularization to select dynamic relations combined with model-implied instrumental variable, two-stage least squares (MIIV-2SLS) estimation. Our proposed method highlights flexibility in modeling the directions of dynamic relations with robust estimation. We aim to offer researchers guidance on model selection and estimation in person-centered dynamic assessments.
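The following sketch illustrates only the regularized-selection step described above: a sparse VAR(1) transition matrix recovered with nodewise lasso regressions via glmnet. The full hybrid DFM and MIIV-2SLS estimator from the paper is not reproduced; all dimensions and coefficients are assumptions.

```r
library(glmnet)

set.seed(1)
p <- 6; n <- 300
Phi <- diag(0.4, p); Phi[1, 2] <- 0.3            # sparse true transition matrix
Y <- matrix(0, n, p)
for (t in 2:n) Y[t, ] <- Y[t - 1, ] %*% t(Phi) + rnorm(p, sd = 0.5)

X <- Y[-n, ]                                     # lagged predictors
Phi_hat <- sapply(1:p, function(j) {
  fit <- cv.glmnet(X, Y[-1, j])                  # lasso with cross-validated penalty
  as.numeric(coef(fit, s = "lambda.min"))[-1]    # drop the intercept
})
round(t(Phi_hat), 2)                             # estimated transition matrix
```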


Subjects
Latent Class Analysis , Models, Statistical , Humans , Least-Squares Analysis , Factor Analysis, Statistical , Data Interpretation, Statistical , Computer Simulation/statistics & numerical data
4.
Multivariate Behav Res ; 59(4): 879-893, 2024.
Article in English | MEDLINE | ID: mdl-38990138

ABSTRACT

Mobile applications offer a wide range of opportunities for psychological data collection, such as increased ecological validity and greater acceptance by participants compared to traditional laboratory studies. However, app-based psychological data also pose data-analytic challenges because of the complexities introduced by missingness and interdependence of observations. Consequently, researchers must weigh the advantages and disadvantages of app-based data collection to decide on the scientific utility of their proposed app study. For instance, some studies might only be worthwhile if they provide adequate statistical power. However, the complexity of app data forestalls the use of simple analytic formulas to estimate properties such as power. In this paper, we demonstrate how Monte Carlo simulations can be used to investigate the impact of app usage behavior on the utility of app-based psychological data. We introduce a set of questions to guide simulation implementation and showcase how we answered them for the simulation in the context of the guessing game app Who Knows (Rau et al., 2023). Finally, we give a brief overview of the simulation results and the conclusions we have drawn from them for real-world data generation. Our results can serve as an example of how to use a simulation approach for planning real-world app-based data collection.
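A minimal Monte Carlo power sketch in the spirit described here: simulate data with usage-dependent missingness, fit the analysis model repeatedly, and record how often the true effect is detected. All parameter values are illustrative assumptions, not those of the Who Knows study.

```r
set.seed(42)
n_sims <- 1000; n <- 200; effect <- 0.3          # assumed design values

power <- mean(replicate(n_sims, {
  x <- rnorm(n)
  y <- effect * x + rnorm(n)
  keep <- runif(n) > plogis(x - 2)               # covariate-dependent missingness (MAR-like)
  summary(lm(y[keep] ~ x[keep]))$coefficients[2, 4] < 0.05  # slope p-value
}))
power                                            # proportion of significant replications
```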


Subjects
Computer Simulation , Mobile Applications , Monte Carlo Method , Humans , Mobile Applications/statistics & numerical data , Computer Simulation/statistics & numerical data , Data Collection/methods
5.
BMJ Open Qual ; 13(2)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38925661

ABSTRACT

OBJECTIVE: In-person healthcare delivery is rapidly changing with a shifting employment landscape and technological advances. Opportunities to care for patients in more efficient ways include leveraging technology and focusing on caring for patients in the right place at the right time. We aim to use computer modelling to understand the impact of interventions, such as virtual consultation, on hospital census for referring and referral centres if non-procedural patients are cared for locally rather than transferred. PATIENTS AND METHODS: We created a computer model based on 25 138 hospital transfers between June 2019 and June 2022, with patients originating at one of 17 community-based hospitals and received by a regional or academic referral centre. We identified patients who likely could have been cared for at a community facility, with attention to hospital internal medicine and cardiology patients. The model was run for 33 500 days. RESULTS: Approximately 121 beds/day were occupied by transferred patients at the academic centre; on average, approximately 17 beds/day were used for hospital internal medicine and nine beds/day for non-procedural cardiology patients. The typical census for all internal medicine beds is approximately 175 and for cardiology approximately 70. CONCLUSION: Deferring transfers in favour of local hospitalisation would increase the availability of beds for complex care at the referral centre. Potential downstream effects also include increased patient satisfaction due to proximity to home, viability of the local hospital system/economy, and decreased resource utilisation for transfer systems.
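For intuition, a toy queueing-style sketch of a transfer census (not the authors' model): Poisson daily arrivals with geometric lengths of stay, comparing the baseline bed census with a scenario in which a fraction of transfers is deferred. The rates are assumptions chosen only to echo the scale reported above.

```r
set.seed(7)
days <- 33500; arrival_rate <- 20; mean_los <- 6; defer_frac <- 0.3

census <- function(rate) {
  occupied <- numeric(days); current <- 0
  for (d in 1:days) {
    current <- current + rpois(1, rate)                    # new transfers admitted
    current <- current - rbinom(1, current, 1 / mean_los)  # geometric length of stay
    occupied[d] <- current
  }
  occupied
}
mean(census(arrival_rate))                       # baseline beds/day occupied
mean(census(arrival_rate * (1 - defer_frac)))    # beds/day if 30% of transfers deferred
```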


Subjects
Computer Simulation , Hospitals, Community , Patient Transfer , Humans , Patient Transfer/statistics & numerical data , Patient Transfer/methods , Patient Transfer/standards , Hospitals, Community/statistics & numerical data , Computer Simulation/statistics & numerical data , Censuses
6.
Multivariate Behav Res ; 59(5): 978-994, 2024.
Article in English | MEDLINE | ID: mdl-38779786

ABSTRACT

Linear mixed-effects models have been increasingly used to analyze dependent data in psychological research. Despite their many advantages over ANOVA, critical issues in their analysis remain. With increasing numbers of random effects and growing model complexity, estimation becomes computationally demanding and convergence challenging. Applied users need help choosing appropriate methods to estimate random effects. The present Monte Carlo simulation study investigated the impact of model misspecification under restricted maximum likelihood (REML) and Bayesian estimation. We also compared the performance of the Akaike information criterion (AIC) and the deviance information criterion (DIC) in model selection. Results showed that models neglecting existing random effects had inflated Type I errors, unacceptable coverage, and inaccurate R-squared measures of fixed and random effects variation. Furthermore, models with redundant random effects had convergence problems, lower statistical power, and inaccurate R-squared measures under Bayesian estimation. The convergence problem was more severe for REML, while reduced power and inaccurate R-squared measures were more severe for Bayesian estimation. Notably, DIC was better than AIC in identifying the true models (especially for models including a person random intercept only), improving convergence rates, and providing more accurate effect size estimates, although AIC had higher power than DIC with 10 items and the most complicated true model.
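The frequentist side of such a comparison can be sketched with lme4: fit competing random-effects structures by maximum likelihood and compare AIC. A DIC comparison would require a Bayesian fit (e.g., via brms) and is omitted here.

```r
library(lme4)

data(sleepstudy)                     # example data shipped with lme4
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
m2 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
AIC(m1, m2)                          # lower AIC -> preferred random-effects structure
```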


Subjects
Bayes Theorem , Computer Simulation , Monte Carlo Method , Humans , Linear Models , Likelihood Functions , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Models, Statistical
7.
Multivariate Behav Res ; 59(5): 934-956, 2024.
Article in English | MEDLINE | ID: mdl-38821115

ABSTRACT

Continuous-time modeling using differential equations is a promising technique for modeling change processes with longitudinal data. Among ways to fit such models, the Latent Differential Structural Equation Modeling (LDSEM) approach defines latent derivative variables within a structural equation modeling (SEM) framework, thereby allowing researchers to leverage the advantages of the SEM framework for model building, estimation, inference, and comparison. Still, a few issues remain unresolved, including the performance of multilevel variations of the LDSEM under short time lengths (e.g., 14 time points), particularly when coupled multivariate processes and time-varying covariates are involved. Additionally, the possibility of using Bayesian estimation to facilitate the estimation of multilevel LDSEM (M-LDSEM) models with complex and higher-dimensional random effect structures has not been investigated. We present a series of Monte Carlo simulations evaluating three possible approaches to fitting the M-LDSEM: frequentist single-level and two-level robust estimators and a Bayesian two-level estimator. Our findings suggest that the Bayesian approach outperformed the frequentist approaches. The effects of time-varying covariates were well recovered, and coupling parameters were least biased, especially when using higher-order derivative information with the Bayesian estimator. Finally, an empirical example shows the applicability of the approach.


Subjects
Bayes Theorem , Computer Simulation , Latent Class Analysis , Monte Carlo Method , Humans , Computer Simulation/statistics & numerical data , Models, Statistical , Time Factors , Data Interpretation, Statistical , Longitudinal Studies , Multilevel Analysis/methods
8.
Multivariate Behav Res ; 59(5): 899-912, 2024.
Article in English | MEDLINE | ID: mdl-38717588

ABSTRACT

In unrestricted or exploratory factor analysis (EFA), there is a wide range of recommendations about how large samples should be to attain correct and stable solutions. In general, however, these recommendations are either rules of thumb or based on simulation results. As it is hard to establish the extent to which a particular data set suits the conditions used in a simulation study, the advice produced by simulation studies is not direct enough to be of practical use. Instead of trying to provide general and complex recommendations, in this article we propose estimating the sample size needed to analyze the data set at hand. The estimation takes into account the specified EFA model. The proposal is based on an intensive simulation process in which the sample correlation matrix is used as the basis for generating data sets from a pseudo-population in which the parent correlation holds exactly, and the criterion for determining the required size is a threshold that quantifies the closeness between the pseudo-population and the sample reproduced correlation matrices. The simulation results suggest that the proposal works well and that the determinants identified agree with those in the literature.
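A hedged sketch of the proposed logic: treat the sample correlation matrix as a pseudo-population, draw samples of increasing size, and accept the smallest size whose reproduced correlation matrix is close enough to the parent. The model size, threshold, and closeness measure below are illustrative assumptions.

```r
library(MASS)

set.seed(3)
# A stand-in "parent" correlation matrix (in practice: the sample correlations)
R_parent <- cor(matrix(rnorm(200 * 8), 200, 8) %*% matrix(runif(64), 8, 8))

close_enough <- function(n, threshold = 0.05) {
  sim <- mvrnorm(n, mu = rep(0, 8), Sigma = R_parent)   # draw from pseudo-population
  fa  <- factanal(sim, factors = 2)
  L   <- unclass(fa$loadings)
  R_hat <- tcrossprod(L) + diag(fa$uniquenesses)        # reproduced correlations
  sqrt(mean((R_hat - R_parent)^2)) < threshold          # RMSE closeness criterion
}
ns <- seq(100, 1000, by = 100)
ns[which(sapply(ns, close_enough))[1]]                  # smallest adequate n
```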


Subjects
Computer Simulation , Sample Size , Humans , Factor Analysis, Statistical , Computer Simulation/statistics & numerical data , Models, Statistical , Data Interpretation, Statistical
9.
Multivariate Behav Res ; 59(4): 818-840, 2024.
Article in English | MEDLINE | ID: mdl-38821136

ABSTRACT

Latent classes are a useful tool in developmental research; however, there are challenges associated with embedding them within a counterfactual mediation model. We develop and test a new method, "updated pseudo class draws" (uPCD), to examine the association between a latent class exposure and a distal outcome that could easily be extended to allow the use of any counterfactual mediation method. uPCD extends an existing group of methods (based on pseudo class draws) that assume that the true values of the latent class variable are missing and need to be multiply imputed using class membership probabilities. We simulate data based on the Avon Longitudinal Study of Parents and Children, examine the performance of existing techniques for relating a latent class exposure to a distal outcome ("one-step," "bias-adjusted three-step," "modal class assignment," "non-inclusive pseudo class draws," and "inclusive pseudo class draws"), and compare bias in parameter estimates and their precision to uPCD when estimating counterfactual mediation effects. We found that uPCD shows minimal bias when estimating counterfactual mediation effects across all levels of entropy. uPCD performs similarly to the recommended methods (one-step and bias-adjusted three-step), but provides greater flexibility and scope for incorporating the latent grouping within any commonly used counterfactual mediation approach.
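The pseudo-class-draw idea underlying uPCD can be sketched as follows: impute the latent class repeatedly from posterior membership probabilities, fit the distal-outcome model in each draw, and pool the estimates. The updating step that distinguishes uPCD, and full Rubin's-rules variance pooling, are not reproduced here; all values are illustrative.

```r
set.seed(11)
n <- 500; M <- 20
p2   <- runif(n, 0.2, 0.8)               # posterior P(class = 2 | data), assumed known
post <- cbind(1 - p2, p2)
true <- rbinom(n, 1, p2)                 # latent class consistent with the posteriors
y    <- rnorm(n, mean = 0.5 * true)      # distal outcome differs by class

ests <- replicate(M, {
  cls <- apply(post, 1, function(p) sample(1:2, 1, prob = p))  # one pseudo-class draw
  coef(lm(y ~ factor(cls)))[2]                                 # class contrast on outcome
})
mean(ests)   # pooled estimate; Rubin's rules would also combine the variances
```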


Subjects
Latent Class Analysis , Mediation Analysis , Humans , Longitudinal Studies , Models, Statistical , Data Interpretation, Statistical , Child , Computer Simulation/statistics & numerical data , Female , Male
10.
Multivariate Behav Res ; 59(4): 738-757, 2024.
Article in English | MEDLINE | ID: mdl-38587864

ABSTRACT

Calculating confidence intervals and p-values for edges in networks is useful for deciding their presence or absence, and it is a natural way to quantify uncertainty. Since lasso estimation is often used to obtain edges in a network, and the underlying distribution of lasso estimates is discontinuous, with a point mass at zero when the estimate is zero, obtaining p-values and confidence intervals is problematic. It is also not always desirable to use the lasso to select the edges, because correct identification of network edges requires assumptions that may not be warranted for the data at hand. Here, we review three methods that either use a modified lasso estimate (the desparsified or debiased lasso) or use the lasso for selection and then determine p-values without the lasso. We compare these three methods with popular methods for estimating Gaussian graphical models in simulations and conclude that the desparsified lasso and its bootstrapped version appear to be the best choices for selection and for quantifying uncertainty with confidence intervals and p-values.
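A sketch of the lasso edge-selection step these corrections target: nodewise lasso regressions for a Gaussian graphical model, with an OR rule for the edge set. Inference-ready p-values and confidence intervals would come from a desparsified-lasso implementation such as the hdi package, not from this raw fit; the data-generating structure is an assumption.

```r
library(glmnet)
library(MASS)

set.seed(5)
n <- 200; p <- 10
Sigma <- 0.4^abs(outer(1:p, 1:p, "-"))   # assumed AR(1)-type dependence
X <- scale(mvrnorm(n, rep(0, p), Sigma))

B <- sapply(1:p, function(j) {
  fit <- cv.glmnet(X[, -j], X[, j])                       # regress node j on the rest
  b <- rep(0, p)
  b[-j] <- as.numeric(coef(fit, s = "lambda.min"))[-1]
  b
})
edges <- (abs(B) > 0) | (abs(t(B)) > 0)                   # OR rule for the edge set
which(edges & upper.tri(edges), arr.ind = TRUE)           # selected edges
```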


Subjects
Computer Simulation , Models, Statistical , Humans , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Uncertainty , Confidence Intervals
11.
Multivariate Behav Res ; 59(3): 543-565, 2024.
Article in English | MEDLINE | ID: mdl-38351547

ABSTRACT

Recent years have seen the emergence of an "idio-thetic" class of methods to bridge the gap between nomothetic and idiographic inference. These methods describe nomothetic trends in idiographic processes by pooling intraindividual information across individuals to inform group-level inference, or vice versa. The current work introduces a novel "idio-thetic" model: the subgrouped chain graphical vector autoregression (scGVAR). The scGVAR is unique in its ability to identify subgroups of individuals who share common dynamic network structures in both lag(1) and contemporaneous effects. Results from Monte Carlo simulations indicate that the scGVAR shows promise over similar approaches when clusters of individuals differ in their contemporaneous dynamics, showing increased sensitivity in detecting nuanced group differences while keeping Type I error rates low. In contrast, a competing approach, the Alternating Least Squares VAR (ALS VAR), performs well when groups are separated by larger distances. Further considerations are provided regarding applications of the ALS VAR and scGVAR to real data and the strengths and limitations of both methods.


Subjects
Computer Simulation , Models, Statistical , Monte Carlo Method , Humans , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Least-Squares Analysis
12.
Multivariate Behav Res ; 59(3): 584-598, 2024.
Article in English | MEDLINE | ID: mdl-38348654

ABSTRACT

With clustered data, such as where students are nested within schools or employees are nested within organizations, it is often of interest to estimate and compare associations among variables separately for each level. While researchers routinely estimate between-cluster effects using the sample cluster means of a predictor, previous research has shown that this practice leads to biased estimates of coefficients at the between level, and recent research has recommended the use of latent cluster means within the multilevel structural equation modeling framework. However, the latent cluster mean approach may not always be the best choice, as it (a) relies on the assumption that the population cluster sizes are close to infinite, (b) requires a relatively large number of clusters, and (c) is currently only implemented in specialized software such as Mplus. In this paper, we show how using empirical Bayes estimates of the cluster means (EBM) can also lead to consistent estimates of between-level coefficients, and we illustrate how the empirical Bayes estimate can incorporate finite population corrections when information on population cluster sizes is available. Through a series of Monte Carlo simulation studies, we show that the EBM approach performs similarly to the latent cluster mean approach for estimating between-cluster coefficients in most conditions when the infinite-population assumption holds, and that applying the finite population correction provides reasonable point and interval estimates when the population is finite. The performance of EBM can be further improved with restricted maximum likelihood estimation and likelihood-based confidence intervals. We also provide an R function that implements the EBM approach and illustrate it using data from the classic High School and Beyond Study.
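The core shrinkage formula is easy to sketch: the empirical Bayes cluster mean shrinks each observed cluster mean toward the grand mean by its reliability lambda_j = tau2 / (tau2 + sigma2 / n_j), with variance components taken from a random-intercept fit. The sketch below uses the High School and Beyond subsample shipped with the nlme package; it is an illustration, not the authors' provided R function.

```r
library(lme4)

data(MathAchieve, package = "nlme")      # High School and Beyond subsample
hsb <- as.data.frame(MathAchieve)

fit    <- lmer(MathAch ~ 1 + (1 | School), data = hsb)
vc     <- as.data.frame(VarCorr(fit))
tau2   <- vc$vcov[1]                     # between-school variance
sigma2 <- vc$vcov[2]                     # within-school (residual) variance

obs    <- tapply(hsb$MathAch, hsb$School, mean)           # sample cluster means
n_j    <- as.numeric(table(hsb$School))
lambda <- tau2 / (tau2 + sigma2 / n_j)                    # reliability of each cluster mean
eb     <- lambda * obs + (1 - lambda) * mean(hsb$MathAch) # shrunken (EB) cluster means
```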


Subjects
Bayes Theorem , Monte Carlo Method , Humans , Cluster Analysis , Computer Simulation/statistics & numerical data , Selection Bias , Data Interpretation, Statistical , Likelihood Functions , Models, Statistical
13.
Multivariate Behav Res ; 59(3): 502-522, 2024.
Article in English | MEDLINE | ID: mdl-38348679

ABSTRACT

In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate counts, but the corresponding Item Response Theory (IRT) methods are underdeveloped compared to those for binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson model (2PCMPM), generalizing Rasch's Poisson Counts Model with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection but has received little attention. We introduce two 2PCMPM-based explanatory count IRT models: the Distributional Regression Test Model for item covariates, and the Count Latent Regression Model for (categorical) person covariates. Estimation methods are provided, and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help understand tests and the underlying constructs.


Subjects
Models, Statistical , Humans , Regression Analysis , Reproducibility of Results , Computer Simulation/statistics & numerical data , Poisson Distribution , Psychometrics/methods , Data Interpretation, Statistical
14.
Multivariate Behav Res ; 59(3): 523-542, 2024.
Article in English | MEDLINE | ID: mdl-38351542

ABSTRACT

Student evaluation of teaching (SET) questionnaires are ubiquitously applied in higher education institutions in North America for both formative and summative purposes. Data collected from SET questionnaires are usually item-level data characterized by multivariate categorical outcomes (i.e., multiple Likert-type items) and a cross-classified structure (i.e., non-nested students and instructors). Recently, a new approach, the cross-classified IRT model, was proposed for appropriately handling SET data. To inform researchers in higher education, this article reviews the cross-classified IRT model along with three existing approaches applied in SET studies: the cross-classified random effects model (CCREM), the multilevel item response theory (MLIRT) model, and a two-step integrated strategy. The strengths and weaknesses of each of the four approaches are discussed. Additionally, the new and existing approaches are compared through an empirical data analysis and a preliminary simulation study. The article concludes with general suggestions to researchers for analyzing SET data and a discussion of limitations and future research directions.


Subjects
Students , Teaching , Humans , Students/statistics & numerical data , Teaching/statistics & numerical data , Surveys and Questionnaires , Models, Statistical , Data Interpretation, Statistical , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Computer Simulation/statistics & numerical data
15.
Multivariate Behav Res ; 59(3): 411-433, 2024.
Article in English | MEDLINE | ID: mdl-38379305

ABSTRACT

Propensity score (PS) analyses are increasingly popular in the behavioral sciences. Two issues often add complexity to PS analyses: missing data in observed covariates and a clustered data structure. Previous research has examined methods for conducting PS analyses that consider either issue alone. In practice, the two issues often co-occur, but the performance of methods for PS analyses in the presence of both issues has not been evaluated previously. In this study, we consider PS weighting analysis when data are clustered and observed covariates have missing values. A simulation study is conducted to evaluate the performance of different missing data handling methods (complete-case, single-level imputation, or multilevel imputation) combined with different multilevel PS weighting methods (fixed- or random-effects PS models, inverse-propensity weighting or clustered weighting, weighted single-level or multilevel outcome models). The results suggest that the bias in average treatment effect estimation can be reduced by better accounting for clustering in both the missing data handling stage (such as with multilevel imputation) and the PS analysis stage (such as with the fixed-effects PS model, clustered weighting, and a weighted multilevel outcome model). A real-data example is provided for illustration.
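One of the better-performing combinations can be sketched as follows: a fixed-effects (cluster-dummy) propensity model with inverse-propensity weighting, here on complete simulated data. The multilevel imputation stage (e.g., via the mice package) and the weighted multilevel outcome model are omitted, and the data-generating values are assumptions.

```r
set.seed(9)
n <- 1000
cluster <- factor(sample(1:20, n, replace = TRUE))
x <- rnorm(n) + as.numeric(cluster) / 10                  # covariate varies across clusters
treat <- rbinom(n, 1, plogis(0.5 * x))
y <- 0.4 * treat + 0.3 * x + rnorm(n)                     # true treatment effect 0.4

ps <- fitted(glm(treat ~ x + cluster, family = binomial)) # fixed-effects PS model
w  <- ifelse(treat == 1, 1 / ps, 1 / (1 - ps))            # inverse-propensity weights
coef(lm(y ~ treat, weights = w))["treat"]                 # weighted ATE estimate
```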


Subjects
Computer Simulation , Propensity Score , Humans , Cluster Analysis , Data Interpretation, Statistical , Computer Simulation/statistics & numerical data , Models, Statistical , Multilevel Analysis/methods , Bias
16.
Multivariate Behav Res ; 59(3): 482-501, 2024.
Article in English | MEDLINE | ID: mdl-38379320

ABSTRACT

Accelerated longitudinal designs allow researchers to efficiently collect longitudinal data covering a time span much longer than the study duration. One important assumption of these designs is that each cohort (a group defined by its age of entry into the study) shares the same longitudinal trajectory. Although previous research has examined the impact of violating this assumption when each cohort is defined by a single age of entry, it is possible that each cohort is instead defined by a range of ages, such as groups that experience a particular historical event. In this paper we examine how including cohort membership in linear and quadratic multilevel models performs in detecting and controlling for cohort effects in this scenario. Using a Monte Carlo simulation study, we assessed the performance of this approach under conditions varying the number of cohorts, the overlap between cohorts, the strength of the cohort effect, the number of affected parameters, and the sample size. Our results indicate that models including a proxy variable for cohort membership based on age at study entry performed comparably to using true cohort membership in detecting cohort effects accurately and returning unbiased parameter estimates. This indicates that researchers can control for cohort effects even when true cohort membership is unknown.
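A sketch of the proxy-variable approach: derive cohort membership from age at study entry, include it in a multilevel growth model, and test it by model comparison. The data generation below is illustrative, not the simulation design of the paper.

```r
library(lme4)

set.seed(13)
n <- 300; obs <- 4
entry_age <- runif(n, 40, 70)
cohort <- cut(entry_age, breaks = c(40, 50, 60, 70), include.lowest = TRUE)
dat <- data.frame(
  id  = rep(1:n, each = obs),
  age = rep(entry_age, each = obs) + rep(0:(obs - 1), n),  # accelerated design
  coh = rep(cohort, each = obs)
)
dat$y <- 2 + 0.1 * dat$age + 0.5 * (dat$coh == "(50,60]") +  # cohort shifts the intercept
  rep(rnorm(n), each = obs) + rnorm(n * obs)

m <- lmer(y ~ age + coh + (1 | id), data = dat)   # proxy-for-cohort growth model
anova(update(m, . ~ . - coh), m)                  # likelihood-ratio test of cohort effect
```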


Assuntos
Efeito de Coortes , Simulação por Computador , Método de Monte Carlo , Análise Multinível , Estudos Longitudinais , Humanos , Análise Multinível/métodos , Simulação por Computador/estatística & dados numéricos , Modelos Estatísticos , Tamanho da Amostra , Projetos de Pesquisa
17.
Multivariate Behav Res ; 59(3): 461-481, 2024.
Article in English | MEDLINE | ID: mdl-38247019

ABSTRACT

Network analysis has gained popularity as an approach to investigate psychological constructs. However, there are currently no guidelines for applied researchers encountering missing values. In this simulation study, we compared the performance of a two-step EM algorithm with separate steps for missing-data handling and regularization, a combined direct EM algorithm, and pairwise deletion. We investigated conditions with varying network sizes, numbers of observations, missing data mechanisms, and percentages of missing values. These approaches were evaluated with regard to recovering population networks in terms of loss in the precision matrix, edge set identification, and network statistics. The simulation showed adequate performance only in conditions with large samples (n ≥ 500) or small networks (p = 10). Comparing the missing data approaches, the direct EM appears to be more sensitive and superior in nearly all chosen conditions. The two-step EM yields better results when the ratio n/p is very large, being less sensitive but more specific. Pairwise deletion failed to converge across numerous conditions and yielded inferior results overall. Overall, direct EM is recommended in most cases, as it mitigates the impact of missing data quite well, while modifications to the two-step EM could improve its performance.


Subjects
Algorithms , Computer Simulation , Humans , Computer Simulation/statistics & numerical data , Data Interpretation, Statistical , Models, Statistical
18.
JAMA ; 329(4): 306-317, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36692561

ABSTRACT

Importance: Stroke is the fifth-highest cause of death in the US and a leading cause of serious long-term disability, with particularly high risk in Black individuals. Quality risk prediction algorithms, free of bias, are key for comprehensive prevention strategies. Objective: To compare the performance of stroke-specific algorithms with pooled cohort equations developed for atherosclerotic cardiovascular disease for the prediction of new-onset stroke across different subgroups (race, sex, and age) and to determine the added value of novel machine learning techniques. Design, Setting, and Participants: Retrospective cohort study on combined and harmonized data from Black and White participants of the Framingham Offspring, Atherosclerosis Risk in Communities (ARIC), Multi-Ethnic Study of Atherosclerosis (MESA), and Reasons for Geographic and Racial Differences in Stroke (REGARDS) studies (1983-2019) conducted in the US. The 62 482 participants included at baseline were at least 45 years of age and free of stroke or transient ischemic attack. Exposures: Published stroke-specific algorithms from Framingham and REGARDS (based on self-reported risk factors) as well as pooled cohort equations for atherosclerotic cardiovascular disease, plus 2 newly developed machine learning algorithms. Main Outcomes and Measures: Models were designed to estimate the 10-year risk of new-onset stroke (ischemic or hemorrhagic). Discrimination (concordance index, C index) and calibration (ratios of expected vs observed event rates) were assessed at 10 years. Analyses were conducted by race, sex, and age groups. Results: The combined study sample included 62 482 participants (median age, 61 years; 54% women; and 29% Black individuals). Discrimination C indexes were not significantly different for the 2 stroke-specific models (Framingham stroke, 0.72; 95% CI, 0.72-0.73; REGARDS self-report, 0.73; 95% CI, 0.72-0.74) vs the pooled cohort equations (0.72; 95% CI, 0.71-0.73): differences 0.01 or less (P values >.05) in the combined sample. Significant differences in discrimination were observed by race: the C indexes were 0.76 for all 3 models in White women vs 0.69 in Black women (all P values <.001), and between 0.71 and 0.72 in White men vs between 0.64 and 0.66 in Black men (all P values ≤.001). When stratified by age, model discrimination was better for younger (<60 years) than older (≥60 years) adults for both Black and White individuals. The ratios of observed to expected 10-year stroke rates were closest to 1 for the REGARDS self-report model (1.05; 95% CI, 1.00-1.09) and indicated risk overestimation for the Framingham stroke model (0.86; 95% CI, 0.82-0.89) and the pooled cohort equations (0.74; 95% CI, 0.71-0.77). Performance did not significantly improve when novel machine learning algorithms were applied. Conclusions and Relevance: In this analysis of Black and White individuals without stroke or transient ischemic attack among 4 US cohorts, existing stroke-specific risk prediction models and novel machine learning techniques did not significantly improve discriminative accuracy for new-onset stroke compared with the pooled cohort equations, and the REGARDS self-report model had the best calibration. All algorithms exhibited worse discrimination in Black individuals than in White individuals, indicating the need to expand the pool of risk factors and improve modeling techniques to address observed racial disparities and improve model performance.


Subjects
Black People , Healthcare Disparities , Prejudice , Risk Assessment , Stroke , White People , Female , Humans , Male , Middle Aged , Atherosclerosis/epidemiology , Cardiovascular Diseases/epidemiology , Ischemic Attack, Transient/epidemiology , Retrospective Studies , Stroke/diagnosis , Stroke/epidemiology , Stroke/ethnology , Risk Assessment/standards , Reproducibility of Results , Sex Factors , Age Factors , Race Factors/statistics & numerical data , Black People/statistics & numerical data , White People/statistics & numerical data , United States/epidemiology , Machine Learning/standards , Bias , Prejudice/prevention & control , Healthcare Disparities/ethnology , Healthcare Disparities/standards , Healthcare Disparities/statistics & numerical data , Computer Simulation/standards , Computer Simulation/statistics & numerical data
19.
BMC Anesthesiol ; 22(1): 42, 2022 02 08.
Article in English | MEDLINE | ID: mdl-35135495

ABSTRACT

BACKGROUND: Simulation-based training is a clinical skill learning method that can replicate real-life situations in an interactive manner. In our study, we compared a novel hybrid learning method with conventional simulation learning in the teaching of endotracheal intubation (ETI). METHODS: One hundred medical students and residents were randomly divided into two groups and taught endotracheal intubation. The first group of subjects (control group) studied in the conventional way, via lectures and classic simulation-based training sessions. The second group (experimental group) used the hybrid learning method, in which the teaching process consisted of distance learning and small-group peer-to-peer simulation training sessions with remote supervision by the instructors. After the teaching process, ETI procedures were performed on real patients under the supervision of an anesthesiologist in an operating theater. Each step of the procedure was evaluated with a standardized assessment form (checklist) for both groups. RESULTS: Thirty-four subjects constituted the control group and 43 were in the experimental group. The hybrid group (88%) showed significantly better ETI performance in the operating theater compared with the control group (52%). Further, all hybrid group subjects (100%) followed the correct sequence of actions, while in the control group only 32% followed proper sequencing. CONCLUSIONS: We conclude that our novel algorithm-driven hybrid simulation learning method improves acquisition of endotracheal intubation skills, with a high degree of acceptability and satisfaction among learners, compared with classic simulation-based training.


Subjects
Anesthesiology/education , Clinical Competence/statistics & numerical data , Computer Simulation/statistics & numerical data , Intubation, Intratracheal/methods , Simulation Training/methods , Students, Medical/statistics & numerical data , Adult , Algorithms , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Female , Humans , Internship and Residency , Male , Young Adult
20.
Pathol Res Pract ; 231: 153771, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35091177

ABSTRACT

Mass-forming ductal carcinoma in situ (DCIS) detected on core needle biopsy (CNB) is often a radiology-pathology discordance and is thought to represent missed invasive carcinoma. This brief report applied supervised machine learning (ML) for image segmentation to investigate a series of 44 mass-forming DCIS cases, with the primary focus on stromal computational signatures. The areas under the curve (AUC) for receiver operating characteristic (ROC) analyses in relation to upgrade from DCIS to invasive carcinoma were as follows: high myxoid stromal ratio (MSR), 0.923 (P < 0.001); low collagenous stromal percentage (CSP), 0.875 (P < 0.001); and low proportionated stromal area (PSA), 0.682 (P = 0.039). The use of ML in mass-forming DCIS could predict upgrade to invasive carcinoma with high sensitivity and specificity. The findings from this brief report are clinically useful and should be further validated by future studies.


Subjects
Biopsy, Large-Core Needle/statistics & numerical data , Carcinoma, Intraductal, Noninfiltrating/diagnosis , Computer Simulation/standards , Models, Genetic , Aged , Analysis of Variance , Area Under Curve , Biopsy, Large-Core Needle/methods , Carcinoma, Intraductal, Noninfiltrating/epidemiology , Computer Simulation/statistics & numerical data , Female , Humans , Male , Middle Aged , ROC Curve , Retrospective Studies