Results 1 - 20 of 53
1.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38557675

ABSTRACT

Spatial transcriptomics (ST) data have emerged as a pivotal approach to understanding the function and interplay of cells within intricate tissues. Nonetheless, analyses of ST data are restricted by the low spatial resolution and the limited number of RNA transcripts that can be detected with several popular ST techniques. In this study, we propose that both of these issues can be significantly improved by introducing a deep graph co-embedding framework. First, we establish a self-supervised, co-graph convolution network-based deep learning model termed SpatialcoGCN, which leverages single-cell data to deconvolve the cell mixtures in spatial data. Evaluations of SpatialcoGCN on a series of simulated ST data and real ST datasets from human ductal carcinoma in situ, developing human heart, and mouse brain suggest that SpatialcoGCN could outperform other state-of-the-art cell-type deconvolution methods in estimating per-spot cell composition. Moreover, with competitive accuracy, SpatialcoGCN could also recover the spatial distribution of transcripts that are not detected in the raw ST data. With a similar co-embedding framework, we further established a spatial information-aware ST data simulation method, SpatialcoGCN-Sim. SpatialcoGCN-Sim can generate simulated ST data with high similarity to real datasets. Together, our approaches provide efficient tools for studying the spatial organization of heterogeneous cells within complex tissues.
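
To make the deconvolution task concrete: the goal is to estimate, for every spatial spot, the mixture of cell types whose combined expression best explains the spot's measured profile. The sketch below is a deliberately minimal stand-in, using non-negative least squares on a synthetic signature matrix rather than SpatialcoGCN's graph co-embedding; every variable in it is illustrative.

```python
# Toy illustration of per-spot cell-type deconvolution, NOT the SpatialcoGCN
# model itself: given a gene-by-cell-type signature matrix derived from
# single-cell data, estimate each spot's cell-type mixture by non-negative
# least squares and renormalize to proportions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_genes, n_types, n_spots = 200, 5, 50

# Hypothetical single-cell-derived signature: mean expression per cell type.
signature = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_types))

# Simulate spots as noisy mixtures of the cell types (ground truth known).
true_props = rng.dirichlet(np.ones(n_types), size=n_spots)      # spots x types
spots = true_props @ signature.T + rng.normal(0, 0.1, (n_spots, n_genes))

est = np.empty_like(true_props)
for i in range(n_spots):
    coef, _ = nnls(signature, np.clip(spots[i], 0, None))  # non-negative fit
    est[i] = coef / coef.sum()                             # to proportions

print("mean abs error in proportions:", np.abs(est - true_props).mean())
```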


Subjects
Gene Expression Profiling , RNA , Humans , Animals , Mice , Computer Simulation , Transcriptome
2.
Lab Invest ; 104(8): 102095, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38925488

ABSTRACT

In our rapidly expanding landscape of artificial intelligence, synthetic data have become a topic of great promise and also some concern. This review aimed to provide pathologists and laboratory professionals with a primer on the general concept of synthetic data and its potential to transform our field. Using synthetic data presents many advantages but also introduces a host of new obstacles and limitations. By leveraging synthetic data, we can help accelerate the development of machine learning models and support medical education and research/quality-study needs. This review explored the methods for generating synthetic data, including rule-based, machine learning model-based, and hybrid approaches, as they apply to applications within pathology and laboratory medicine. We also discussed the limitations and challenges associated with synthetic data, including data quality, malicious use, and ethical concerns. By understanding both the potential benefits (e.g., medical education, training artificial intelligence programs, and proficiency testing) and the limitations of this new data realm, we can harness its power to improve patient outcomes, advance research, and enhance the practice of pathology while remaining aware of its intrinsic limitations.
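
Of the generation strategies the review covers, the rule-based approach is the simplest to illustrate. The sketch below fabricates laboratory-style records from hand-written sampling rules; the variables, prevalences, and reference values are invented for illustration and are not drawn from the review.

```python
# Minimal rule-based synthetic data sketch: each record is drawn from
# hand-specified distributions and simple conditional rules, so no real
# patient contributes to the output. Ranges and prevalences are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_patient():
    age = int(rng.integers(18, 90))
    diabetic = rng.random() < 0.12                     # assumed prevalence
    # Rule: diabetics draw HbA1c from a higher distribution.
    hba1c = rng.normal(7.9, 1.2) if diabetic else rng.normal(5.3, 0.4)
    creatinine = rng.normal(0.9 + 0.004 * age, 0.15)   # drifts up with age
    return {"age": age, "diabetic": diabetic,
            "hba1c": round(hba1c, 1), "creatinine": round(creatinine, 2)}

cohort = [synthetic_patient() for _ in range(5)]
for row in cohort:
    print(row)
```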

3.
BMC Immunol ; 25(1): 13, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38331731

ABSTRACT

The reconstruction of clonal families (CFs) in B-cell receptor (BCR) repertoire analysis is a crucial step to understand the adaptive immune system and how it responds to antigens. The BCR repertoire of an individual is formed throughout life and is diverse due to several factors such as gene recombination and somatic hypermutation. Adaptive Immune Receptor Repertoire sequencing (AIRR-seq) using next-generation sequencing has enabled the generation of full BCR repertoires that also include rare CFs. The reconstruction of CFs from AIRR-seq data is challenging, and several approaches have been developed to solve this problem. Currently, most methods use the heavy chain (HC) only, as it is more variable than the light chain (LC). CF reconstruction options include the definition of appropriate sequence similarity measures, the use of shared mutations among sequences, and the possibility of reconstruction without preliminary clustering based on V- and J-gene annotation. In this study, we aimed to systematically evaluate different approaches for CF reconstruction and to determine their impact on various outcome measures, such as the number of CFs derived, the size of the CFs, and the accuracy of the reconstruction. The methods were compared with each other, with a method that groups sequences based on identical junction sequences, and with a method that only determines subclones. We found that, after accounting for dataset variability, in particular sequencing depth and mutation load, the reconstruction approach affects some of the outcome measures, including the number of CFs. Simulations indicate that unique junctions and subclones should not be used as substitutes for CFs and that more complex methods do not outperform simpler ones. We also conclude that the approaches differ in their ability to correctly reconstruct CFs when the LC is not considered and to identify shared CFs. The results showed the effect of different approaches on the reconstruction of CFs and highlighted the importance of choosing an appropriate method.
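
As a concrete reference point for the baselines discussed above, the sketch below implements the common HC-only recipe in miniature: pre-cluster on V gene, J gene, and junction length, then single-linkage merge on junction Hamming distance. The threshold and the example records are illustrative assumptions, not any published method's defaults.

```python
# Sketch of a common CF-reconstruction baseline: bucket sequences by
# identical V gene, J gene, and junction length, then merge sequences whose
# junctions differ by at most a fixed Hamming fraction (single linkage).
from collections import defaultdict

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def clonal_families(seqs, max_frac=0.2):
    buckets = defaultdict(list)
    for s in seqs:
        buckets[(s["v"], s["j"], len(s["junction"]))].append(s)
    families = []
    for bucket in buckets.values():
        fams = []
        for s in bucket:
            for fam in fams:
                if any(hamming(t["junction"], s["junction"])
                       <= max_frac * len(s["junction"]) for t in fam):
                    fam.append(s)
                    break
            else:
                fams.append([s])          # start a new family
        families.extend(fams)
    return families

seqs = [
    {"v": "IGHV1-2", "j": "IGHJ4", "junction": "CARDYW"},
    {"v": "IGHV1-2", "j": "IGHJ4", "junction": "CARDFW"},  # 1 mismatch away
    {"v": "IGHV3-7", "j": "IGHJ6", "junction": "CAKGYW"},
]
print(len(clonal_families(seqs)), "clonal families")       # expect 2
```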


Subjects
B-Lymphocytes , Receptors, Antigen, B-Cell , Humans , Mutation , Receptors, Antigen, B-Cell/genetics , High-Throughput Nucleotide Sequencing
4.
BMC Genomics ; 22(1): 877, 2021 Dec 05.
Article in English | MEDLINE | ID: mdl-34865618

ABSTRACT

BACKGROUND: With the emphasis on analysing genotype-by-environment interactions within the framework of genomic selection and genome-wide association analysis, there is an increasing demand for reliable tools that can simulate large-scale genomic data in order to assess related approaches. RESULTS: We developed an approach for simulating large-scale genomic data with genotype-by-environment interactions and added this new function to our previously developed tool GPOPSIM. Support for simulating a threshold trait with large-scale genomic data was also added. Validation with simulated data indicated that GPOPSIM2.0 is an efficient tool for mimicking the phenotypic data of quantitative traits, threshold traits, and genetically correlated traits with large-scale genomic data while taking genotype-by-environment interactions into account. CONCLUSIONS: This tool is useful for assessing methods for genotype-by-environment interactions and threshold traits.
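
The core simulation idea can be sketched independently of GPOPSIM itself: draw marker effects with environment-specific deviations so that total genetic values are imperfectly correlated across environments. The sketch below is a minimal NumPy illustration under assumed parameter values, not GPOPSIM's actual algorithm.

```python
# Minimal genotype-by-environment simulation sketch: phenotypes are
# y = main genetic value + environment-specific genetic deviation + noise,
# with markers coded 0/1/2. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_snp, n_env = 1000, 500, 3

geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 genotypes
beta_main = rng.normal(0, 0.05, n_snp)            # additive marker effects
beta_gxe = rng.normal(0, 0.03, (n_snp, n_env))    # env-specific deviations

g_main = geno @ beta_main                         # main genetic values
gv = g_main[:, None] + geno @ beta_gxe            # total genetic value per env
y = gv + rng.normal(0, 1.0, (n_ind, n_env))       # phenotypes with residuals

# GxE shows up as genetic correlations below 1 across environments.
print(np.corrcoef(gv.T).round(2))

# A threshold (binary) trait is obtained by dichotomizing the liability.
threshold_trait = (y[:, 0] > np.quantile(y[:, 0], 0.7)).astype(int)
print("threshold trait incidence:", threshold_trait.mean())
```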


Subjects
Gene-Environment Interaction , Genome-Wide Association Study , Genomics , Models, Genetic , Software
5.
Int J Geriatr Psychiatry ; 36(3): 433-442, 2021 03.
Article in English | MEDLINE | ID: mdl-33027842

ABSTRACT

OBJECTIVE: Grip strength is a widely used motor assessment in ageing research and has repeatedly been shown to be associated with cognition. It has been proposed that grip strength could enhance cognitive screening in experimental or clinical research, but this study uses multiple data-driven approaches to caution against this interpretation. Furthermore, we introduce an alternative motor assessment that is comparable to grip dynamometry but has a more robust relationship with cognition among older adults. DESIGN: Associations between grip strength and cognition (measured with the Montreal Cognitive Assessment) were analysed cross-sectionally using multivariate regression in two datasets: (1) The Irish Longitudinal Study on Ageing (TILDA; N = 5,980, community-dwelling adults aged 49-80) and (2) an experimental dataset (N = 250, community-dwelling adults aged 39-98). As a quality control, additional statistical simulations on TILDA tested how ceiling effects or skewness in these variables influenced the associations. RESULTS: Grip strength was significantly but weakly associated with cognition, consistent with previous studies. Simulations revealed this was not due to skewness or ceiling effects. Conversely, a new alternative motor assessment (functional reaching [FR]) had a stronger, more robust, and more sensitive relationship with cognition than grip strength. CONCLUSIONS: Grip strength should be cautiously interpreted as being associated with cognition. However, FR may have a stronger and clinically useful relationship with cognition.
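
The quality-control simulations can be illustrated in a few lines: impose a MoCA-like ceiling on a simulated cognition score and check whether the grip-cognition correlation survives. The sketch below uses invented effect sizes, not TILDA estimates.

```python
# Sketch of a ceiling-effect check: if the correlation computed on the
# ceiled score matches the latent one, the ceiling alone cannot explain a
# weak grip-cognition association. All values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
grip = rng.normal(30, 8, n)                    # grip strength in kg, invented
latent_cog = 26 + 0.1 * (grip - 30) + rng.normal(0, 3, n)

moca = np.clip(np.round(latent_cog), 0, 30)    # MoCA-like score, 30-point cap
print("latent correlation:", round(np.corrcoef(grip, latent_cog)[0, 1], 3))
print("ceiled correlation:", round(np.corrcoef(grip, moca)[0, 1], 3))
# Similar values would indicate the ceiling does not drive the weak
# association, mirroring the paper's conclusion.
```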


Subjects
Aging , Hand Strength , Aged , Aged, 80 and over , Cognition , Cross-Sectional Studies , Humans , Independent Living , Longitudinal Studies
6.
Sensors (Basel) ; 21(4)2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33562774

ABSTRACT

We present a method for estimating the detection threshold of InSAR time-series products that relies on simulations of both the vertical stratification and turbulent mixing components of tropospheric delay. Our simulations take into account case-specific parameters such as topography and wet delay. We generate time series of simulated data with given acquisition intervals (e.g., 12 and 35 days) for temporal coverages varying between 3 and 10 years. Each simulated acquisition carries the apparent noise due to tropospheric delay, constrained by the case-specific parameters. As the calculation parameters are randomized, we carry out a large number of simulations and analyze the results statistically. We find that as temporal coverage increases, the amount of propagated error decreases, showing an inverse correlation. We validate our method by comparing our results with ERS and Envisat results over the Socorro Magma Body, New Mexico. Our case study results indicate that Sentinel-1 can achieve a detection level of ≈1 mm/yr with regularly sampled data sets that have temporal coverage longer than 5 years.
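
The statistical core of the method (fit a velocity to noise-only time series and watch the spurious-rate spread shrink with coverage) can be sketched as follows. The acquisition interval matches the 12-day case mentioned above; the per-acquisition noise level is an illustrative assumption.

```python
# Sketch of the propagated-error analysis: simulate acquisitions whose only
# signal is tropospheric noise, fit a linear velocity, and measure how the
# rate uncertainty shrinks as temporal coverage grows.
import numpy as np

rng = np.random.default_rng(3)
interval_days, noise_std_mm, n_trials = 12, 10.0, 500   # noise level assumed

for years in (3, 5, 10):
    t = np.arange(0, years * 365, interval_days) / 365.0   # time in years
    rates = []
    for _ in range(n_trials):
        d = rng.normal(0, noise_std_mm, t.size)   # pure tropospheric noise
        rates.append(np.polyfit(t, d, 1)[0])      # fitted velocity, mm/yr
    print(f"{years} yr coverage: spurious-rate std = {np.std(rates):.2f} mm/yr")
# Longer coverage -> smaller spurious-rate spread -> lower detection threshold.
```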

7.
J Med Virol ; 92(6): 645-659, 2020 06.
Article in English | MEDLINE | ID: mdl-32141624

ABSTRACT

Using a parameterized susceptible-exposed-infectious-recovered (SEIR) model, we simulated the spread dynamics of the coronavirus disease 2019 (COVID-19) outbreak and the impact of different control measures, conducted a sensitivity analysis to identify the key factors, plotted the trend curve of the effective reproductive number (R), and performed data fitting after the simulation. By simulation and data fitting, the model showed existing confirmed cases peaking at 59,769 on 15 February 2020, with a coefficient of determination close to 1 and a fitting bias of 3.02%, suggesting high precision of the data-fitting results. More rigorous government control policies were associated with a slower increase in the infected population. Isolation and protective procedures become less effective as more cases accrue, so optimization of the treatment plan and development of specific drugs become more important. There was an upward trend in R in the beginning, followed by a downward trend, a temporary rebound, and another continuous decline. The high infectiousness of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) drove the upward trend, while government measures accounted for the temporary rebound and the declines. The declines in R can be taken as strong evidence for the effectiveness of the interventions. Evidence from the four-phase stringent measures showed that early detection, early isolation, early treatment, adequate medical supplies, admission of patients to designated hospitals, and a comprehensive therapeutic strategy were all essential. Collaborative efforts are required to combat the novel coronavirus, focusing both on persistent, strict domestic interventions and on vigilance against imported cases.
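
For readers unfamiliar with the model class, a minimal SEIR system looks like the sketch below. The parameter values are illustrative placeholders, not the paper's fitted values.

```python
# Minimal SEIR sketch of the kind of parameterized model described above.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N          # new exposures
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I      # incubation -> infectious -> removed
    dR = gamma * I
    return dS, dE, dI, dR

N = 11_000_000                            # population size, illustrative
y0 = (N - 100, 50, 50, 0)                 # S, E, I, R at t = 0
t = np.linspace(0, 120, 121)              # days
beta, sigma, gamma = 0.3, 1 / 5.2, 1 / 10  # contact, incubation, removal rates

S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T
print("peak infectious:", int(I.max()), "on day", int(t[I.argmax()]))
# A time-varying beta(t), lowered when interventions begin, is how such
# models encode control measures and drives the R(t) trends described.
```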


Subjects
Betacoronavirus/pathogenicity , Coronavirus Infections/epidemiology , Coronavirus Infections/transmission , Government Regulation , Models, Statistical , Pandemics , Pneumonia, Viral/epidemiology , Pneumonia, Viral/transmission , COVID-19 , China/epidemiology , Communicable Disease Control/organization & administration , Computer Simulation , Coronavirus Infections/diagnosis , Disease Susceptibility , Humans , Pneumonia, Viral/diagnosis , SARS-CoV-2 , Severity of Illness Index
8.
Am J Drug Alcohol Abuse ; 45(5): 451-459, 2019.
Article in English | MEDLINE | ID: mdl-30870054

ABSTRACT

Background. The Food and Drug Administration recently added a new clinical endpoint for evaluating the efficacy of alcohol use disorder (AUD) treatment that is more inclusive of treatment goals besides abstinence: no heavy drinking days (NHDD). However, numerous critiques have been raised against such binary models of treatment outcome. Further, there is mounting evidence that participants inaccurately estimate the quantities of alcohol they consume during drinking episodes (i.e., drink size misestimation), which may be particularly problematic when using a binary criterion (NHDD) compared with a similar, continuous alternative outcome variable: percent heavy drinking days (PHDD). Yet the impact of drink size misestimation on binary (e.g., NHDD) versus continuous outcome variables (e.g., PHDD) has not been studied. Objectives. Using simulation methods, the present study examined the potential impact of drink size misestimation on NHDD and PHDD. Methods. Data simulations were based on previously published findings on how much alcohol is actually poured when people estimate standard drinks. We started with self-reported daily drinking data from COMBINE study participants with complete data (N = 888; 68.1% male), then simulated inaccuracy in those estimates based on the literature on standard drink size misestimation. Results. Clinical trial effect sizes were consistently lower for NHDD than for PHDD. Drink size misestimation further lowered effect sizes for both NHDD and PHDD. Conclusions. Drink size misestimation may lead to inaccurate conclusions about drinking outcomes and the comparative effectiveness of AUD treatments, including inflated type-II error rates, particularly when treatment "success" is defined by binary outcomes such as NHDD.
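
The simulation logic can be sketched in a few lines: perturb daily drink counts with multiplicative misestimation error and compare effect sizes for the binary (NHDD) and continuous (PHDD) endpoints. All quantities below are invented for illustration; this is not the COMBINE data or the paper's error model.

```python
# Sketch: binary NHDD vs continuous PHDD under drink-size misestimation.
import numpy as np

rng = np.random.default_rng(11)
n, days, heavy_cut = 400, 28, 5           # 5+ drinks = heavy drinking day

treated = rng.poisson(1.2, (n, days)).astype(float)   # fewer drinks per day
control = rng.poisson(1.8, (n, days)).astype(float)

def endpoints(drinks):
    heavy = drinks >= heavy_cut
    nhdd = (heavy.sum(axis=1) == 0).astype(float)  # binary: no heavy days
    phdd = heavy.mean(axis=1)                      # continuous: % heavy days
    return nhdd, phdd

def cohens_d(a, b):
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / sp

for label, err in (("accurate", 0.0), ("misestimated", 0.25)):
    # Multiplicative drink-size misestimation (lognormal, illustrative).
    t = treated * rng.lognormal(0.0, err, treated.shape)
    c = control * rng.lognormal(0.0, err, control.shape)
    nh_t, ph_t = endpoints(t)
    nh_c, ph_c = endpoints(c)
    print(f"{label}: d(NHDD) = {cohens_d(nh_t, nh_c):+.2f}, "
          f"d(PHDD) = {cohens_d(ph_t, ph_c):+.2f}")
```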


Subjects
Alcoholism , Alcohol Drinking , Ethanol , Female , Humans , Male , Self Report , Treatment Outcome
9.
Behav Res Methods ; 48(3): 1125-44, 2016 09.
Article in English | MEDLINE | ID: mdl-26208814

ABSTRACT

Contention over the ovulatory shift hypothesis is principally driven by failures to replicate previous findings; e.g., recent meta-analytic work suggests that the effects supporting the hypothesis may not be robust. Some possible limitations of this and other ovulatory-effects research that may contribute to the controversy are: (a) the use of error-prone methods for assessing the target periods of fertility thought to be associated with behavioral shifts, and (b) the use of between-subjects, as opposed to within-subjects, methods. In the current study we present both simulated and empirical research: (a) comparing the ability of between- and within-subjects t-tests to detect cyclical shifts; (b) evaluating the efficacy of correlating estimated fertility overlays with potential behavioral shifts; and (c) testing the accuracy of counting methods for identifying windows of cycle fertility. While this study cannot assess whether the ovulatory shift hypothesis or other ovulatory-based hypotheses are tenable, it demonstrates how low power resulting from the methods typically employed in the extant literature may account for perceived inconsistencies in findings. We conclude that to fully address this issue, greater use of within-subjects methodology is needed.
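
The between- versus within-subjects power gap at the heart of the argument is easy to reproduce by simulation, as in the sketch below; the shift size and variance components are illustrative assumptions.

```python
# Sketch: power of between-subjects vs within-subjects (paired) t-tests for
# detecting the same small cyclical shift.
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(5)
n, shift, sd_between, sd_within, reps = 60, 0.3, 1.0, 0.4, 2000

hits_between = hits_within = 0
for _ in range(reps):
    trait = rng.normal(0, sd_between, n)        # stable person-level trait
    fertile = trait + shift + rng.normal(0, sd_within, n)
    luteal = trait + rng.normal(0, sd_within, n)
    # Between: different women contribute to each phase (half sample each).
    hits_between += ttest_ind(fertile[: n // 2], luteal[n // 2:]).pvalue < .05
    # Within: every woman contributes both phases.
    hits_within += ttest_rel(fertile, luteal).pvalue < .05

print("between-subjects power:", hits_between / reps)
print("within-subjects power:", hits_within / reps)
```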


Subjects
Ovulation/physiology , Ovulation/psychology , Adult , Behavior , Computer Simulation , Data Interpretation, Statistical , Female , Fertility , Humans , Predictive Value of Tests , Young Adult
10.
Am J Epidemiol ; 182(6): 520-7, 2015 Sep 15.
Article in English | MEDLINE | ID: mdl-26316599

ABSTRACT

We sought to explore the impact of intention-to-treat and complex treatment-use assumptions made during weight construction on the validity and precision of estimates derived from inverse-probability-of-treatment-weighted analysis. We simulated data assuming a nonexperimental design that attempted to quantify the effect of statins on lowering low-density lipoprotein cholesterol. We created 324 scenarios by varying parameter values (effect size, sample size, adherence level, probability of treatment initiation, and the associations between low-density lipoprotein cholesterol and treatment initiation and continuation). Four analytical approaches were used: 1) assuming intention to treat; 2) assuming complex mechanisms of treatment use; 3) assuming a simple mechanism of treatment use; and 4) assuming invariant confounders. With a continuous outcome, estimates assuming intention to treat were biased toward the null when there was a nonnull treatment effect and nonadherence after treatment initiation. For each 1% decrease in the proportion of patients staying on treatment after initiation, the bias in the estimated average treatment effect increased by 1%. Inverse-probability-of-treatment-weighted analyses that took into account the complex mechanisms of treatment use generated approximately unbiased estimates. Studies estimating the actual effect of a time-varying treatment need to consider the complex mechanisms of treatment use during weight construction.
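
A single-time-point version of inverse-probability-of-treatment weighting conveys the idea (the paper's time-varying setting is more involved): weight each subject by the inverse probability of the treatment actually received, given confounders. The coefficients below are illustrative.

```python
# Sketch of IPTW at one time point: confounded treatment assignment biases
# the naive contrast; weighting by 1/P(own treatment | confounder) removes it.
import numpy as np

rng = np.random.default_rng(8)
n = 20000
conf = rng.normal(0, 1, n)                          # baseline confounder
p_treat = 1 / (1 + np.exp(-0.5 * conf))             # confounded initiation
treat = rng.binomial(1, p_treat)
ldl_change = -20 * treat + 5 * conf + rng.normal(0, 10, n)  # true effect -20

# Stabilized weights from the (here, known) treatment model.
w = np.where(treat == 1, treat.mean() / p_treat,
             (1 - treat.mean()) / (1 - p_treat))

naive = ldl_change[treat == 1].mean() - ldl_change[treat == 0].mean()
iptw = (np.average(ldl_change[treat == 1], weights=w[treat == 1])
        - np.average(ldl_change[treat == 0], weights=w[treat == 0]))
print(f"naive: {naive:.1f}  IPTW: {iptw:.1f}  truth: -20.0")
```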


Subjects
Computer Simulation , Dyslipidemias/drug therapy , Hypolipidemic Agents/therapeutic use , Intention to Treat Analysis/methods , Confidence Intervals , Dyslipidemias/epidemiology , Female , Humans , Male
11.
J Pediatr Psychol ; 39(2): 138-50, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24284134

ABSTRACT

OBJECTIVE: Aggregated N-of-1 randomized controlled trials (RCTs) combined with multilevel modeling represent a methodological advancement that may help bridge science and practice in pediatric psychology. The purpose of this article is to offer a primer for pediatric psychologists interested in conducting aggregated N-of-1 RCTs. METHODS: An overview of N-of-1 RCT methodology is provided, and two simulated data sets are analyzed to demonstrate the clinical and research potential of the methodology. RESULTS: The simulated data examples demonstrate the utility of aggregated N-of-1 RCTs for understanding the clinical impact of an intervention for a given individual and for modeling covariates to explain why an intervention worked for one patient and not another. CONCLUSIONS: Aggregated N-of-1 RCTs hold potential for improving the science and practice of pediatric psychology.
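
A minimal sketch of the analysis the article describes: simulate alternating-phase N-of-1 trials for several children and fit a multilevel model with a random intercept and a random treatment slope per child, here using statsmodels. The data-generating values are invented placeholders.

```python
# Sketch: aggregated N-of-1 RCTs analyzed with a multilevel (mixed) model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(12):                        # 12 individual N-of-1 trials
    child_level = rng.normal(0, 1)           # random intercept per child
    child_effect = -1.0 + rng.normal(0, 0.5) # treatment effect varies by child
    for day in range(28):
        tx = (day // 7) % 2                  # alternating 7-day A/B phases
        y = 5 + child_level + child_effect * tx + rng.normal(0, 1)
        rows.append({"id": pid, "tx": tx, "symptom": y})

df = pd.DataFrame(rows)
fit = smf.mixedlm("symptom ~ tx", df, groups=df["id"],
                  re_formula="~tx").fit()    # random slope: effect per child
print(fit.summary())
```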


Subjects
Outcome Assessment, Health Care , Randomized Controlled Trials as Topic , Research Design , Computer Simulation , Humans , Models, Psychological
12.
Ann Noninvasive Electrocardiol ; 19(2): 182-9, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24521536

ABSTRACT

BACKGROUND: Two methods of estimating reader variability (RV) in QT measurements between 12 readers were compared. METHODS: Using data from 500 electrocardiograms (ECGs) analyzed twice by 12 readers, we bootstrapped 1000 datasets for each method. In the grouped analysis design (GAD), the same 40 ECGs were read twice by all readers. In the pairwise analysis design (PAD), 40 ECGs analyzed by each reader in a clinical trial were reanalyzed by the same reader (intra-RV) and also by another reader (inter-RV); thus, variability between each pair of readers was estimated using different ECGs. RESULTS: Inter-RV (mean [95% CI]) between pairs of readers by GAD and PAD was 3.9 ms (2.1-5.5 ms) and 4.1 ms (2.6-5.4 ms), respectively, using ANOVA; 0 ms (-0.0 to 0.4 ms) and 0 ms (-0.7 to 0.6 ms), respectively, by the actual (signed) difference between readers; and 7.7 ms (6.2-9.8 ms) and 7.7 ms (6.6-9.1 ms), respectively, by the absolute difference between readers. Intra-RV was likewise comparable between designs. CONCLUSIONS: RV estimates from the grouped and pairwise analysis designs are comparable.
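
The contrast between the signed and absolute summaries in the results can be reproduced with a small bootstrap, sketched below on simulated QT reads (not the study's data).

```python
# Sketch: bootstrap inter-reader variability as signed vs absolute mean
# differences; signed differences center near 0 while absolute ones do not.
import numpy as np

rng = np.random.default_rng(9)
n_ecg = 40
truth = rng.normal(400, 20, n_ecg)               # true QT per ECG, ms
reader_a = truth + rng.normal(0, 5, n_ecg)       # each reader adds own error
reader_b = truth + rng.normal(0, 5, n_ecg)

signed, absolute = [], []
for _ in range(1000):                            # bootstrap over ECGs
    idx = rng.integers(0, n_ecg, n_ecg)
    d = reader_a[idx] - reader_b[idx]
    signed.append(d.mean())
    absolute.append(np.abs(d).mean())

for name, v in (("signed diff", signed), ("absolute diff", absolute)):
    lo, hi = np.percentile(v, [2.5, 97.5])
    print(f"{name}: {np.mean(v):.1f} ms (95% CI {lo:.1f} to {hi:.1f})")
# This mirrors the 0 ms vs 7.7 ms contrast reported above.
```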


Subjects
Electrocardiography/methods , Electrocardiography/statistics & numerical data , Heart Rate/physiology , Observer Variation , Research Design , Analysis of Variance , Humans
14.
Microbiome ; 12(1): 135, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039570

ABSTRACT

BACKGROUND: Advances in sequencing technology have led to the discovery of associations between the human microbiota and many diseases, conditions, and traits. With the increasing availability of microbiome data, many statistical methods have been developed for studying these associations. The growing number of newly developed methods highlights the need for simple, rapid, and reliable ways to simulate realistic microbiome data, which is essential for validating and evaluating their performance. However, generating realistic microbiome data is challenging due to the complex nature of microbiome data, which feature correlation between taxa, sparsity, overdispersion, and compositionality. Current methods for simulating microbiome data either fail to capture these important features or require exorbitant computational time. METHODS: We develop MIDASim (MIcrobiome DAta Simulator), a fast and simple approach for simulating realistic microbiome data that reproduces the distributional and correlation structure of a template microbiome dataset. MIDASim is a two-step approach. The first step generates correlated binary indicators that represent the presence-absence status of all taxa, and the second step generates relative abundances and counts for the taxa considered present in step 1, utilizing a Gaussian copula to account for the taxon-taxon correlations. In the second step, MIDASim can operate in either a nonparametric or a parametric mode. In the nonparametric mode, the Gaussian copula uses the empirical distribution of relative abundances for the marginal distributions. In the parametric mode, a generalized gamma distribution is used in place of the empirical distribution. RESULTS: We demonstrate the improved performance of MIDASim relative to existing methods using gut and vaginal data. MIDASim showed superior performance by PERMANOVA and in terms of alpha diversity and beta dispersion in either parametric or nonparametric mode. We also show how MIDASim in parametric mode can be used to assess the performance of methods for finding differentially abundant taxa in a compositional model. CONCLUSIONS: MIDASim is easy to implement, flexible, and suitable for most microbiome data simulation situations. MIDASim has three major advantages. First, MIDASim performs better in reproducing the distributional features of real data than other methods, at both the presence-absence level and the relative-abundance level; MIDASim-simulated data are more similar to the template data than those of competing methods, as quantified using a variety of measures. Second, MIDASim makes few distributional assumptions for the relative abundances and thus can easily accommodate complex distributional features in real data. Third, MIDASim is computationally efficient and can be used to simulate large microbiome datasets.
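
MIDASim's nonparametric mode rests on a standard construction: a Gaussian copula whose marginal distributions are the template's empirical relative abundances. The sketch below shows that construction in miniature on a random stand-in template; it omits the presence-absence step, counts, and the parametric mode.

```python
# Sketch of a Gaussian-copula simulator: estimate taxon-taxon correlation on
# normal scores of a template, draw correlated normals, then map each taxon
# through its empirical quantile function. Template here is random stand-in.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
template = rng.gamma(0.3, 1.0, (100, 20))           # samples x taxa "template"
template /= template.sum(axis=1, keepdims=True)     # compositional

# Normal scores from within-taxon ranks; correlation on the normal scale.
ranks = template.argsort(axis=0).argsort(axis=0)
z = norm.ppf((ranks + 0.5) / template.shape[0])
corr = np.corrcoef(z.T)

# Draw correlated normals, then invert each taxon's empirical CDF.
zsim = rng.multivariate_normal(np.zeros(20), corr, size=100)
u = norm.cdf(zsim)
sim = np.column_stack([np.quantile(template[:, j], u[:, j]) for j in range(20)])
sim /= sim.sum(axis=1, keepdims=True)               # renormalize composition

print("template mean abundances:", np.round(template.mean(0)[:5], 3))
print("simulated mean abundances:", np.round(sim.mean(0)[:5], 3))
```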


Subjects
Computer Simulation , Microbiota , Humans , Gastrointestinal Microbiome , Software , Computational Biology/methods , Bacteria/classification , Bacteria/genetics , Bacteria/isolation & purification , Female
15.
SLAS Technol ; : 100151, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38796032

ABSTRACT

This research emphasises the value of physical training for table tennis players, particularly as ball speed and spin rate decline, and highlights how important conditioning quality is to the game. The dual identities of Chinese table tennis players as students and athletes place greater demands on their learning and training, a crucial component of talent development. Scientific, targeted physical training improves athletes' overall quality and competitive level and helps them avoid sports injuries. This study combines a physical training model with artificial intelligence to provide intelligent camera work, multi-angle broadcasting, and 3D scene reproduction, giving the audience a more engaging and immersive viewing experience. Combining deep learning and convolutional neural networks with large-scale video data enables richer feature extraction from match footage, greatly enhancing the match information available to viewers. The experimental findings demonstrate that the accuracy of recognizing players' technical movements in table tennis reaches 98.88% with an enhanced AM-Softmax classification algorithm.
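
The AM-Softmax loss named in the abstract is well defined and small enough to sketch directly: logits are scaled cosine similarities, with an additive margin subtracted from the true-class cosine. The scale and margin below are common defaults, not the paper's settings, and the data are random placeholders.

```python
# Sketch of the (standard) AM-Softmax loss in plain NumPy.
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                   # (batch, classes) cosines
    cos[np.arange(len(labels)), labels] -= m      # additive margin on target
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(6)
feats = rng.normal(size=(8, 64))                  # batch of movement embeddings
W = rng.normal(size=(64, 10))                     # 10 technical-movement classes
y = rng.integers(0, 10, 8)
print("loss:", am_softmax_loss(feats, W, y))
```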

16.
Stud Health Technol Inform ; 310: 820-824, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269923

ABSTRACT

Healthcare data is a scarce resource, and access is often cumbersome. While medical software development would benefit from real datasets, patient privacy takes higher priority. Realistic synthetic healthcare data can fill this gap by providing a dataset for quality control while preserving patient anonymity and privacy. Existing methods focus on American or European patient healthcare data, but none focuses specifically on the Australian population. Australia is a highly diverse country with a unique healthcare system. To address this, we used a popular publicly available tool, Synthea, to generate disease progressions based on the Australian population. With this approach, we were able to generate 100,000 patients following Queensland (Australia) demographics.


Subjects
Health Facilities , Privacy , Humans , Australia , Queensland , Disease Progression
17.
Genome Biol ; 25(1): 145, 2024 06 03.
Article in English | MEDLINE | ID: mdl-38831386

ABSTRACT

BACKGROUND: Single-cell RNA sequencing (scRNA-seq) and spatially resolved transcriptomics (SRT) have led to groundbreaking advancements in life sciences. To develop bioinformatics tools for scRNA-seq and SRT data and perform unbiased benchmarks, data simulation has been widely adopted by providing explicit ground truth and generating customized datasets. However, the performance of simulation methods under multiple scenarios has not been comprehensively assessed, making it challenging to choose suitable methods without practical guidelines. RESULTS: We systematically evaluated 49 simulation methods developed for scRNA-seq and/or SRT data in terms of accuracy, functionality, scalability, and usability using 152 reference datasets derived from 24 platforms. SRTsim, scDesign3, ZINB-WaVE, and scDesign2 have the best accuracy performance across various platforms. Unexpectedly, some methods tailored to scRNA-seq data have potential compatibility for simulating SRT data. Lun, SPARSim, and scDesign3-tree outperform other methods under corresponding simulation scenarios. Phenopath, Lun, Simple, and MFA yield high scalability scores but they cannot generate realistic simulated data. Users should consider the trade-offs between method accuracy and scalability (or functionality) when making decisions. Additionally, execution errors are mainly caused by failed parameter estimations and appearance of missing or infinite values in calculations. We provide practical guidelines for method selection, a standard pipeline Simpipe (https://github.com/duohongrui/simpipe; https://doi.org/10.5281/zenodo.11178409), and an online tool Simsite (https://www.ciblab.net/software/simshiny/) for data simulation. CONCLUSIONS: No method performs best on all criteria, thus a good-yet-not-the-best method is recommended if it solves problems effectively and reasonably. Our comprehensive work provides crucial insights for developers on modeling gene expression data and fosters the simulation process for users.


Subjects
Gene Expression Profiling , Single-Cell Analysis , Single-Cell Analysis/methods , Gene Expression Profiling/methods , Humans , Software , Computer Simulation , Transcriptome , Computational Biology/methods , Sequence Analysis, RNA/methods , RNA-Seq/methods , RNA-Seq/standards
18.
Anim Reprod Sci ; : 107564, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39048502

ABSTRACT

Objective assessment of sperm morphology is an essential component of assessing ejaculate quality. Due to economic limitations, investigators often resort to observational studies instead of experimental ones (which provide the strongest statistical power), yielding more heterogeneous data regardless of the number of data sources (barns/farms). Using such data inevitably leads to higher variances of estimates, which negatively impacts the statistical power of a study. In this article, we describe a statistical methodology called finite mixture modeling (FMM) which, based on the supplied data and an assumed number of sub-classes, classifies the data into two or more homogeneous distributions and determines their fractional sizes relative to the entire cohort. The goal is to use statistical methods that partition the variance of the sample. A figure from a previous publication was used to generate simulated data (n = 1559) on the cytoplasmic droplet rate. We identified that a bimodal distribution with two latent classes best described the simulated data. Post-hoc estimation showed that about 80% of observations belonged to latent class 1 and 20% to latent class 2. The FMM methodology identified a cutoff point of 8.7%. Finally, when estimating the standard error for the total cohort, the FMM methodology yielded a 40% reduction in the standard error compared with standard methodologies. In conclusion, we show that FMM successfully partitioned the variance of the data and, as such, yielded lower variance estimates than standard methodologies, increasing the statistical power of the cohort.
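
The FMM workflow described here maps naturally onto a two-component Gaussian mixture: fit, read off the class weights, and locate the cutoff where posterior class membership flips. The sketch below runs on simulated stand-in data with roughly the 80/20 split reported above, not the paper's digitized values.

```python
# Sketch of the finite-mixture workflow with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
# ~80/20 mixture of low and high cytoplasmic-droplet rates (percent).
rate = np.concatenate([rng.normal(4, 2, 1250), rng.normal(15, 4, 310)])
rate = np.clip(rate, 0, None).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(rate)
order = np.argsort(gm.means_.ravel())          # index 0 = lower-mean class
print("class weights:", gm.weights_[order].round(2))

# Cutoff: first value where the posterior favors the high-rate class.
grid = np.linspace(0, 30, 3001).reshape(-1, 1)
post_high = gm.predict_proba(grid)[:, order[1]]
cutoff = grid[np.argmax(post_high >= 0.5)][0]
print(f"estimated cutoff: {cutoff:.1f}%")
```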

19.
Front Res Metr Anal ; 9: 1360333, 2024.
Article in English | MEDLINE | ID: mdl-38752011

ABSTRACT

Recognizing the value of experiential education in social/behavioral science research training, we designed and offered a simulation of a survey research project for doctoral students in education. Through three phases of the project, from instrument design through scale investigation and quantitative analyses, students are developed as researchers in a realistic and authentic way. In this paper, we highlight the advantages, challenges, and outcomes from applying simulation methods within graduate research methods courses, with a specific focus on survey methodology and quantitative skill development.

20.
Ther Innov Regul Sci ; 58(3): 423-430, 2024 May.
Article in English | MEDLINE | ID: mdl-38321191

ABSTRACT

The past years have sharpened the industry's understanding of a Quality by Design (QbD) approach to clinical trials. QbD encourages designing quality into a trial during the planning phase. The identification of Critical to Quality (CtQ) factors, and specifically Critical Data and Processes (CD&Ps), is key to such a risk-based monitoring approach. A variable that allows monitoring the evolution of risk regarding the CD&Ps is called a Quality Tolerance Limit (QTL) parameter. These parameters are linked to the scientific question(s) of a trial and may identify issues that can jeopardize the integrity of trial endpoints. This paper focuses on defining what QTL parameters are and provides general guidance on setting thresholds for these parameters, allowing an acceptable risk range to be derived.


Subjects
Clinical Trials as Topic , Humans , Research Design , Quality Control