Results 1-20 of 14,811
1.
AAPS J ; 24(5): 97, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36050426

ABSTRACT

The two one-sided t-tests (TOST) procedure has been used to evaluate average bioequivalence (BE). As a regulatory standard, it is crucial that TOST distinguish BE from not-BE (NBE) even when BE data are not lognormal. TOST was compared with a Bayesian procedure (BEST, by Kruschke) in simulated datasets of test/reference ratios (T/R) that were BE and NBE, wherein (1) log(T/R) or T-R were normally distributed, (2) sample sizes ranged from 10 to 50, and (3) extreme log(T/R) or T-R values were randomly included in the datasets. The 90% credible interval (CrI) from BEST is a Bayesian alternative to the 90% confidence interval (CI) of TOST; it can be derived from a posterior distribution that better reflects the observed mean log(T/R) distribution, which often deviates from normality. In the absence of extreme T/R values, both methods concluded BE when observed T/R were lognormal. BEST concluded BE or NBE more accurately, while requiring fewer subjects, when observed log(T/R) or T-R were normal in the presence of extreme values. Overall, TOST and BEST perform comparably on lognormal T/R, while BEST is more accurate and requires fewer subjects when datasets are normal for T-R or contain extreme values; of note, normally distributed datasets only rarely contain extreme values. Our results imply that when BEST and TOST yield different BE assessments for bioequivalent products, TOST may disadvantage applicants when T/R are not lognormal and/or include extreme T/R values. Application of BEST can address such situations and can be considered a useful alternative to TOST for the evaluation of BE and for efficient development of BE formulations.
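For illustration, a minimal sketch of the TOST decision rule applied to per-subject log(T/R) values — a simplification, not the paper's crossover analysis or the BEST procedure:

```python
# Minimal sketch of the TOST decision rule for average bioequivalence,
# applied to paired log(T/R) values (not the authors' exact crossover model).
import numpy as np
from scipy import stats

def tost_be(log_tr, lower=np.log(0.8), upper=np.log(1.25), alpha=0.05):
    """Conclude BE if the 90% CI for mean log(T/R) lies within [log 0.8, log 1.25]."""
    n = len(log_tr)
    mean, se = np.mean(log_tr), stats.sem(log_tr)
    t_crit = stats.t.ppf(1 - alpha, df=n - 1)   # two one-sided 5% tests -> 90% CI
    lo, hi = mean - t_crit * se, mean + t_crit * se
    return (lo > lower and hi < upper), (np.exp(lo), np.exp(hi))

rng = np.random.default_rng(1)
log_tr = rng.normal(loc=np.log(1.02), scale=0.2, size=24)  # simulated BE data
be, ci = tost_be(log_tr)
print(f"90% CI for T/R: {ci[0]:.3f}-{ci[1]:.3f}, BE concluded: {be}")
```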


Subjects
Therapeutic Equivalency, Area Under Curve, Bayes Theorem, Cross-Over Studies, Humans, Sample Size
2.
PLoS One ; 17(9): e0271992, 2022.
Article in English | MEDLINE | ID: mdl-36107875

ABSTRACT

Local independence is a principal assumption when applying latent variable models. Violations of this assumption may stem from multidimensionality (trait dependence) or from statistical dependence among item responses (response dependence). The purpose of this study is to evaluate the sensitivity of weighted least squares means and variance adjusted (WLSMV) based global fit indices to violations of local independence in Rasch models, and to compare those indices to principal component analysis of residuals (PCAR), which is widely used for Rasch models. A dichotomous Rasch model is considered in this simulation study. The results show that WLSMV-based fit indices can detect trait dependence but are limited with regard to response dependence. Additionally, WLSMV-based fit indices have an advantage over PCAR in that they are consistent regardless of sample size and test length. Though applying exact benchmarks to these indices is not recommended, they provide practitioners with a method for evaluating, for diagnostic purposes, the degree to which an assumption violation is problematic for their data.
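A brief sketch of how such simulation data can be generated — a dichotomous Rasch model with an optional nuisance trait loaded on some items to induce trait dependence; the loading structure below is illustrative, not the study's exact design:

```python
# Hedged sketch: dichotomous Rasch data, optionally with a second nuisance
# trait on half the items to violate local independence (trait dependence).
import numpy as np

def simulate_rasch(n_persons=500, n_items=20, dependence=0.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_persons)            # primary latent trait
    nuisance = rng.normal(size=n_persons)         # secondary (nuisance) trait
    b = np.linspace(-2, 2, n_items)               # item difficulties
    # the last half of items also load on the nuisance trait when dependence > 0
    load = np.where(np.arange(n_items) >= n_items // 2, dependence, 0.0)
    logits = theta[:, None] + load[None, :] * nuisance[:, None] - b[None, :]
    return (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

X = simulate_rasch(dependence=0.8)
print(X.mean(axis=0))   # item endorsement rates
```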


Subjects
Statistical Models, Least-Squares Analysis, Principal Component Analysis, Psychometrics/methods, Sample Size
3.
PLoS One ; 17(9): e0264246, 2022.
Article in English | MEDLINE | ID: mdl-36112652

ABSTRACT

RNA-seq is a high-throughput sequencing technology widely used for gene transcript discovery and quantification under different biological or biomedical conditions. A fundamental research question in most RNA-seq experiments is the identification of differentially expressed genes among experimental conditions or sample groups. Numerous statistical methods for RNA-seq differential analysis have been proposed since the emergence of the RNA-seq assay. To evaluate popular differential analysis methods available in open source R and Bioconductor packages, we conducted multiple simulation studies comparing the performance of eight such methods (edgeR, DESeq, DESeq2, baySeq, EBSeq, NOISeq, SAMSeq, Voom). The comparisons covered scenarios with equal or unequal library sizes, different distribution assumptions, and different sample sizes. We measured performance using false discovery rate (FDR) control, power, and stability. For each method, no significant differences in FDR control, power, or stability were observed between equal and unequal library sizes. For RNA-seq count data following a negative binomial distribution, with a sample size of 3 per group, EBSeq performed better than the other methods in terms of FDR control, power, and stability. When sample sizes increased to 6 or 12 per group, DESeq2 performed slightly better than the other methods. All methods except DESeq showed improved performance as the sample size increased to 12 per group. For RNA-seq count data following a log-normal distribution, both DESeq and DESeq2 performed better than the other methods in terms of FDR control, power, and stability across all sample sizes. Real RNA-seq experimental data were also used to compare the total number of discoveries and the stability of discoveries for each method. For RNA-seq data analysis, the EBSeq method is recommended for studies with a sample size as small as 3 per group, and the DESeq2 method is recommended for sample sizes of 6 or more per group, when the data follow the negative binomial distribution. Both DESeq and DESeq2 are recommended when the data follow the log-normal distribution.
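A toy version of this simulation set-up: negative binomial counts for two small groups with a subset of truly differentially expressed genes, followed by Benjamini-Hochberg FDR control. The compared methods are R/Bioconductor packages; the t-test on log counts below is only a stand-in to illustrate how FDR and power are tallied:

```python
# Hedged sketch of the evaluation: simulate NB counts, test per gene,
# apply Benjamini-Hochberg, then count discoveries and true positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_per_group, n_de = 5000, 3, 500
mu = rng.lognormal(mean=3, sigma=1, size=n_genes)
fold = np.where(np.arange(n_genes) < n_de, 2.0, 1.0)   # first 500 genes are DE
size = 1 / 0.2                                         # NB dispersion = 0.2

def nb(mean):  # numpy parameterises NB by (n, p); this recovers the target mean
    return rng.negative_binomial(size, size / (size + mean))

a = np.stack([nb(mu) for _ in range(n_per_group)], axis=1)
b = np.stack([nb(mu * fold) for _ in range(n_per_group)], axis=1)
_, p = stats.ttest_ind(np.log1p(a), np.log1p(b), axis=1)

order = np.argsort(p)                                  # Benjamini-Hochberg
q = np.minimum.accumulate((p[order] * n_genes / np.arange(1, n_genes + 1))[::-1])[::-1]
hits = order[q < 0.05]
print("discoveries:", len(hits), "true positives:", (hits < n_de).sum())
```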


Subjects
High-Throughput Nucleotide Sequencing, Binomial Distribution, High-Throughput Nucleotide Sequencing/methods, RNA-Seq, Sample Size, RNA Sequence Analysis/methods
4.
BMC Med ; 20(1): 294, 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36109742

ABSTRACT

BACKGROUND: Lack of representativeness in the enrollment of Black, Indigenous, and People of Colour (BIPOC) could compromise the generalizability of study results and health equity. This study aimed to examine trends in the enrollment of BIPOC groups in diabetes randomized controlled trials (RCTs) and to explore the association between trial factors and high enrollment of BIPOC groups. METHODS: We systematically searched the literature for large diabetes RCTs with a sample size of ≥ 400 participants published between 2000 and 2020. We assessed temporal trends in the enrollment of racial and ethnic groups in the included trials. Logistic and linear regression analyses were used to explore the relationship between trial factors and high enrollment, defined as enrollment above the median rate. RESULTS: A total of 405 RCTs were included in the analyses. The median enrollment rate of BIPOC groups was 24.0%: 6.4% for the Black group, 11.2% for Hispanic, 8.5% for Asian, and 3.0% for other BIPOC groups. Over the past 20 years, BIPOC enrollment in diabetes RCTs showed an increasing trend, rising from 20.1% to 28.4% (P for trend = 0.041), with a significant trend towards increased enrollment for the Asian group. We found that weekly or daily intervention frequency (OR = 0.48, 95% CI: 0.26, 0.91) and intervention duration > 6.5 months (OR = 0.59, 95% CI: 0.37, 0.95) were significantly related to decreased odds of high enrollment, while type 2 diabetes (OR = 1.44, 95% CI: 1.04, 1.99) was associated with high enrollment of BIPOC groups. CONCLUSIONS: BIPOC enrollment in large diabetes RCTs increased over the past two decades, and some trial factors were significantly associated with it. These findings highlight the importance of enrolling BIPOC groups and provide insights into the design and implementation of future clinical trials in diabetes.
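The reported odds ratios come from logistic regression of the high-enrollment indicator on trial factors; a hedged sketch with illustrative variable names and simulated data, not the authors' dataset:

```python
# Hedged sketch: logistic regression producing odds ratios and 95% CIs for
# trial factors vs. high BIPOC enrollment (toy data; names are assumptions).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 405
df = pd.DataFrame({
    "high_enrollment": rng.integers(0, 2, n),
    "frequent_intervention": rng.integers(0, 2, n),  # weekly/daily vs. less often
    "long_duration": rng.integers(0, 2, n),          # intervention > 6.5 months
    "type2_diabetes": rng.integers(0, 2, n),
})
fit = smf.logit("high_enrollment ~ frequent_intervention + long_duration + type2_diabetes",
                data=df).fit(disp=False)
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% confidence intervals
```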


Subjects
Type 2 Diabetes Mellitus, Ethnicity, Humans, Randomized Controlled Trials as Topic, Sample Size
5.
J Med Internet Res ; 24(9): e39910, 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36083626

ABSTRACT

BACKGROUND: Digital technologies are increasingly used in health research to collect real-world data from wider populations. A new wave of digital health studies relies primarily on digital technologies to conduct research entirely remotely. Remote digital health studies promise significant cost and time advantages over traditional, in-person studies. However, such studies typically suffer from participant attrition, the sources of which are still largely understudied. OBJECTIVE: To contribute to future remote digital health study planning, we present a conceptual framework and hypotheses for study enrollment and completion. The framework introduces 3 participation criteria that affect remote digital health study outcomes: (1) participant motivation profile and incentives or nudges, (2) participant task complexity, and (3) scientific requirements. The goal of this study is to inform the planning and implementation of remote digital health studies from a person-centered perspective. METHODS: We conducted a scoping review to collect information on participation in remote digital health studies, focusing on methodological aspects that affect participant enrollment and retention. Comprehensive searches were conducted in the PubMed, CINAHL, and Web of Science databases, and additional sources were included through citation searching. We included digital health studies that were conducted fully remotely, reported on at least one of the framework criteria during the recruitment, onboarding, or retention phases, and reported study enrollment or completion outcomes. Qualitative analyses were performed to synthesize the findings of the included studies. RESULTS: Across the 37 included studies, we found a high median enrollment relative to target sample size calculations (128%, IQR 100%-234%) and a median study completion rate of 48% (IQR 35%-76%). Higher median study completion was observed for studies that provided incentives or nudges to extrinsically motivated participants (62%, IQR 43%-78%). Reducing task complexity for participants in the absence of incentives or nudges did not improve median study enrollment (103%, IQR 102%-370%) or completion (43%, IQR 22%-60%) in observational studies, compared with interventional studies that provided more incentives or nudges (median study completion rate of 55%, IQR 38%-79%). Furthermore, measures of completion were inconsistent across the assessed studies: only around half of the studies with completion measures (14/27, 52%) based them on participant retention throughout the study period. CONCLUSIONS: Few studies reported participatory factors and study outcomes in a consistent manner, which may have limited the evidence base for our study. Our assessment may also have suffered from publication bias or unrepresentative study samples, given an observed preference for participants with digital literacy skills in digital health studies. Nevertheless, we find that future remote digital health study planning can benefit from targeting specific participant profiles, providing incentives and nudges, and reducing study complexity to improve study outcomes.


Subjects
Sample Size, Humans
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 541-544, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085959

ABSTRACT

In Radiomics, deep learning-based systems for medical image analysis play an increasing role. However, owing to their better explainability, feature-based systems are still preferred, especially by physicians. High-dimensional data combined with low sample sizes pose various challenges (e.g., an increased risk of overfitting) for machine learning systems. By removing irrelevant and redundant features from the data, feature selection is an effective form of pre-processing. This study focuses on unsupervised deep learning-based methods for feature selection. Five recently proposed algorithms are compared regarding their applicability and efficiency on seven data sets across three sample applications. Deep learning-based feature selection was found to lead to improved classification results compared with conventional methods, especially for small feature subsets. Clinical Relevance - The exploration of distinctive features, and the ability to rank their importance without the need for outcome information, is a potential field of application for unsupervised feature selection methods. Especially in multiparametric radiology, the number of features is increasing. The identification of new potential biomarkers is important both for treatment and prevention.
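One example of the general idea — an autoencoder-based feature ranking in the spirit of methods such as AEFS. This is an assumption-laden sketch, not one of the five algorithms compared in the paper:

```python
# Hedged sketch: rank input features by the column norms of a sparsity-
# regularised encoder layer, without using any outcome labels.
import torch
import torch.nn as nn

def rank_features(X, hidden=16, epochs=200, lr=1e-2, l1=1e-3):
    n, d = X.shape
    enc, dec = nn.Linear(d, hidden), nn.Linear(hidden, d)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((dec(torch.relu(enc(X))) - X) ** 2).mean()
        loss = loss + l1 * enc.weight.abs().sum()   # sparsity on encoder weights
        loss.backward()
        opt.step()
    scores = enc.weight.norm(dim=0)                 # one score per input feature
    return torch.argsort(scores, descending=True)

X = torch.randn(100, 50)                            # HDLSS-style toy data
print(rank_features(X)[:10])                        # ten highest-ranked features
```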


Subjects
Deep Learning, Algorithms, Machine Learning, Sample Size
7.
Trials ; 23(1): 699, 2022 Aug 20.
Article in English | MEDLINE | ID: mdl-35987698

ABSTRACT

BACKGROUND: The NOTACS trial will assess the efficacy, safety and cost-effectiveness of high-flow nasal therapy (HFNT) compared with standard oxygen therapy (SOT) on the outcomes of patients after cardiac surgery. METHODS/DESIGN: NOTACS is an adaptive, international, multicentre, parallel-group, randomised controlled trial with a pre-planned interim sample size re-estimation (SSR). A minimum of 850 patients will be randomised 1:1 to receive either HFNT or SOT. The primary outcome is days alive and at home in the first 90 days after the planned surgery (DAH90); a number of secondary analyses and cost-effectiveness analyses are also planned. The interim SSR will take place after a minimum of 300 patients have been followed up for 90 days and will allow the sample size to increase up to a maximum of 1152 patients. RESULTS: This manuscript provides detailed descriptions of the design of the NOTACS trial and of the analyses to be undertaken at the interim and final analyses. The main purpose of the interim analysis is to assess safety and to perform a sample size re-estimation. The main purpose of the final analysis is to examine the safety, efficacy and cost-effectiveness of HFNT compared with SOT. DISCUSSION: This manuscript outlines the key features of the NOTACS statistical analysis plan (SAP) and was submitted to the journal before the interim analysis in order to preserve scientific integrity under an adaptive design framework. The NOTACS SAP closely follows published guidelines for the content of SAPs in clinical trials. TRIAL REGISTRATION: ISRCTN14092678. Registered on 13 May 2020.
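For context, a minimal sketch of a standard two-sample size re-estimation driven by an interim SD estimate. The numbers are illustrative; the NOTACS SAP defines its own rules, outcome, and the 1152-patient cap:

```python
# Hedged sketch: recompute the per-arm sample size for a continuous outcome
# when the interim SD exceeds the design-stage assumption.
import math
from scipy.stats import norm

def n_per_arm(sd, delta, alpha=0.05, power=0.9):
    """Standard formula: n = 2 * (z_{1-a/2} + z_{power})^2 * sd^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

n_planned = n_per_arm(sd=10.0, delta=3.0)    # design-stage assumption
n_updated = n_per_arm(sd=12.5, delta=3.0)    # interim SD larger than assumed
print(n_planned, "->", min(n_updated, 576))  # respect a per-arm cap (1152 / 2)
```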


Subjects
Cardiac Surgical Procedures, Outcome Assessment (Health Care), Cardiac Surgical Procedures/adverse effects, Humans, Research Design, Sample Size, Standard of Care, Treatment Outcome
8.
Commun Biol ; 5(1): 806, 2022 08 11.
Article in English | MEDLINE | ID: mdl-35953715

ABSTRACT

Genome-wide association studies (GWAS) have made impactful discoveries for complex diseases, often by amassing very large sample sizes. Yet GWAS of many diseases remain underpowered, especially for non-European ancestries. One cost-effective approach to increasing sample size is to combine existing cohorts, which may have limited sample size or be case-only, with public controls, but this approach is limited by the need for a large overlap in variants across genotyping arrays and by the scarcity of non-European controls. We developed and validated a protocol, Genotyping Array-WGS Merge (GAWMerge), for combining genotypes from arrays and whole-genome sequencing, ensuring complete variant overlap and allowing diverse samples such as Trans-Omics for Precision Medicine to be used. Our protocol involves phasing, imputation, and filtering. We illustrated its ability to control technology-driven artifacts and type-I error, as well as to recover known disease-associated signals across technologies, independent datasets, and ancestries in smoking-related cohorts. GAWMerge enables genetic studies to leverage existing cohorts to validly increase sample size and enhance discovery for understudied traits and ancestries.


Subjects
Genome-Wide Association Study, Genome-Wide Association Study/methods, Genotype, Phenotype, Sample Size, Whole Genome Sequencing/methods
9.
PLoS One ; 17(8): e0271163, 2022.
Article in English | MEDLINE | ID: mdl-35976925

ABSTRACT

This paper presents a new approach to constructing the confidence interval for the mean of a population when the distribution is unknown and the sample size is small, called the Percentile Data Construction Method (PDCM). A simulation was conducted to compare the performance of the PDCM confidence interval with intervals generated by the Percentile Bootstrap (PB) and Normal Theory (NT) methods. Both coverage probability and average interval width were considered as criteria when seeking the best interval. The results show that the PDCM outperforms both the PB and NT methods when the sample size is less than 30 or the population variance is large.
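The two comparator intervals are standard and easy to sketch; the PDCM itself is defined in the paper and is not reproduced here:

```python
# Hedged sketch: percentile bootstrap vs. normal-theory (t) interval for the
# mean of a small, skewed sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.0, sigma=1.0, size=15)   # small, skewed sample

# Normal-theory (t) interval
m, se = x.mean(), stats.sem(x)
nt = m + np.array([-1, 1]) * stats.t.ppf(0.975, df=len(x) - 1) * se

# Percentile bootstrap interval
boot = rng.choice(x, size=(10000, len(x)), replace=True).mean(axis=1)
pb = np.percentile(boot, [2.5, 97.5])

print("NT:", np.round(nt, 3), "PB:", np.round(pb, 3))
```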


Subjects
Statistical Models, Research Design, Computer Simulation, Confidence Intervals, Probability, Sample Size
10.
BMC Med Res Methodol ; 22(1): 224, 2022 08 12.
Article in English | MEDLINE | ID: mdl-35962310

ABSTRACT

BACKGROUND: Meaningfully interpreting patient-reported outcome (PRO) results from randomized clinical trials requires that the PRO scores obtained in the trial have the same meaning across patients and across previous applications of the PRO instrument. Calibration of PRO instruments ensures this property. In the Rasch measurement theory (RMT) framework, calibration is performed by fixing the item parameter estimates when measuring the targeted concept for each individual in the trial. The item parameter estimates used for this purpose are typically obtained from a previous "calibration" study. But imposing this constraint on item parameters, instead of freely estimating them directly in the specific sample of the trial, may hamper the ability to detect a treatment effect. The objective of this simulation study was to explore the potential negative impact of calibration of PRO instruments developed using RMT on the comparison of results between treatment groups, using different analysis methods. METHODS: PRO results were simulated following a polytomous Rasch model, for a calibration sample and a trial sample. Scenarios included varying sample sizes, instruments with varying numbers of items and response categories, and varying item parameter distributions. Different treatment effect sizes and distributions of the two patient samples were also explored. Cross-sectional comparison of treatment groups was performed using different methods based on a random-effect Rasch model. Calibrated and non-calibrated approaches were compared based on type-I error, power, bias, and variance of the estimates of the difference between groups. RESULTS: The calibration approach had no impact on type-I error, power, bias, or dispersion of the estimates. Among other findings, mistargeting between the PRO instrument and patients from the trial sample (regarding the level of the measured concept) resulted in lower power and higher position bias than appropriate targeting. CONCLUSIONS: Calibration does not compromise the ability to accurately assess a treatment effect using a PRO instrument developed within the RMT paradigm in randomized clinical trials. Thus, given its essential role in producing interpretable results, calibration should always be performed when a PRO instrument developed using RMT is used as an endpoint in a randomized clinical trial.
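For reference, a generic form of the polytomous Rasch (partial credit) model that such simulations typically follow is sketched below; calibration amounts to fixing the item thresholds at their calibration-study estimates and re-estimating only the person parameters. The notation is generic, not the authors':

```latex
% Probability that person n (ability \theta_n) scores x on item i with
% thresholds \delta_{i1}, \dots, \delta_{im_i} (and \delta_{i0} \equiv 0):
P(X_{ni} = x \mid \theta_n)
  = \frac{\exp\!\left(\sum_{j=0}^{x} (\theta_n - \delta_{ij})\right)}
         {\sum_{k=0}^{m_i} \exp\!\left(\sum_{j=0}^{k} (\theta_n - \delta_{ij})\right)}
```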


Subjects
Patient Reported Outcome Measures, Bias, Calibration, Cross-Sectional Studies, Humans, Psychometrics/methods, Sample Size, Surveys and Questionnaires
11.
BMC Med Res Methodol ; 22(1): 228, 2022 08 15.
Article in English | MEDLINE | ID: mdl-35971069

ABSTRACT

BACKGROUND: Platform trials can evaluate the efficacy of several experimental treatments compared to a control. The number of experimental treatments is not fixed, as arms may be added or removed as the trial progresses. Platform trials are more efficient than independent parallel group trials because of using shared control groups. However, for a treatment entering the trial at a later time point, the control group is divided into concurrent controls, consisting of patients randomised to control when that treatment arm is in the platform, and non-concurrent controls, patients randomised before. Using non-concurrent controls in addition to concurrent controls can improve the trial's efficiency by increasing power and reducing the required sample size, but can introduce bias due to time trends. METHODS: We focus on a platform trial with two treatment arms and a common control arm. Assuming that the second treatment arm is added at a later time, we assess the robustness of recently proposed model-based approaches to adjust for time trends when utilizing non-concurrent controls. In particular, we consider approaches where time trends are modeled either as linear in time or as a step function, with steps at time points where treatments enter or leave the platform trial. For trials with continuous or binary outcomes, we investigate the type 1 error rate and power of testing the efficacy of the newly added arm, as well as the bias and root mean squared error of treatment effect estimates under a range of scenarios. In addition to scenarios where time trends are equal across arms, we investigate settings with different time trends or time trends that are not additive in the scale of the model. RESULTS: A step function model, fitted on data from all treatment arms, gives increased power while controlling the type 1 error, as long as the time trends are equal for the different arms and additive on the model scale. This holds even if the shape of the time trend deviates from a step function when patients are allocated to arms by block randomisation. However, if time trends differ between arms or are not additive to treatment effects in the scale of the model, the type 1 error rate may be inflated. CONCLUSIONS: The efficiency gained by using step function models to incorporate non-concurrent controls can outweigh potential risks of biases, especially in settings with small sample sizes. Such biases may arise if the model assumptions of equality and additivity of time trends are not satisfied. However, the specifics of the trial, scientific plausibility of different time trends, and robustness of results should be carefully considered.
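A minimal sketch of the step-function adjustment for a continuous outcome, assuming equal, additive time trends across arms; the simulated data and names are illustrative, not the authors' code:

```python
# Hedged sketch: regress the outcome on treatment indicators plus indicators
# for the calendar periods defined by arms entering, so non-concurrent
# controls contribute information through the period (step) terms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 600
period = np.repeat([1, 2], n // 2)               # arm 2 enters at period 2
arm = np.where(period == 1,
               rng.integers(0, 2, n),            # period 1: control or arm 1
               rng.integers(0, 3, n))            # period 2: control, arm 1 or 2
trend = 0.5 * (period == 2)                      # equal, additive time trend
y = 0.3 * (arm == 1) + 0.4 * (arm == 2) + trend + rng.normal(size=n)

df = pd.DataFrame({"y": y, "arm": arm.astype(str), "period": period.astype(str)})
fit = smf.ols("y ~ C(arm) + C(period)", data=df).fit()
print(fit.params[["C(arm)[T.1]", "C(arm)[T.2]"]])  # treatment effects vs. control
```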


Subjects
Sample Size, Bias, Humans
12.
JCO Precis Oncol ; 6: e2200046, 2022 08.
Article in English | MEDLINE | ID: mdl-36001859

ABSTRACT

PURPOSE: Through Bayesian inference, we propose a method called BayeSize as a reference tool for investigators to assess the sample size and its associated scientific properties for phase I clinical trials. METHODS: BayeSize applies the concept of effect size to dose finding, assuming that, because of statistical uncertainty, the maximum tolerated dose can be identified on the basis of an interval surrounding its true value. Leveraging a decision framework that involves composite hypotheses, BayeSize uses two types of priors, the fitting prior (for model fitting) and the sampling prior (for data generation), to conduct sample size calculation under constraints on statistical power and type I error. RESULTS: Simulation results showed that BayeSize can provide reliable sample size estimation under constraints on type I/II error rates. CONCLUSION: BayeSize could facilitate phase I trial planning by providing appropriate sample size estimation. Look-up tables and an R Shiny app are provided for practical applications.
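A generic toy version of the simulation logic behind such calculations — separate sampling and fitting priors in a Beta-Binomial model. The priors, interval, and decision threshold below are assumptions for illustration, not BayeSize's dose-finding machinery:

```python
# Hedged sketch: draw a true toxicity rate from the sampling prior, simulate a
# trial of size n, and check how often the posterior (under the fitting prior)
# correctly locates the rate inside an MTD interval around the 30% target.
import numpy as np
from scipy import stats

def prob_correct(n, interval=(0.2, 0.4), fit_prior=(1, 1), n_sim=4000, seed=0):
    rng = np.random.default_rng(seed)
    true_p = rng.uniform(0.1, 0.5, n_sim)            # sampling prior
    tox = rng.binomial(n, true_p)
    a, b = fit_prior[0] + tox, fit_prior[1] + n - tox
    post_in = (stats.beta.cdf(interval[1], a, b)     # P(p in interval | data)
               - stats.beta.cdf(interval[0], a, b))
    decide_in = post_in > 0.6                        # illustrative threshold
    truly_in = (true_p > interval[0]) & (true_p < interval[1])
    return np.mean(decide_in == truly_in)

for n in (15, 30, 60):                               # correctness grows with n
    print(n, round(prob_correct(n), 3))
```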


Subjects
Phase I Clinical Trials as Topic, Research Design, Bayes Theorem, Humans, Maximum Tolerated Dose, Sample Size
13.
JAMA Netw Open ; 5(8): e2228776, 2022 08 01.
Article in English | MEDLINE | ID: mdl-36006641

ABSTRACT

Importance: Small study effects are the phenomenon whereby studies with smaller sample sizes tend to report larger and more favorable effect estimates than studies with larger sample sizes. Objective: To evaluate the presence and extent of small study effects in diagnostic imaging accuracy meta-analyses. Data Sources: A search was conducted in the PubMed database for diagnostic imaging accuracy meta-analyses published between 2010 and 2019. Study Selection: Meta-analyses with 10 or more studies of medical imaging diagnostic accuracy, assessing a single imaging modality, and providing 2 × 2 contingency data were included. Studies that did not assess the diagnostic accuracy of medical imaging techniques, compared 2 or more imaging modalities or different methods of 1 imaging modality, were cost analyses, used predictive or prognostic tests, did not provide individual patient data, or were network meta-analyses were excluded. Data Extraction and Synthesis: Data extraction was performed in accordance with the PRISMA guidelines. Main Outcomes and Measures: The diagnostic odds ratio (DOR) was calculated for each primary study using the 2 × 2 contingency data. Regression analysis was used to examine the association between effect size estimate and precision across meta-analyses. Results: A total of 31 meta-analyses involving 668 primary studies and 80 206 patients were included. Fixed effects analysis produced a regression coefficient for the natural log of DOR against the SE of the natural log of DOR of 2.19 (95% CI, 1.49-2.90; P < .001), with computed tomography as the reference modality. An interaction test showed that this association did not depend on modality (Wald statistic P = .50). Taken together, this analysis found an inverse association between effect size estimate and precision that was independent of imaging modality. Of 26 meta-analyses that formally assessed publication bias using funnel plots and statistical tests for funnel plot asymmetry, 21 found no evidence of such bias. Conclusions and Relevance: This meta-analysis found evidence that small study effects are widespread in the diagnostic imaging accuracy literature. One likely contributor to the observed effects is publication bias, which can undermine the results of many meta-analyses. The conventional funnel plot asymmetry methods applied by the included studies appeared to underestimate the presence of small study effects. Further studies are required to elucidate the various factors that contribute to small study effects.
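The core regression is an Egger-type test on the log diagnostic odds ratio; a sketch with hypothetical 2 × 2 counts:

```python
# Hedged sketch: compute each study's log DOR and its SE from 2x2 counts
# (0.5 continuity correction), then regress lnDOR on SE(lnDOR).
import numpy as np
import statsmodels.api as sm

# columns: TP, FP, FN, TN for a handful of hypothetical primary studies
counts = np.array([[40, 5, 10, 45], [15, 3, 4, 18], [90, 12, 20, 100],
                   [8, 1, 2, 9], [60, 9, 14, 70]]) + 0.5
tp, fp, fn, tn = counts.T
ln_dor = np.log(tp * tn / (fp * fn))
se = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)

fit = sm.WLS(ln_dor, sm.add_constant(se), weights=1 / se**2).fit()
print(fit.params)   # a positive slope suggests smaller studies report larger DORs
```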


Subjects
X-Ray Computed Tomography, Bias, Humans, Odds Ratio, Publication Bias, Sample Size
14.
Emerg Top Life Sci ; 6(3): 311-322, 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-35994000

ABSTRACT

Developmental instability (DI) is an individual's inability to produce a specific developmental outcome under a given set of conditions, generally thought to result from random perturbations experienced during development. Fluctuating asymmetry (FA) - asymmetry of bilateral features that, on average, are symmetrical (or asymmetry deviating from that arising from design) - has been used to measure DI. Dating back half a century, and accelerating in the past three decades, psychological researchers have examined associations between FA (typically measured on bodily or facial features) and a host of outcomes of interest, including psychological disorders, cognitive ability, attractiveness, and sexual behavior. A decade ago, a meta-analysis of findings from nearly 100 studies extracted several conclusions. On average, small but statistically reliable associations between FA and traits of interest exist. Though modest, these associations are expected to greatly underestimate the strength of associations with underlying DI. Despite the massive sample size across studies, we still lack a good handle on which traits are most strongly affected by DI. A major methodological implication of the meta-analysis is that most studies have been, individually, woefully underpowered to detect associations. Though offering some intriguing findings, much research in the past decade, too, has been underpowered; hence, the newer literature is also likely noisy. Several large-scale studies are exceptions. Future progress depends on additional large-scale studies and researchers' sensitivity to power issues. As well, the theoretical assumptions and conceptualizations of DI and FA driving psychological research may need revision to explain empirical patterns.


Subjects
Sexual Behavior, Humans, Phenotype, Sample Size
15.
Harmful Algae ; 117: 102273, 2022 08.
Article in English | MEDLINE | ID: mdl-35944960

ABSTRACT

Machine learning, deep learning, and water quality data have been used in recent years to predict outbreaks of harmful algae, especially Microcystis, and to analyze outbreak causes. However, for various reasons, water quality data are often high-dimension, low-sample-size (HDLSS), meaning the sample size is smaller than the number of dimensions. Moreover, imbalance problems may arise due to bias in the occurrence frequency of Microcystis. These problems make predicting the occurrence of Microcystis and analyzing its causes with machine learning difficult. In this study, a machine learning model that applies Feature Engineering (FE) and Feature Selection (FS) algorithms is used to predict outbreaks of Microcystis and analyze the outbreak factors from imbalanced HDLSS water quality data. The prediction performance was verified with binary classification of whether Microcystis would occur in the future, applying three machine learning models to four data patterns. The cause analysis of Microcystis occurrence was performed by visualizing the results of applying FE and FS. On the test data, the predictive performance of the FE and FS methods was significantly better than that of the conventional method, with accuracy 0.108 points and F-measure 0.691 points higher. An increase in prediction performance was observed with a smaller model capacity. Data-driven analysis suggested that total nitrogen, chemical oxygen demand, chlorophyll-a, dissolved oxygen saturation, and water temperature are associated with Microcystis occurrences. The results also indicated that basic statistics of the water quality distribution over a year (especially the mean, standard deviation, and skewness), rather than the concentrations of water components, are related to the occurrence of Microcystis. These are new findings not reported in previous studies and are expected to contribute significantly to future studies of algae. This study provides a method for analyzing water quality data with high dimensionality and small sample size, imbalance problems, or both.
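A hedged sketch of the general recipe — feature selection plus an imbalance-aware classifier on HDLSS-style toy data. The paper's specific FE/FS algorithms and water quality features are not reproduced here:

```python
# Hedged sketch: select k features, then fit a class-weighted classifier,
# evaluated by cross-validated F1 on imbalanced, n << p toy data.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))                 # n << p (HDLSS)
y = (rng.random(60) < 0.2).astype(int)         # imbalanced occurrence label
X[:, :5] += 1.5 * y[:, None]                   # five informative features

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    LogisticRegression(class_weight="balanced", max_iter=1000),
)
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```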


Subjects
Microcystis, Chlorophyll A, Machine Learning, Sample Size, Water Quality
16.
Medicine (Baltimore) ; 101(33): e29959, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-35984206

ABSTRACT

INTRODUCTION: Successful recruitment of participants into clinical research has always been challenging and is affected by many factors. This systematic review aimed to explore the perceptions and attitudes towards, and to identify the factors affecting, participation in clinical research among the populations of Eastern Mediterranean Regional Office countries. METHODS: A systematic search of the literature was conducted to explore the attitudes or perceptions of the general public or patients towards participation in clinical research. PubMed, ProQuest Central, the World Health Organization's Index Medicus for the Eastern Mediterranean Region, and Google Scholar were searched. Studies were considered eligible for inclusion if they presented primary data and were conducted in one of the Eastern Mediterranean Regional Office countries. A data extraction sheet was used to record the following: year, country, aim, population, sample size, study design, data collection, and setting. The factors identified in the included studies were categorized into motivators and barriers. RESULTS: In total, 23 original research articles addressing perceptions or attitudes towards clinical research participation were identified. Six main motivators and six main barriers to research participation among patients, the general public, and patient family members were identified. The most commonly cited motivators included personal benefit to the individual, altruism and the desire to help others, the research process, the influence of the physician, family encouragement, and religion. The most commonly cited barriers were concerns regarding safety, confidentiality, and other aspects of the research process; lack of trust in healthcare providers or the healthcare system; lack of interest in research and no perceived personal benefit; religious concerns; and family or cultural concerns. CONCLUSION: The identified motivators and barriers are essential to address during clinical research planning among the populations of Eastern Mediterranean Regional Office countries. Further research is needed to assess the attitudes and perceptions of individuals approached to participate in trials.


Subjects
Health Knowledge, Attitudes and Practice, Physicians, Humans, Religion, Sample Size, Trust
17.
Epidemiology ; 33(5): 707-714, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35944152

ABSTRACT

A valid study design is essential when assessing the safety of drugs based on observational data. The comparator group is a key element of the design and can greatly influence the results. The active comparator new user design is a go-to design in observational drug safety research, in which a target trial of initiation of a study drug versus usual care is emulated. Comparison with another treatment that targets patients similar to those receiving the study drug, and that has no effect on the outcome, has great potential to reduce bias. However, the active comparator new user design can be difficult to implement because no suitable comparator drug is available or because it requires extensive exclusion of study drug initiators. In this analysis, we evaluated alternative study designs that can be used in drug safety assessments when the active comparator new user design is not optimal. Using target trial emulation as a common framework, we defined and evaluated the following designs: traditional no use, no-use episodes, active comparator new user, prevalent new user, generalized prevalent new user, and hierarchical prevalent new user. We showed that all designs can be implemented using sequential cohorts and simply altering the patient selection criteria, i.e., identifying increasingly restrictive cohorts. In this way, all designs are nested in each other, and the differences between them can be demonstrated clearly. We concluded that many study-specific factors need to be considered when choosing a design, including indication, available comparator drugs, treatment patterns, potential effect modification, and sample size.
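The "nested cohorts via selection criteria" idea can be sketched with a toy cohort table; the column names and the two-step nesting below are illustrative assumptions, not the paper's full hierarchy of six designs:

```python
# Hedged sketch: each design keeps an increasingly restrictive subset of the
# study-drug initiators, so the designs are nested in one another.
import pandas as pd

initiators = pd.DataFrame({
    "id": range(6),
    "prior_study_drug": [0, 0, 1, 0, 1, 0],   # any past use of the study drug
    "prior_comparator": [0, 1, 0, 0, 1, 1],   # any past use of the comparator
})

new_users = initiators[initiators.prior_study_drug == 0]          # less restrictive
active_comparator_new_users = new_users[new_users.prior_comparator == 0]  # strictest
print(len(initiators), len(new_users), len(active_comparator_new_users))
```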


Subjects
Research Design, Bias, Humans, Sample Size
19.
BMC Med Res Methodol ; 22(1): 235, 2022 08 31.
Article in English | MEDLINE | ID: mdl-36045338

ABSTRACT

BACKGROUND: A classic methodology for evaluating the impact of health policy interventions is interrupted time-series (ITS) analysis, applying a quasi-experimental design that uses both pre- and post-policy data without randomization. In this paper, we took a simulation-based approach to estimating intervention effects under different assumptions. METHODS: Each simulated mortality rate series contained a linear time trend, seasonality, and autoregressive and moving-average terms. The simulations of the policy effects involved three scenarios: 1) immediate level change only, 2) immediate level and slope change, and 3) lagged level and slope change. The estimated effects and their biases were examined via three matched generalized additive mixed models, each of which used two different approaches: 1) effects based on estimated coefficients (estimated approach), and 2) effects based on predictions from the models (predicted approach). The robustness of these two approaches was further investigated under misspecification of the models. RESULTS: When one simulated dataset was analyzed with the matched model, the two analytical approaches produced similar estimates. However, when the models were misspecified, the numbers of deaths prevented estimated by the predicted and estimated approaches were very different, with the predicted approach yielding estimates closer to the real effect. The discrepancy was larger when the policy was applied early in the time series. CONCLUSION: Even when the sample size appears to be large enough, one should still be cautious when conducting ITS analyses, since power also depends on where in the series the intervention occurs. In addition, lagged intervention effects need to be fully considered at the study design stage (i.e., when developing the models).
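A simplified stand-in for the paper's generalized additive mixed models — the classic segmented ITS regression fitted to simulated monthly rates with a level and slope change:

```python
# Hedged sketch: simulate an ITS series with seasonality and a policy effect
# at month 72, then fit y ~ time + level-change + slope-change + season.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
t = np.arange(120)                                   # 10 years of months
policy = (t >= 72).astype(int)                       # intervention at month 72
season = np.sin(2 * np.pi * t / 12)
y = (50 - 0.05 * t - 3 * policy - 0.1 * policy * (t - 72)
     + 2 * season + rng.normal(scale=1.0, size=t.size))

df = pd.DataFrame({"y": y, "t": t, "policy": policy,
                   "t_after": policy * (t - 72), "season": season})
fit = smf.ols("y ~ t + policy + t_after + season", data=df).fit()
print(fit.params[["policy", "t_after"]])             # level and slope change
```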


Subjects
Health Policy, Research Design, Computer Simulation, Humans, Interrupted Time Series Analysis, Sample Size
20.
Cad Saude Publica ; 38Suppl 1(Suppl 1): e00164321, 2022.
Article in English | MEDLINE | ID: mdl-35857956

ABSTRACT

Our objective is to describe the differences between the sampling plans of the two editions of the Brazilian National Health Survey (PNS 2013 and 2019) and to evaluate how the changes affected the coefficient of variation (CV) and the design effect (Deff) of selected estimated indicators. Variables from different parts of the questionnaire were analyzed so as to cover proportions of different magnitudes. The prevalence of obesity was included in the analysis because anthropometric measurement in the 2019 survey was performed on a subsample. The point estimate, CV, and Deff were calculated for each indicator, taking into account the stratification of the primary sampling units, the weighting of the sampling units, and the clustering effect. The CV and Deff were lower in the 2019 estimates for most indicators. For the indicators collected for all household members, the Deffs were high, reaching values greater than 18 for having a health insurance plan. For the indicators from the individual questionnaire, the Deff for the prevalence of obesity ranged from 2.7 to 4.2 in 2013 and from 2.7 to 10.2 in 2019. The prevalences of hypertension and diabetes per Federative Unit had higher CVs and lower Deffs. Expanding the sample size to meet diverse health objectives and the high Deff values are significant challenges for developing probabilistic household-based national surveys. New probabilistic sampling strategies should be considered to reduce costs and clustering effects.
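For reference, the two quantities compared across editions are straightforward to compute once a design-based SE is available; a sketch with illustrative numbers, not PNS estimates:

```python
# Hedged sketch: design effect (design variance over SRS variance) and
# coefficient of variation for an estimated proportion.
import math

def deff(var_design, p, n):
    var_srs = p * (1 - p) / n        # SRS variance of a proportion
    return var_design / var_srs

def cv(se, estimate):
    return se / estimate

p_hat, n = 0.26, 90000               # illustrative prevalence and sample size
se_design = 0.004                    # SE accounting for strata/clusters/weights
print(round(deff(se_design**2, p_hat, n), 1), round(cv(se_design, p_hat), 3))
```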


Subjects
Obesity, Brazil/epidemiology, Cluster Analysis, Health Surveys, Humans, Obesity/epidemiology, Sample Size