Results 1 - 20 of 192
1.
Am J Epidemiol ; 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38918039

ABSTRACT

There is a dearth of safety data on maternal outcomes after perinatal medication exposure. Data-mining for unexpected adverse event occurrence in existing datasets is a potentially useful approach. One method, the Poisson tree-based scan statistic (TBSS), assumes that the expected outcome counts, based on incidence of outcomes in the control group, are estimated without error. This assumption may be difficult to satisfy with a small control group. Our simulation study evaluated the effect of imprecise incidence proportions from the control group on TBSS' ability to identify maternal outcomes in pregnancy research. We simulated base case analyses with "true" expected incidence proportions and compared these to imprecise incidence proportions derived from sparse control samples. We varied parameters impacting Type I error and statistical power (exposure group size, outcome's incidence proportion, and effect size). We found that imprecise incidence proportions generated by a small control group resulted in inaccurate alerting, inflation of Type I error, and removal of very rare outcomes for TBSS analysis due to "zero" background counts. Ideally, the control size should be at least several times larger than the exposure size to limit the number of false positive alerts and retain statistical power for true alerts.
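
For intuition about the mechanism described above, the sketch below simulates a single-outcome Poisson alerting rule (an analogue of one TBSS cut, not the full tree-based scan statistic; the function name and parameter values are illustrative). Estimating the expected count from a small control group inflates the false-alert rate under the null and occasionally drops rare outcomes entirely due to "zero" background counts:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def alert_rate(n_exposed, n_control, incidence, n_sim=5000, alpha=0.05):
    """Fraction of null simulations (no true effect) that alert when the
    expected count is estimated from a finite control group."""
    alerts, dropped = 0, 0
    for _ in range(n_sim):
        p_hat = rng.binomial(n_control, incidence) / n_control
        expected = n_exposed * p_hat
        if expected == 0:              # "zero" background count: outcome removed
            dropped += 1
            continue
        observed = rng.binomial(n_exposed, incidence)
        if poisson.sf(observed - 1, expected) < alpha:   # P(X >= observed)
            alerts += 1
    return alerts / n_sim, dropped / n_sim

# small vs. several-times-larger control group for a rare outcome
print(alert_rate(n_exposed=2000, n_control=2000, incidence=0.002))
print(alert_rate(n_exposed=2000, n_control=20000, incidence=0.002))
```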

2.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34545927

ABSTRACT

Quantitative trait locus (QTL) analyses of multiomic molecular traits, such as gene transcription (eQTL), DNA methylation (mQTL) and histone modification (haQTL), have been widely used to infer the functional effects of genome variants. However, QTL discovery is largely restricted by limited study sample sizes, which demand a higher minor allele frequency threshold and therefore leave many molecular trait-variant associations missing. This problem is especially prominent in single-cell molecular QTL studies because of sample availability and cost, so a method is needed to enhance the discoveries of current small-sample molecular QTL studies. In this study, we present an efficient computational framework called xQTLImp to impute missing molecular QTL associations. In local-region imputation, xQTLImp uses a multivariate Gaussian model to impute missing associations by leveraging known association statistics of nearby variants and the surrounding linkage disequilibrium (LD). In genome-wide imputation, novel procedures are implemented to improve efficiency, including dynamically constructing a reusable LD buffer, adopting multiple heuristic strategies, and parallel computing. Experiments on various multiomic bulk and single-cell sequencing-based QTL datasets demonstrate the high imputation accuracy and novel QTL discovery ability of xQTLImp. Finally, a C++ software package is freely available at https://github.com/stormlovetao/QTLIMP.
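
As an illustration of the local-region step, the sketch below implements the standard conditional multivariate Gaussian imputation of association z-scores from LD (assuming z-scores jointly follow N(0, LD)); the variable names and the ridge term are assumptions for illustration, not xQTLImp's exact implementation:

```python
import numpy as np

def impute_z(z_typed, ld_tt, ld_ut, ridge=0.1):
    """Impute z-scores of untyped variants in a local region.

    Under z ~ N(0, LD): E[z_untyped | z_typed] = LD_ut @ inv(LD_tt) @ z_typed.
    ld_tt: LD among typed variants; ld_ut: LD of untyped vs. typed variants.
    """
    k = ld_tt.shape[0]
    weights = ld_ut @ np.linalg.inv(ld_tt + ridge * np.eye(k))  # ridge stabilizes
    z_imputed = weights @ z_typed
    r2 = np.einsum("ij,ij->i", weights, ld_ut)  # imputation quality per variant
    return z_imputed, r2
```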


Subjects
Genome-Wide Association Study, Quantitative Trait Loci, Genome-Wide Association Study/methods, Genotype, Linkage Disequilibrium, Phenotype, Single Nucleotide Polymorphism, Sample Size
3.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38386359

ABSTRACT

In clinical studies of chronic diseases, the effectiveness of an intervention is often assessed using "high cost" outcomes that require long-term patient follow-up and/or are invasive to obtain. While much progress has been made in the development of statistical methods to identify surrogate markers, that is, measurements that could replace such costly outcomes, they are generally not applicable to studies with a small sample size. These methods either rely on nonparametric smoothing which requires a relatively large sample size or rely on strict model assumptions that are unlikely to hold in practice and empirically difficult to verify with a small sample size. In this paper, we develop a novel rank-based nonparametric approach to evaluate a surrogate marker in a small sample size setting. The method developed in this paper is motivated by a small study of children with nonalcoholic fatty liver disease (NAFLD), a diagnosis for a range of liver conditions in individuals without significant history of alcohol intake. Specifically, we examine whether change in alanine aminotransferase (ALT; measured in blood) is a surrogate marker for change in NAFLD activity score (obtained by biopsy) in a trial, which compared Vitamin E ($n=50$) versus placebo ($n=46$) among children with NAFLD.


Subjects
Non-alcoholic Fatty Liver Disease, Child, Humans, Non-alcoholic Fatty Liver Disease/diagnosis, Biomarkers, Biopsy, Sample Size
4.
Clin Trials ; : 17407745241276137, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39377196

ABSTRACT

BACKGROUND/AIMS: Stepped-wedge cluster randomized trials tend to require fewer clusters than standard parallel-arm designs due to the switches between control and intervention conditions, but there are no recommendations for the minimum number of clusters. Trials randomizing an extremely small number of clusters are not uncommon, but the justification for small numbers of clusters is often unclear and appropriate analysis is often lacking. In addition, stepped-wedge cluster randomized trials are methodologically more complex due to their longitudinal correlation structure, and ignoring the distinct within- and between-period intracluster correlations can underestimate the sample size in small stepped-wedge cluster randomized trials. We conducted a review of published small stepped-wedge cluster randomized trials to understand how and why they are used, and to characterize approaches used in their design and analysis. METHODS: Electronic searches were used to identify primary reports of full-scale stepped-wedge cluster randomized trials published during the period 2016-2022; the subset that randomized two to six clusters was identified. Two reviewers independently extracted information from each report and any available protocol. Disagreements were resolved through discussion. RESULTS: We identified 61 stepped-wedge cluster randomized trials that randomized two to six clusters: median sample size (Q1-Q3) 1426 (420-7553) participants. Twelve (19.7%) gave some indication that the evaluation was considered preliminary, and 16 (26.2%) recognized the small number of clusters as a limitation. Sixteen (26.2%) provided an explanation for the limited number of clusters: the need to minimize contamination (e.g., by merging adjacent units), limited availability of clusters, and logistical considerations were common explanations. The majority (51, 83.6%) presented sample size or power calculations, but only one assumed distinct within- and between-period intracluster correlations. Few (10, 16.4%) utilized restricted randomization methods; more than half (34, 55.7%) identified baseline imbalances. The most common statistical method for analysis was the generalized linear mixed model (44, 72.1%). Only four trials (6.6%) reported statistical analyses considering small numbers of clusters: one used generalized estimating equations with a small-sample correction, two used generalized linear mixed models with small-sample corrections, and one used a Bayesian analysis. Another eight (13.1%) used fixed-effects regression, whose performance in stepped-wedge cluster randomized trials with small numbers of clusters requires further evaluation. None used permutation tests or cluster-period-level analysis. CONCLUSION: Methods appropriate for the design and analysis of small stepped-wedge cluster randomized trials have not been widely adopted in practice. Greater awareness is required that the use of standard sample size calculation methods can produce spuriously low numbers of required clusters. Methods such as generalized estimating equations or generalized linear mixed models with small-sample corrections, Bayesian approaches, and permutation tests may be more appropriate for the analysis of small stepped-wedge cluster randomized trials. Future research is needed to establish best practices for stepped-wedge cluster randomized trials with a small number of clusters.
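
Since none of the reviewed trials used permutation tests or cluster-period-level analyses, a minimal sketch of such an analysis is given below; the data layout (a matrix of cluster-period means and each cluster's crossover period) and the period-adjusted difference statistic are illustrative assumptions:

```python
import numpy as np

def sw_permutation_test(y, seqs, n_perm=5000, seed=0):
    """Cluster-period-level permutation test for a stepped-wedge trial.

    y:    (n_clusters, n_periods) array of cluster-period mean outcomes
    seqs: crossover period per cluster (on intervention from that period on)
    """
    rng = np.random.default_rng(seed)
    n_c, n_t = y.shape

    def stat(order):
        treated = np.arange(n_t)[None, :] >= seqs[order][:, None]
        diffs = []  # period-wise intervention-vs-control differences
        for t in range(n_t):
            on, off = y[treated[:, t], t], y[~treated[:, t], t]
            if len(on) and len(off):
                diffs.append(on.mean() - off.mean())
        return np.mean(diffs)

    obs = stat(np.arange(n_c))
    perm = np.array([stat(rng.permutation(n_c)) for _ in range(n_perm)])
    return obs, np.mean(np.abs(perm) >= abs(obs))  # two-sided p-value
```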

5.
Clin Trials ; 21(2): 199-210, 2024 04.
Article in English | MEDLINE | ID: mdl-37990575

ABSTRACT

BACKGROUND/AIMS: The stepped-wedge cluster randomized trial (SW-CRT), in which clusters are randomized to a time at which they will transition to the intervention condition - rather than a trial arm - is a relatively new design. SW-CRTs have additional design and analytical considerations compared to conventional parallel arm trials. To inform future methodological development, including guidance for trialists and the selection of parameters for statistical simulation studies, we conducted a review of recently published SW-CRTs. Specific objectives were to describe (1) the types of designs used in practice, (2) adherence to key requirements for statistical analysis, and (3) practices around covariate adjustment. We also examined changes in adherence over time and by journal impact factor. METHODS: We used electronic searches to identify primary reports of SW-CRTs published 2016-2022. Two reviewers extracted information from each trial report and its protocol, if available, and resolved disagreements through discussion. RESULTS: We identified 160 eligible trials, randomizing a median (Q1-Q3) of 11 (8-18) clusters to 5 (4-7) sequences. The majority (122, 76%) were cross-sectional (almost all with continuous recruitment), 23 (14%) were closed cohorts and 15 (9%) open cohorts. Many trials had complex design features such as multiple or multivariate primary outcomes (50, 31%) or time-dependent repeated measures (27, 22%). The most common type of primary outcome was binary (51%); continuous outcomes were less common (26%). The most frequently used method of analysis was a generalized linear mixed model (112, 70%); generalized estimating equations were used less frequently (12, 8%). Among 142 trials with fewer than 40 clusters, only 9 (6%) reported using methods appropriate for a small number of clusters. Statistical analyses clearly adjusted for time effects in 119 (74%), for within-cluster correlations in 132 (83%), and for distinct between-period correlations in 13 (8%). Covariates were included in the primary analysis of the primary outcome in 82 (51%) and were most often individual-level covariates; however, clear and complete pre-specification of covariates was uncommon. Adherence to some key methodological requirements (adjusting for time effects, accounting for within-period correlation) was higher among trials published in higher versus lower impact factor journals. Substantial improvements over time were not observed although a slight improvement was observed in the proportion accounting for a distinct between-period correlation. CONCLUSIONS: Future methods development should prioritize methods for SW-CRTs with binary or time-to-event outcomes, small numbers of clusters, continuous recruitment designs, multivariate outcomes, or time-dependent repeated measures. Trialists, journal editors, and peer reviewers should be aware that SW-CRTs have additional methodological requirements over parallel arm designs including the need to account for period effects as well as complex intracluster correlations.
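
A minimal sketch of the most commonly reported analysis (a linear mixed model with fixed period effects and a random cluster intercept) is shown below with statsmodels on toy data; the column names and effect sizes are assumptions, and the commented line indicates one way to allow a distinct between-period correlation via random cluster-period effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# toy stepped-wedge data: 12 clusters in 4 sequences over 5 periods
rows = []
for c in range(12):
    u_c = rng.normal(0, 0.5)          # cluster random effect
    crossover = 1 + c // 3            # period at which this cluster switches
    for t in range(5):
        for _ in range(20):           # 20 participants per cluster-period
            treated = int(t >= crossover)
            y = 0.3 * treated + 0.2 * t + u_c + rng.normal(0, 1)
            rows.append({"cluster": c, "period": t, "treated": treated, "outcome": y})
df = pd.DataFrame(rows)

model = smf.mixedlm(
    "outcome ~ treated + C(period)",   # fixed time (period) effects
    df,
    groups="cluster",                  # random intercept: within-cluster correlation
    # vc_formula={"cp": "0 + C(period)"} would add random cluster-period effects,
    # giving distinct within- and between-period intracluster correlations
)
fit = model.fit(reml=True)
print(fit.params["treated"])
```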


Subjects
Research Design, Humans, Cluster Analysis, Randomized Controlled Trials as Topic, Computer Simulation, Linear Models, Sample Size
6.
Clin Trials ; 21(3): 350-357, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38618916

ABSTRACT

In the last few years, numerous novel designs have been proposed to improve the efficiency and accuracy of phase I trials in identifying the maximum-tolerated dose (MTD) or the optimal biological dose (OBD) for noncytotoxic agents. However, the conventional 3+3 approach, known for its simplicity but poor performance, continues to be an attractive choice for many trials despite these alternatives. This article seeks to underscore the importance of moving beyond the 3+3 design by highlighting a different key element of trial design: the estimation of sample size and its crucial role in predicting toxicity and determining the MTD. We use simulation studies to compare the performance of the most widely used phase I approaches - the 3+3, Continual Reassessment Method (CRM), Keyboard, and Bayesian Optimal Interval (BOIN) designs - on three key operating characteristics: the percentage of correct selection of the true MTD, the average number of patients allocated per dose level, and the average total sample size. The simulation results consistently show that the 3+3 algorithm underperforms model-based and model-assisted designs across all scenarios and metrics. The 3+3 method yields substantially lower (up to three times) probabilities of identifying the correct MTD, often selecting doses one or even two levels below the actual MTD. The 3+3 design allocates significantly fewer patients at the true MTD, assigns higher numbers to lower dose levels, and rarely explores doses above the target dose-limiting toxicity (DLT) rate. The overall performance of the 3+3 method is suboptimal, with a high level of unexplained uncertainty and significant implications for accurately determining the MTD. While the primary focus of the article is to demonstrate the limitations of the 3+3 algorithm, the question remains which alternative approach is preferable. The intention is not to definitively recommend one model-based or model-assisted method over the others, as their performance can vary with parameters and model specifications. However, the results presented indicate that the CRM, Keyboard, and BOIN designs consistently outperform the 3+3 and offer improved efficiency and precision in determining the MTD, which is crucial in early-phase clinical trials.
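
To make the comparison concrete, here is a sketch of one common variant of the 3+3 rules (implementations differ slightly in their stopping conventions, and the toxicity scenario is purely illustrative); simulating it shows how often the algorithm stops below the true MTD:

```python
import numpy as np

def three_plus_three(true_tox, rng):
    """One trial under a simplified 3+3 variant; returns (selected dose
    index, total sample size), with -1 meaning no tolerable dose found."""
    d, n_total = 0, 0
    while True:
        dlt = rng.binomial(3, true_tox[d]); n_total += 3
        if dlt == 1:                       # expand cohort to 6 at this dose
            dlt += rng.binomial(3, true_tox[d]); n_total += 3
        if dlt >= 2:
            return d - 1, n_total          # too toxic: MTD is the dose below
        if d == len(true_tox) - 1:
            return d, n_total              # highest dose reached
        d += 1                             # 0/3 or <=1/6 DLTs: escalate

rng = np.random.default_rng(7)
true_tox = [0.05, 0.12, 0.25, 0.40, 0.55]  # true MTD at index 2 (~25% DLT)
sims = [three_plus_three(true_tox, rng) for _ in range(10_000)]
picks = np.array([s[0] for s in sims])
for d in range(-1, len(true_tox)):
    print(f"dose {d}: selected {np.mean(picks == d):.1%}")
print("average total N:", np.mean([s[1] for s in sims]))
```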


Subjects
Algorithms, Bayes Theorem, Clinical Trials Phase I as Topic, Computer Simulation, Dose-Response Relationship (Drug), Maximum Tolerated Dose, Research Design, Humans, Sample Size, Clinical Trials Phase I as Topic/methods, Statistical Models
7.
J Biopharm Stat ; : 1-19, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38695298

ABSTRACT

In drug development for rare diseases, the number of treated subjects in a clinical trial is often very small, whereas the number of external controls can be relatively large. There is no clear guidance on choosing an appropriate statistical method to control baseline confounding in this situation. To fill this gap, we conducted extensive simulations to evaluate the performance of commonly used matching and weighting methods, as well as the more recently developed targeted maximum likelihood estimation (TMLE) and cardinality matching, in small-sample settings mimicking the motivating data from a pediatric rare disease. Among the methods examined, coarsened exact matching (CEM) and TMLE are relatively robust under various model specifications. CEM is only feasible when the number of controls far exceeds the number of treated subjects, whereas TMLE performs better with less extreme treatment allocation ratios. Our simulations suggest the bootstrap is useful for variance estimation in small samples after matching.
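
For intuition, a bare-bones version of coarsened exact matching is sketched below (the column names, binary `treated` indicator, and quantile binning are illustrative assumptions; a real analysis would use a dedicated implementation, plus the bootstrap for variance estimation after matching, as suggested above):

```python
import pandas as pd

def coarsened_exact_match(df, covariates, treat_col="treated", bins=4):
    """Crude CEM: coarsen each covariate into quantile bins, then keep
    only strata that contain both treated and control subjects."""
    g = df.copy()
    keys = []
    for c in covariates:
        g[c + "_bin"] = pd.qcut(g[c], bins, labels=False, duplicates="drop")
        keys.append(c + "_bin")
    has_both = g.groupby(keys)[treat_col].transform(lambda s: s.nunique() == 2)
    return g[has_both]

# matched = coarsened_exact_match(df, ["age", "baseline_severity"])
# note: feasible mainly when controls far outnumber the treated, as found above
```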

8.
Sensors (Basel) ; 24(17)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39275611

ABSTRACT

Rolling-bearing fault diagnosis suffers from a lack of fault data, and diagnosis based on traditional convolutional neural networks loses accuracy when trained on such limited samples. In this paper, an adaptive residual shrinkage network model is combined with transfer learning to address these problems. The model is trained on the Case Western Reserve dataset, and the trained model is then transferred to a scaled-down small-sample dataset and to the Jiangnan University bearing dataset for experiments. The experimental results show that the proposed method can learn efficiently from small-sample datasets, improving the accuracy of bearing fault diagnosis under variable loads and variable speeds. An adaptive parameter-rectified linear unit is utilized to adapt the nonlinear transformation. Because noise is inevitable when rolling bearings are in operation, soft thresholding and an attention mechanism are added to the model, which can effectively process vibration signals with strong noise. Real-world noise is simulated by adding Gaussian white noise in transfer-task experiments on small-sample datasets, and the results show that the algorithm is robust to noise.
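
The soft-thresholding core of a residual shrinkage network can be sketched as follows (a channel-wise PyTorch variant for 1-D vibration signals; the layer sizes and attention branch are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ShrinkageBlock(nn.Module):
    """Residual block with channel-wise soft thresholding; an attention
    branch produces per-channel thresholds to suppress noise."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels))
        self.attn = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, channels, length)
        h = self.conv(x)
        avg = h.abs().mean(dim=2)              # per-channel average magnitude
        tau = (avg * self.attn(avg)).unsqueeze(2)       # learned thresholds
        h = torch.sign(h) * torch.relu(h.abs() - tau)   # soft thresholding
        return torch.relu(h + x)               # residual connection
```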

9.
Sensors (Basel) ; 24(3)2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38339607

ABSTRACT

In response to the challenge of small and imbalanced datasets, where the total sample size is limited and healthy samples significantly outnumber faulty ones, we propose a diagnostic framework designed to tackle class imbalance, denoted the Dual-Stream Adaptive Deep Residual Shrinkage Vision Transformer with Interclass-Intraclass Rebalancing Loss (DSADRSViT-IIRL). First, to address the limited sample quantity, we incorporated a Dual-Stream Adaptive Deep Residual Shrinkage Block (DSA-DRSB) into the Vision Transformer (ViT) architecture, yielding a model that adaptively removes redundant signal information based on the input data characteristics. This enhancement enables the model to attend to the global receptive field while capturing crucial local fault-discrimination features from extremely limited samples. Furthermore, to tackle the significant class imbalance in long-tailed datasets, we designed an Interclass-Intraclass Rebalancing Loss (IIRL), which decouples the contributions of intraclass and interclass samples during training, promoting stable convergence of the model. Finally, we conducted experiments on a laboratory dataset and the CWRU bearing dataset, validating the superiority of the DSADRSViT-IIRL algorithm in handling class imbalance within mixed-load datasets.

10.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894412

ABSTRACT

Surface roughness is one of the main bases for assessing the surface quality of machined parts, and a large amount of training data can effectively improve model prediction accuracy. However, obtaining a large and complete surface roughness dataset during ultra-precision machining is challenging. In this article, a novel virtual sample generation scheme (PSOVSGBLS) for surface roughness is designed to address the small-sample problem in ultra-precision machining: it combines a particle swarm optimization algorithm with a broad learning system to generate virtual samples, enriching sample diversity by filling the information gaps between the original small samples. Finally, a set of ultra-precision micro-groove cutting experiments was carried out to verify the feasibility of the proposed scheme, and the results show that the prediction error of the surface roughness model was significantly reduced after adding virtual samples.

11.
Behav Res Methods ; 56(4): 4130-4161, 2024 04.
Article in English | MEDLINE | ID: mdl-38519726

ABSTRACT

Item response theory (IRT) has evolved into a standard psychometric approach in recent years, in particular for test construction based on dichotomous (i.e., true/false) items. Unfortunately, large samples are typically needed for item refinement in unidimensional models, and even more so in the multidimensional case. However, Bayesian IRT approaches with hierarchical priors have recently been shown to be promising for estimating even complex models in small samples. Still, it may be challenging for applied researchers to set up such IRT models in general-purpose or specialized statistical computer programs. Therefore, we developed a user-friendly tool - a SAS macro called HBMIRT - that allows users to estimate uni- and multidimensional IRT models with dichotomous items. We explain the capabilities and features of the macro and use a simulation study to demonstrate the advantages of the implemented hierarchical priors over weakly informative priors and traditional maximum likelihood estimation in rather small samples. The macro can also be used with the online version of SAS OnDemand for Academics, which is freely accessible to academic researchers.
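
HBMIRT itself is a SAS macro, but the hierarchical-prior idea can be sketched in Python with PyMC (an assumed stand-in; the prior scales below are illustrative): item difficulties and discriminations share estimated group-level distributions, which pools information across items and stabilizes small-sample estimation:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(3)
n_persons, n_items = 100, 10
y = rng.binomial(1, 0.5, size=(n_persons, n_items))  # toy response matrix

with pm.Model() as irt_2pl:
    theta = pm.Normal("theta", 0.0, 1.0, shape=n_persons)   # person abilities
    # hierarchical priors: item parameters share learned hyperparameters
    mu_b = pm.Normal("mu_b", 0.0, 2.0)
    sd_b = pm.HalfNormal("sd_b", 1.0)
    b = pm.Normal("b", mu_b, sd_b, shape=n_items)            # difficulties
    mu_a = pm.Normal("mu_a", 0.0, 0.5)
    sd_a = pm.HalfNormal("sd_a", 0.5)
    a = pm.LogNormal("a", mu_a, sd_a, shape=n_items)         # discriminations
    p = pm.math.sigmoid(a * (theta[:, None] - b))            # 2PL response model
    pm.Bernoulli("obs", p=p, observed=y)
    trace = pm.sample(1000, tune=1000, target_accept=0.9)
```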


Subjects
Bayes Theorem, Statistical Models, Psychometrics, Humans, Psychometrics/methods, Software, Likelihood Functions, Computer Simulation
12.
Hum Brain Mapp ; 44(17): 5672-5692, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37668327

ABSTRACT

Resting-state functional magnetic resonance imaging (rs-fMRI) helps characterize regional interactions that occur in the human brain at a resting state. Existing research often attempts to explore fMRI biomarkers that best predict brain disease progression using machine/deep learning techniques. Previous fMRI studies have shown that learning-based methods usually require a large amount of labeled training data, limiting their utility in clinical practice where annotating data is often time-consuming and labor-intensive. To this end, we propose an unsupervised contrastive graph learning (UCGL) framework for fMRI-based brain disease analysis, in which a pretext model is designed to generate informative fMRI representations using unlabeled training data, followed by model fine-tuning to perform downstream disease identification tasks. Specifically, in the pretext model, we first design a bi-level fMRI augmentation strategy to increase the sample size by augmenting blood-oxygen-level-dependent (BOLD) signals, and then employ two parallel graph convolutional networks for fMRI feature extraction in an unsupervised contrastive learning manner. This pretext model can be optimized on large-scale fMRI datasets, without requiring labeled training data. This model is further fine-tuned on to-be-analyzed fMRI data for downstream disease detection in a task-oriented learning manner. We evaluate the proposed method on three rs-fMRI datasets for cross-site and cross-dataset learning tasks. Experimental results suggest that the UCGL outperforms several state-of-the-art approaches in automated diagnosis of three brain diseases (i.e., major depressive disorder, autism spectrum disorder, and Alzheimer's disease) with rs-fMRI data.
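
The contrastive objective behind such pretext models is typically a normalized temperature-scaled cross-entropy between two augmented views; below is a generic SimCLR-style sketch in PyTorch (a standard formulation, not necessarily UCGL's exact loss):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss: each embedding's positive is the other augmented view
    of the same sample; all other embeddings in the batch are negatives."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2]), dim=1)        # (2n, d), unit norm
    sim = z @ z.t() / tau                              # scaled cosine similarity
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))         # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets.to(z.device))
```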


Subjects
Alzheimer Disease, Autism Spectrum Disorder, Major Depressive Disorder, Humans, Rest, Brain, Magnetic Resonance Imaging/methods, Alzheimer Disease/pathology
13.
Biometrics ; 79(4): 3612-3623, 2023 12.
Article in English | MEDLINE | ID: mdl-37323055

ABSTRACT

In Duchenne muscular dystrophy (DMD) and other rare diseases, recruiting patients into clinical trials is challenging. Additionally, assigning patients to long-term, multi-year placebo arms raises ethical and trial retention concerns. This poses a significant challenge to the traditional sequential drug development paradigm. In this paper, we propose a small-sample, sequential, multiple assignment, randomized trial (snSMART) design that combines dose selection and confirmatory assessment into a single trial. This multi-stage design evaluates the effects of multiple doses of a promising drug and re-randomizes patients to appropriate dose levels based on their Stage 1 dose and response. Our proposed approach increases the efficiency of treatment effect estimates by (i) enriching the placebo arm with external control data, and (ii) using data from all stages. Data from external control and different stages are combined using a robust meta-analytic combined (MAC) approach to consider the various sources of heterogeneity and potential selection bias. We reanalyze data from a DMD trial using the proposed method and external control data from the Duchenne Natural History Study (DNHS). Our method's estimators show improved efficiency compared to the original trial. Also, the robust MAC-snSMART method most often provides more accurate estimators than the traditional analytic method. Overall, the proposed methodology provides a promising candidate for efficient drug development in DMD and other rare diseases.


Subjects
Duchenne Muscular Dystrophy, Humans, Duchenne Muscular Dystrophy/drug therapy, Bayes Theorem, Rare Diseases
14.
Pain Med ; 24(7): 872-880, 2023 07 05.
Article in English | MEDLINE | ID: mdl-36538782

ABSTRACT

OBJECTIVE: The objective was to investigate the efficacy and safety of soticlestat as adjunctive therapy in participants with complex regional pain syndrome (CRPS). DESIGN: A proof-of-concept phase 2a study, comprising a 15-week randomized, double-blind, placebo-controlled, parallel-group study (part A), and an optional 14-week open-label extension (part B). METHODS: Twenty-four participants (median age 44.5 years [range, 18-62 years]; 70.8% female) with chronic CRPS were randomized (2:1) to receive oral soticlestat or placebo. Soticlestat dosing started at 100 mg twice daily and was titrated up to 300 mg twice daily. In part B, soticlestat dosing started at 200 mg twice daily and was titrated up or down at the investigator's discretion. Pain intensity scores using the 11-point Numeric Pain Scale (NPS) were collected daily. The Patient-Reported Outcomes Measurement Information System (PROMIS)-29, Patients' Global Impression of Change (PGI-C), and CRPS Severity Score (CSS) were completed at screening and weeks 15 and 29. RESULTS: From baseline to week 15, soticlestat treatment was associated with a mean change in 24-hour pain intensity NPS score (95% confidence interval) of -0.75 (-1.55, 0.05) vs -0.41 (-1.41, 0.59) in the placebo group, resulting in a non-significant placebo-adjusted difference of -0.34 (-1.55, 0.88; P = .570). Statistically non-significant numerical changes were observed for the PROMIS-29, PGI-C, and CSS at weeks 15 and 29. CONCLUSIONS: Adjunctive soticlestat treatment did not significantly reduce pain intensity in participants with chronic CRPS.


Subjects
Complex Regional Pain Syndromes, Humans, Adult, Female, Male, Treatment Outcome, Complex Regional Pain Syndromes/drug therapy, Double-Blind Method, Pain Measurement
15.
Prev Sci ; 24(3): 505-516, 2023 04.
Article in English | MEDLINE | ID: mdl-34235633

ABSTRACT

Growth mixture models (GMMs) are applied to intervention studies with repeated measures to explore heterogeneity in the intervention effect. However, traditional GMMs are known to be difficult to estimate, especially at sample sizes common in single-center interventions. Common strategies to coerce GMMs to converge involve post hoc adjustments to the model, particularly constraining covariance parameters to equality across classes. Methodological studies have shown that although convergence is improved with post hoc adjustments, they embed additional tenuous assumptions into the model that can adversely impact key aspects such as the number of classes extracted and the estimated growth trajectories in each class. To facilitate convergence without post hoc adjustments, this paper reviews the recent literature on covariance pattern mixture models, which approach GMMs from a marginal modeling tradition rather than the random-effect modeling tradition used by traditional GMMs. We discuss how the marginal modeling tradition can avoid complexities in estimation encountered by GMMs that feature random effects, and we demonstrate the point with data from a lifestyle intervention for increasing insulin sensitivity (low insulin sensitivity is a risk factor for type 2 diabetes) among 90 Latino adolescents with obesity. Specifically, GMMs featuring random effects - even with post hoc adjustments - fail to converge due to estimation errors, whereas covariance pattern mixture models following the marginal modeling tradition encounter no estimation issues while maintaining the ability to answer all the research questions.


Subjects
Type 2 Diabetes Mellitus, Humans, Type 2 Diabetes Mellitus/prevention & control, Risk Factors, Obesity, Research Design, Life Style
16.
Sensors (Basel) ; 23(22)2023 Nov 19.
Article in English | MEDLINE | ID: mdl-38005656

ABSTRACT

Predicting energy consumption in large exposition centers presents a significant challenge, primarily due to the limited datasets and fluctuating electricity usage patterns. This study introduces a cutting-edge algorithm, the contrastive transformer network (CTN), to address these issues. By leveraging self-supervised learning, the CTN employs contrastive learning techniques across both temporal and contextual dimensions. Its transformer-based architecture, tailored for efficient feature extraction, allows the CTN to excel in predicting energy consumption in expansive structures, especially when data samples are scarce. Rigorous experiments on a proprietary dataset underscore the potency of the CTN in this domain.

17.
Pharm Stat ; 22(5): 760-772, 2023.
Article in English | MEDLINE | ID: mdl-37119000

ABSTRACT

The Multiple Comparison Procedures with Modeling Techniques (MCP-Mod) framework has recently been deemed fit-for-purpose for phase II studies by the U.S. Food and Drug Administration and the European Medicines Agency. Nonetheless, this approach relies on the asymptotic properties of maximum likelihood (ML) estimators, which might not be reasonable for small sample sizes. In this paper, we derive improved ML estimators and corrections for their covariance matrices in the censored Weibull regression model, based on corrective and preventive approaches. We performed two simulation studies to evaluate the ML and improved ML estimators, with their covariance matrices, in (i) a regression framework and (ii) the MCP-Mod framework. We show that the improved ML estimators are less biased than the ML estimators, yielding Wald-type statistics that control type I error without loss of power in both frameworks. Therefore, we recommend the use of improved ML estimators in the MCP-Mod approach to control type I error at the nominal value for sample sizes ranging from 5 to 25 subjects per dose.
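
The baseline (uncorrected) ML fit that such corrections build upon can be written down directly; below is a sketch of the censored Weibull (accelerated failure time) negative log-likelihood optimized with SciPy (the variable names and toy data are assumptions; the corrective and preventive bias adjustments described above would be applied on top of this):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, event, X):
    """Censored Weibull AFT model: log T = X @ beta + sigma * W, with W
    standard minimum extreme value; event=1 observed, 0 right-censored."""
    beta, log_sigma = params[:-1], params[-1]
    z = (np.log(t) - X @ beta) / np.exp(log_sigma)
    ll_event = event * (z - np.exp(z) - log_sigma)  # log density (up to 1/t)
    ll_cens = (1 - event) * (-np.exp(z))            # log survival
    return -np.sum(ll_event + ll_cens)

# toy fit with one dose covariate and n = 20, echoing the small-sample setting
rng = np.random.default_rng(5)
X = np.column_stack([np.ones(20), rng.uniform(0, 4, 20)])
w = np.log(-np.log(rng.uniform(size=20)))           # Gumbel-minimum errors
t = np.exp(X @ np.array([1.0, 0.3]) + 0.5 * w)
cens = np.quantile(t, 0.8)
event = (t < cens).astype(float)                    # censor the longest 20%
t = np.minimum(t, cens)
res = minimize(weibull_negloglik, x0=np.zeros(3), args=(t, event, X))
print(res.x)  # [intercept, dose effect, log(sigma)]
```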


Subjects
Sample Size, Humans, Computer Simulation
18.
Sensors (Basel) ; 23(18)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37766024

ABSTRACT

Insulators are an important component of transmission lines in active distribution networks, and their condition affects the power system's normal operation, security, and dependability. Traditional insulator inspection, however, requires substantial labor and material resources, motivating the development of automated detection methods. This paper investigates abnormal-condition detection of insulators from small samples of UAV vision-sensor imagery using artificial intelligence algorithms. First, because such algorithms require large volumes of image data while the insulator images captured during UAV inspection are scarce or incomplete, data augmentation was used to expand the small-sample dataset. Then, the YOLOv5 algorithm was used to compare detection results before and after dataset expansion; the results show that the expanded dataset improved detection accuracy and precision, demonstrating its dependability and universality. The insulator abnormal-condition detection method based on small-sample image data acquired by vision sensors studied in this paper has theoretical significance and engineering application prospects for the safe operation of active distribution networks.
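
The abstract does not specify which augmentations were used; a generic sketch of simple image augmentations for expanding a small dataset is shown below (PIL-based, with illustrative parameter ranges):

```python
import random
from PIL import Image, ImageEnhance

def augment(img):
    """Apply two randomly chosen simple augmentations to one image."""
    ops = [
        lambda im: im.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
        lambda im: im.rotate(random.uniform(-15, 15), expand=True),
        lambda im: ImageEnhance.Brightness(im).enhance(random.uniform(0.7, 1.3)),
        lambda im: ImageEnhance.Contrast(im).enhance(random.uniform(0.7, 1.3)),
    ]
    for op in random.sample(ops, k=2):
        img = op(img)
    return img

# expand each original insulator image into several augmented variants
# originals = [Image.open(p) for p in image_paths]
# expanded = [augment(im) for im in originals for _ in range(5)]
```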

19.
Sensors (Basel) ; 23(17)2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37688019

ABSTRACT

It is essential to diagnose bearing faults accurately to avoid the property losses or casualties that motor failures can cause in industry. Recently, deep learning methods for bearing fault diagnosis have improved the safety of motor operations in a reliable and intelligent way. However, most of this work is suitable only for situations with sufficient bearing monitoring data. In industrial systems, only a small amount of monitoring data can be collected by bearing sensors due to harsh monitoring conditions and the short signal duration of some special motor bearings. To solve this issue, this paper introduces a transfer learning strategy for multi-signal bearing fault diagnosis based on small-sample fusion. The algorithm mainly includes the following steps: (1) constructing a parallel Bi-LSTM sub-network to extract features from the bearing vibration and current signals of industrial motor bearings, serially fusing the extracted vibration and current features for fault classification, and using the result as a source-domain fault diagnosis model; (2) measuring the distribution difference between the source-domain bearing data and the target bearing data using the maximum mean discrepancy (MMD) algorithm; (3) based on the distribution difference between the source and target domains, transferring the network parameters of the source-domain model, fine-tuning its network structure, and obtaining the target-domain fault diagnosis model. A performance evaluation reveals that the proposed method maintains higher fault diagnosis accuracy under small-sample fusion than other methods. In addition, the early training time of the fault diagnosis model is reduced and its generalization ability improved to a great extent. Specifically, fault diagnosis accuracy improves to higher than 80% while training time is reduced to 15.3% with the proposed method.
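
The distribution-difference measure in step (2) is the maximum mean discrepancy; a minimal NumPy sketch with an RBF kernel is below (the biased estimator, with an illustrative bandwidth):

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between samples x and y
    (rows are feature vectors), using an RBF kernel (biased estimator)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# e.g., gap = mmd_rbf(source_domain_features, target_domain_features)
```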

20.
Sensors (Basel) ; 23(3)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36772585

ABSTRACT

Existing neural-network-based Direction of Arrival (DOA) estimation methods require large numbers of samples to achieve signal-scene adaptation and accurate angle estimation, and coherent signal environments demand even more training data. In this paper, coherent-signal DOA estimation is converted into estimating the angle interval of the incident signal, and accurate coherent DOA estimation under small-sample conditions is realized with meta-reinforcement learning (MRL). The method models the angle-interval estimation of coherent signals as a Markov decision process. In the inner loop, a sequence-to-sequence (S2S) neural network represents the angle-interval feature sequence of the incident-signal DOA. By making full use of the contextual relevance of the spatial spectrum sequence through the S2S network, policy learning for the presence of angle intervals under small samples is realized. According to the optimal policy, the output sequence is then determined sequentially to give the angle interval of the incident signal. Finally, the DOA is obtained through a one-dimensional spectral peak search within the obtained angle interval. Experiments show that the S2S-based meta-reinforcement learning algorithm can quickly converge to the optimal state in a new signal environment by updating only the S2S network parameters with a small sample set.
