Results 1 - 20 of 426
1.
Heliyon ; 10(15): e35756, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39170154

ABSTRACT

With the rapid development of information technology, high-speed digital optical signal transmission has become the core of modern communication networks. However, the increase in transmission rates brings challenges such as noise, distortion, and interference, which affect the accuracy of clock recovery. To address these issues, this study proposes a clock recovery algorithm based on the eye diagram opening area to improve the accuracy and efficiency of high-speed digital optical signal jitter measurement. The proposed method extracts clock information from the signal using the opening area and curvature characteristics of the eye diagram for jitter measurement. Experimental results demonstrate that the algorithm can stably reconstruct the signal eye diagram and obtain jitter parameters under different optical power conditions. At optical powers of -7.2 dBm, -12.2 dBm, and -17.2 dBm, the Q-factors were 8.8, 7.6, and 4.3, respectively, and the RMS jitter values were 12.2 ps, 13.4 ps, and 21.2 ps, respectively. At optical powers of -2.3 dBm, 0.1 dBm, 2.4 dBm, 4.6 dBm, and 6.0 dBm, the Q-factors were 9.1, 9.3, 9.5, 9.7, and 10.0, respectively, and the average jitter values were 8.9 ps, 8.5 ps, 8.0 ps, 7.5 ps, and 7.0 ps. These results indicate that the proposed algorithm performs well under low optical power conditions, where jitter is largest, and maintains high recovery accuracy across the tested power range. The clock recovery algorithm based on the eye diagram opening area significantly improves the accuracy and stability of high-speed digital optical signal jitter measurement, extends the theory of clock recovery algorithms, and offers clear advantages in improving signal transmission quality, reducing bit error rate, and enhancing communication link reliability. These outcomes provide key technical support for the optimization of modern high-speed optical communication systems.
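As an editor's illustration of the two reported metrics (not the authors' opening-area algorithm), the following sketch estimates a Q-factor and RMS jitter from a folded eye diagram; the waveform, oversampling factor, and decision threshold are hypothetical.

```python
import numpy as np

def eye_metrics(wave, sps):
    """Estimate Q-factor and RMS jitter from an oversampled binary NRZ waveform.

    wave: 1-D array of waveform samples (arbitrary units).
    sps:  integer number of samples per symbol.
    """
    n_sym = len(wave) // sps
    eye = wave[: n_sym * sps].reshape(n_sym, sps)        # fold into one-symbol traces

    # Q-factor: separation of the two rail levels at the symbol centre
    # divided by the sum of their noise standard deviations.
    centre = eye[:, sps // 2]
    ones = centre[centre > centre.mean()]
    zeros = centre[centre <= centre.mean()]
    q = (ones.mean() - zeros.mean()) / (ones.std() + zeros.std())

    # RMS jitter: spread of threshold-crossing times around the nominal edges.
    thr = centre.mean()
    above = wave > thr
    idx = np.where(above[:-1] != above[1:])[0]
    frac = (thr - wave[idx]) / (wave[idx + 1] - wave[idx])   # linear interpolation
    crossing_pos = idx + frac
    dev = crossing_pos - np.round(crossing_pos / sps) * sps  # offset from nearest nominal edge
    rms_jitter_ui = dev.std() / sps                          # in unit intervals
    return q, rms_jitter_ui

# Hypothetical 10 Gb/s NRZ waveform, 16 samples/symbol, additive Gaussian noise.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000)
wave = np.repeat(bits.astype(float), 16) + rng.normal(0, 0.08, 2000 * 16)
q, jitter_ui = eye_metrics(wave, 16)
print(f"Q-factor ~ {q:.1f}, RMS jitter ~ {jitter_ui * 100:.2f} ps at 10 Gb/s")
```

For a 10 Gb/s signal the unit-interval jitter is converted to picoseconds by multiplying by the 100 ps symbol period, which is how values like those quoted above would be expressed.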

2.
Micromachines (Basel) ; 15(7)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39064396

ABSTRACT

The conventional double-exponential transient current model (DE model) can overdrive the circuit, which leads to overestimation of the soft error rate (SER) of a logic cell. Our work addresses this problem with a new, more accurate model that brings the predicted soft error rate closer to the actual value. The piecewise double-exponential transient current model (PDE model) is adopted, and its accuracy is demonstrated using the Layout Awareness Single Event Multi Transients Soft Error Rate Calculation tool (LA-SEMT-SER tool). The model characterizes transient current pulses piecewise and limits the peak current magnitude so that it does not exceed the conduction current. TCAD models are constructed from a 28 nm process library and cell layouts. The transfer characteristic curves of the devices are calibrated, and functional timing verification is performed to ensure the accuracy of the TCAD model. The experimental results show that the PDE model is not only more consistent with TCAD simulation than the DE model in modeling the single event transient currents of the device, but also that the SER calculated by the LA-SEMT-SER tool based on the PDE model has a smaller error than the SER calculated with the DE model.
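A minimal sketch of the underlying idea, assuming illustrative time constants and a made-up conduction-current limit rather than values from a real 28 nm library: the classical double-exponential pulse is clamped so its peak cannot exceed the conduction current. The actual PDE model is piecewise and charge-aware; this is only a simplified illustration.

```python
import numpy as np

def de_pulse(t, q_total, tau_rise, tau_fall):
    """Classical double-exponential single-event transient current (A)."""
    norm = q_total / (tau_fall - tau_rise)     # normalised so the pulse integrates to q_total
    return norm * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

def clamped_pulse(t, q_total, tau_rise, tau_fall, i_conduction):
    """Simplified piecewise variant: same pulse, but the current may not exceed
    the conduction-current limit of the struck transistor.  A full piecewise
    model would typically also stretch the pulse to conserve deposited charge."""
    return np.minimum(de_pulse(t, q_total, tau_rise, tau_fall), i_conduction)

# Illustrative numbers only (not from a real process library).
t = np.linspace(0.0, 500e-12, 2001)                      # 0-500 ps
i_de = de_pulse(t, q_total=10e-15, tau_rise=5e-12, tau_fall=50e-12)
i_pde = clamped_pulse(t, q_total=10e-15, tau_rise=5e-12, tau_fall=50e-12, i_conduction=150e-6)
print(f"DE peak: {i_de.max()*1e6:.1f} uA, clamped peak: {i_pde.max()*1e6:.1f} uA")
```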

3.
Sci Rep ; 14(1): 16814, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039167

ABSTRACT

The integration of large intelligent surfaces (LIS) with non-orthogonal multiple access (NOMA) networks has emerged as a promising solution to enhance the capacity and coverage of wireless communication systems. In this study, we analyse the performance of a NOMA network with the assistance of LIS. We propose a system model where a base station (BS) equipped with a LIS serves multiple users. The LIS consists of many passive elements that can influence the wireless channel by adjusting the reflection coefficients. We consider a downlink scenario where the BS transmits to multiple users simultaneously using NOMA, and the LIS helps to improve the signal quality and coverage. We also evaluate the efficiency of the proposed LIS-assisted NOMA network in comparison to conventional NOMA systems that do not utilize LISs. The findings indicate that the LIS has a notable impact on enhancing the system's performance in terms of diversity gain, probability of error, and pairwise error probability (PEP). Moreover, the LIS-assisted NOMA network is shown to outperform conventional NOMA systems. These findings offer useful insights into the performance analysis of LIS-assisted NOMA networks and serve as motivation for future research and development in this emerging area, with the potential to reshape wireless communication systems.
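As a rough, hedged illustration of why many phase-aligned passive elements help (a toy Monte Carlo model, not the paper's PEP analysis), the sketch below compares the average SNR of a direct Rayleigh link with that of an N-element reflecting surface under ideal phase alignment.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh(size):
    # |h| for h ~ CN(0, 1): Rayleigh with scale sqrt(1/2), so E[|h|^2] = 1.
    return rng.rayleigh(scale=np.sqrt(0.5), size=size)

def lis_snr_gain_db(n_elements, n_trials=10_000):
    """Mean received SNR of an N-element LIS link with ideal phase alignment,
    relative to a direct Rayleigh link (both with unit average channel power)."""
    snr_direct = rayleigh(n_trials) ** 2
    # With ideal phase shifts the reflected paths add coherently, so the
    # effective amplitude is the sum of per-element amplitude products.
    amp = (rayleigh((n_trials, n_elements)) * rayleigh((n_trials, n_elements))).sum(axis=1)
    snr_lis = amp ** 2
    return 10 * np.log10(snr_lis.mean() / snr_direct.mean())

for n in (16, 64, 256):
    print(f"N = {n:4d}: average SNR gain over direct link ~ {lis_snr_gain_db(n):.1f} dB")
```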

4.
Biom J ; 66(5): e202300197, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38953619

ABSTRACT

In biomedical research, the simultaneous inference of multiple binary endpoints may be of interest. In such cases, an appropriate multiplicity adjustment is required that controls the family-wise error rate, which represents the probability of making incorrect test decisions. In this paper, we investigate two approaches that perform single-step p-value adjustments that also take into account the possible correlation between endpoints. A rather novel and flexible approach known as multiple marginal models is considered, which is based on stacking of the parameter estimates of the marginal models and deriving their joint asymptotic distribution. We also investigate a nonparametric vector-based resampling approach, and we compare both approaches with the Bonferroni method by examining the family-wise error rate and power for different parameter settings, including low proportions and small sample sizes. The results show that the resampling-based approach consistently outperforms the other methods in terms of power, while still controlling the family-wise error rate. The multiple marginal models approach, on the other hand, shows a more conservative behavior. However, it offers more versatility in application, allowing for more complex models or straightforward computation of simultaneous confidence intervals. The practical application of the methods is demonstrated using a toxicological dataset from the National Toxicology Program.
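A generic sketch of the contrast between Bonferroni and a resampling-based single-step min-p adjustment for correlated binary endpoints; the data generation, group sizes, and test choice (Fisher's exact test) are assumptions for illustration and do not reproduce the paper's multiple-marginal-models implementation.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

def endpoint_pvalues(treat, ctrl):
    """Two-sided Fisher p-value for each binary endpoint (columns)."""
    pvals = []
    for j in range(treat.shape[1]):
        table = [[treat[:, j].sum(), (1 - treat[:, j]).sum()],
                 [ctrl[:, j].sum(),  (1 - ctrl[:, j]).sum()]]
        pvals.append(fisher_exact(table)[1])
    return np.array(pvals)

def minp_adjusted(treat, ctrl, n_resamples=2000):
    """Single-step min-p adjustment: permute group labels jointly over all
    endpoints so that their correlation is preserved."""
    observed = endpoint_pvalues(treat, ctrl)
    pooled = np.vstack([treat, ctrl])
    n_treat = treat.shape[0]
    min_p = np.empty(n_resamples)
    for b in range(n_resamples):
        perm = rng.permutation(pooled.shape[0])
        min_p[b] = endpoint_pvalues(pooled[perm[:n_treat]], pooled[perm[n_treat:]]).min()
    # Adjusted p-value: how often the resampled minimum undercuts the observed p.
    return observed, np.array([(min_p <= p).mean() for p in observed])

# Hypothetical study: 3 correlated binary endpoints, 40 subjects per group, global null.
latent = rng.normal(size=(80, 1)) + rng.normal(size=(80, 3))   # shared factor induces correlation
data = (latent > 0.8).astype(int)
treat, ctrl = data[:40], data[40:]
raw, resamp = minp_adjusted(treat, ctrl)
print("raw:", raw.round(3), "Bonferroni:", np.minimum(raw * 3, 1).round(3), "min-p:", resamp.round(3))
```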


Subjects
Biomedical Research, Biometry, Statistical Models, Biometry/methods, Biomedical Research/methods, Sample Size, Endpoint Determination, Humans
5.
S Afr Fam Pract (2004) ; 66(1): e1-e7, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38949450

ABSTRACT

BACKGROUND: This project is part of a broader effort to develop a new electronic registry for ophthalmology in the KwaZulu-Natal (KZN) province of South Africa. The registry should include a clinical decision support system that reduces the potential for human error and should be applicable to our diversity of hospitals, whether electronic health record (EHR) or paper-based. METHODS: Post-operative prescriptions of consecutive cataract surgery discharges were included for 2019 and 2020. Comparisons were facilitated by the four chosen state hospitals in KZN each having a different system for prescribing medications: electronic, tick sheet, ink stamp, and handwritten health records. Error types were compared across hospital systems to identify easily correctable errors. Potential error remedies were sought by a four-step process. RESULTS: There were 1307 individual errors in 1661 prescriptions, categorised into 20 error types. Increasing levels of technology did not decrease error rates but did decrease the variety of error types. High-technology scripts had the most errors, but when easily correctable errors were removed, EHRs had the lowest error rates and handwritten records the highest. CONCLUSION: Increasing technology, by itself, does not seem to reduce prescription errors. Technology does, however, seem to decrease the variability of potential error types, which makes many of the errors simpler to correct. Contribution: Regular audits are an effective tool to greatly reduce prescription errors, and the higher the technology level, the more effective these audit interventions become. This advantage can be transferred to paper-based notes by utilising a hybrid electronic registry to print the formal medical record.


Subjects
Electronic Health Records, Medication Errors, Humans, South Africa, Medication Errors/prevention & control, Medication Errors/statistics & numerical data, Registries, Drug Prescriptions/statistics & numerical data, Cataract Extraction/methods, Clinical Decision Support Systems
6.
Network ; : 1-24, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014986

ABSTRACT

Quantum key distribution (QKD) is a secure communication method that enables two parties to exchange a secret key securely. The secure key rate is a crucial metric for assessing the efficiency and practical viability of a QKD system, and several approaches are used in practice to calculate it. In this manuscript, QKD and error rate optimization based on optimized multi-head self-attention and a gated-dilated convolutional neural network (QKD-ERO-MSGCNN) is proposed. Initially, the input signals are gathered from 6G wireless networks, which are subject to channel impairments. To extend the maximum transmission distance and improve the secret key rate, the signals are first processed by a variable velocity strategy particle swarm optimization algorithm and then fed to the MSGCNN to analyse quantum bit error rate reduction. The MSGCNN is optimized by intensified sand cat swarm optimization. The QKD-ERO-MSGCNN approach attains 15.57%, 23.89%, and 31.75% higher accuracy when compared with existing techniques such as device-independent QKD utilizing random quantum states, practical continuous-variable QKD with feasible optimization parameters, entanglement and teleportation in QKD for secure wireless systems, and QKD for large-scale networks.

7.
Entropy (Basel) ; 26(6)2024 May 26.
Article in English | MEDLINE | ID: mdl-38920455

ABSTRACT

This study introduces a novel approach to bolstering quantum key distribution (QKD) security by implementing swift classical channel authentication within the SARG04 and BB84 protocols. We propose mono-authentication, a pioneering paradigm employing quantum-resistant signature algorithms (specifically, CRYSTALS-DILITHIUM and RAINBOW) to authenticate solely at the conclusion of communication. Our numerical analysis comprehensively examines the performance of these algorithms across various block sizes (128, 192, and 256 bits) in both block-based and continuous photon transmission scenarios. Through 100 iterations of simulations, we meticulously assess the impact of noise levels on authentication efficacy. Our results show that CRYSTALS-DILITHIUM consistently outperforms RAINBOW, with signature overheads of approximately 0.5% for the QKD-BB84 protocol and 0.4% for QKD-SARG04 when the quantum bit error rate (QBER) is increased up to 8%. Moreover, our study unveils a correlation between higher security levels and increased authentication times, with CRYSTALS-DILITHIUM maintaining superior efficiency across all key rates up to 10,000 kb/s. These findings underscore the substantial cost and complexity reduction achieved by mono-authentication, particularly in noisy environments, paving the way for more resilient and efficient quantum communication systems.

8.
Gigascience ; 13, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38832466

ABSTRACT

BACKGROUND: Due to human error, sample swapping in large cohort studies with heterogeneous data types (e.g., a mix of Oxford Nanopore Technologies, Pacific Biosciences, Illumina data, etc.) remains a common issue plaguing large-scale studies. At present, all sample swapping detection methods require costly and sometimes unnecessary (e.g., if data are only used for genome assembly) alignment, positional sorting, and indexing of the data in order to compare samples consistently. As studies include more samples and new sequencing data types, robust quality control tools will become increasingly important. FINDINGS: The similarity between samples can be determined using indexed k-mer sequence variants. To increase statistical power, we use coverage information on variant sites, calculating similarity using a likelihood ratio-based test. Per-sample error rate and coverage bias (i.e., missing sites) can also be estimated from this information, which can be used to determine whether a spatially indexed principal component analysis (PCA)-based prescreening method can be applied; this prescreening can greatly speed up analysis by preventing exhaustive all-to-all comparisons. CONCLUSIONS: Because this tool processes raw data, is faster than alignment, and can be used on very low-coverage data, it can save an immense amount of computational resources in standard quality control (QC) pipelines. It is robust enough to be used on different sequencing data types, which is important in studies that leverage the strengths of different sequencing technologies. In addition to its primary use case of sample swap detection, this method also provides information useful in QC, such as error rate and coverage bias, as well as population-level PCA ancestry visualization.
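The general flavour of a coverage-aware likelihood-ratio identity check on shared variant sites can be sketched as follows; the genotype priors, error model, and threshold are simplifying assumptions and not the tool's actual implementation.

```python
import numpy as np
from scipy.stats import binom

def genotype_priors(alt_freq):
    """Hardy-Weinberg priors for genotypes 0/0, 0/1, 1/1."""
    p = alt_freq
    return np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])

def site_log_lr(alt1, depth1, alt2, depth2, alt_freq, error_rate=0.01):
    """log P(reads of both samples | same individual) - log P(... | different)."""
    alt_prob = np.array([error_rate, 0.5, 1 - error_rate])   # expected alt fraction per genotype
    prior = genotype_priors(alt_freq)
    lik1 = binom.pmf(alt1, depth1, alt_prob)                  # P(sample-1 reads | genotype)
    lik2 = binom.pmf(alt2, depth2, alt_prob)
    same = np.sum(prior * lik1 * lik2)                        # genotype shared by both samples
    diff = np.sum(prior * lik1) * np.sum(prior * lik2)        # genotypes drawn independently
    return np.log(same) - np.log(diff)

def samples_match(sites, threshold=0.0):
    """sites: iterable of (alt1, depth1, alt2, depth2, population_alt_freq)."""
    llr = sum(site_log_lr(*s) for s in sites)
    return llr, llr > threshold                               # positive total favours 'same individual'

# Tiny hypothetical example: three sites with concordant read support.
sites = [(8, 10, 12, 15, 0.30), (0, 12, 1, 20, 0.10), (5, 9, 7, 14, 0.45)]
print(samples_match(sites))
```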


Subjects
High-Throughput Nucleotide Sequencing, DNA Sequence Analysis, Humans, High-Throughput Nucleotide Sequencing/methods, DNA Sequence Analysis/methods, Software, Principal Component Analysis, Computational Biology/methods, Algorithms
9.
BMC Med Res Methodol ; 24(1): 124, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831421

ABSTRACT

BACKGROUND: Multi-arm multi-stage (MAMS) randomised trial designs have been proposed to evaluate multiple research questions in the confirmatory setting. In designs with several interventions, such as the 8-arm 3-stage ROSSINI-2 trial for preventing surgical wound infection, there are likely to be strict limits on the number of individuals that can be recruited or the funds available to support the protocol. These limitations may mean that not all research treatments can continue to accrue the required sample size for the definitive analysis of the primary outcome measure at the final stage. In these cases, an additional treatment selection rule can be applied at the early stages of the trial to restrict the maximum number of research arms that can progress to the subsequent stage(s). This article provides guidelines on how to implement treatment selection within the MAMS framework. It explores the impact of treatment selection rules, interim lack-of-benefit stopping boundaries and the timing of treatment selection on the operating characteristics of the MAMS selection design. METHODS: We outline the steps to design a MAMS selection trial. Extensive simulation studies are used to explore the maximum/expected sample sizes, familywise type I error rate (FWER), and overall power of the design under both binding and non-binding interim stopping boundaries for lack-of-benefit. RESULTS: Pre-specification of a treatment selection rule reduces the maximum sample size by approximately 25% in our simulations. The familywise type I error rate of a MAMS selection design is smaller than that of the standard MAMS design with similar design specifications without the additional treatment selection rule. In designs with strict selection rules - for example, when only one research arm is selected from 7 arms - the final stage significance levels can be relaxed for the primary analyses to ensure that the overall type I error for the trial is not underspent. When conducting treatment selection from several treatment arms, it is important to select a large enough subset of research arms (that is, more than one research arm) at early stages to maintain the overall power at the pre-specified level. CONCLUSIONS: Multi-arm multi-stage selection designs gain efficiency over the standard MAMS design by reducing the overall sample size. Diligent pre-specification of the treatment selection rule, final stage significance level and interim stopping boundaries for lack-of-benefit are key to controlling the operating characteristics of a MAMS selection design. We provide guidance on these design features to ensure control of the operating characteristics.
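A stripped-down simulation of the kind used to study such operating characteristics: a normally distributed outcome, one interim look with a lack-of-benefit boundary plus selection of the best arms, and a final comparison against control under the global null. The arm count, boundaries, and sample sizes are illustrative assumptions, not ROSSINI-2 or guideline values.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_fwer(n_arms=7, n_select=2, n_stage=100, lob_bound=0.0,
                  final_crit=2.24, n_sims=20_000):
    """Familywise type I error of a two-stage MAMS selection design under the
    global null (all research arms equal to control); outcomes ~ N(0, 1)."""
    any_rejection = 0
    for _ in range(n_sims):
        # Stage 1: z-statistic of each research arm vs. control.
        ctrl = rng.normal(size=n_stage)
        arms = rng.normal(size=(n_arms, n_stage))
        z1 = (arms.mean(axis=1) - ctrl.mean()) / np.sqrt(2 / n_stage)

        # Lack-of-benefit stop, then keep only the n_select best arms.
        alive = np.where(z1 > lob_bound)[0]
        alive = alive[np.argsort(z1[alive])[::-1][:n_select]]
        if alive.size == 0:
            continue

        # Stage 2: double the data in control and the selected arms; final z-test.
        ctrl2 = np.concatenate([ctrl, rng.normal(size=n_stage)])
        rejected = False
        for a in alive:
            arm2 = np.concatenate([arms[a], rng.normal(size=n_stage)])
            z2 = (arm2.mean() - ctrl2.mean()) / np.sqrt(2 / (2 * n_stage))
            rejected |= z2 > final_crit
        any_rejection += rejected
    return any_rejection / n_sims

print(f"Estimated FWER under the global null: {simulate_fwer():.4f}")
```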


Subjects
Randomized Controlled Trials as Topic, Research Design, Humans, Randomized Controlled Trials as Topic/methods, Sample Size, Patient Selection
10.
Stat Med ; 43(18): 3417-3431, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38852994

ABSTRACT

We investigate the familywise error rate (FWER) for time-to-event endpoints evaluated using a group sequential design with a hierarchical testing procedure for secondary endpoints. We show that, in this setup, the correlation between the log-rank test statistics at interim and at end of study is not congruent with the canonical correlation derived for normal-distributed endpoints. We show, both theoretically and by simulation, that the correlation also depends on the level of censoring, the hazard rates of the endpoints, and the hazard ratio. To optimize operating characteristics in this complex scenario, we propose a simulation-based method to assess the FWER which, better than the alpha-spending approach, can inform the choice of critical values for testing secondary endpoints.


Subjects
Computer Simulation, Endpoint Determination, Humans, Endpoint Determination/methods, Research Design, Statistical Models, Proportional Hazards Models, Statistical Data Interpretation
11.
Oecologia ; 205(2): 257-269, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38806949

ABSTRACT

Community weighted means (CWMs) are widely used to study the relationship between community-level functional traits and environment. For certain null hypotheses, CWM-environment relationships assessed by linear regression or ANOVA and tested by standard parametric tests are prone to inflated Type I error rates. Previous research has found that this problem can be solved by permutation tests (i.e., the max test). A recent extension of the CWM approach allows the inclusion of intraspecific trait variation (ITV) by the separate calculation of fixed, site-specific, and intraspecific CWMs. The question is whether the same Type I error rate inflation exists for the relationship between environment and site-specific or intraspecific CWM. Using simulated and real-world community datasets, we show that site-specific CWM-environment relationships also have an inflated Type I error rate, and this rate is negatively related to the relative ITV magnitude. In contrast, for intraspecific CWM-environment relationships, standard parametric tests have the correct Type I error rate, although with somewhat reduced statistical power. We introduce an ITV-extended version of the max test, which can solve the inflation problem for site-specific CWM-environment relationships and, without considering ITV, becomes equivalent to the "original" max test used for the CWM approach. We show that this new ITV-extended max test works well across the full possible magnitude of ITV on both simulated and real-world data. Most real datasets probably do not have intraspecific trait variation large enough to alleviate the problem of inflated Type I error rate, and published studies possibly report overly optimistic significance results.
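A compact sketch of the "max test" logic for a CWM-environment correlation (without the ITV extension described above): the observed statistic is compared against two permutation nulls, one permuting sites and one permuting species trait values, and the larger p-value is reported. The data and the choice of test statistic are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cwm(comm, traits):
    """Community weighted mean trait per site (rows of comm are sites)."""
    return comm @ traits / comm.sum(axis=1)

def corr_stat(x, y):
    return abs(np.corrcoef(x, y)[0, 1])

def max_test(comm, traits, env, n_perm=999):
    """Max test: report the larger of a site-level (environment) permutation
    p-value and a species-level (trait) permutation p-value."""
    obs = corr_stat(cwm(comm, traits), env)
    p_row = np.mean([corr_stat(cwm(comm, traits), rng.permutation(env)) >= obs
                     for _ in range(n_perm)] + [True])
    p_trait = np.mean([corr_stat(cwm(comm, rng.permutation(traits)), env) >= obs
                       for _ in range(n_perm)] + [True])
    return max(p_row, p_trait)

# Simulated null data: 50 sites, 30 species, traits and environment unrelated.
comm = rng.poisson(2.0, size=(50, 30)).astype(float) + 1e-9   # tiny offset avoids empty sites
traits = rng.normal(size=30)
env = rng.normal(size=50)
print(f"max-test p-value: {max_test(comm, traits, env):.3f}")
```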


Subjects
Ecosystem
12.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732994

ABSTRACT

This paper studies maximizing the reliability of multi-hop UAV relaying, in which UAVs act as coded cooperative relays to provide wireless services to remote users that have no end-to-end direct communication link. Analytical expressions for the total path loss and the total bit error rate are derived as reliability measures. First, based on environmental statistical parameters, a LOS probability model is proposed. Then, the problem of minimizing the bit error rate of static and mobile UAVs is studied. The goal is to minimize the total bit error rate by jointly optimizing UAV height, elevation angle, transmit power, and path loss, subject to maximum allowable path loss constraints, transmission power allocation constraints, and UAV height and elevation constraints. At the same time, the total path loss is minimized to achieve maximum ground communication coverage. However, the formulated joint optimization problem is nonconvex and generally difficult to solve; we therefore decompose it into two subproblems and propose an effective iterative joint optimization algorithm. Finally, simulation results show that the optimal height differs slightly between the reliability measures; thus, exploiting UAV mobility can improve the reliability of communication.
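For context, a widely used elevation-angle-dependent air-to-ground model of the type referenced here combines a sigmoid LOS probability with LOS/NLOS excess losses; the environment constants below are common urban example values, not the parameters fitted in the paper.

```python
import numpy as np

def p_los(elevation_deg, a=9.61, b=0.16):
    """Sigmoid LOS probability vs. elevation angle (degrees); a and b are
    environment-dependent constants (typical urban example values used here)."""
    return 1.0 / (1.0 + a * np.exp(-b * (elevation_deg - a)))

def mean_path_loss_db(height_m, ground_dist_m, f_hz=2.4e9,
                      eta_los_db=1.0, eta_nlos_db=20.0):
    """Average path loss: free-space loss plus LOS/NLOS excess losses
    weighted by the LOS probability."""
    d = np.hypot(height_m, ground_dist_m)                     # slant range
    theta = np.degrees(np.arctan2(height_m, ground_dist_m))   # elevation angle
    fspl = 20 * np.log10(4 * np.pi * f_hz * d / 3e8)          # free-space path loss
    p = p_los(theta)
    return fspl + p * eta_los_db + (1 - p) * eta_nlos_db

for h in (50, 100, 200, 400):
    print(f"height {h:4d} m -> mean path loss {mean_path_loss_db(h, 500):.1f} dB")
```

Sweeping the height in this way is the simplest illustration of why an optimal altitude exists: higher altitude raises the LOS probability but also lengthens the slant range.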

13.
Int J Cancer ; 155(5): 925-933, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38623608

ABSTRACT

Tumor-informed mutation-based approaches are frequently used for detection of circulating tumor DNA (ctDNA). Not all mutations make equally effective ctDNA markers. The objective was to explore whether prioritizing mutations using mutational features, such as cancer cell fraction (CCF), multiplicity, and error rate, would improve the success rate of tumor-informed ctDNA analysis. Additionally, we aimed to develop a practical and easily implementable analysis pipeline for identifying and prioritizing candidate mutations from whole-exome sequencing (WES) data. We analyzed WES and ctDNA data from three tumor-informed ctDNA studies, one on bladder cancer (Cohort A) and two on colorectal cancer (Cohorts I and N). The studies included 390 patients. For each patient, a unique set of mutations (median mutations/patient: 6, interquartile range: 13, range: 1-46, total n = 4023) was used as markers of ctDNA. The tool PureCN was used to assess the CCF and multiplicity of each mutation. High-CCF mutations were detected more frequently than low-CCF mutations (Cohort A: odds ratio [OR] 20.6, 95% confidence interval [CI] 5.72-173, p = 1.73e-12; Cohort I: OR 2.24, 95% CI 1.44-3.52, p = 1.66e-04; and Cohort N: OR 1.78, 95% CI 1.14-2.79, p = 7.86e-03). The detection likelihood was additionally improved by selecting mutations with a multiplicity of two or above (Cohort A: OR 1.55, 95% CI 1.14-2.11, p = 3.85e-03; Cohort I: OR 1.78, 95% CI 1.23-2.56, p = 1.34e-03; and Cohort N: OR 1.94, 95% CI 1.63-2.31, p = 2.83e-14). Furthermore, selecting the mutations for which the ctDNA detection method had the lowest error rates further improved the detection likelihood; this was particularly evident when plasma cell-free DNA tumor fractions were below 0.1% (p = 2.1e-07). Selecting mutational markers with high CCF, high multiplicity, and low error rate significantly improves ctDNA detection likelihood. We provide free access to the analysis pipeline, enabling others to perform qualified prioritization of mutations for tumor-informed ctDNA analysis.


Subjects
Tumor Biomarkers, Circulating Tumor DNA, Colorectal Neoplasms, Exome Sequencing, Mutation, Urinary Bladder Neoplasms, Humans, Circulating Tumor DNA/genetics, Circulating Tumor DNA/blood, Tumor Biomarkers/genetics, Exome Sequencing/methods, Colorectal Neoplasms/genetics, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/blood, Urinary Bladder Neoplasms/genetics, Urinary Bladder Neoplasms/diagnosis, Urinary Bladder Neoplasms/blood, Female, Male, Aged, Middle Aged, DNA Mutational Analysis/methods, Cohort Studies
14.
J Forensic Sci ; 69(4): 1334-1349, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38684627

ABSTRACT

Several studies have recently attempted to estimate practitioner accuracy when comparing fired ammunition. But whether this research has included sufficiently challenging comparisons dependent upon expertise for accurate conclusions regarding source remains largely unexplored in the literature. Control groups of lay people comprise one means of vetting this question, of assessing whether comparison samples were at least challenging enough to distinguish between experts and novices. This article therefore utilizes such a group, specifically 82 attorneys, as a post hoc control and juxtaposes their performance on a comparison set of cartridge case images from one commonly cited study (Duez et al. in J Forensic Sci. 2018;63:1069-1084) with that of the original participant pool of professionals. Despite lacking the kind of formalized training and experience common to the latter, our lay participants displayed an ability, generally, to distinguish between cartridge cases fired by the same versus different guns in the 327 comparisons they performed. And while their accuracy rates lagged substantially behind those of the original participant pool of professionals on same-source comparisons, their performance on different-source comparisons was essentially indistinguishable from that of trained examiners. This indicates that although the study we vetted may provide useful information about professional accuracy when performing same-source comparisons, it has little to offer in terms of measuring examiners' ability to distinguish between cartridge cases fired by different guns. If similar issues pervade other accuracy studies, then there is little reason to rely on the false-positive rates they have generated.

15.
Biom J ; 66(3): e2300237, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637319

ABSTRACT

In this paper, we consider online multiple testing with familywise error rate (FWER) control, where the probability of committing at least one type I error remains under control while testing a possibly infinite sequence of hypotheses over time. Currently, adaptive-discard (ADDIS) procedures seem to be the most promising online procedures with FWER control in terms of power. Our main contribution is a uniform improvement of the ADDIS principle and thus of all ADDIS procedures. This means that the methods we propose reject at least as many hypotheses as ADDIS procedures, and in some cases even more, while maintaining FWER control. In addition, we show that there is no other FWER-controlling procedure that enlarges the event of rejecting any hypothesis. Finally, we apply the new principle to derive uniform improvements of the ADDIS-Spending and ADDIS-Graph procedures.
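For readers new to the setting, a much simpler baseline than ADDIS, an online alpha-spending (online Bonferroni) rule, is sketched below; it controls the FWER by a union bound but is not the uniform improvement proposed in the paper.

```python
def online_alpha_spending(p_values, alpha=0.05):
    """Online Bonferroni / alpha-spending: hypothesis i receives budget
    alpha * 2**-(i+1).  The budgets sum to at most alpha, so the familywise
    error rate is controlled no matter how many hypotheses arrive over time."""
    decisions = []
    for i, p in enumerate(p_values):
        budget = alpha * 2 ** -(i + 1)
        decisions.append(p <= budget)
    return decisions

# Hypothetical stream of p-values arriving one at a time.
print(online_alpha_spending([0.001, 0.20, 0.004, 0.0001, 0.03]))
```

The obvious weakness, rapidly shrinking budgets, is exactly what adaptive procedures such as ADDIS are designed to mitigate by discarding conservative nulls and adapting to the proportion of true nulls.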


Subjects
Statistical Models, Probability
16.
Article in English | MEDLINE | ID: mdl-38623032

ABSTRACT

Inter-rater reliability (IRR) is one of the commonly used tools for assessing the quality of ratings from multiple raters. However, applicant selection procedures based on ratings from multiple raters usually result in a binary outcome: the applicant is either selected or not. This final outcome is not considered in IRR, which instead focuses on the ratings of the individual subjects or objects. We outline the connection between the ratings' measurement model (used for IRR) and a binary classification framework. We develop a simple way of approximating the probability of correctly selecting the best applicants, which allows us to compute error probabilities of the selection procedure (i.e., false positive and false negative rate) or their lower bounds. We draw connections between the IRR and the binary classification metrics, showing that the binary classification metrics depend solely on the IRR coefficient and the proportion of selected applicants. We assess the performance of the approximation in a simulation study and apply it in an example comparing the reliability of multiple grant peer review selection procedures. We also discuss other possible uses of the explored connections in other contexts, such as educational testing, psychological assessment, and health-related measurement, and implement the computations in the R package IRR2FPR.
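A simulation along these lines (not the authors' closed-form approximation or the IRR2FPR package) makes the link concrete: generate true applicant quality and ratings with a given reliability, select the top fraction by rating, and count how often the selection misses the truly best applicants. The reliability values and group sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def selection_error_rates(irr=0.7, prop_selected=0.2, n_applicants=200, n_sims=2000):
    """Monte Carlo link between inter-rater reliability and selection errors.

    Ratings = true quality + noise, with the noise variance chosen so that the
    reliability (share of rating variance due to true quality) equals irr.
    Returns the average share of selected applicants who are not truly among
    the best, and the share of truly best applicants who are missed.
    """
    k = int(round(prop_selected * n_applicants))
    false_sel, missed = [], []
    for _ in range(n_sims):
        truth = rng.normal(size=n_applicants)
        rating = truth + rng.normal(size=n_applicants) * np.sqrt((1 - irr) / irr)
        best_true = set(np.argsort(truth)[-k:])      # who should be selected
        selected = set(np.argsort(rating)[-k:])      # who is selected
        false_sel.append(len(selected - best_true) / k)
        missed.append(len(best_true - selected) / k)
    return np.mean(false_sel), np.mean(missed)

for irr in (0.5, 0.7, 0.9):
    fs, ms = selection_error_rates(irr=irr)
    print(f"IRR {irr:.1f}: falsely selected {fs:.2f}, truly best missed {ms:.2f}")
```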

17.
J Biopharm Stat ; : 1-14, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515269

ABSTRACT

In recent years, clinical trials utilizing a two-stage seamless adaptive trial design have become very popular in drug development. A typical example is a phase 2/3 adaptive design, which consists of two stages: stage 1 is a phase 2 dose-finding study and stage 2 is a phase 3 efficacy confirmation study. Depending upon whether or not the target patient population, study objectives, and study endpoints are the same at different stages, Chow (2020) classified two-stage seamless adaptive designs into eight categories. In practice, standard statistical methods for a group sequential design with one planned interim analysis are often, incorrectly, applied directly for data analysis. In this article, following ideas proposed by Chow and Lin (2015) and Chow (2020), a statistical method for the analysis of a two-stage seamless adaptive trial design with different study endpoints and a shifted target patient population is discussed under the fundamental assumption that the study endpoints have a known relationship. The proposed analysis method should be useful both in clinical trials with protocol amendments and in clinical trials with disease progression utilizing a two-stage seamless adaptive trial design.

18.
Schizophr Res ; 266: 41-49, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38367611

ABSTRACT

BACKGROUND: The antisaccade task, in which participants must look toward the location opposite a target, is an eye movement paradigm used for assessing cognitive functions in schizophrenia. Initiation and sustainment of saccades in the antisaccade task are managed by frontal and parietal cortical areas. Antisaccade abnormalities are well-established findings in schizophrenia; however, studies in the early phases of psychotic disorders and in people at clinical or familial risk for psychosis have reported inconsistent findings. The current systematic review aimed to review studies investigating antisaccade error rates in first-episode psychosis (FEP), individuals at ultra-high risk for psychosis (UHRP), and individuals at familial high risk for psychosis (FHRP) compared to healthy controls. METHOD: A meta-analysis of 17 studies was conducted to quantitatively review antisaccade errors in FEP, UHRP, and FHRP. The error rate (Hedges' g) was compared between a total of 860 FEP, UHRP, and FHRP participants and 817 healthy controls. Hedges' g for effect size, I2 for estimating the percentage of variability, and publication bias were evaluated using the R software. RESULTS: The outcomes of this meta-analysis suggest that FEP is associated with a robust increase in antisaccade error rate (g = 1.16, CI = 0.95-1.38). Additionally, both the clinical and familial high-risk groups showed small but significant increases in antisaccade errors (g = 0.26, CI = 0.02-0.52 and g = 0.34, CI = 0.13-0.55, respectively). CONCLUSION: The large effect size estimated for FEP is compatible with previously reported results in chronic schizophrenia patients, and relatives showed abnormalities of small to medium effect size. The current findings suggest that antisaccade errors might be a potential endophenotype for psychotic disorders.
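The effect size used throughout is Hedges' g, the small-sample-corrected standardized mean difference; a minimal helper with made-up group summaries is shown below.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d multiplied by the small-sample correction factor J."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # correction for small-sample bias
    return j * d

# Hypothetical antisaccade error rates (%): patient group vs. healthy controls.
print(f"g = {hedges_g(m1=35.0, sd1=15.0, n1=60, m2=20.0, sd2=12.0, n2=60):.2f}")
```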


Subjects
Psychotic Disorders, Saccades, Schizophrenia, Humans, Psychotic Disorders/physiopathology, Saccades/physiology, Schizophrenia/physiopathology, Family
19.
Biometrics ; 80(1)2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38412302

ABSTRACT

Lung cancer is a leading cause of cancer mortality globally, highlighting the importance of understanding its mortality risks to design effective patient-centered therapies. The National Lung Screening Trial (NLST) employed computed tomography texture analysis, which provides objective measurements of texture patterns on CT scans, to quantify the mortality risks of lung cancer patients. Partially linear Cox models have gained popularity for survival analysis by dissecting the hazard function into parametric and nonparametric components, allowing for the effective incorporation of both well-established risk factors (such as age and clinical variables) and emerging risk factors (e.g., image features) within a unified framework. However, when the dimension of the parametric components exceeds the sample size, the task of model fitting becomes formidable, while nonparametric modeling grapples with the curse of dimensionality. We propose a novel Penalized Deep Partially Linear Cox Model (Penalized DPLC), which incorporates the smoothly clipped absolute deviation (SCAD) penalty to select important texture features and employs a deep neural network to estimate the nonparametric component of the model. We prove the convergence and asymptotic properties of the estimator and compare it to other methods through extensive simulation studies, evaluating its performance in risk prediction and feature selection. The proposed method is applied to the NLST study dataset to uncover the effects of key clinical and imaging risk factors on patients' survival. Our findings provide valuable insights into the relationship between these factors and survival outcomes.
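The SCAD penalty mentioned here has a standard closed form (Fan and Li, 2001); the short sketch below is generic and not tied to the authors' network architecture or fitting code.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, elementwise.
    a = 3.7 is the conventional default tuning constant."""
    b = np.abs(beta)
    small = b <= lam
    middle = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(middle, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

# Penalty grows linearly for small coefficients, then flattens: large effects
# are not over-shrunk, while small ones are pushed to zero.
print(scad_penalty(np.array([0.05, 0.5, 2.0, 10.0]), lam=0.5))
```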


Subjects
Lung Neoplasms, Humans, Proportional Hazards Models, Lung Neoplasms/diagnostic imaging, Survival Analysis, Linear Models, X-Ray Computed Tomography/methods
20.
Biom J ; 66(1): e2200312, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38285403

ABSTRACT

To accelerate a randomized controlled trial, historical control data may be used after ensuring that there is little heterogeneity between the historical and current trials. The test-then-pool approach is a simple frequentist borrowing method that assesses the similarity between historical and current control data using a two-sided test. A limitation of the conventional test-then-pool method is the inability to control the type I error rate and power for the primary hypothesis separately and flexibly with respect to heterogeneity between trials. This is because the two-sided test focuses on the absolute value of the mean difference between the historical and current controls. In this paper, we propose a new test-then-pool method that splits the two-sided hypothesis of the conventional method into two one-sided hypotheses. Testing each one-sided hypothesis with a different significance level allows the type I error rate and power for heterogeneity between trials to be controlled separately. We also propose a significance-level selection approach based on the maximum type I error rate and the minimum power. The proposed method prevented a decrease in power even when there was heterogeneity between trials, while controlling the type I error rate at a maximum tolerable level larger than the targeted rate. Application to depression trial data and hypothetical trial data further supported the usefulness of the proposed method.
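A schematic of the decision logic, splitting the conventional two-sided similarity test into two one-sided tests with separate significance levels; the z-test formulation and the summary statistics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def test_then_pool(mean_hist, se_hist, mean_curr, se_curr,
                   alpha_upper=0.10, alpha_lower=0.025):
    """Decide whether to pool historical controls with current controls.

    The two-sided similarity test is split into two one-sided tests of the
    mean difference (historical - current), each with its own significance
    level, so the consequences of heterogeneity in the two directions can be
    controlled separately.
    """
    diff = mean_hist - mean_curr
    se = np.hypot(se_hist, se_curr)
    z = diff / se
    too_high = z > norm.ppf(1 - alpha_upper)   # historical controls look better
    too_low = z < norm.ppf(alpha_lower)        # historical controls look worse
    pool = not (too_high or too_low)
    return pool, z

# Hypothetical depression-scale change scores (negative = improvement).
pool, z = test_then_pool(mean_hist=-10.2, se_hist=1.1, mean_curr=-11.0, se_curr=1.3)
print(f"z = {z:.2f}, pool historical controls: {pool}")
```

Using a larger alpha in the direction where borrowing would inflate the type I error, and a smaller one in the other direction, is the kind of asymmetric choice the proposed significance-level selection approach formalizes.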
