Results 1-20 of 45
1.
BMC Med Res Methodol; 24(1): 197, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251907

ABSTRACT

PURPOSE: In the context of clinical research, there is an increasing need for new study designs that help to incorporate already available data. With the help of historical controls, existing information can be utilized to support a new study design, but inclusion of course also carries a risk of bias in the study results. METHODS: To combine historical and randomized controls, we investigate the Fill-it-up design, which first checks the comparability of the historical and randomized controls by performing an equivalence pre-test. If equivalence is confirmed, the historical control data are included in the new RCT. If equivalence cannot be confirmed, the historical controls are not considered at all and the randomization of the original study is extended. We investigate the performance of this study design in terms of type I error rate and power. RESULTS: We demonstrate how many patients need to be recruited in each of the two steps of the Fill-it-up design and show that the family-wise error rate of the design is kept at 5%. The maximum sample size of the Fill-it-up design is larger than that of the single-stage design without historical controls and increases as the heterogeneity between the historical and concurrent controls increases. CONCLUSION: The two-stage Fill-it-up design represents a frequentist method for including historical control data in various study designs. As its maximum sample size is larger, a robust prior belief is essential for its use. The design should therefore be seen as a way out in exceptional situations where a hybrid design is considered necessary.
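The two-stage logic lends itself to a compact simulation. Below is a minimal sketch in Python (NumPy/SciPy), assuming a normal endpoint, a TOST equivalence pre-test in step 1, and an extension equal to the historical sample size in step 2; the sample sizes, margin, and alpha are illustrative choices, not the paper's calibrated design, and the naive pooling shown here is exactly what the paper's calibration refines.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def fill_it_up_trial(delta_hist=0.0, margin=0.3, n=50, n_hist=50, alpha=0.05):
    treat = rng.normal(0.0, 1.0, n)              # under H0: no treatment effect
    ctrl = rng.normal(0.0, 1.0, n)               # concurrent controls
    hist = rng.normal(delta_hist, 1.0, n_hist)   # historical controls

    # Step 1: TOST equivalence pre-test, historical vs. concurrent controls
    diff = ctrl.mean() - hist.mean()
    se = np.sqrt(ctrl.var(ddof=1) / n + hist.var(ddof=1) / n_hist)
    df = n + n_hist - 2
    p_lower = stats.t.sf((diff + margin) / se, df)
    p_upper = stats.t.cdf((diff - margin) / se, df)
    if max(p_lower, p_upper) < alpha:            # equivalence confirmed: pool
        ctrl = np.concatenate([ctrl, hist])
    else:                                        # otherwise extend randomization
        ctrl = np.concatenate([ctrl, rng.normal(0.0, 1.0, n_hist)])

    # Step 2: final test of treatment vs. (possibly augmented) controls
    return stats.ttest_ind(treat, ctrl).pvalue < alpha

type1 = np.mean([fill_it_up_trial() for _ in range(2000)])
print(f"empirical type I error under H0: {type1:.3f}")
```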


Subjects
Randomized Controlled Trials as Topic, Research Design, Humans, Randomized Controlled Trials as Topic/methods, Randomized Controlled Trials as Topic/statistics & numerical data, Sample Size, Historically Controlled Study, Control Groups
2.
Risk Anal; 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39380395

ABSTRACT

Human error is a significant cause of accidents across diverse industries, leading to adverse consequences and heightened disruptions in maintenance operations. Organizations can enhance their decision-making by quantifying human errors and identifying the underlying influencing factors, thereby mitigating the repercussions. It is consequently crucial to examine the human error probability (HEP) during these activities. The objective of this paper is to determine and simulate HEP in maintenance tasks at a cement factory, utilizing performance shaping factors (PSFs). The research employs cross-impact matrix multiplication applied to classification (MICMAC) to evaluate the dependencies, impacts, and relationships among the factors influencing human error, classifying their effects on HEP, occupational accidents, and related costs. Because PSFs can change dynamically under the influence of other variables, the behavior of human error must be forecast over time. The relationships identified by MICMAC are therefore used to structure a system dynamics (SD) model that forecasts the system's behavior under multiple scenarios. By monitoring the HEP value, managers can adjust organizational conditions and personnel to keep it acceptable, and the presented scenarios assist managers in making informed decisions.
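As an illustration of the MICMAC step described above, the sketch below (Python/NumPy) computes driving power and dependence from a hypothetical binary direct-influence matrix among performance shaping factors and sorts each factor into the usual four quadrants; the factor names and matrix entries are invented for the example.

```python
import numpy as np

factors = ["fatigue", "training", "time pressure", "procedures", "supervision"]
A = np.array([[0, 0, 1, 0, 0],        # hypothetical direct influences
              [1, 0, 1, 1, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 1, 1, 0]], dtype=bool)

R = A | np.eye(len(A), dtype=bool)    # reachability via transitive closure
for _ in range(len(A)):
    R = R | ((R.astype(int) @ R.astype(int)) > 0)

driving, dependence = R.sum(axis=1), R.sum(axis=0)
for f, d, p in zip(factors, driving, dependence):
    quad = ("linkage" if p >= dependence.mean() else "independent") \
        if d >= driving.mean() else \
        ("dependent" if p >= dependence.mean() else "autonomous")
    print(f"{f:14s} driving={d} dependence={p} -> {quad}")
```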

3.
Sensors (Basel); 23(6), 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991885

ABSTRACT

Relay-assisted wireless communications, where both the relay and the final destination employ diversity-combining techniques, represent a compelling strategy for improving the signal-to-noise ratio (SNR) at mobile terminals, particularly in millimeter-wave (mmWave) frequency bands. This work considers a wireless network that employs a dual-hop decode-and-forward (DF) relaying protocol, in which the receivers at the relay and at the base station (BS) use an antenna array and the received signals are combined at reception using equal-gain combining (EGC). Recent works have employed the Weibull distribution to model small-scale fading at mmWave frequencies, which motivates its use in the present work. For this scenario, exact and asymptotic expressions for the system's outage probability (OP) and average bit error probability (ABEP) are derived in closed form. These expressions yield useful insights; more precisely, they illustrate how the system and fading parameters affect the performance of the DF-EGC system. Monte Carlo simulations corroborate the accuracy and validity of the derived expressions. Furthermore, the mean achievable rate of the considered system is evaluated via simulations, providing additional insights into the system performance.
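For readers who want to reproduce the flavor of the analysis, the following Monte Carlo sketch (Python/NumPy) estimates the outage probability of a dual-hop DF link with EGC at both the relay and the BS under i.i.d. Weibull fading; the antenna counts, shape parameter, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hop_snr_egc(n_ant, shape, snr_avg, trials):
    # Weibull envelopes per branch, normalized to unit mean power
    h = rng.weibull(shape, size=(trials, n_ant))
    h /= np.sqrt(np.mean(h**2))
    env = h.sum(axis=1)                  # coherent EGC combining
    return snr_avg * env**2 / n_ant      # post-EGC instantaneous SNR

def outage_df_egc(gamma_th=1.0, snr_db=10.0, n_relay=2, n_bs=4,
                  shape=2.5, trials=200_000):
    snr = 10**(snr_db / 10)
    g1 = hop_snr_egc(n_relay, shape, snr, trials)   # source -> relay
    g2 = hop_snr_egc(n_bs, shape, snr, trials)      # relay -> BS
    # DF: end-to-end outage if either hop falls below the threshold
    return np.mean(np.minimum(g1, g2) < gamma_th)

print(f"OP ~ {outage_df_egc():.4f}")
```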

4.
Sensors (Basel); 23(14), 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37514808

ABSTRACT

Covert communications have arisen as an effective communications-security measure that overcomes some of the limitations of cryptography and physical-layer security. The main objective is to completely conceal from external devices the very existence of a link for exchanging confidential messages. In this paper, we take a step further and consider a scenario in which a covert communications node disguises itself as another functional entity for even more covertness. Specifically, we study a system where a source node communicates with a seemingly receive-only destination node that is, in fact, full-duplex (FD) and covertly delivers critical messages to another hidden receiver while evading surveillance. Our aim is to identify the achievable covert rate at the hidden receiver by optimizing the public data rate and the transmit power of the FD destination node subject to the worst-case detection error probability (DEP) of the warden. Closed-form solutions are provided, and we investigate the effects of various system parameters on the covert rate through numerical results, one of which reveals that applying more (less) destination transmit power achieves a higher covert rate when the source transmit power is low (high). Since our work provides a performance guideline from the information-theoretic point of view, we conclude with a discussion of possible future research, such as analyses with practical modulations and imperfect channel state information.
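The covertness constraint can be illustrated numerically. The sketch below (Python/NumPy) maximizes a covert rate over the covert transmit power subject to a worst-case DEP requirement, lower-bounding the DEP via Pinsker's inequality from the KL divergence between the warden's two Gaussian hypotheses; this simplified single-antenna model and all parameter values are our assumptions, not the paper's full FD system model.

```python
import numpy as np

sigma2, eps, n_uses = 1.0, 0.1, 100   # warden noise power, DEP slack, blocklength

def worst_case_dep_lower_bound(p_cov):
    # KL between N(0, sigma2) and N(0, sigma2 + p_cov) over n_uses channel uses
    r = sigma2 / (sigma2 + p_cov)
    kl = 0.5 * (r - 1.0 - np.log(r)) * n_uses
    return 1.0 - np.sqrt(kl / 2.0)    # Pinsker: DEP >= 1 - sqrt(KL/2)

best = max(
    (np.log2(1 + p / sigma2), p)      # assumed rate at the hidden receiver
    for p in np.linspace(1e-4, 0.5, 2000)
    if worst_case_dep_lower_bound(p) >= 1.0 - eps
)
print(f"covert rate ~ {best[0]:.4f} bits/use at power {best[1]:.4f}")
```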

5.
Entropy (Basel); 25(9), 2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37761641

ABSTRACT

We examine the effects of imperfect phase estimation of a reference signal on the bit error rate and mutual information over a communication channel affected by fading and thermal noise. The Two-Wave Diffuse-Power (TWDP) model is utilized for statistical characterization of a propagation environment with two dominant line-of-sight components alongside diffuse ones. We derive a novel analytical Fourier-series expression for the probability density function of the composite received signal phase. Further, an expression for the bit error rate is presented and numerically evaluated. We develop efficient analytical, numerical, and simulation methods for estimating the value of the error floor and identifying the range of acceptable signal-to-noise ratio (SNR) values in cases where the floor is present during detection of multilevel phase-shift keying (PSK) signals. In addition, we use Monte Carlo simulations to evaluate the mutual information for modulation orders two, four, and eight, and identify its dependence on receiver hardware imperfections under the given channel conditions. Our results expose a direct correspondence between the bit error rate and mutual information on one hand, and the TWDP channel parameters, SNR, and phase-noise standard deviation on the other. The results illustrate that error floor values are strongly influenced by phase noise when signals propagate over a TWDP channel, and that phase noise considerably affects the mutual information.
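A Monte Carlo sketch of this setup is given below (Python/NumPy): QPSK symbols pass through a TWDP channel, the receiver applies a phase-only correction corrupted by Gaussian phase noise, and the BER is estimated. The TWDP parameters K and Delta, the phase-noise standard deviation, and the choice of QPSK are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def twdp_gain(K, delta, n):
    # Two specular components with uniform random phases plus a diffuse part
    v1sq = K * (1 + np.sqrt(1 - delta**2)) / 2   # unit diffuse power: 2*sigma^2 = 1
    v2sq = K * (1 - np.sqrt(1 - delta**2)) / 2
    spec = (np.sqrt(v1sq) * np.exp(1j * rng.uniform(0, 2*np.pi, n))
            + np.sqrt(v2sq) * np.exp(1j * rng.uniform(0, 2*np.pi, n)))
    diff = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)
    return spec + diff

def qpsk_ber(snr_db, K=10.0, delta=0.9, sigma_phase=0.1, n=500_000):
    snr = 10**(snr_db / 10)
    bits = rng.integers(0, 2, (n, 2))
    sym = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)
    h = twdp_gain(K, delta, n)
    h /= np.sqrt(np.mean(np.abs(h)**2))                # unit average power
    noise = (rng.normal(size=n) + 1j*rng.normal(size=n)) * np.sqrt(0.5 / snr)
    y = h * sym + noise
    phase_err = rng.normal(0, sigma_phase, n)          # imperfect phase reference
    y_eq = y * np.exp(-1j * (np.angle(h) + phase_err)) # phase-only correction
    det = np.stack([y_eq.real > 0, y_eq.imag > 0], axis=1).astype(int)
    return np.mean(det != bits)

for s in (10, 20, 30):
    print(f"SNR {s} dB: BER ~ {qpsk_ber(s):.2e}")
```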

6.
Entropy (Basel); 25(7), 2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37509924

ABSTRACT

Using majorization theory via "Robin Hood" elementary operations, optimal lower and upper bounds are derived on Rényi and guessing entropies with respect to either error probability (yielding reverse-Fano and Fano inequalities) or total variation distance to the uniform (yielding reverse-Pinsker and Pinsker inequalities). This gives a general picture of how the notion of randomness can be measured in many areas of computer science.
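The classical inequalities that the paper sharpens are easy to check numerically. The sketch below (Python/NumPy) draws random pmfs and compares the MAP error probability with Shannon entropy via Fano's inequality, alongside the guessing entropy; the paper's tighter majorization-based bounds are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def h2(p):   # binary entropy in bits, clipped away from 0 and 1
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p*np.log2(p) - (1-p)*np.log2(1-p)

M = 8
for _ in range(5):
    p = np.sort(rng.dirichlet(np.ones(M)))[::-1]
    pe = 1 - p[0]                          # MAP error probability
    H = -(p * np.log2(p)).sum()            # Shannon entropy
    G = (np.arange(1, M + 1) * p).sum()    # guessing entropy (optimal order)
    fano_rhs = h2(pe) + pe * np.log2(M - 1)
    print(f"Pe={pe:.3f}  H={H:.3f} <= Fano bound {fano_rhs:.3f}: "
          f"{H <= fano_rhs + 1e-9}  G={G:.3f}")
```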

7.
Entropy (Basel); 25(4), 2023 Apr 16.
Article in English | MEDLINE | ID: mdl-37190456

ABSTRACT

The error probability of block codes sent under a non-uniform input distribution over the memoryless binary symmetric channel (BSC) and decoded via the maximum a posteriori (MAP) rule is investigated. It is proved that the ratio of the probability of MAP decoder ties to the probability of error when no MAP decoding ties occur grows at most linearly in blocklength, thus showing that decoder ties do not affect the code's error exponent. This result generalizes a similar recent result shown for block codes transmitted over the BSC under a uniform input distribution.
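The quantities being compared can be estimated for a toy code. The sketch below (Python/NumPy) runs MAP decoding of a small codebook with a non-uniform prior over a BSC and tallies ties and errors; the codebook, prior, and crossover probability are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps = 5, 0.2
codebook = np.array([[0,0,0,0,0], [1,1,1,0,0], [0,0,1,1,1], [1,1,0,1,1]])
prior = np.array([0.4, 0.3, 0.2, 0.1])        # non-uniform input distribution

ties = errors = 0
trials = 200_000
for _ in range(trials):
    m = rng.choice(4, p=prior)
    y = codebook[m] ^ (rng.random(n) < eps)   # BSC crossovers
    d = (codebook ^ y).sum(axis=1)            # Hamming distances to y
    # MAP metric over the BSC: prior * eps^d * (1-eps)^(n-d)
    post = prior * eps**d * (1 - eps)**(n - d)
    cand = np.flatnonzero(np.isclose(post, post.max()))
    if len(cand) > 1:
        ties += 1
    elif cand[0] != m:
        errors += 1

print(f"P(tie) ~ {ties/trials:.4f},  P(error, no tie) ~ {errors/trials:.4f}")
```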

8.
Sensors (Basel); 22(20), 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-36298194

ABSTRACT

This study considers a detection scheme for cooperative multiple-input multiple-output (MIMO) systems using one-bit analog-to-digital converters (ADCs) in a decode-and-forward (DF) relay protocol. One-bit ADCs are a promising technique for reducing the power consumption of future wireless systems comprising large numbers of antennas. However, the number of antennas remains limited on mobile devices owing to their size. Cooperative communication using a DF relay can resolve this limitation, but detection errors at the relay make it difficult to employ cooperative communication directly, and this difficulty is more severe in a MIMO system using one-bit ADCs due to its nonlinear nature. To address the difficulty efficiently, this paper proposes a detection scheme that mitigates the error propagation effect. The upper bound of the pairwise error probability (PEP) of one-bit ADCs is first derived in a weighted Hamming distance form. Then, using the derived PEP, the proposed detection for the DF relay protocol is expressed as a single weighted Hamming distance. Finally, the complexity of the proposed detection is analyzed in terms of real multiplications. Simulation results show that the proposed method efficiently mitigates the error propagation effect while maintaining relatively low complexity compared with conventional detection methods.
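The weighted-Hamming-distance idea can be seen in a single-hop toy example. The sketch below (Python/NumPy/SciPy) performs exact ML detection from one-bit observations of a real-valued BPSK MIMO channel; as the comment notes, the metric is, up to a candidate-dependent constant, a Hamming distance weighted by log-likelihood ratios. This is not the paper's exact two-hop relay metric.

```python
import numpy as np
from scipy.stats import norm
from itertools import product

rng = np.random.default_rng(2)
n_rx, n_tx, snr = 8, 2, 10.0
H = rng.normal(size=(n_rx, n_tx))
cands = np.array(list(product([-1.0, 1.0], repeat=n_tx)))   # BPSK candidates

x = cands[rng.integers(len(cands))]
sigma = np.sqrt(n_tx / snr)
y = np.sign(H @ x + rng.normal(scale=sigma, size=n_rx))     # one-bit observations

best, best_metric = None, np.inf
for c in cands:
    z = H @ c
    p_flip = norm.sf(np.abs(z) / sigma)          # P(observed sign flipped)
    mismatch = np.sign(z) != y
    # Exact one-bit ML; up to a candidate-dependent constant this equals a
    # Hamming distance between y and sign(Hc) weighted by log((1-p)/p)
    metric = -np.log(np.where(mismatch, p_flip, 1.0 - p_flip)).sum()
    if metric < best_metric:
        best, best_metric = c, metric
print("sent:", x, "detected:", best)
```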

9.
Pharm Stat; 21(1): 122-132, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34346169

ABSTRACT

The method of generalized pairwise comparisons (GPC) is a multivariate extension of the well-known non-parametric Wilcoxon-Mann-Whitney test. It allows comparing two groups of observations based on multiple hierarchically ordered endpoints, regardless of the number or type of the latter. The summary measure, the "net benefit," quantifies the difference between the probability that a random observation from one group does better than an observation from the opposite group and the probability of the opposite. The method takes into account the correlations between the endpoints. We performed a simulation study for the case of two hierarchical endpoints to evaluate the impact of their correlation on the type I error probability and power of the test based on GPC. The simulations show that the power of the GPC test for the primary endpoint is modified if the secondary endpoint is included in the hierarchical GPC analysis. The change in power depends on the correlation between the endpoints. Interestingly, a decrease in power can occur regardless of whether there is any marginal treatment effect on the secondary endpoint. It appears that the overall power of the hierarchical GPC procedure depends, in a complex manner, on the entire variance-covariance structure of the set of outcomes. Any additional factors (such as thresholds of clinical relevance, dropout, or the censoring scheme) will also affect the power and will have to be taken into account when designing a trial based on the hierarchical GPC procedure.
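A minimal implementation of GPC with two hierarchical endpoints is shown below (Python/NumPy); the thresholds of clinical relevance and the toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def net_benefit(a, b, tau=(0.5, 0.0)):
    """a, b: (n, 2) arrays; column 0 = primary endpoint (higher = better)."""
    wins = losses = 0
    for xi in a:
        for yj in b:
            for k in range(2):             # walk down the endpoint hierarchy
                d = xi[k] - yj[k]
                if d > tau[k]:
                    wins += 1; break
                if d < -tau[k]:
                    losses += 1; break
            # the pair stays tied if no endpoint is decisive
    return (wins - losses) / (len(a) * len(b))

treat = rng.normal([0.3, 0.2], 1.0, size=(60, 2))   # small benefit on both
ctrl = rng.normal([0.0, 0.0], 1.0, size=(60, 2))
print(f"net benefit ~ {net_benefit(treat, ctrl):+.3f}")
```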


Subjects
Research Design, Computer Simulation, Humans, Probability
10.
J Proteome Res; 20(4): 1997-2004, 2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33683901

ABSTRACT

MetaMorpheus is a free, open-source software program for the identification of peptides and proteoforms from data-dependent acquisition tandem MS experiments. There is inherent uncertainty in these assignments for several reasons, including the limited overlap between experimental and theoretical peaks, the m/z uncertainty, and noise peaks or peaks from coisolated peptides that produce false matches. False discovery rates provide only a set-wise approximation for incorrect spectrum matches. Here we implemented a binary decision tree calculation within MetaMorpheus to compute a posterior error probability, which provides a measure of uncertainty for each peptide-spectrum match. We demonstrate its utility for increasing identifications and resolving ambiguities in bottom-up, top-down, proteogenomic, and nonspecific digestion searches.
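The notion of a PSM-level PEP can be illustrated with a simple binned target-decoy estimate (below, Python/NumPy), assuming on average one decoy per incorrect target; MetaMorpheus's binary decision tree replaces this crude binning with a learned model, so the sketch shows only the underlying idea, on simulated scores.

```python
import numpy as np

rng = np.random.default_rng(4)
decoy = rng.normal(10, 2, 2000)                      # null (decoy) scores
target = np.concatenate([rng.normal(10, 2, 2000),    # incorrect matches
                         rng.normal(18, 3, 3000)])   # correct matches

bins = np.linspace(0, 30, 16)
d, _ = np.histogram(decoy, bins)
t, _ = np.histogram(target, bins)
with np.errstate(divide="ignore", invalid="ignore"):
    pep = np.clip(d / t, 0, 1)                       # PEP(bin) ~ decoys/targets
centers = (bins[:-1] + bins[1:]) / 2
for c, p, n in zip(centers, pep, t):
    if n > 100:
        print(f"score ~ {c:4.1f}: PEP ~ {p:.3f}")
```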


Subjects
Proteomics, Tandem Mass Spectrometry, Algorithms, Protein Databases, Peptides, Probability, Software
11.
Entropy (Basel); 23(8), 2021 Aug 13.
Article in English | MEDLINE | ID: mdl-34441185

ABSTRACT

In this paper, we investigate the problem of classifying feature vectors with mutually independent but non-identically distributed elements that take values from a finite alphabet. First, we show the importance of this problem. Next, we propose a classifier and derive an analytical upper bound on its error probability. We show that the error probability tends to zero as the length of the feature vectors grows, even when only one training feature vector per label is available. Thereby, we show that for this important problem at least one asymptotically optimal classifier exists. Finally, we provide numerical examples showing that the proposed classifier outperforms conventional classification algorithms when the amount of training data is small and the length of the feature vectors is sufficiently high.
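A hedged sketch of this setting is given below (Python/NumPy): features are independent but non-identically distributed over a finite alphabet, there is one training vector per label, and the test vector is assigned to the label whose training vector agrees with it in the most positions. This agreement-counting rule is our simplification, not necessarily the paper's exact classifier, but it exhibits the same vanishing-error behavior as the vector length grows.

```python
import numpy as np

rng = np.random.default_rng(9)

def run(n_features, n_labels=4, alphabet=3, trials=2000):
    errs = 0
    for _ in range(trials):
        # Each label has its own categorical distribution per position
        probs = rng.dirichlet(np.ones(alphabet), size=(n_labels, n_features))
        cum = probs.cumsum(axis=2)
        def sample(lbl):
            u = rng.random(n_features)
            return (u[:, None] < cum[lbl]).argmax(axis=1)
        train = np.array([sample(l) for l in range(n_labels)])  # one per label
        true = rng.integers(n_labels)
        test = sample(true)
        scores = (train == test).sum(axis=1)     # positionwise agreements
        errs += scores.argmax() != true
    return errs / trials

for n in (10, 50, 200):
    print(f"n={n:4d}: error ~ {run(n):.3f}")
```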

12.
Entropy (Basel); 23(5), 2021 Apr 28.
Article in English | MEDLINE | ID: mdl-33924782

ABSTRACT

In 2017, Polyanskiy showed that the trade-off between power and bandwidth efficiency for massive Gaussian random access is governed by two fundamentally different regimes: low power and high power. For both regimes, tight performance bounds were found by Zadik et al. in 2019. This work utilizes recent results on the exact block error probability of Gaussian random codes in additive white Gaussian noise to propose practical methods, based on iterative soft decoding, that closely approach these bounds. In the low power regime, orthogonal random codes can be applied directly. In the high power regime, a more sophisticated effort is needed, and power-profile optimization by means of linear programming, as pioneered by Caire et al. in 2001, proves a promising strategy. The proposed combination of orthogonal random coding and iterative soft decoding even outperforms the existence bounds of Zadik et al. in the low power regime and is very close to the non-existence bounds for message lengths around 100 and above. Finally, the approach of power optimization by linear programming proposed for the high power regime is found to benefit from power imbalances due to fading, which makes it even more attractive for typical mobile radio channels.
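The power-profile idea can be made concrete with a much simpler model than the paper's: the LP below (Python/SciPy) picks user powers so that successive interference cancellation meets a per-stage SINR target at minimum total power, producing the familiar geometric power spread. The density-evolution-based optimization in the paper is considerably more involved.

```python
import numpy as np
from scipy.optimize import linprog

K, gamma, n0 = 8, 1.0, 1.0   # users, per-stage SINR target, noise power
# Decode user 0 first; each stage needs P_k >= gamma * (n0 + sum_{j>k} P_j)
A_ub, b_ub = [], []
for k in range(K):
    row = np.zeros(K)
    row[k] = -1.0
    row[k + 1:] = gamma          # later users are residual interference
    A_ub.append(row)
    b_ub.append(-gamma * n0)

res = linprog(c=np.ones(K), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * K)
print("power profile:", np.round(res.x, 2))   # geometric spread across users
```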

13.
BMC Bioinformatics; 21(1): 173, 2020 May 04.
Article in English | MEDLINE | ID: mdl-32366221

ABSTRACT

BACKGROUND: In shotgun proteomics, database searching of tandem mass spectra results in a great number of peptide-spectrum matches (PSMs), many of which are false positives. Quality control of PSMs is a multiple hypothesis testing problem, and the false discovery rate (FDR) or the posterior error probability (PEP) is the commonly used statistical confidence measure. PEP, also called local FDR, can evaluate the confidence of individual PSMs and is thus more desirable than FDR, which evaluates the global confidence of a collection of PSMs. Estimation of PEP can be achieved by decomposing the null and alternative distributions of PSM scores, as long as the given data are sufficient. However, in many proteomic studies only a group (subset) of PSMs, e.g. those with specific post-translational modifications, are of interest. The group can be very small, making direct PEP estimation from the group data inaccurate, especially in the high-score region where the score threshold is taken. Using the whole set of PSMs to estimate the group PEP is also inappropriate, because the null and/or alternative distributions of the group can be very different from those of the combined scores. RESULTS: The transfer PEP algorithm is proposed to more accurately estimate the PEPs of peptide identifications in small groups. Transfer PEP derives the group null distribution through its empirical relationship with the combined null distribution, and estimates the group alternative distribution, as well as the null proportion, using an iterative semi-parametric method. Validated on both simulated and real proteomic data, transfer PEP showed remarkably higher accuracy than the direct combined and separate PEP estimation methods. CONCLUSIONS: We presented a novel approach to group PEP estimation for small groups and implemented it for the peptide identification problem in proteomics. The methodology is in principle applicable to small-group PEP estimation problems in other fields.
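The sketch below (Python/NumPy/SciPy) conveys the spirit of the approach: the null component is learned from a large combined decoy set and held fixed, while the small group's alternative component and null proportion are fitted by EM. Gaussian components stand in for the paper's semi-parametric estimates, and all data are simulated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
combined_decoys = rng.normal(10, 2, 20000)           # large combined null set
mu0, sd0 = combined_decoys.mean(), combined_decoys.std()

# Small group of interest: a few incorrect plus correct matches
group = np.concatenate([rng.normal(10, 2, 80), rng.normal(16, 2.5, 120)])

pi0, mu1, sd1 = 0.5, group.max(), 2.0                # EM initialization
for _ in range(200):
    f0 = norm.pdf(group, mu0, sd0)                   # fixed null component
    f1 = norm.pdf(group, mu1, sd1)                   # fitted alternative
    r1 = (1 - pi0) * f1 / (pi0 * f0 + (1 - pi0) * f1)  # P(correct | score)
    pi0 = 1 - r1.mean()
    mu1 = (r1 * group).sum() / r1.sum()
    sd1 = np.sqrt((r1 * (group - mu1)**2).sum() / r1.sum())

pep = pi0 * norm.pdf(group, mu0, sd0) / (
    pi0 * norm.pdf(group, mu0, sd0) + (1 - pi0) * norm.pdf(group, mu1, sd1))
for s in (10, 13, 16, 19):
    i = np.argmin(np.abs(group - s))
    print(f"score ~ {s}: PEP ~ {pep[i]:.3f}")
```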


Subjects
Mathematical Computing, Peptides/chemistry, Algorithms, Probability, Post-Translational Protein Processing, Proteomics, Tandem Mass Spectrometry
14.
Sensors (Basel); 20(24), 2020 Dec 10.
Article in English | MEDLINE | ID: mdl-33322078

ABSTRACT

In this paper, we propose a multipath-assisted device-free localization (DFL) system that uses magnitude and phase information (MAMPI). The system employs ultra-wideband (UWB) channel impulse response (CIR) measurements, enabling the extraction of several multipath components (MPCs), and thereby benefits from multipath propagation. We propose a radio propagation model that calculates the effect of a person's position within a target area on the received signal, together with a validated error model for the measurements, and we explain the construction of different feature vectors and the extraction of MPCs from Decawave DW1000 CIR measurements. We evaluate the system via simulations of the position error probability and a measurement setup in an indoor scenario, comparing MAMPI to a conventional DFL system based on four sensor nodes that measures radio signal strength values. Combining magnitude and phase differences in the feature vectors yields a position error probability comparable to that of the conventional system while requiring only two sensor nodes.
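The matching step implied above can be sketched as nearest-neighbor fingerprinting (below, Python/NumPy): feature vectors are built from MPC magnitudes and inter-node phase differences and compared against a grid of reference fingerprints. The propagation model itself, i.e., how a person's position perturbs each MPC, is abstracted into a stored database, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(12)
n_pos, n_mpc = 25, 4                    # 5x5 position grid, 4 MPCs per node
# Hypothetical complex MPC gains for two nodes at each reference position
db = (rng.normal(size=(n_pos, 2, n_mpc))
      + 1j * rng.normal(size=(n_pos, 2, n_mpc)))

def features(cir):
    mags = np.abs(cir).ravel()                  # magnitudes at both nodes
    dphi = np.angle(cir[0] * np.conj(cir[1]))   # inter-node phase differences
    return np.concatenate([mags, dphi])

fingerprints = np.array([features(c) for c in db])
true = 17
meas = db[true] + 0.1 * (rng.normal(size=(2, n_mpc))
                         + 1j * rng.normal(size=(2, n_mpc)))
est = np.argmin(np.linalg.norm(fingerprints - features(meas), axis=1))
print(f"true position {true}, estimated {est}")
```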

15.
Sensors (Basel); 21(1), 2020 Dec 27.
Article in English | MEDLINE | ID: mdl-33375446

ABSTRACT

Non-orthogonal multiple access schemes with grant-free access have recently been highlighted as a prominent solution to meet the stringent requirements of massive machine-type communications (mMTCs). In particular, the multi-user shared access (MUSA) scheme has shown great potential for providing grant-free access to the available resources. For the sake of simplicity, MUSA is generally conducted with the successive interference cancellation (SIC) receiver, which offers low decoding complexity. However, this family of receivers requires sufficiently diversified received user powers in order to ensure the best performance and avoid error propagation. Power allocation is a complicated issue, especially for decentralized decisions with minimal signaling overhead. In this paper, we propose a novel algorithm for autonomous power decisions with minimal overhead, based on a tight approximation of the bit error probability (BEP) that accounts for error propagation. We investigate the efficiency of multi-armed bandit (MAB) approaches to this problem in two different reward scenarios: (i) in Scenario 1, each user's reward indicates only whether its own packet was successfully transmitted; (ii) in Scenario 2, each user's reward may carry information about the other interfering users' packets. The performance of the proposed algorithm and the MAB techniques is compared in terms of the successful transmission rate. The simulation results show that the MAB algorithms perform better in the second scenario than in the first; however, in both scenarios, the proposed algorithm outperforms the MAB techniques with lower complexity at the user equipment.
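A toy version of reward Scenario 1 is sketched below (Python/NumPy): two users choose between two power levels with UCB1 (random tie-breaking), and the SIC receiver succeeds exactly when the chosen levels differ. This collision model is a deliberate simplification of the BEP-based formulation in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_levels, T, n_users = 2, 5000, 2

counts = np.ones((n_users, n_levels))    # play counts (init 1 to avoid /0)
means = np.zeros((n_users, n_levels))    # empirical mean rewards
succ = 0
for t in range(1, T + 1):
    ucb = means + np.sqrt(2 * np.log(t) / counts)
    picks = (ucb + rng.normal(0, 1e-3, ucb.shape)).argmax(axis=1)  # random ties
    ok = picks[0] != picks[1]            # SIC toy model: distinct levels decode
    succ += ok
    for u in range(n_users):
        r = float(ok)                    # Scenario 1: own packet outcome only
        counts[u, picks[u]] += 1
        means[u, picks[u]] += (r - means[u, picks[u]]) / counts[u, picks[u]]

print(f"successful transmission rate ~ {succ / T:.3f}")
```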

16.
Entropy (Basel); 22(8), 2020 Jul 30.
Article in English | MEDLINE | ID: mdl-33286611

ABSTRACT

Spatial modulation (SM) is a multiple-input multiple-output (MIMO) technique that achieves MIMO capacity by conveying information through antenna indices while keeping the transmitter as simple as that of a single-input system. Quadrature SM (QSM) expands the spatial dimension of SM into in-phase and quadrature dimensions, which are used to transmit the real and imaginary parts of a signal symbol, respectively. Parallel QSM (PQSM) was recently proposed to further increase spectral efficiency: transmit antennas are split into parallel groups, and QSM is performed independently in each group using the same signal symbol. In this paper, we analytically model the asymptotic pairwise error probability of PQSM. Accordingly, constellation design for PQSM is formulated as an optimization problem over a sum of multivariate functions. We provide the proposed constellations for several values of the constellation size, number of transmit antennas, and number of receive antennas. Simulation results show that the proposed constellation outperforms the phase-shift keying (PSK) constellation by more than 10 dB and the quadrature amplitude modulation (QAM) schemes by approximately 5 dB for large constellations and numbers of transmit antennas.
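The QSM mapping underlying PQSM is compact enough to show directly (below, Python/NumPy): the real and imaginary parts of one symbol are radiated from independently selected transmit antennas, so log2(Nt) bits ride on each antenna index in addition to the symbol bits. The antenna count and QPSK alphabet are illustrative.

```python
import numpy as np

QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def qsm_map(bits, n_tx=4, constellation=QPSK):
    """bits -> transmit vector: [symbol bits | Re-antenna bits | Im-antenna bits]."""
    k_ant = int(np.log2(n_tx))
    k_sym = int(np.log2(len(constellation)))
    assert len(bits) == k_sym + 2 * k_ant
    s = constellation[int("".join(map(str, bits[:k_sym])), 2)]
    i_re = int("".join(map(str, bits[k_sym:k_sym + k_ant])), 2)
    i_im = int("".join(map(str, bits[k_sym + k_ant:])), 2)
    x = np.zeros(n_tx, dtype=complex)
    x[i_re] += s.real            # real part sent from one antenna
    x[i_im] += 1j * s.imag       # imaginary part possibly from another
    return x

print(qsm_map([1, 0, 0, 1, 1, 0]))   # 6 bits/channel use with Nt=4 and QPSK
```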

17.
Sensors (Basel); 19(24), 2019 Dec 06.
Article in English | MEDLINE | ID: mdl-31817712

ABSTRACT

Full-duplex (FD) communication and spatial modulation (SM) are two promising techniques for achieving high spectral efficiency. Recent works have investigated combining the FD mode with SM in relay systems to exploit the advantages of both. In this paper, we analyze the performance of the FD-SM decode-and-forward (DF) relay system and derive a closed-form expression for the symbol error probability (SEP). To tackle the residual self-interference (RSI) caused by the FD mode at the relay, we propose a simple yet effective power allocation algorithm that compensates for the RSI impact and improves the system's SEP performance. Both numerical and simulation results confirm the accuracy of the derived SEP expression and the efficacy of the proposed optimal power allocation.
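The power-allocation trade-off can be visualized with a short grid search (below, Python/NumPy): RSI at the relay grows with the relay's own transmit power, so splitting a total budget between source and relay balances the two hops. The linear RSI model and the channel gains are our assumptions, not the paper's exact formulation.

```python
import numpy as np

g1, g2, sigma2, k_rsi, P = 1.0, 1.5, 1.0, 0.05, 10.0   # assumed gains/RSI factor

alphas = np.linspace(0.01, 0.99, 981)      # source's share of the total power
ps, pr = alphas * P, (1 - alphas) * P
snr1 = ps * g1 / (sigma2 + k_rsi * pr)     # source -> relay, RSI-limited
snr2 = pr * g2 / sigma2                    # relay -> destination
e2e = np.minimum(snr1, snr2)               # DF bottleneck SNR
i = e2e.argmax()
print(f"optimal source share ~ {alphas[i]:.2f}, end-to-end SNR ~ {e2e[i]:.2f}")
```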

18.
Sensors (Basel); 18(2), 2018 Feb 10.
Article in English | MEDLINE | ID: mdl-29439413

ABSTRACT

In this paper, we propose a novel low-complexity multi-user superposition transmission (MUST) technique for 5G downlink networks, which allows multiple cell-edge users to be multiplexed with a single cell-center user. We call the proposed technique diversity-controlled MUST, since the cell-center user enjoys a frequency diversity effect via signal repetition over multiple orthogonal frequency division multiplexing (OFDM) sub-carriers. We assume that the base station is equipped with a single antenna while users are equipped with multiple antennas, and that quadrature phase shift keying (QPSK) modulation is used. We mathematically analyze the bit error rate (BER) of both cell-edge and cell-center users, which, to the best of our knowledge, is the first such theoretical result in the literature. The mathematical analysis is validated through extensive link-level simulations.
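A link-level sketch of the repetition mechanism is given below (Python/NumPy): the cell-center QPSK symbol is superimposed with the smaller power share on different cell-edge symbols on two sub-carriers, and the receiver cancels the stronger cell-edge symbol on each copy before combining. The power split, Rayleigh channels, and the simple SIC-plus-MRC receiver are our assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(10)
n, p_edge = 200_000, 0.8                 # symbols, cell-edge power share
snr = 10**(15 / 10)
qpsk = lambda b: ((2*b[..., 0] - 1) + 1j*(2*b[..., 1] - 1)) / np.sqrt(2)

bc = rng.integers(0, 2, (n, 2))          # cell-center bits
sc = qpsk(bc) * np.sqrt(1 - p_edge)
y_comb = np.zeros(n, dtype=complex)
for _ in range(2):                       # two sub-carriers (signal repetition)
    se = qpsk(rng.integers(0, 2, (n, 2))) * np.sqrt(p_edge)  # fresh edge symbol
    h = (rng.normal(size=n) + 1j*rng.normal(size=n)) / np.sqrt(2)
    w = (rng.normal(size=n) + 1j*rng.normal(size=n)) * np.sqrt(0.5 / snr)
    y = h * (sc + se) + w
    # Cell-center receiver: detect and cancel the stronger cell-edge symbol
    edge_hat = qpsk(np.stack([(y/h).real > 0, (y/h).imag > 0], axis=1).astype(int))
    y_comb += np.conj(h) * (y - h * np.sqrt(p_edge) * edge_hat)  # MRC combine

det = np.stack([y_comb.real > 0, y_comb.imag > 0], axis=1).astype(int)
print(f"cell-center BER ~ {np.mean(det != bc):.4e}")
```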

19.
Entropy (Basel); 20(3), 2018 Mar 17.
Article in English | MEDLINE | ID: mdl-33265294

ABSTRACT

Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental, usually difficult, and under-studied problem. In this work, we consider two Signal-to-Noise-Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signal is either a noisy rank-$R$ tensor admitting a $Q$-order Canonical Polyadic Decomposition (CPD) with large factors of size $N_q \times R$ for $1 \le q \le Q$, where $R, N_q \to \infty$ with $R^{1/q}/N_q$ converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear $(M_1, \ldots, M_Q)$-rank with large factors of size $N_q \times M_q$ for $1 \le q \le Q$, where $N_q, M_q \to \infty$ with $M_q/N_q$ converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study, since an exact derivation of the minimal Bayes error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNR and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value minimizing the error exponent, denoted by $s^\star$. In general, due to the asymmetry of the $s$-divergence, the Bhattacharyya Upper Bound (BUB), that is, the Chernoff information calculated at $s^\star = 1/2$, cannot solve this problem effectively, so one must rely on a costly numerical optimization strategy to find $s^\star$. However, thanks to powerful random matrix theory tools, a simple analytical expression of $s^\star$ is provided with respect to the SNR in the two schemes considered. This work shows that the BUB is the tightest bound at low SNR; however, at higher SNR this property no longer holds.
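The $s$-optimization is easy to reproduce in a scalar stand-in for the tensor problem (below, Python/SciPy): the two hypotheses are zero-mean Gaussians whose variances differ by the SNR, and the Chernoff exponent is optimized over $s$ and compared with the Bhattacharyya choice $s = 1/2$. The 1-D Gaussian model is our simplification.

```python
import numpy as np
from scipy.optimize import minimize_scalar

v0, v1 = 1.0, 1.0 + 5.0          # noise-only vs. signal-plus-noise variance

def chernoff_exponent(s):
    # -log integral p0^s p1^(1-s) dx for N(0, v0) vs N(0, v1)
    return 0.5 * (s * np.log(v0) + (1 - s) * np.log(v1)
                  + np.log(s / v0 + (1 - s) / v1))

res = minimize_scalar(lambda s: -chernoff_exponent(s),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
s_star = res.x
print(f"s* = {s_star:.3f}, Chernoff exponent {chernoff_exponent(s_star):.4f}")
print(f"Bhattacharyya (s = 1/2) exponent {chernoff_exponent(0.5):.4f}")
```

Because the two variances are unequal, the optimizing $s^\star$ lands away from $1/2$, illustrating why the Bhattacharyya choice is not tight in general.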

20.
BMC Med Res Methodol; 17(1): 159, 2017 Dec 04.
Article in English | MEDLINE | ID: mdl-29202708

ABSTRACT

BACKGROUND: Randomization is considered a key feature for protecting against bias in randomized clinical trials. Randomization induces comparability with respect to known and unknown covariates, mitigates selection bias, and provides a basis for inference. Although various randomization procedures have been proposed, no single procedure performs uniformly best. In the design phase of a clinical trial, the scientist has to decide which randomization procedure to use, taking into account the practical setting of the trial with respect to the potential for bias. Less emphasis has been placed on this important design decision than on analysis, and less support has been available to guide the scientist in making it. METHODS: We propose a framework that weights the properties of the randomization procedure with respect to the practical needs of the research question to be answered by the clinical trial. In particular, the framework assesses the impact of chronological and selection bias on the probability of a type I error. The framework is applied to a case study of a two-arm, parallel-group, single-center randomized clinical trial with a continuous endpoint, no interim analysis, 1:1 allocation, and no adaptation in the randomization process. RESULTS: We derive scientific arguments for selecting an appropriate randomization procedure and develop a template, illustrated in parallel by the case study; possible extensions are discussed. CONCLUSION: The proposed ERDO framework guides the investigator through a template for the choice of a randomization procedure and provides easy-to-use assessment tools. The barriers to thorough reporting and assessment of randomization procedures could be further reduced if regulators and pharmaceutical companies employed similar, standardized frameworks for the choice of a randomization procedure.
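The selection-bias assessment can be mimicked in a few lines (below, Python/NumPy/SciPy): under permuted-block randomization, a recruiter guessing the next allocation via the Blackwell-Hodges convergence strategy shifts patient responses by eta, and the empirical type I error of the t-test inflates. The block size, bias magnitude, and sample size are illustrative, not the framework's calibrated values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def one_trial(n=48, block=4, eta=0.5):
    seq = np.concatenate([rng.permutation([0]*(block//2) + [1]*(block//2))
                          for _ in range(n // block)])
    y = np.empty(n)
    n0 = n1 = 0
    for i, arm in enumerate(seq):
        # Recruiter guesses the under-represented arm in the current block
        # and enrolls a healthier patient when guessing the treatment arm
        guess = 1 if n1 <= n0 else 0
        y[i] = rng.normal(eta if guess == 1 else -eta, 1.0)  # H0: no true effect
        n0, n1 = n0 + (arm == 0), n1 + (arm == 1)
        if (i + 1) % block == 0:
            n0 = n1 = 0                  # block completed, counts reset
    return stats.ttest_ind(y[seq == 1], y[seq == 0]).pvalue < 0.05

print(f"empirical type I error ~ {np.mean([one_trial() for _ in range(2000)]):.3f}")
```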


Subjects
Randomized Controlled Trials as Topic/methods, Algorithms, Humans, Random Allocation, Selection Bias