Results 1 - 20 of 120
1.
Proc Natl Acad Sci U S A ; 119(4)2022 01 25.
Article in English | MEDLINE | ID: mdl-35046025

ABSTRACT

The ongoing COVID-19 pandemic underscores the importance of developing reliable forecasts that would allow decision makers to devise appropriate response strategies. Despite much recent research on the topic, epidemic forecasting remains poorly understood. Researchers have attributed the difficulty of forecasting contagion dynamics to a multitude of factors, including complex behavioral responses, uncertainty in data, the stochastic nature of the underlying process, and the high sensitivity of the disease parameters to changes in the environment. We offer a rigorous explanation of the difficulty of short-term forecasting on networked populations using ideas from computational complexity. Specifically, we show that several forecasting problems (e.g., the probability that at least a given number of people will get infected at a given time and the probability that the number of infections will reach a peak at a given time) are computationally intractable. For instance, efficient solvability of such problems would imply that the number of satisfying assignments of an arbitrary Boolean formula in conjunctive normal form can be computed efficiently, violating a widely believed hypothesis in computational complexity. This intractability result holds even under the ideal situation, where all the disease parameters are known and are assumed to be insensitive to changes in the environment. From a computational complexity viewpoint, our results, which show that contagion dynamics become unpredictable for both macroscopic and individual properties, bring out some fundamental difficulties of predicting disease parameters. On the positive side, we develop efficient algorithms or approximation algorithms for restricted versions of forecasting problems.
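The reduction at the heart of this argument targets #SAT, counting the satisfying assignments of a CNF formula. A brute-force counter makes the exponential cost concrete; this is an illustrative sketch of the target counting problem, not the paper's reduction from forecasting.

```python
from itertools import product

def count_sat(num_vars, clauses):
    """Count satisfying assignments of a CNF formula (#SAT) by brute force.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where literal k means variable k is true and -k means it is false.
    The loop visits all 2^num_vars assignments -- exactly the exponential
    blow-up that makes #SAT (and, per the paper's reduction, exact
    epidemic forecasting) intractable at scale.
    """
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x3): 4 of the 8 assignments satisfy it.
print(count_sat(3, [[1, 2], [-1, 3]]))  # 4
```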


Subjects
Epidemiologic Models , Forecasting/methods , Algorithms , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19/transmission , Humans , Probability , SARS-CoV-2 , Time Factors
2.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275576

ABSTRACT

Wi-Fi fingerprint-based indoor localization methods are effective in static environments but encounter challenges in dynamic, real-world scenarios due to evolving fingerprint patterns and feature spaces. This study investigates the temporal variations in signal strength over a 25-month period to enhance adaptive long-term Wi-Fi localization. Key aspects explored include the significance of signal features, the effects of sampling fluctuations, and overall accuracy measured by mean absolute error. Techniques such as mean-based feature selection, principal component analysis (PCA), and functional discriminant analysis (FDA) were employed to analyze signal features. The proposed algorithm, Ada-LT IP, which incorporates data reduction and transfer learning, shows improved accuracy compared to state-of-the-art methods evaluated in the study. Additionally, the study addresses multicollinearity through PCA and covariance analysis, revealing a reduction in computational complexity and enhanced accuracy for the proposed method, thereby providing valuable insights for improving adaptive long-term Wi-Fi indoor localization systems.
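As a sketch of the PCA step used to tame multicollinearity among RSSI features, the following numpy-only example projects synthetic fingerprints (not data from the study, and not the paper's Ada-LT IP algorithm) onto their top principal components:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components.

    A minimal SVD-based PCA of the kind used to decorrelate Wi-Fi RSSI
    features and reduce multicollinearity before localization.
    """
    Xc = X - X.mean(axis=0)              # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # scores in the top-k subspace

rng = np.random.default_rng(0)
# 200 synthetic fingerprints over 10 APs; columns are highly correlated
# because they are driven by only 2 latent position factors.
base = rng.normal(size=(200, 2))
X = base @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(200, 10))
Z = pca_reduce(X, 2)
print(Z.shape)  # (200, 2)
```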

3.
Sensors (Basel) ; 24(2)2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38257502

ABSTRACT

A Global Navigation Satellite System (GNSS) is widely used today for both positioning and timing purposes. Many distinct receiver chips are available off-the-shelf as Application-Specific Integrated Circuits (ASICs), each tailored to the requirements of various applications. These chips deliver good performance and low energy consumption but offer customers little-to-no transparency about their internal features. This prevents modification, research into GNSS processing chain enhancement (e.g., the application of Approximate Computing (AxC) techniques), and design space exploration to find the optimal receiver for a use case. In this paper, we review the GNSS processing chain using SyDR, our open-source GNSS Software-Defined Radio (SDR) designed for algorithm benchmarking, and highlight the limitations of a software-only environment. In response, we propose an evolution of our system, called Hard SyDR, that moves closer to the hardware layer and accesses new Key Performance Indicators (KPIs), such as power/energy consumption and resource utilization. We use High-Level Synthesis (HLS) and the PYNQ platform to ease our development process and provide an overview of their advantages/limitations in our project. Finally, we evaluate the foreseen developments, including how this work can serve as the foundation for an exploration of AxC techniques in future low-power GNSS receivers.

4.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785634

ABSTRACT

In brain imaging segmentation, precise tumor delineation is crucial for diagnosis and treatment planning. Traditional approaches include convolutional neural networks (CNNs), which struggle with processing sequential data, and transformer models that face limitations in maintaining computational efficiency with large-scale data. This study introduces MambaBTS: a model that synergizes the strengths of CNNs and transformers, is inspired by the Mamba architecture, and integrates cascade residual multi-scale convolutional kernels. The model employs a mixed loss function that blends dice loss with cross-entropy to refine segmentation accuracy effectively. This novel approach reduces computational complexity, enhances the receptive field, and demonstrates superior performance for accurately segmenting brain tumors in MRI images. Experiments on the MICCAI BraTS 2019 dataset show that MambaBTS achieves dice coefficients of 0.8450 for the whole tumor (WT), 0.8606 for the tumor core (TC), and 0.7796 for the enhancing tumor (ET) and outperforms existing models in terms of accuracy, computational efficiency, and parameter efficiency. These results underscore the model's potential to offer a balanced, efficient, and effective segmentation method, overcoming the constraints of existing models and promising significant improvements in clinical diagnostics and planning.
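The mixed loss can be written down directly. In this numpy sketch the equal dice/cross-entropy weighting `alpha` is an assumption, since the abstract does not give MambaBTS's exact blend:

```python
import numpy as np

def mixed_loss(pred, target, alpha=0.5, eps=1e-7):
    """Blend of soft dice loss and binary cross-entropy.

    pred: predicted foreground probabilities in (0, 1); target: binary
    mask. Dice rewards region overlap, cross-entropy penalizes per-pixel
    miscalibration; blending them is the refinement strategy the paper
    describes. The 50/50 weight `alpha` is illustrative only.
    """
    pred = np.clip(pred, eps, 1 - eps)
    dice = 1 - (2 * (pred * target).sum() + eps) / (pred.sum() + target.sum() + eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()
    return alpha * dice + (1 - alpha) * bce

target = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])   # close to the mask
bad = np.array([0.2, 0.1, 0.8, 0.9])    # mostly wrong
assert mixed_loss(good, target) < mixed_loss(bad, target)
```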

5.
BMC Bioinformatics ; 24(1): 435, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37974081

ABSTRACT

Biclustering of biologically meaningful binary information is essential in many applications related to drug discovery, like protein-protein interactions and gene expressions. However, for robust performance on recently emerging large health datasets, it is important for new biclustering algorithms to be scalable and fast. We present a rapid unsupervised biclustering (RUBic) algorithm that achieves this objective with a novel encoding and search strategy. RUBic significantly reduces computational overhead on both synthetic and experimental datasets, showing clear computational benefits with respect to several state-of-the-art biclustering algorithms. On 100 synthetic binary datasets, our method took [Formula: see text] s to extract 494,872 biclusters. On the human PPI database of size [Formula: see text], our method generates 1840 biclusters in [Formula: see text] s. On a central nervous system embryonic tumor gene expression dataset of size 712,940, our algorithm takes approximately 101 min to produce 747,069 biclusters, while recent competing algorithms take significantly more time to produce the same result. RUBic was also evaluated on five different gene expression datasets and shows significant speed-up in execution time with respect to existing approaches for extracting significant KEGG-enriched biclusters. RUBic can operate in two modes, base and flex: base mode generates maximal biclusters, while flex mode generates fewer clusters, faster, based on their biological significance with respect to KEGG pathways. The code is available at ( https://github.com/CMATERJU-BIOINFO/RUBic ) for academic use only.


Subjects
Algorithms , Data Management , Humans , Databases, Factual , Cluster Analysis , Gene Expression Profiling/methods
6.
Stat Med ; 42(23): 4207-4235, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37527835

ABSTRACT

Additive frailty models are used to model correlated survival data. However, the complexity of the models increases with cluster size to the extent that practical usage becomes increasingly challenging. We present a modification of the additive genetic gamma frailty (AGGF) model, the lean AGGF (L-AGGF) model, which alleviates some of these challenges by using a leaner additive decomposition of the frailty. The performances of the models were compared and evaluated in a simulation study. The L-AGGF model was used to analyze population-wide data on clustering of melanoma in 2 391 125 two-generational Norwegian families, 1960-2015. Using this model, we could analyze the complete data set, while the original model limited the analysis to a restricted data set (with cluster sizes ≤ 7). We found a substantial clustering of melanoma in Norwegian families and large heterogeneity in melanoma risk across the population, where 52% of the frailty was attributed to the 10% of the population at highest unobserved risk. Due to the improved scalability, the L-AGGF model enables a wider range of analyses of population-wide data compared to the AGGF model. Moreover, the methods outlined here make it possible to perform these analyses in a computationally efficient manner.


Subjects
Frailty , Melanoma , Humans , Models, Statistical , Frailty/epidemiology , Computer Simulation , Cluster Analysis , Melanoma/epidemiology , Melanoma/genetics , Survival Analysis
7.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991750

ABSTRACT

Spiking neural networks (SNNs) are a topic of growing interest. They more closely resemble actual neural networks in the brain than their second-generation counterparts, artificial neural networks (ANNs). SNNs have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware. This can yield drastic maintenance cost reductions for neural network models, as the energy consumption would be much lower than that of regular deep learning models hosted in the cloud today. However, such hardware is still not widely available. On standard computer architectures, consisting mainly of central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in terms of execution speed due to their simpler models of neurons and of the connections between neurons. In general, they also win in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts in typical machine learning benchmark tasks, such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
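Since the review centers on the units these learning algorithms train, a minimal leaky integrate-and-fire (LIF) neuron, the simplest common SNN building block, can be sketched as follows; all parameter values are illustrative, not tied to any benchmark in the review.

```python
def lif_spikes(current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Euler integration of dv/dt = (-v + I) / tau; the neuron emits a
    spike and resets whenever the membrane potential crosses threshold.
    Returns the list of spike time indices.
    """
    v, spikes = 0.0, []
    for t, i_t in enumerate(current):
        v += dt * (-v + i_t) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant super-threshold input yields a regular spike train;
# a sub-threshold input never makes the neuron fire.
spikes = lif_spikes([1.5] * 100)
print(len(spikes), spikes[:3])
print(lif_spikes([0.5] * 100))  # []
```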


Subjects
Algorithms , Neural Networks, Computer , Humans , Action Potentials/physiology , Computers , Brain/physiology
8.
Sensors (Basel) ; 23(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37571550

ABSTRACT

In recent years, environmental sound classification (ESC) has prevailed in many artificial intelligence Internet of Things (AIoT) applications, as environmental sound contains a wealth of information that can be used to detect particular events. However, existing ESC methods have high computational complexity and are not suitable for deployment on AIoT devices with constrained computing resources. Therefore, it is of great importance to propose a model with both high classification accuracy and low computational complexity. In this work, a new ESC method named BSN-ESC is proposed, including a big-small network-based ESC model that can assess the classification difficulty level and adaptively activate a big or small network for classification, as well as a pre-classification processing technique with logmel spectrogram refining, which prevents distortion in the frequency-domain characteristics of the sound clip at the joint of two adjacent sound clips. With the proposed methods, the computational complexity is significantly reduced, while the classification accuracy remains high. The proposed BSN-ESC model is implemented on both CPU and FPGA to evaluate its performance on both PC and embedded systems with ESC-50, the most commonly used dataset. The proposed model achieves the lowest computational complexity, with a floating-point operation (FLOP) count of only 0.123G, a reduction of up to 2309 times in computational complexity compared with state-of-the-art methods, while delivering a high classification accuracy of 89.25%. This work enables the application of ESC on AIoT devices with constrained computational resources.
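The big-small routing idea can be sketched in a few lines: run the cheap network first and invoke the expensive one only when the small network is unsure. The confidence-threshold gate and the toy stand-in networks below are assumptions for illustration; the paper's actual difficulty assessor is not described in the abstract.

```python
def classify(clip, small_net, big_net, confidence_threshold=0.8):
    """Big-small network routing: cheap model first, expensive fallback.

    small_net and big_net each return (label, confidence). Easy inputs
    pay only the small network's cost, which is how this style of model
    cuts average computational complexity.
    """
    label, conf = small_net(clip)
    if conf >= confidence_threshold:
        return label, "small"
    label, _ = big_net(clip)
    return label, "big"

# Toy stand-ins: the small net is confident only on "easy" clips.
small = lambda c: ("rain", 0.95) if c == "easy" else ("rain", 0.4)
big = lambda c: ("siren", 0.99)
assert classify("easy", small, big) == ("rain", "small")
assert classify("hard", small, big) == ("siren", "big")
```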

9.
Sensors (Basel) ; 23(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687916

ABSTRACT

This research presents a comprehensive study of the dichotomous search iterative parabolic discrete time Fourier transform (Ds-IpDTFT) estimator, a novel approach for fine frequency estimation in noisy exponential signals. The proposed estimator leverages a dichotomous search process before iterative interpolation estimation, which significantly reduces computational complexity while maintaining high estimation accuracy. An in-depth exploration of the relationship between the optimal parameter p and the unknown parameter δ forms the backbone of the methodology. Through extensive simulations and real-world experiments, the Ds-IpDTFT estimator exhibits superior performance relative to other established estimators, demonstrating robustness in noisy conditions and stability across varying frequencies. This efficient and accurate estimation method is a significant contribution to the field of signal processing and offers promising potential for practical applications.
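A single coarse-search-plus-parabolic-interpolation step, the non-iterative core that estimators in this family refine, can be sketched as follows; the dichotomous search and iteration of the actual Ds-IpDTFT method are omitted, so this is a simplified cousin, not the paper's estimator.

```python
import numpy as np

def coarse_plus_parabolic(x):
    """Coarse FFT peak search followed by one parabolic refinement.

    Fits a parabola through the log-magnitude spectrum at the peak bin
    and its two neighbors, and returns the refined normalized frequency
    estimate in [0, 1).
    """
    n = len(x)
    logmag = np.log(np.abs(np.fft.fft(x)))
    k = int(np.argmax(logmag))
    a, b, c = logmag[k - 1], logmag[k], logmag[(k + 1) % n]
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # vertex of the parabola
    return ((k + delta) % n) / n

f0, n = 0.1, 64
x = np.exp(2j * np.pi * f0 * np.arange(n))  # noiseless exponential
est = coarse_plus_parabolic(x)
print(abs(est - f0))  # residual error well under one FFT bin (1/64)
```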

10.
Sensors (Basel) ; 23(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177604

ABSTRACT

This work investigates the effectiveness of deep neural networks within the realm of battery charging. This is done by introducing an innovative control methodology that not only ensures safety and optimizes the charging current, but also substantially reduces the computational complexity with respect to traditional model-based approaches. In addition to their high computational costs, model-based approaches are also hindered by their need to accurately know the model parameters and the internal states of the battery, which are typically unmeasurable in a realistic scenario. In this regard, the deep learning-based methodology described in this work has been applied, to the best of the authors' knowledge, for the first time to scenarios where the battery's internal states cannot be measured and an estimate of the battery's parameters is unavailable. The reported results from the statistical validation of such a methodology underline the efficacy of this approach in approximating the optimal charging policy.

11.
Sensors (Basel) ; 23(5)2023 Feb 26.
Article in English | MEDLINE | ID: mdl-36904788

ABSTRACT

Hexagonal grid layouts are advantageous in microarray technology; however, hexagonal grids appear in many fields, especially given the rise of new nanostructures and metamaterials, leading to the need for image analysis on such structures. This work proposes a shock-filter-based approach driven by mathematical morphology for the segmentation of image objects disposed in a hexagonal grid. The original image is decomposed into a pair of rectangular grids, such that their superposition generates the initial image. Within each rectangular grid, the shock filters are once again used to confine the foreground information for each image object into an area of interest. The proposed methodology was successfully applied to microarray spot segmentation, while its generality is underlined by the segmentation results obtained for two other types of hexagonal grid layouts. Considering the segmentation accuracy through specific quality measures for microarray images, such as the mean absolute error and the coefficient of variation, high correlations of our computed spot intensity features with the annotated reference values were found, indicating the reliability of the proposed approach. Moreover, because the shock-filter PDE formalism targets the one-dimensional luminance profile function, the computational complexity of determining the grid is minimized. The order of growth of the computational complexity of our approach is at least one order of magnitude lower than that of state-of-the-art microarray segmentation approaches, ranging from classical to machine learning ones.

12.
Sensors (Basel) ; 23(24)2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38139643

ABSTRACT

To address error propagation and the exorbitant computational complexity of signal detection in wireless multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, a low-complexity and efficient signal detection scheme with iterative feedback is proposed, via constellation point feedback optimization of minimum mean square error-ordered successive interference cancellation (MMSE-OSIC), to approach optimal detection. Candidate vectors are formed by selecting candidate constellation points, and the candidate vector closest to the received signal is chosen by the maximum likelihood (ML) criterion, which reduces the error propagation caused by previous erroneous decisions and thus improves detection performance. Because the above iterative MMSE process involves a large number of matrix inversion operations, effective and fast signal detection is hard to achieve. A symmetric successive relaxation iterative algorithm is therefore proposed to avoid the complex matrix inversion calculation. The relaxation factor and initial iteration value are reasonably configured, at low computational complexity, to achieve detection performance close to that of the MMSE with fewer iterations. At the same time, the error diffusion and complexity accumulation caused by the successive detection of the subsequent OSIC stage are also reduced. In addition, a parallel coarse-and-fine detection method processes several layers at once to both reduce iterations and improve performance. The proposed scheme therefore significantly improves MIMO-OFDM performance and can play an important role in future sixth generation (6G) mobile communications, wireless sensor networks, and so on.
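The linear MMSE filter that the scheme iterates on has a standard closed form; a minimal numpy sketch, omitting the ordering, symbol slicing, and interference-cancellation stages of MMSE-OSIC, is shown below. The explicit matrix solve here is exactly the cost the paper's symmetric successive relaxation iteration is designed to avoid.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE equalizer for y = H x + n:
    x_hat = (H^H H + noise_var * I)^(-1) H^H y.
    """
    nt = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(nt)
    return np.linalg.solve(A, H.conj().T @ y)

rng = np.random.default_rng(1)
# A small, deliberately well-conditioned toy channel (not a fading model).
H = np.eye(4) + 0.1 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
x = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j])  # QPSK symbols
y = H @ x  # noiseless sanity check
x_hat = mmse_detect(H, y, noise_var=1e-6)
print(np.round(x_hat, 3))
```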

13.
Sensors (Basel) ; 23(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631792

ABSTRACT

Traditional encoder-decoder networks like U-Net have been extensively used for polyp segmentation. However, such networks have demonstrated limitations in explicitly modeling long-range dependencies. In such networks, local patterns are emphasized over the global context, as each convolutional kernel focuses on only a local subset of pixels in the entire image. Several recent transformer-based networks have been shown to overcome such limitations. Such networks encode long-range dependencies using self-attention methods and thus learn highly expressive representations. However, self-attention is expensive to compute, as its cost grows quadratically with the number of pixels in the image. Thus, patch embedding has been utilized, which groups small regions of the image into single input features. Nevertheless, these transformers still lack inductive bias, even when the image is treated as a 1D sequence of visual tokens. This results in an inability to generalize to local contexts due to limited low-level features. We introduce a hybrid transformer combined with a convolutional mixing network to overcome computational and long-range dependency issues. A pretrained transformer network is introduced as a feature-extracting encoder, and a mixing module network (MMNet) is introduced to capture the long-range dependencies with a reduced computational cost. Specifically, in the mixing module network, we use depth-wise and 1 × 1 convolutions to model long-range dependencies and establish spatial and cross-channel correlation, respectively. The proposed approach is evaluated qualitatively and quantitatively on five challenging polyp datasets across six metrics. Our MMNet outperforms the previous best polyp segmentation methods.


Subjects
Algorithms , Benchmarking , Electric Power Supplies , Learning
14.
Entropy (Basel) ; 25(2)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36832577

ABSTRACT

The original formulation of the boson sampling problem assumed that little or no photon collisions occur. However, modern experimental realizations rely on setups where collisions are quite common, i.e., the number of photons M injected into the circuit is close to the number of detectors N. Here we present a classical algorithm that simulates a bosonic sampler: it calculates the probability of a given photon distribution at the interferometer outputs for a given distribution at the inputs. This algorithm is most effective in cases with multiple photon collisions, and in those cases, it outperforms known algorithms.
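The classical hardness referenced here stems from the fact that ideal boson-sampling output probabilities are squared permanents of submatrices of the interferometer unitary. A compact Ryser-formula permanent, exponential-time as expected, can be sketched as follows; this is an illustration of the underlying quantity, not the paper's collision-exploiting algorithm.

```python
from itertools import combinations

def permanent(A):
    """Matrix permanent via Ryser's inclusion-exclusion formula.

    Runs in O(2^n * n^2) for an n x n matrix -- already far better than
    the naive n! sum, yet still exponential, which is why computing
    boson-sampling probabilities is classically hard in general.
    """
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

assert permanent([[1, 1], [1, 1]]) == 2
assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3  # = 10
```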

15.
Entropy (Basel) ; 25(8)2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37628227

ABSTRACT

Designing reasonable MAC scheduling strategies is an important means to ensure transmission quality in wireless sensor networks (WSNs). When there exist multiple available routes from the source to the destination, it is necessary to combine a data traffic allocation mechanism and design a multi-path MAC scheduling scheme in order to ensure QoS. This paper develops a multi-path resource allocation method for multi-channel wireless sensor networks, which uses random-access technology to complete MAC scheduling and selects the transmission path for each packet according to the probability. Through theoretical analysis and simulation experiments, it can be found that the proposed strategy can provide a reliable throughput capacity region. Meanwhile, due to the use of random-access technology, the computational complexity of the proposed algorithm can be independent of the number of links and channels.

16.
Entropy (Basel) ; 25(10)2023 Oct 08.
Article in English | MEDLINE | ID: mdl-37895546

ABSTRACT

Symmetric extensions are essential in quantum mechanics, providing a lens through which to investigate the correlations of entangled quantum systems and to address challenges like the quantum marginal problem. Though semi-definite programming (SDP) is a recognized method for handling symmetric extensions, it struggles with computational constraints, especially due to the large real parameters in generalized qudit systems. In this study, we introduce an approach that adeptly leverages permutation symmetry. By fine-tuning the SDP problem for detecting k-symmetric extensions, our method markedly diminishes the searching space dimensionality and trims the number of parameters essential for positive-definiteness tests. This leads to an algorithmic enhancement, reducing the complexity from O(d^(2k)) to O(kd^2) in the qudit k-symmetric extension scenario. Additionally, our approach streamlines the process of verifying the positive definiteness of the results. These advancements pave the way for deeper insights into quantum correlations, highlighting potential avenues for refined research and innovations in quantum information theory.

17.
Curr Genomics ; 23(5): 299-317, 2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36778194

ABSTRACT

Genome sequences indicate a wide variety of characteristics, which include species and sub-species type, genotype, diseases, growth indicators, yield quality, etc. To analyze and study the characteristics of genome sequences across different species, various deep learning models have been proposed by researchers, such as Convolutional Neural Networks (CNNs), Deep Belief Networks (DBNs), Multilayer Perceptrons (MLPs), etc., which vary in terms of evaluation performance, area of application and species that are processed. Due to the wide differences between the algorithmic implementations, it becomes difficult for research programmers to select the best possible genome processing model for their application. In order to facilitate this selection, the paper reviews a wide variety of such models and compares their performance in terms of accuracy, area of application, computational complexity, processing delay, precision and recall. Thus, in the present review, various deep learning and machine learning models have been presented that possess different accuracies for different applications. For multiple genomic data, Repeated Incremental Pruning to Produce Error Reduction with Support Vector Machine (Ripper SVM) achieves 99.7% accuracy, and for cancer genomic data, the CNN Bayesian method achieves 99.27% accuracy. For COVID genome analysis, Bidirectional Long Short-Term Memory with CNN (BiLSTM CNN) exhibits the highest accuracy, of 99.95%. A similar analysis of the precision and recall of different models has been reviewed. Finally, this paper concludes with some interesting observations related to the genomic processing models and recommends applications for their efficient use.

18.
Proc Natl Acad Sci U S A ; 116(42): 20881-20885, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31570618

ABSTRACT

Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years. There is, however, limited theoretical understanding of the relationships between these 2 kinds of methodology, and limited understanding of relative strengths and weaknesses. Moreover, existing results have been obtained primarily in the setting of convex functions (for optimization) and log-concave functions (for sampling). In this setting, where local properties determine global properties, optimization algorithms are unsurprisingly more efficient computationally than sampling algorithms. We instead examine a class of nonconvex objective functions that arise in mixture modeling and multistable systems. In this nonconvex setting, we find that the computational complexity of sampling algorithms scales linearly with the model dimension while that of optimization algorithms scales exponentially.
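A one-dimensional toy makes the nonconvex failure mode concrete: a purely local optimizer started in the wrong basin of a double-well never finds the global minimum. The specific objective below is an illustrative stand-in, not one of the paper's mixture-model objectives.

```python
def grad_descent(grad, x0, lr=0.01, steps=2000):
    """Plain gradient descent: a purely local method."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Tilted double-well: f(x) = (x^2 - 1)^2 + 0.3 x has its global minimum
# near x = -1 and a strictly worse local minimum near x = +1.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x * x - 1) + 0.3

x_star = grad_descent(grad, x0=1.5)
# Started in the wrong basin, gradient descent settles in the local
# minimum and never sees the global one -- the kind of trap a sampling
# algorithm, which keeps exploring, can escape (at a cost the paper
# shows scales only linearly in dimension for its problem class).
print(x_star, f(x_star), f(-1.04))
```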

19.
Sensors (Basel) ; 22(24)2022 Dec 17.
Article in English | MEDLINE | ID: mdl-36560338

ABSTRACT

Post-equalization using a neural network (NN) is a promising technique that models and offsets the nonlinear distortion in visible light communication (VLC) channels, which are recognized as an essential component of the incoming 6G era. An NN post-equalizer is good at modeling complex channel effects without prior knowledge of the physics of the transmission. However, a trained NN might generalize poorly, and thus considerable computation is consumed in retraining new models for different channel conditions. In this paper, we studied a transfer learning strategy, growing DNN models from a well-trained 'stem model' instead of exhaustively training multiple models from randomly initialized states. The stem model first captures the main feature of the channel, the signal power that balances the signal-to-noise ratio and the nonlinearity, and the transferred models later focus on the detailed differences of other channel conditions. Compared with the exhaustive training strategy, stem-originated DNN models achieve 64% of the working range with five times the training efficiency at most, or more than 95% of the working range with 150% higher efficiency. This finding is beneficial to improving the feasibility of DNN application in real-world UVLC systems.


Subjects
Learning , Light , Neural Networks, Computer , Machine Learning , Communication
20.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270920

ABSTRACT

The advancement of the Internet of Things (IoT) has transfigured the overlay of the physical world by superimposing digital information in various sectors, including smart cities, industry, healthcare, etc. Among the various kinds of shared information, visual data are an indispensable part of smart cities, especially in healthcare. As a result, visual-IoT research is gathering momentum. In visual IoT, visual sensors, such as cameras, collect critical multimedia information about industries, healthcare, shopping, autonomous vehicles, crowd management, etc. In healthcare, patient-related data are captured and then transmitted via insecure transmission lines. The security of these data is of paramount importance. Besides the fact that visual data require a large bandwidth, the gap between communication and computation is an additional challenge for visual IoT system development. In this paper, we present SVIoT, a Secure Visual-IoT framework, which addresses the issues of both data security and resource constraints in IoT-based healthcare. This was achieved by proposing a novel reversible data hiding (RDH) scheme based on One Dimensional Neighborhood Mean Interpolation (ODNMI). The use of ODNMI reduces the computational complexity and storage/bandwidth requirements by 50 percent. We upscaled the original image from M × N to M × 2N, unlike conventional interpolation methods, wherein images are upscaled to 2M × 2N. We made use of an innovative mechanism, Left Data Shifting (LDS), before embedding data in the cover image. Before embedding the data, we encrypted it using an AES-128 encryption algorithm to offer additional security. The use of LDS ensures better perceptual quality at a relatively high payload. We achieved an average PSNR of 43 dB for a payload of 1.5 bpp (bits per pixel). In addition, we embedded a fragile watermark in the cover image to ensure authentication of the received content.
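The 43 dB figure is a peak signal-to-noise ratio, which is straightforward to compute; the sketch below uses synthetic 8-bit images, not the paper's covers or its ODNMI embedding.

```python
import numpy as np

def psnr(original, stego, max_val=255.0):
    """Peak signal-to-noise ratio between a cover image and its
    data-embedded (stego) version, in dB; higher means the embedding
    is less perceptible."""
    mse = np.mean((original.astype(float) - stego.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64))
# Stand-in for an embedding step: perturb each pixel by at most 2 levels.
stego = np.clip(cover + rng.integers(-2, 3, size=(64, 64)), 0, 255)
print(round(psnr(cover, stego), 1))
```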


Subjects
Computer Security , Delivery of Health Care , Algorithms , Communication , Humans