1.
Spat Spatiotemporal Epidemiol ; 49: 100645, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38876555

ABSTRACT

Bayesian inference in infectious disease modelling with Bayesian inference Using Gibbs Sampling (BUGS) has become prominent over the last two decades, in parallel with advances in computing and model development. The ability of BUGS to easily implement the Markov chain Monte Carlo (MCMC) method brought Bayesian analysis into the mainstream of infectious disease modelling. However, existing MCMC software becomes computationally demanding as infectious disease models grow more complex with spatial and temporal components, larger numbers of parameters, and large datasets. This study investigates two alternative subscripting strategies for creating models in the Just Another Gibbs Sampler (JAGS) environment and compares their performance in terms of run times. Our results help practitioners ensure the efficient and timely implementation of Bayesian spatiotemporal infectious disease modelling.
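
The abstract does not reproduce the two subscripting strategies; as a purely illustrative sketch (model structure, variable names, and priors below are assumptions, not the authors' code), the following Python snippet defines two JAGS model strings for the same spatiotemporal Poisson likelihood, one with nested region-by-time indexing and one with a single flattened index, the kind of alternatives whose run times could then be compared through a JAGS interface.

# Two alternative JAGS subscripting strategies for the same spatiotemporal
# likelihood (illustrative only; priors and variable names are assumptions).

nested_index_model = """
model {
  for (i in 1:R) {            # regions
    for (t in 1:T) {          # time points
      y[i, t] ~ dpois(mu[i, t])
      log(mu[i, t]) <- alpha + u[i] + v[t]
    }
  }
  for (i in 1:R) { u[i] ~ dnorm(0, tau.u) }
  for (t in 1:T) { v[t] ~ dnorm(0, tau.v) }
  alpha ~ dnorm(0, 0.001)
  tau.u ~ dgamma(0.1, 0.1)
  tau.v ~ dgamma(0.1, 0.1)
}
"""

flat_index_model = """
model {
  for (k in 1:N) {            # N = R * T observations, flattened to one index
    y[k] ~ dpois(mu[k])
    log(mu[k]) <- alpha + u[region[k]] + v[time[k]]
  }
  for (i in 1:R) { u[i] ~ dnorm(0, tau.u) }
  for (t in 1:T) { v[t] ~ dnorm(0, tau.v) }
  alpha ~ dnorm(0, 0.001)
  tau.u ~ dgamma(0.1, 0.1)
  tau.v ~ dgamma(0.1, 0.1)
}
"""

# Either string would be handed to a JAGS interface (e.g. pyjags or rjags),
# and wall-clock run times compared with identical data, chains, and settings.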


Subject(s)
Bayes Theorem , Markov Chains , Spatio-Temporal Analysis , Humans , Epidemiological Models , Monte Carlo Method , Software , Communicable Diseases/epidemiology
2.
Orthod Craniofac Res ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712670

ABSTRACT

OBJECTIVES: Since developing AI procedures demands significant computing resources and time, the implementation of a careful experimental design is essential. The purpose of this study was to investigate factors influencing the development of AI in orthodontics. MATERIALS AND METHODS: A total of 162 AI models were developed, with various combinations of sample sizes (170, 340, 679), input variables (40, 80, 160), output variables (38, 76, 154), training sessions (100, 500, 1000), and computer specifications (new vs. old). The TabNet deep-learning algorithm was used to develop these AI models, and leave-one-out cross-validation was applied in training. The goodness-of-fit of the regression models was compared using the adjusted coefficient of determination values, and the best-fit model was selected accordingly. Multiple linear regression analyses were employed to investigate the relationship between the influencing factors. RESULTS: Increasing the number of training sessions enhanced the effectiveness of the AI models. The best-fit regression model for predicting the computational time of AI, which included logarithmic transformation of time, sample size, and training session variables, demonstrated an adjusted coefficient of determination of 0.99. CONCLUSION: The study results show that estimating the time required for AI development may be possible using logarithmic transformations of time, sample size, and training session variables, followed by applying coefficients estimated through several pilot studies with reduced sample sizes and reduced training sessions.
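
A minimal sketch of the log-linear timing model the conclusion describes, fitted to hypothetical pilot measurements (the numbers and variable names below are illustrative assumptions, not the study's data):

import numpy as np

# Hypothetical pilot measurements (sample size, training sessions, minutes).
pilot = np.array([
    [170, 100,   12.0],
    [170, 500,   55.0],
    [340, 100,   25.0],
    [340, 1000, 230.0],
    [679, 500,  210.0],
])
n, sessions, minutes = pilot[:, 0], pilot[:, 1], pilot[:, 2]

# log(time) = b0 + b1*log(sample size) + b2*log(training sessions),
# mirroring the logarithmic transformations named in the abstract.
X = np.column_stack([np.ones_like(n), np.log(n), np.log(sessions)])
coef, *_ = np.linalg.lstsq(X, np.log(minutes), rcond=None)

# Extrapolate to a hypothetical full-scale configuration.
pred = np.exp(coef @ np.array([1.0, np.log(679), np.log(1000)]))
print(f"predicted development time: {pred:.0f} minutes")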

3.
Heliyon ; 9(7): e17530, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37449124

ABSTRACT

The process of examining data flowing over the internet to identify abnormalities in wireless network performance is known as network traffic analysis. When analyzing network traffic data, traffic classification becomes an important task: it determines whether data in the network traffic is real-time or not. This analysis controls traffic data in a network and allows for efficient improvement of network performance. Real-time and non-real-time data are classified from a given input data set using data mining clustering and classification algorithms. The proposed work focuses on traffic data classification with high clustering accuracy and low Classification Time (CT), filling a gap left by existing network traffic classification algorithms, which do not address this classification adequately for effective network traffic analysis. We therefore propose an Enhanced Self-Learning-based Clustering Scheme (ESLCS) that uses an enhanced unsupervised algorithm and an adaptive seeding approach to improve classification accuracy while handling real-time traffic data distribution in wireless networks. Test-bed results demonstrate that the proposed model effectively improves clustering accuracy and True Positive Rate (TPR) and substantially reduces CT and Communication Overhead (CO) compared with existing peer techniques.
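
The abstract does not detail the ESLCS algorithm; as a hedged sketch of the underlying idea (clustering traffic features with an adaptive seeding strategy, then labelling clusters as real-time or non-real-time), the snippet below uses scikit-learn's k-means with k-means++ seeding on synthetic flow features. The feature set and the labelling rule are assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic flow features: [mean packet inter-arrival time (ms), packet size (bytes)].
realtime     = rng.normal([ 20,  200], [5,  50], size=(200, 2))   # e.g. VoIP-like flows
non_realtime = rng.normal([300, 1200], [80, 200], size=(200, 2))  # e.g. bulk transfers
X = np.vstack([realtime, non_realtime])

# Adaptive seeding is approximated here by k-means++ initialisation.
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(X)

# Label the cluster with the smaller mean inter-arrival time as "real-time".
rt_cluster = int(np.argmin(km.cluster_centers_[:, 0]))
labels = np.where(km.labels_ == rt_cluster, "real-time", "non-real-time")
print(labels[:5], labels[-5:])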

4.
Micromachines (Basel) ; 14(3)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36985001

ABSTRACT

The performance of wireless networks is related to the optimized structure of the antenna. Therefore, in this paper, a Machine Learning (ML)-assisted methodology named Self-Adaptive Bayesian Neural Network (SABNN) is proposed to optimize the antenna pattern for next-generation wireless networks. In addition, a statistical analysis of the presented SABNN is carried out and compared with a conventional Gaussian Process (GP). The training cost and convergence speed are also discussed. Finally, measured results for the proposed model demonstrate that the system achieves optimized outcomes with less calculation time.

5.
J Supercomput ; 78(9): 11975-12023, 2022.
Article in English | MEDLINE | ID: mdl-35221523

ABSTRACT

Wireless sensor networks (WSNs) contain an enormous number of sensor nodes that accumulate information about their surroundings, and this information is of little value until the exact position from which the data were collected is known. Localization of sensor nodes in WSNs therefore plays a significant role in several applications, such as detecting enemy movement in military settings. The aim of the localization problem is to find the coordinates of all target nodes with the help of anchor nodes. In this paper, two variants of the bat optimization algorithm (BOA) are proposed to localize sensor nodes more efficiently and to overcome the main drawback of the original BOA, i.e. being trapped in local optima. The exploration and exploitation features of the original BOA are modified in the proposed BOA variants 1 and 2 using improved global and local search strategies. To validate the efficiency of the proposed variants, several simulations were performed for various numbers of target and anchor nodes, and the results were compared with the original BOA and other existing optimization algorithms applied to the node localization problem. The proposed BOA variants 1 and 2 outperform the other algorithms in terms of mean localization error, number of localized nodes, and computing time. Further, the proposed variants and the original BOA are also compared in terms of various errors and localization efficiency for several values of target and anchor nodes. The simulation results indicate that the proposed BOA variant 2 is superior to variant 1 and the existing BOA in terms of several error measures. Node localization based on the proposed BOA variant 2 is more effective, as it takes less time to perform computations and has a lower mean localization error than variant 1, the original BOA, and other existing optimization algorithms.
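
The localization objective underlying the paper, estimating a target node's coordinates from noisy ranges to anchor nodes, can be written as a least-squares problem; the sketch below solves it with a generic optimizer rather than the proposed BOA variants, purely to illustrate the objective (anchor layout and noise model are assumptions).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 62.0])

# Noisy range measurements from the target node to each anchor.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, len(anchors))

def range_error(p):
    """Sum of squared differences between measured and predicted ranges."""
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

est = minimize(range_error, x0=np.array([50.0, 50.0]), method="Nelder-Mead").x
print("estimated position:", est, "| localization error:", np.linalg.norm(est - true_pos))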

6.
Sensors (Basel) ; 20(21)2020 Nov 09.
Article in English | MEDLINE | ID: mdl-33182486

ABSTRACT

The IEEE Time-Sensitive Networking (TSN) Task Group specifies a series of standards, such as 802.1Qbv, for enhancing the management of time-critical flows in real-time networks. Under the IEEE 802.1Qbv standard, a scheduling algorithm determines when a specific gate in a network entity is opened or closed so that the real-time requirements of the flows are guaranteed. The computation time of this scheduling algorithm is critical for systems that require dynamic network configuration. In addition, the network routing that determines the paths of the flows has a significant impact on the computation time of network scheduling. This paper presents a novel scheduling-aware routing algorithm to minimize the computation time of the scheduling algorithm in network management. The proposed routing algorithm determines the path for each time-triggered flow while taking the period of the flow into account, which decreases the occurrence of path conflicts during the network scheduling stage. A detailed outline of the proposed algorithm is presented in this paper. The experimental results show that the proposed routing algorithm reduces the computation time of network scheduling by up to 30% and improves the schedulability of time-triggered flows in the network.
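
As a hedged illustration of period-aware routing (not the authors' algorithm, whose cost function is not given in the abstract), the sketch below routes each time-triggered flow over links carrying fewer flows of the same period, the kind of conflict-avoiding heuristic the paper motivates; the topology and flow set are assumptions.

import networkx as nx

# Small assumed topology; each edge tracks the periods of flows already routed over it.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "D"), ("D", "C"), ("B", "D")])
for _, _, d in G.edges(data=True):
    d["periods"] = []

flows = [("A", "C", 500), ("A", "C", 500), ("A", "C", 1000)]  # (src, dst, period in microseconds)

for src, dst, period in flows:
    # Penalise links already carrying flows with the same period, since those
    # flows are the ones most likely to conflict during 802.1Qbv scheduling.
    def cost(u, v, d, period=period):
        return 1 + sum(p == period for p in d["periods"])

    path = nx.shortest_path(G, src, dst, weight=cost)
    for u, v in zip(path, path[1:]):
        G[u][v]["periods"].append(period)
    print(f"flow {src}->{dst} (period {period} us) routed via {path}")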

7.
J Digit Imaging ; 33(5): 1065-1072, 2020 10.
Article in English | MEDLINE | ID: mdl-32748300

ABSTRACT

We quantitatively investigate the influence of image registration, using open-source software (3DSlicer), on kinetic analysis (Tofts model) of dynamic contrast-enhanced MRI in early-stage breast cancer patients. We also show that registration computation time can be reduced by lowering the percent sampling (PS) of voxels used to estimate the cost function. DCE-MRI breast images were acquired on a 3T PET/MRI system in 13 patients with early-stage breast cancer who were scanned in a prone radiotherapy position. Images were registered using a BSpline transformation with a 2 cm isotropic grid at 100, 20, 5, 1, and 0.5 PS (BRAINSFit in 3DSlicer). Signal enhancement curves were analyzed voxel-by-voxel using the Tofts kinetic model. Comparing unregistered with registered groups, we found a significant change in the 90th percentile of the voxel-wise distribution of Ktrans. We also found a significant reduction in (1) the standard error (uncertainty) of the parameter value estimation, (2) the number of voxel fits giving unphysical values for the extracellular-extravascular volume fraction (ve > 1), and (3) goodness of fit. We found no significant differences in the median of the parameter value distributions (Ktrans, ve) between unregistered and registered images. Differences between parameters and uncertainties obtained using 100 PS versus 20 PS were small and statistically insignificant. As such, computation time can be reduced by a factor of 2, on average, by using 20 PS without affecting the kinetic fit. The methods outlined here are important for studies involving a large number of post-contrast images or a large number of patients.
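
The Tofts model used for the voxel-wise analysis has a standard form; the sketch below fits it to a single synthetic enhancement curve with a generic nonlinear least-squares routine (the arterial input function and the noisy curve are assumptions; the study's actual fitting code is not specified in the abstract).

import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)                  # acquisition times in minutes
cp = 5.0 * t * np.exp(-t / 0.8)            # assumed arterial input function Cp(t)

def tofts(t, ktrans, ve):
    """Standard Tofts model: Ct(t) = Ktrans * integral of Cp(tau) exp(-Ktrans (t - tau) / ve)."""
    dt = t[1] - t[0]
    kernel = np.exp(-ktrans * t / ve)
    return ktrans * dt * np.convolve(cp, kernel)[: len(t)]

# Synthetic "measured" voxel curve with noise, then a voxel-wise fit.
ct = tofts(t, 0.25, 0.4) + np.random.default_rng(2).normal(0, 0.02, t.size)
(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.3], bounds=(0, [2.0, 1.0]))
print(f"Ktrans = {ktrans_hat:.3f} /min, ve = {ve_hat:.3f}")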


Subject(s)
Breast Neoplasms , Breast Neoplasms/diagnostic imaging , Contrast Media , Humans , Kinetics , Magnetic Resonance Imaging , Uncertainty
8.
Genes (Basel) ; 11(1)2020 01 03.
Article in English | MEDLINE | ID: mdl-31947774

ABSTRACT

The rapid proliferation of low-cost RNA-seq data has resulted in a growing interest in RNA analysis techniques for various applications, ranging from identifying genotype-phenotype relationships to validating discoveries of other analysis results. However, many practical applications in this field are limited by the available computational resources and the long computing time needed to perform the analysis. GATK has a popular best-practices pipeline specifically designed for variant calling in RNA-seq analysis. Some tools in this pipeline are not optimized to scale the analysis efficiently to multiple processors or compute nodes, thereby limiting their ability to process large datasets. In this paper, we present SparkRA, an Apache Spark based pipeline that efficiently scales up the GATK RNA-seq variant calling pipeline on multiple cores in one node or in a large cluster. On a single node with 20 hyper-threaded cores, the original pipeline runs for more than 5 h to process a dataset of 32 GB. In contrast, SparkRA is able to reduce the overall computation time of the pipeline on the same single node by about 4×, down to 1.3 h. On a cluster with 16 nodes (each with eight single-threaded cores), SparkRA further reduces this computation time by 7.7× compared to a single node. Compared to other scalable state-of-the-art solutions, SparkRA is 1.2× faster while achieving the same accuracy of results.
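
SparkRA's code is not reproduced here; as a hedged sketch of the general pattern it relies on, distributing chunks of the variant-calling work across cores or nodes with Apache Spark, the snippet below parallelizes per-region work with PySpark. The per-region helper and the chunking by chromosome are hypothetical placeholders, not SparkRA's API.

from pyspark import SparkContext

def run_variant_calling(region):
    # Placeholder: in a real pipeline this would invoke the per-region
    # variant-calling tool (e.g. via subprocess) and return its output path.
    return f"calls_{region}.vcf"

sc = SparkContext(appName="scalable-rnaseq-sketch")

regions = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]

# Each region is processed on a separate core or executor; results are gathered
# on the driver for the merge step that would follow.
outputs = sc.parallelize(regions, numSlices=len(regions)) \
            .map(run_variant_calling) \
            .collect()
print(outputs[:3])
sc.stop()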


Subject(s)
Databases, Nucleic Acid , RNA-Seq , Sequence Analysis, RNA , Software
9.
Math Biosci Eng ; 16(6): 7546-7561, 2019 08 19.
Article in English | MEDLINE | ID: mdl-31698628

ABSTRACT

Medical ultrasound images are corrupted by speckle noise, and despeckling methods are required to effectively and efficiently reduce speckle noise while preserving tissue details. This paper proposes a despeckling approach named Gabor-based anisotropic diffusion coupled with the lattice Boltzmann method (GAD-LBM), which uses the lattice Boltzmann method (LBM) to rapidly solve the partial differential equation of an anisotropic diffusion model embedded with the Gabor edge detector. We evaluated the GAD-LBM on both synthetic and clinical ultrasound images, and the experimental results suggest that the GAD-LBM is superior to nine other methods in speckle suppression and detail preservation. For synthetic and clinical images, the computation time of the GAD-LBM was about 1/90 to 1/20 of that of the GAD solved with finite differences, indicating the efficiency advantage of the GAD-LBM. The GAD-LBM not only has an excellent ability to reduce noise and preserve detail in ultrasound images, but also has advantages in computational efficiency.
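
The GAD-LBM solver itself (Gabor edge detection plus a lattice Boltzmann update) is beyond a short sketch; for orientation only, the snippet below shows one explicit finite-difference step of a generic Perona-Malik-style anisotropic diffusion, i.e. the kind of baseline scheme the abstract compares the LBM against. All parameters are illustrative assumptions.

import numpy as np

def anisotropic_diffusion_step(img, dt=0.15, kappa=0.05):
    """One explicit finite-difference step of Perona-Malik anisotropic diffusion."""
    # Differences toward the four neighbours.
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img,  1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img,  1, axis=1) - img
    # Edge-stopping conduction coefficients: smooth regions diffuse, edges are preserved.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

speckled = np.clip(0.5 + 0.1 * np.random.default_rng(3).standard_normal((64, 64)), 0, 1)
smoothed = speckled
for _ in range(20):
    smoothed = anisotropic_diffusion_step(smoothed)
print("noise std before/after:", speckled.std(), smoothed.std())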


Subject(s)
Anisotropy , Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Ultrasonography , Algorithms , Artifacts , Computer Simulation , Diffusion , Female , Finite Element Analysis , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Reproducibility of Results , Signal-To-Noise Ratio , Software
10.
Comput Methods Biomech Biomed Engin ; 22(2): 159-168, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30582359

ABSTRACT

Providing biomechanical feedback during experimental sessions is a valuable outcome for rehabilitation, ergonomics, and training applications. However, such applications require fast computation of the biomechanical quantities to be observed. The MusIC method was designed to solve the muscle force estimation problem quickly, thanks to database interpolation. The current paper aims at enhancing its performance. Without generating any database, the method identifies optimal densities (number of samples contained in the database) with respect to the method's accuracy and the offline computation time needed to generate the database. Thanks to this work, on a lower-limb model (12 degrees of freedom, 82 muscles) the MusIC method exhibits an accuracy error of 0.1% with an offline computation time lower than 10 minutes. The online computation frequency (number of samples computed per second) is about 58 Hz. With these improvements, the MusIC method can be used to provide feedback during an experiment with a wide variety of musculoskeletal models or cost functions (used to distribute forces among muscles). The interaction between the subject, the experimenter (e.g. trainer, ergonomist, or clinician), and the biomechanical data (e.g. muscle forces) during experimental sessions is a promising way to enhance rehabilitation, training, or design techniques.


Subject(s)
Algorithms , Lower Extremity/physiology , Models, Biological , Musculoskeletal System/anatomy & histology , Biomechanical Phenomena , Databases as Topic , Humans , Joints/physiology
11.
Healthc Technol Lett ; 5(4): 130-135, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30155265

ABSTRACT

Cancer is one of the deadliest human diseases, and patients are more likely to survive if the disease is diagnosed in its early stages. In this Letter, the authors propose a genetic search fuzzy rough (GSFR) feature selection algorithm, which hybridises an evolutionary sequential genetic search technique and fuzzy rough sets to select features. The genetic operators selection, crossover, and mutation are applied to generate subsets of features from the dataset. Each generated subset is evaluated with a modified dependency function of the fuzzy rough set using positive and boundary regions, which acts as the fitness function. The generation and evaluation of feature subsets continue until the best subset is found, which is then used to build the classification model. The selected features are applied to different classifiers; among these, the fuzzy-rough nearest neighbour (FRNN) classifier performs best in terms of classification accuracy and computation time. Hence, the FRNN is used for the performance analysis of existing feature selection algorithms against the proposed GSFR feature selection algorithm. The results generated by the proposed GSFR feature selection algorithm prove to be precise when compared with other feature selection algorithms.
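
The modified fuzzy rough dependency function is not given in the abstract; the sketch below shows only the genetic-search skeleton it describes (selection, crossover, and mutation over binary feature masks), with a toy fitness standing in for the dependency measure. Everything here is an illustrative assumption, not the GSFR implementation.

import numpy as np

rng = np.random.default_rng(4)
n_features, pop_size, generations = 20, 30, 50

def fitness(mask, penalty=0.01):
    # Placeholder for the fuzzy rough dependency of the class on the selected
    # features; here a toy score that rewards a small, fixed "informative" set.
    informative = np.zeros(n_features, dtype=bool)
    informative[:5] = True
    return (mask & informative).sum() - penalty * mask.sum()

pop = rng.random((pop_size, n_features)) < 0.5            # initial binary feature masks
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]     # selection of the fittest half
    cut = rng.integers(1, n_features, size=pop_size // 2)
    children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])      # single-point crossover
    children ^= rng.random(children.shape) < 0.02          # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best))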

12.
Comput Methods Biomech Biomed Engin ; 21(2): 149-160, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29451014

ABSTRACT

The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
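
A hedged sketch of the database-interpolation idea behind MusIC (not the authors' implementation): muscle-force solutions are precomputed offline on a grid of joint torques by solving the classical optimization problem, then interpolated online for a new torque before the dynamics correction step. The toy model, grid, and cost function below are assumptions.

import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import minimize

# Toy 1-DoF, 2-muscle model: moment arms r, joint torque tau = r @ f, with f >= 0.
r = np.array([0.04, -0.03])

def solve_forces(tau):
    """Classical optimization: minimise the sum of squared forces subject to r @ f = tau."""
    res = minimize(lambda f: np.sum(f ** 2), x0=[10.0, 10.0],
                   constraints={"type": "eq", "fun": lambda f: r @ f - tau},
                   bounds=[(0, None), (0, None)])
    return res.x

# Offline stage: build the database on a torque grid (its density trades
# interpolation accuracy against offline computation time).
grid = np.linspace(-2.0, 2.0, 21)
database = np.array([solve_forces(t) for t in grid])
interp = RegularGridInterpolator((grid,), database)

# Online stage: interpolate a first estimate for a new torque; the dynamics
# correction step of the actual MusIC method would follow.
tau_new = 0.73
print("interpolated forces:", interp([[tau_new]])[0], "| exact:", solve_forces(tau_new))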


Subject(s)
Muscles/physiology , Algorithms , Biomechanical Phenomena , Humans , Models, Biological , Motion , Muscles/anatomy & histology , Time Factors
13.
Med Biol Eng Comput ; 56(8): 1459-1473, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29359257

ABSTRACT

Nowadays, biologically reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time, which prevents the use of such models in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This is done through a new model of a motor unit (MU)-specific electrical source based on the fibers composing the MU. To assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between simulations of single generated MU action potentials (MUAPs) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was also computed for five different simulation sets in which hundreds of MUAPs were generated and summed into HD-sEMG signals. The obtained results show less than 2% error on the generated signals compared with the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced by about 90% compared with the fiber electrical source model. Using MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MUs) in less than an hour on a standard workstation. Graphical Abstract: Overview of the simulation of HD-sEMG signals at the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% while inducing only small deviations in the simulated HD-sEMG signals.
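
The NRMSE used to compare the MU-scale and fibre-scale simulations is a standard quantity; a minimal sketch of how it could be computed between two MUAP traces follows, assuming normalisation by the reference signal's range (the signals are synthetic placeholders, not model output).

import numpy as np

def nrmse(reference, approximation):
    """Normalized RMSE between a reference signal and its approximation, in percent."""
    rmse = np.sqrt(np.mean((reference - approximation) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

t = np.linspace(0, 0.02, 400)                                                # 20 ms window
fiber_scale = np.exp(-((t - 0.008) / 0.002) ** 2) * np.sin(2e3 * np.pi * t)  # placeholder MUAP
mu_scale = fiber_scale + 0.01 * np.random.default_rng(5).standard_normal(t.size)

print(f"NRMSE = {nrmse(fiber_scale, mu_scale):.2f} %")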


Subject(s)
Electricity , Electromyography , Models, Biological , Motor Neurons/physiology , Signal Processing, Computer-Assisted , Action Potentials/physiology , Computer Simulation , Humans , Time Factors
14.
Data Brief ; 13: 444-452, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28702483

ABSTRACT

This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discounts and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code, and then to use them for comparison with the outputs of other techniques or methodologies, such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, with the problem size varied, and are then used by the optimization problem to obtain the corresponding optimal outputs and computation times.

15.
Stat Methods Med Res ; 26(6): 2758-2779, 2017 Dec.
Article in English | MEDLINE | ID: mdl-26446001

ABSTRACT

In longitudinal studies, continuous, binary, categorical, and survival outcomes are often jointly collected, possibly with some observations missing. However, when it comes to modeling responses, ordinal ones have received less attention in the literature. In a longitudinal or hierarchical context, the univariate proportional odds mixed model (POMM) can be regarded as an instance of the generalized linear mixed model (GLMM). When the responses of the joint multivariate model include ordinal outcomes, the complexity further increases. An additional challenge in model fitting is the size of the collected data. Pseudo-likelihood-based methods for pairwise fitting, for partitioned samples, and, as introduced in this paper, for pairwise fitting within partitioned samples allow joint modeling of even larger numbers of responses. We show that pseudo-likelihood methodology allows for highly efficient and fast inferences in high-dimensional large datasets.


Subject(s)
Likelihood Functions , Models, Statistical , Biomarkers/blood , Biostatistics/methods , Blood Pressure , Cholesterol, LDL/blood , Data Interpretation, Statistical , Diabetes Mellitus/blood , Diabetes Mellitus/physiopathology , Diabetes Mellitus/therapy , Glycated Hemoglobin/metabolism , Humans , Linear Models , Longitudinal Studies , Time Factors
16.
J Theor Biol ; 399: 148-58, 2016 06 21.
Article in English | MEDLINE | ID: mdl-27049046

ABSTRACT

Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available; they either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research, the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time, and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The results show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. Running times differed between methods: the ELM was always the fastest algorithm, whereas the RBF required a long imputation time as the sample size increased. The methods tested in this research can be an alternative for imputation of un-typed SNPs when the data missing rate is low. However, it is recommended that other machine learning methods also be evaluated for imputation.
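
As a hedged sketch of how an off-the-shelf classifier can impute an un-typed SNP from neighbouring typed markers (genotypes coded as 0/1/2 allele counts; the simulated data and the linkage shortcut below are assumptions, not the study's datasets):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n_individuals, n_snps = 500, 11

# SNPs in strong linkage disequilibrium are approximated as noisy copies of a
# latent genotype at the locus of interest (purely synthetic data).
latent = rng.integers(0, 3, size=n_individuals)
def noisy_copy(col, flip_rate=0.1):
    flip = rng.random(col.size) < flip_rate
    return np.where(flip, rng.integers(0, 3, size=col.size), col)

geno = np.column_stack([noisy_copy(latent) for _ in range(n_snps)])

target = n_snps // 2                         # the "un-typed" SNP to impute
X = np.delete(geno, target, axis=1)
y = geno[:, target]

train, test = slice(0, 400), slice(400, None)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], y[train])
accuracy = (clf.predict(X[test]) == y[test]).mean()
print(f"imputation accuracy on held-out individuals: {accuracy:.2f}")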


Subject(s)
Genotyping Techniques , Machine Learning , Parents , Algorithms , Computer Simulation , Databases, Genetic , Female , Gene Frequency/genetics , Genotype , Humans , Male , Polymorphism, Single Nucleotide/genetics
17.
J Anim Sci Technol ; 58: 1, 2016.
Article in English | MEDLINE | ID: mdl-26740888

ABSTRACT

BACKGROUND: Genotype imputation is an important process for predicting unknown genotypes, in which a reference population with dense genotypes is used to predict missing genotypes for both human and animal genetic variation at low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and to build models capable of predicting missing values of a marker. METHODS: In this study, strategies and factors affecting the imputation accuracy of parent-offspring trios were compared, imputing from a lower-density SNP panel (5 K) to a high-density (10 K) SNP panel using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB), and AdaBoost (AB). The methods were applied to simulated data to impute the un-typed SNPs in parent-offspring trios. Four datasets were simulated: G1 (100 trios with 5 k SNPs), G2 (100 trios with 10 k SNPs), G3 (500 trios with 5 k SNPs), and G4 (500 trios with 10 k SNPs). In all four datasets, all parents were completely genotyped, and offspring were genotyped with a lower-density panel. RESULTS: Comparison of the three imputation methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between methods, with AB being the fastest algorithm. Higher SNP densities increased imputation accuracy, and larger numbers of trios (i.e. 500) improved the performance of LB and TB. CONCLUSIONS: All three methods perform well in terms of imputation accuracy, and the denser chip is recommended for imputation of parent-offspring trios.
