Results 1 - 20 of 882
1.
Heliyon ; 10(19): e37912, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39386875

ABSTRACT

The convenience and cost-effectiveness offered by cloud computing have attracted a large customer base. In a cloud environment, virtualization requires careful management of resource utilization and energy consumption. With a rapidly growing consumer base, cloud data centers face an overwhelming influx of Virtual Machine (VM) requests. Mapping these requests onto the physical cloud hardware is known as VM placement, a significant area of research. This article proposes the Dragonfly Algorithm integrated with Modified Best Fit Decreasing (DA-MBFD) to minimize overall power consumption and migration count. DA-MBFD uses MBFD to rank VMs by their resource requirements, applies the Minimization of Migration (MM) algorithm for hotspot detection, and then uses DA to optimize the replacement of VMs from overutilized hosts. DA-MBFD is compared with several existing techniques to demonstrate its efficiency. Against E-ABC, E-MBFD, and MBFD-MM, DA-MBFD reduces power consumption by 8.21 %, 8.6 %, and 6.77 %, service level agreement violations by 9.25 %, 6.98 %, and 7.86 %, and the number of migrations by 6.65 %, 8.92 %, and 7.02 %, respectively.
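For illustration, a minimal Python sketch of the Modified Best Fit Decreasing idea that the DA-MBFD pipeline starts from; the host structure and the linear power model are assumptions for this sketch, and the Dragonfly and MM stages are not reproduced.

```python
# Minimal sketch of MBFD-style VM placement: sort VM demands in decreasing
# order and place each VM on the host whose power increase is smallest.
# Host fields and the linear power model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Host:
    capacity: float          # total CPU (MIPS)
    used: float = 0.0
    p_idle: float = 70.0     # assumed idle power (W)
    p_max: float = 250.0     # assumed full-load power (W)

    def power(self, extra=0.0):
        util = (self.used + extra) / self.capacity
        return self.p_idle + (self.p_max - self.p_idle) * util

def mbfd_place(vm_demands, hosts):
    """Return a {vm_index: host} mapping chosen by minimum power increase."""
    placement = {}
    for i, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h.used + demand <= h.capacity:
                delta = h.power(demand) - h.power()
                if delta < best_delta:
                    best, best_delta = h, delta
        if best is not None:
            best.used += demand
            placement[i] = best
    return placement

hosts = [Host(2000.0), Host(3000.0)]
print(mbfd_place([500.0, 1200.0, 800.0], hosts))
```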

2.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39376084

ABSTRACT

Biomedical data are growing exponentially in both volume and levels of complexity, due to the rapid advancement of technologies and research methodologies. Analyzing these large datasets, referred to collectively as "big data," has become an integral component of research that guides experimentation-driven discovery and a new engine of discovery itself as it uncovers previously unknown connections through mining of existing data. To fully realize the potential of big data, biomedical researchers need access to high-performance-computing (HPC) resources. However, supporting on-premises infrastructure that keeps up with these consistently expanding research needs presents persistent financial and staffing challenges, even for well-resourced institutions. For other institutions, including primarily undergraduate institutions and minority serving institutions, that educate a large portion of the future workforce in the USA, this challenge presents an insurmountable barrier. Therefore, new approaches are needed to provide broad and equitable access to HPC resources to biomedical researchers and students who will advance biomedical research in the future.


Subject(s)
Biomedical Research , Cloud Computing , Humans , Big Data , Computational Biology/methods , Computational Biology/education , Software , United States
3.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275461

ABSTRACT

In the dynamic world of cloud computing, auto-scaling stands as a beacon of efficiency, dynamically aligning resources with fluctuating demands. This paper presents a comprehensive review of auto-scaling techniques, highlighting significant advancements and persisting challenges in the field. First, we overview the fundamental principles and mechanisms of auto-scaling, including its role in improving cost efficiency, performance, and energy consumption in cloud services. We then discuss various strategies employed in auto-scaling, ranging from threshold-based rules and queuing theory to sophisticated machine learning and time series analysis approaches. After that, we explore the critical issues in auto-scaling practices and review several studies that demonstrate how these challenges can be addressed. We then conclude by offering insights into several promising research directions, emphasizing the development of predictive scaling mechanisms and the integration of advanced machine learning techniques to achieve more effective and efficient auto-scaling solutions.
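As a concrete illustration of the simplest strategy mentioned above, a threshold-based scaling rule might look like the following sketch; the thresholds and replica bounds are assumptions, not values from the review.

```python
# Minimal sketch of a threshold-based auto-scaling rule: scale out when CPU
# utilization exceeds an upper bound, scale in when it falls below a lower
# bound, otherwise hold. All numbers are illustrative.
def autoscale_step(current_replicas, cpu_utilization,
                   upper=0.75, lower=0.30,
                   min_replicas=1, max_replicas=20):
    """Return the new replica count for one scaling decision."""
    if cpu_utilization > upper:
        return min(current_replicas + 1, max_replicas)   # scale out
    if cpu_utilization < lower:
        return max(current_replicas - 1, min_replicas)   # scale in
    return current_replicas                               # hold steady

replicas = 4
for utilization in [0.82, 0.91, 0.40, 0.22, 0.18]:
    replicas = autoscale_step(replicas, utilization)
    print(replicas)
```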

4.
Biomed Phys Eng Express ; 10(6)2024 Oct 04.
Article in English | MEDLINE | ID: mdl-39315479

ABSTRACT

Automation is revamping our preprocessing pipelines and accelerating the delivery of personalized digital medicine. It improves efficiency, reduces costs, and allows clinicians to treat patients without significant delays. However, the influx of multimodal data highlights the need to protect sensitive information, such as clinical data, and to safeguard data fidelity. One neuroimaging modality that produces large amounts of time-series data is Electroencephalography (EEG). It captures neural dynamics in a task or resting brain state with high temporal resolution. EEG electrodes placed on the scalp acquire electrical activity from the brain. These electrical potentials attenuate as they cross multiple layers of brain tissue and fluid, yielding signals relatively weaker than the noise, i.e., a low signal-to-noise ratio. EEG signals are further distorted by internal physiological artifacts, such as eye movements (EOG) or heartbeat (ECG), and by external noise, such as line noise (50 Hz). EOG artifacts, due to their proximity to the frontal brain regions, are particularly challenging to eliminate. Therefore, a widely used EOG rejection method, independent component analysis (ICA), demands manual inspection of the marked EOG components before they are rejected from the EEG data. We underscore the inaccuracy of fully automated ICA rejection and provide an auxiliary algorithm, Second Layer Inspection for EOG (SLOG), for the clinical environment. SLOG, based on spatial and temporal patterns of eye movements, re-examines the already marked EOG artifacts and confirms that no EEG-related activity is mistakenly eliminated in this artifact rejection step. SLOG achieved a 99% precision rate on the simulated dataset and 85% precision on the real EEG dataset. One of the primary considerations for cloud-based applications is operational cost, including computing power. Algorithms like SLOG allow us to maintain data fidelity and precision without overloading the cloud platforms and maxing out our budgets.
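For context, the automated ICA-based EOG rejection that SLOG re-inspects can be sketched with the standard MNE-Python API; the second-pass correlation check below is only a stand-in for SLOG's spatial/temporal criteria, and the file name and threshold are assumptions.

```python
# Sketch of automated ICA-based EOG rejection (the step SLOG re-examines),
# assuming the recording includes an EOG channel. 'raw.fif' is a placeholder.
import mne

raw = mne.io.read_raw_fif("raw.fif", preload=True)
raw.filter(1.0, 40.0)                      # band-pass before ICA

ica = mne.preprocessing.ICA(n_components=20, random_state=42)
ica.fit(raw)

# First pass: components correlated with the EOG channel are marked.
eog_inds, eog_scores = ica.find_bads_eog(raw)

# Second pass (illustrative stand-in for SLOG): keep only strongly
# correlated components so borderline ones are not discarded automatically.
confirmed = [idx for idx in eog_inds if abs(eog_scores[idx]) > 0.8]

ica.exclude = confirmed
clean = ica.apply(raw.copy())
```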


Subject(s)
Algorithms , Artifacts , Brain , Cloud Computing , Electroencephalography , Signal Processing, Computer-Assisted , Electroencephalography/methods , Humans , Brain/diagnostic imaging , Brain/physiology , Signal-To-Noise Ratio , Eye Movements/physiology , Electrooculography/methods , Data Accuracy
5.
Sensors (Basel) ; 24(18)2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39338693

ABSTRACT

In cloud-based Distributed Acoustic Sensing (DAS) sensor data management, we are confronted with two primary challenges. The first is developing efficient storage mechanisms capable of handling the enormous volume of data generated by these sensors. To address this, we design and implement a pipeline system that efficiently sends the big data to DynamoDB, exploiting the low latency of the DynamoDB storage system for a benchmark DAS scheme performing continuous monitoring over a 100 km range at meter-scale spatial resolution. We employ the DynamoDB service of Amazon Web Services (AWS), which offers highly expandable storage capacity with access latency of a few tens of milliseconds. The different stages of DAS data handling are performed in a pipeline, and the scheme is optimized for high overall throughput with reduced latency, suitable for concurrent, real-time event extraction as well as minimal storage of raw and intermediate data. In addition, the scalability of the DynamoDB-based data storage scheme is evaluated for linear and nonlinear variations of the number of access batches and a wide range of data sample sizes corresponding to sensing ranges of 1-110 km. The results show latencies of 40 ms per access batch with low standard deviations of a few milliseconds, and latency per sample decreases as the sample size increases, paving the way toward scalable, cloud-based data storage services that integrate additional post-processing for more precise feature extraction. The technique greatly simplifies DAS data handling in key application areas requiring continuous, large-scale measurement schemes. The second challenge is that processing raw traces from a long-distance DAS for real-time monitoring requires careful design of computational resources to guarantee the requisite dynamic performance. We therefore design a system for evaluating the performance of cloud computing systems for diverse computations on DAS data, aimed at unveiling insights into the performance metrics and operational efficiencies of computations on the data in the cloud, providing a deeper understanding of the system's performance, identifying potential bottlenecks, and suggesting areas for improvement. To achieve this, we employ the CloudSim framework. The analysis reveals that more capable virtual machines (VMs) significantly decrease processing time, influenced by the number of Processing Elements (PEs) and Million Instructions Per Second (MIPS). The results also show that, although more computation is required as the fiber length increases, with a corresponding increase in processing time, the overall computation speed remains suitable for continuous real-time monitoring. VMs with lower processing speed and fewer CPUs show more inconsistent processing times than higher-performance VMs, while not incurring significantly lower prices. Additionally, the impact of VM parameters on computation time is explored, highlighting the importance of resource optimization in DAS system design for efficient performance. The study also observes a notable trend in processing time, showing a significant decrease for every additional 50,000 columns processed as the fiber length increases. This finding underscores the efficiency gains achieved with larger computational loads, indicating improved system performance and capacity utilization as the DAS system processes more extensive datasets.
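For illustration, a minimal sketch of batching sensor samples into DynamoDB with boto3 and timing each batch; the table name, key schema, and item layout are assumptions, and the paper's full pipeline stages are not shown.

```python
# Minimal sketch of batched writes of DAS samples to DynamoDB with boto3.
# Table "das_samples" and its key schema are assumed to exist.
import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("das_samples")        # assumed existing table

def write_batch(channel_id, samples):
    """Write one batch of (timestamp, value) samples; return latency in ms."""
    start = time.perf_counter()
    with table.batch_writer() as batch:      # handles 25-item chunking/retries
        for ts, value in samples:
            batch.put_item(Item={
                "channel": channel_id,       # partition key (assumed)
                "timestamp": ts,             # sort key (assumed)
                "value": str(value),         # stored as string to avoid float issues
            })
    return (time.perf_counter() - start) * 1000.0

latency_ms = write_batch("ch-0001", [(i, 0.01 * i) for i in range(100)])
print(f"{latency_ms:.1f} ms per batch")
```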

6.
Sensors (Basel) ; 24(18)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39338747

ABSTRACT

This paper evaluated deployment efficiency by comparing manual deployment with automated deployment through a CI/CD pipeline using Jenkins. The study involved moving from a manual deployment process to an automated system using Jenkins and experimenting with both deployment methods in a real-world environment. The results showed that the automated deployment system significantly reduced both deployment time and error rate compared to manual deployment. Manual deployment required human intervention at each step, making it time-consuming and prone to mistakes, while automated deployment using Jenkins automated each step to ensure consistency and maximized time efficiency through parallel processing. Automated testing verified the stability of the code before deployment, minimizing errors. This study demonstrates the effectiveness of adopting a CI/CD pipeline and shows that automated systems can provide high efficiency in real-world production environments. It also highlights the importance of security measures to prevent sensitive information leakage during CI/CD, suggesting the use of secrets management tools and environment variables and limiting access rights. This research will contribute to exploring the applicability of CI/CD pipelines in different environments and, in doing so, validate the universality of automated systems.
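To illustrate the secrets-handling advice above, a deploy step can read credentials from environment variables injected by the CI/CD tool instead of hard-coding them; the variable name and deploy command below are hypothetical, not the study's actual pipeline.

```python
# Illustrative sketch: a deployment step that takes its credential from an
# environment variable (e.g. injected by Jenkins) rather than source code.
import os
import subprocess
import sys

def deploy():
    registry_token = os.environ.get("REGISTRY_TOKEN")   # injected by CI/CD
    if not registry_token:
        sys.exit("REGISTRY_TOKEN is not set; aborting deployment.")
    # The token stays in the environment and is never written to logs.
    result = subprocess.run(
        ["./deploy.sh", "--env", "production"],          # hypothetical script
        env={**os.environ, "REGISTRY_TOKEN": registry_token},
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(f"Deployment failed:\n{result.stderr}")

if __name__ == "__main__":
    deploy()
```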

7.
Network ; : 1-20, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39320977

ABSTRACT

The rapid growth of cloud computing has led to the widespread adoption of heterogeneous virtualized environments, offering scalable and flexible resources to meet diverse user demands. However, the increasing complexity and variability of workload characteristics pose significant challenges for optimizing energy consumption, and many scheduling algorithms have been suggested to address this. Therefore, this paper proposes a self-attention-based progressive generative adversarial network optimized with the Dwarf Mongoose algorithm for Energy and Deadline Aware Scheduling in heterogeneous virtualized cloud computing (SAPGAN-DMA-DAS-HVCC). Here, a self-attention-based progressive generative adversarial network (SAPGAN) is proposed to schedule activities in a cloud environment with an objective function based on makespan and energy consumption, and the Dwarf Mongoose algorithm is used to optimize the weight parameters of SAPGAN. The proposed SAPGAN-DMA-DAS-HVCC achieves 32.77%, 34.83% and 35.76% higher right-skewed makespan and 31.52%, 33.28% and 29.14% lower cost compared with existing models, namely task scheduling in a heterogeneous cloud environment utilizing the mean grey wolf optimization approach, the energy- and performance-efficient task scheduling algorithm for heterogeneous virtualized clouds, and energy- and makespan-aware scheduling of deadline-sensitive tasks in the cloud environment, respectively.
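For illustration, the two scheduling objectives named above (makespan and energy) can be evaluated for a candidate task-to-VM assignment as in the sketch below; VM speeds and power figures are assumptions, and the SAPGAN/Dwarf Mongoose components are not shown.

```python
# Minimal sketch of evaluating makespan and energy for a task-to-VM
# assignment. Task lengths are in million instructions (MI), VM speeds in
# MIPS, and power figures in watts; all numbers are illustrative.
def evaluate_schedule(assignment, task_lengths, vm_mips, vm_power_watts):
    """assignment[i] = VM index for task i; tasks on one VM run serially."""
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    makespan = max(finish)
    energy = sum(busy * power for busy, power in zip(finish, vm_power_watts))
    return makespan, energy

makespan, energy = evaluate_schedule(
    assignment=[0, 1, 1, 0],
    task_lengths=[4000, 8000, 2000, 6000],
    vm_mips=[1000, 2000],
    vm_power_watts=[120, 200],
)
print(makespan, energy)
```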

8.
Sci Rep ; 14(1): 21850, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39300104

ABSTRACT

The task scheduling problem (TSP) is a huge challenge in the cloud computing paradigm, as the number of tasks arriving at a cloud application platform varies over time and the tasks have variable lengths and runtime capacities. These tasks may be generated from various heterogeneous resources, and their arrival at the cloud console directly affects the performance of the cloud paradigm by increasing makespan, energy consumption, and resource costs. Traditional task scheduling algorithms cannot handle these types of complex workloads in the cloud paradigm. Many authors have developed task scheduling algorithms using metaheuristic techniques and hybrid approaches, but these give only near-optimal solutions, and the TSP remains a highly challenging and dynamic scenario as it is an NP-hard problem. Therefore, to tackle the TSP and schedule tasks effectively in the cloud paradigm, we formulated an Adaptive Task Scheduler that segments all tasks arriving at the cloud console into subtasks and feeds them to a scheduler modeled by the Improved Asynchronous Advantage Actor Critic algorithm (IA3C) to generate schedules. This scheduling process is carried out in two stages. In the first stage, all incoming tasks are segmented into subtasks; after segmentation, the subtasks are grouped according to their size, execution time, and communication time and fed to the (ATSIA3C) scheduler. In the second stage, the scheduler checks the above constraints and disperses the subtasks onto VMs with suitable processing capacity residing in data centers. The proposed ATSIA3C is simulated on CloudSim. Extensive simulations are conducted using both fabricated worklogs and real-time supercomputing worklogs. Our proposed mechanism is evaluated against baseline algorithms, i.e., RATS-HM, AINN-BPSO, and MOABCQ. The results show that the proposed ATSIA3C outperforms existing task schedulers, improving makespan by 70.49%, resource cost by 77.42%, and energy consumption by 74.24% in a multi-cloud environment.
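For illustration, the two-stage idea described above (segment tasks into subtasks, then dispatch groups to suitably sized VMs) can be sketched as follows; the chunking rule and the greedy dispatcher are assumptions standing in for the learned IA3C policy.

```python
# Illustrative sketch of stage 1 (task segmentation) and a greedy stand-in
# for stage 2 (dispatching subtasks to VMs by remaining capacity).
def segment(tasks, chunk=1000):
    """Split each task (length in MI) into fixed-size subtasks."""
    subtasks = []
    for tid, length in enumerate(tasks):
        while length > 0:
            subtasks.append((tid, min(chunk, length)))
            length -= chunk
    return subtasks

def dispatch(subtasks, vm_capacity):
    """Largest subtask first, assigned to the VM with most remaining capacity."""
    remaining = list(vm_capacity)
    plan = []
    for tid, size in sorted(subtasks, key=lambda s: -s[1]):
        vm = max(range(len(remaining)), key=lambda v: remaining[v])
        remaining[vm] -= size
        plan.append((tid, size, vm))
    return plan

print(dispatch(segment([2500, 4000, 800]), vm_capacity=[6000, 5000]))
```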

9.
Heliyon ; 10(16): e36273, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39253244

ABSTRACT

With the rapid development of informatization, a vast amount of data is continuously generated and accumulated, leading to the emergence of cloud storage services. However, data stored in the cloud is beyond the control of users, posing various security risks. Cloud data auditing technology enables the inspection of data integrity in the cloud without requiring data to be downloaded. Among such approaches, public auditing schemes have developed rapidly because they avoid additional user auditing expenses. However, malicious third-party auditors can compromise data privacy. This paper proposes an improved identity-based cloud auditing scheme that can resist malicious auditors. The scheme builds on an existing identity-based public auditing scheme that uses blockchain to prevent malicious auditing; we found that scheme to be insecure because a malicious cloud server can forge authentication tags for outsourced data blocks, whereas our scheme avoids these security flaws. Through security proofs and performance analysis, we further demonstrate that our scheme is secure and efficient. Additionally, our scheme supports typical application scenarios.
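For intuition only, the role of per-block authentication tags in integrity auditing can be sketched with a shared-key HMAC; this is NOT the paper's identity-based, blockchain-assisted construction, just an illustration of why a server that cannot produce valid tags cannot forge an outsourced block.

```python
# Simplified illustration of per-block authentication tags for integrity
# checking (shared-key HMAC, not the identity-based scheme of the paper).
import hmac
import hashlib
import os

key = os.urandom(32)                       # kept by the data owner

def tag_block(index, block):
    return hmac.new(key, index.to_bytes(8, "big") + block,
                    hashlib.sha256).hexdigest()

def verify_block(index, block, tag):
    return hmac.compare_digest(tag_block(index, block), tag)

blocks = [b"block-0 data", b"block-1 data"]
tags = [tag_block(i, b) for i, b in enumerate(blocks)]

print(verify_block(1, blocks[1], tags[1]))          # True: intact block
print(verify_block(1, b"tampered data", tags[1]))   # False: forgery detected
```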

10.
Stud Health Technol Inform ; 317: 11-19, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234702

ABSTRACT

BACKGROUND: In the context of the telematics infrastructure, new data usage regulations, and the growing potential of artificial intelligence, cloud computing plays a key role in driving digitalization in the German hospital sector. METHODS: Against this background, the study aims to develop and validate a scale for assessing the cloud readiness of German hospitals. It uses the TPOM (Technology, People, Organization, Macro-Environment) framework to create a scoring system. A survey involving 110 Chief Information Officers (CIOs) from German hospitals was conducted, followed by an exploratory factor analysis and reliability testing to refine the items, resulting in a final set of 30 items. RESULTS: The analysis confirmed the statistical robustness and identified key factors contributing to cloud readiness. These include IT security in the dimension "technology", collaborative research and acceptance of the need to make high-quality data available in the dimension "people", scalability of IT resources in the dimension "organization", and legal aspects in the dimension "macro-environment". The macro-environment dimension emerged as particularly stable, highlighting the critical role of regulatory compliance in the healthcare sector. CONCLUSION: The findings suggest a certain degree of cloud readiness among German hospitals, with potential for improvement in all four dimensions. Systemically, legal requirements and a challenging political environment are top concerns for CIOs, impacting their cloud readiness.
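For illustration, the reliability-testing step in such scale development can be sketched as a Cronbach's alpha computation; the simulated item matrix below (respondents x items) is an assumption and does not reproduce the TPOM factor structure.

```python
# Minimal sketch of internal-consistency (Cronbach's alpha) testing for a
# block of survey items; the response data here are randomly generated.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(110, 30))   # e.g. 110 CIOs, 30 items
print(round(cronbach_alpha(responses), 3))
```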


Subject(s)
Cloud Computing , Germany , Hospitals , Computer Security , Humans , Surveys and Questionnaires
11.
Sci Rep ; 14(1): 20650, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232070

ABSTRACT

In human microbiome studies, mediation analysis has recently been spotlighted as a practical and powerful analytic tool to survey the causal roles of the microbiome as a mediator in explaining the observed relationships between a medical treatment/environmental exposure and a human disease. We also note that, in clinical research, investigators often trace disease progression sequentially in time; as such, time-to-event (e.g., time-to-disease, time-to-cure) responses, known as survival responses, are prevalent as a surrogate variable for human health or disease. In this paper, we introduce a web cloud computing platform, named microbiome mediation analysis with survival responses (MiMedSurv), for comprehensive microbiome mediation analysis with survival responses in a user-friendly web environment. MiMedSurv is an extension of our prior web cloud computing platform, named microbiome mediation analysis (MiMed), to survival responses. Its two main distinguishing features are as follows. First, MiMedSurv conducts baseline exploratory non-mediational survival analysis, not involving the microbiome, to survey the disparity in survival response between medical treatments/environmental exposures. Then, MiMedSurv identifies the mediating roles of the microbiome in various aspects: (i) as a microbial ecosystem using ecological indices (e.g., alpha and beta diversity indices) and (ii) as individual microbial taxa in various hierarchies (e.g., phyla, classes, orders, families, genera, species). To illustrate its use, we survey the mediating roles of the gut microbiome between antibiotic treatment and time-to-type 1 diabetes. MiMedSurv is freely available on our web server ( http://mimedsurv.micloud.kr ).
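For illustration, the kind of baseline, non-mediational survival comparison described above (before the mediation step) can be sketched with the lifelines library; the column names and input file are assumptions, and the platform itself runs such analyses on its web server.

```python
# Minimal sketch of a baseline survival comparison: a Kaplan-Meier fit for
# one group and a log-rank test between exposure groups. 'cohort.csv' is a
# placeholder with columns time, event, treated.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")
treated, control = df[df.treated == 1], df[df.treated == 0]

km = KaplanMeierFitter()
km.fit(treated["time"], event_observed=treated["event"], label="antibiotic")

result = logrank_test(treated["time"], control["time"],
                      event_observed_A=treated["event"],
                      event_observed_B=control["event"])
print(result.p_value)
```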


Subject(s)
Cloud Computing , Internet , Microbiota , Humans , Software , Survival Analysis
12.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39101486

ABSTRACT

Multi-omics (genomics, transcriptomics, epigenomics, proteomics, metabolomics, etc.) research approaches are vital for understanding the hierarchical complexity of human biology and have proven to be extremely valuable in cancer research and precision medicine. Emerging scientific advances in recent years have made high-throughput genome-wide sequencing a central focus in molecular research by allowing for the collective analysis of various kinds of molecular biological data from different types of specimens in a single tissue or even at the level of a single cell. Additionally, with the help of improved computational resources and data mining, researchers are able to integrate data from different multi-omics regimes to identify new prognostic, diagnostic, or predictive biomarkers, uncover novel therapeutic targets, and develop more personalized treatment protocols for patients. For the research community to parse the scientifically and clinically meaningful information out of all the biological data being generated each day more efficiently with less wasted resources, being familiar with and comfortable using advanced analytical tools, such as the Google Cloud Platform, becomes imperative. This project is an interdisciplinary, cross-organizational effort to provide a guided learning module for integrating transcriptomics and epigenetics data analysis protocols into a comprehensive analysis pipeline for users to implement in their own work, utilizing the cloud computing infrastructure on Google Cloud. The learning module consists of three submodules that guide the user through tutorial examples that illustrate the analysis of RNA-sequence and Reduced-Representation Bisulfite Sequencing data. The examples are in the form of breast cancer case studies, and the data sets were procured from the public repository Gene Expression Omnibus. The first submodule is devoted to transcriptomics analysis with the RNA sequencing data, the second submodule focuses on epigenetics analysis using the DNA methylation data, and the third submodule integrates the two methods for a deeper biological understanding. The modules begin with data collection and preprocessing, with further downstream analysis performed in a Vertex AI Jupyter notebook instance with an R kernel. Analysis results are returned to Google Cloud buckets for storage and visualization, removing the computational strain from local resources. The final product is a start-to-finish tutorial for researchers with limited experience in multi-omics to integrate transcriptomics and epigenetics data analysis into a comprehensive pipeline to perform their own biological research. This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [16] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
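For illustration, the storage step described above (returning results from a notebook to a Cloud Storage bucket) can be sketched with the official google-cloud-storage client; the bucket and object names are placeholders.

```python
# Minimal sketch of pushing analysis results to a Google Cloud Storage
# bucket from a notebook; bucket and file names are illustrative.
from google.cloud import storage

client = storage.Client()                          # uses ambient credentials
bucket = client.bucket("my-analysis-results")      # assumed bucket name

def upload_result(local_path, destination):
    blob = bucket.blob(destination)
    blob.upload_from_filename(local_path)
    return f"gs://{bucket.name}/{destination}"

print(upload_result("deg_results.csv", "rnaseq/deg_results.csv"))
```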


Subject(s)
Cloud Computing , Epigenomics , Humans , Epigenomics/methods , Epigenesis, Genetic , Transcriptome , Computational Biology/methods , Gene Expression Profiling/methods , Software , Data Mining/methods
13.
Sensors (Basel) ; 24(15)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39123976

ABSTRACT

Industry 4.0 introduced new concepts, technologies, and paradigms, such as Cyber Physical Systems (CPSs), Industrial Internet of Things (IIoT) and, more recently, Artificial Intelligence of Things (AIoT). These paradigms ease the creation of complex systems by integrating heterogeneous devices. As a result, the structure of the production systems is changing completely. In this scenario, the adoption of reference architectures based on standards may guide designers and developers to create complex AIoT applications. This article surveys the main reference architectures available for industrial AIoT applications, analyzing their key characteristics, objectives, and benefits; it also presents some use cases that may help designers create new applications. The main goal of this review is to help engineers identify the alternative that best suits every application. The authors conclude that existing reference architectures are a necessary tool for standardizing AIoT applications, since they may guide developers in the process of developing new applications. However, the use of reference architectures in real AIoT industrial applications is still incipient, so more development effort is needed in order for it to be widely adopted.

14.
Network ; : 1-30, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39163538

ABSTRACT

In cloud computing (CC), task scheduling allocates tasks to the most suitable resources for execution. This article proposes a model for task scheduling utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective task scheduling for incoming user requests is carried out using the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished based on DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed model DFNN-LSTM+FFBO achieves superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.

15.
Heliyon ; 10(12): e32399, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39183823

ABSTRACT

In recent years, edge-cloud computing has attracted increasing attention due to the benefits of combining edge and cloud computing. Task scheduling remains one of the major challenges for improving service quality and resource efficiency in edge-clouds. Although several studies have addressed the scheduling problem, issues remain to be resolved for practical application, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we aim to provide a heterogeneity-aware task scheduling algorithm to improve task completion rate and resource utilization for edge-clouds with deadline constraints. Due to the NP-hardness of the scheduling problem, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, to solve the problem, considering task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates which resource processes its corresponding task. To improve the performance of the GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify its superiority in task completion rate compared with thirteen other classical and up-to-date scheduling algorithms.
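For illustration, a heterogeneity-aware ("skew") mutation can bias mutated genes toward faster resources rather than choosing uniformly; the speed-proportional weighting below is an assumption standing in for the paper's exact operator.

```python
# Illustrative sketch of a skew mutation for a GA scheduler: a mutated gene
# (the resource index of a task) is drawn with probability proportional to
# resource speed, reflecting resource heterogeneity.
import random

def skew_mutate(chromosome, resource_speeds, rate=0.05):
    """chromosome[i] = resource index of task i."""
    total = sum(resource_speeds)
    weights = [s / total for s in resource_speeds]
    mutated = list(chromosome)
    for i in range(len(mutated)):
        if random.random() < rate:
            mutated[i] = random.choices(range(len(resource_speeds)),
                                        weights=weights, k=1)[0]
    return mutated

print(skew_mutate([0, 2, 1, 1, 0], resource_speeds=[1000, 2000, 4000], rate=0.5))
```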

16.
Sci Rep ; 14(1): 18028, 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39098886

ABSTRACT

Users can purchase virtualized computer resources using the cloud computing concept, which is a novel and innovative way of computing. It offers numerous advantages for the IT and healthcare industries over traditional methods. However, a lack of trust between cloud service users (CSUs) and cloud service providers (CSPs) is hindering the widespread adoption of cloud computing across industries. Since cloud computing offers a wide range of trust models and strategies, it is essential to analyze the service using a detailed methodology in order to choose the appropriate cloud service for various user types. To achieve that, it is vital to identify a comprehensive set of elements that are both necessary and sufficient for evaluating any cloud service. As a result, this study proposes an accurate, fuzzy logic-based trust evaluation model for assessing the trustworthiness of a cloud service provider. Here, we examine how fuzzy logic improves the efficiency of trust evaluation. Trust is assessed using Quality of Service (QoS) characteristics like security, privacy, dynamicity, data integrity, and performance. The outcomes of a MATLAB simulation demonstrate the viability of the suggested strategy in a cloud setting.
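For intuition, fuzzy trust scoring over the QoS attributes listed above can be sketched as membership functions plus a weighted aggregation; the triangular membership shape and the weights are assumptions, and the paper's full MATLAB rule base is not reproduced.

```python
# Minimal sketch of fuzzy trust scoring: each QoS value gets a "high trust"
# membership degree, and the degrees are combined with assumed weights.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_score(qos):
    """qos: dict of attribute -> value in [0, 1]; returns a crisp score."""
    weights = {"security": 0.3, "privacy": 0.2, "dynamicity": 0.1,
               "data_integrity": 0.2, "performance": 0.2}
    high = {k: triangular(v, 0.5, 1.0, 1.5) for k, v in qos.items()}
    return sum(weights[k] * high[k] for k in weights)

print(trust_score({"security": 0.9, "privacy": 0.8, "dynamicity": 0.6,
                   "data_integrity": 0.85, "performance": 0.7}))
```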

17.
Heliyon ; 10(14): e34701, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39149018

ABSTRACT

The definition of service has evolved from a focus on material value in manufacturing before the 2000s to a customer-centric value based on the significant growth of the service industry. Digital transformation has become essential for companies in the service industry due to the incorporation of digital technology through the Fourth Industrial Revolution and COVID-19. This study utilised Bidirectional Encoder Representations from Transformers (BERT) to analyse 3029 international patents related to the customer service industry and digital transformation registered between 2000 and 2022. Through topic modelling, this study identified 10 major topics in the customer service industry and analysed their yearly trends. Our findings show that, as of 2022, the trend with the highest frequency is user-centric network service design, while cloud computing has experienced the steepest increase in the last five years. User-centric network services have been steadily developing since the inception of the Internet. Cloud computing is one of the key technologies being developed intensively in 2023 for the digital transformation of customer service. This study identifies time-series trends in customer service industry patents and suggests the effectiveness of using BERTopic to predict future trends in technology.
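For illustration, the topic-modelling step can be sketched with a recent release of the BERTopic library; the patent file and its columns are placeholders, and the study's actual corpus and settings are not included.

```python
# Minimal sketch of BERT-based topic modelling of patent abstracts and their
# yearly trends with BERTopic. 'patents.csv' (abstract, year) is a placeholder.
import pandas as pd
from bertopic import BERTopic

df = pd.read_csv("patents.csv")
docs = df["abstract"].tolist()
years = df["year"].tolist()

topic_model = BERTopic(min_topic_size=10)
topics, probs = topic_model.fit_transform(docs)

trends = topic_model.topics_over_time(docs, years)   # yearly topic frequencies
print(topic_model.get_topic_info().head(10))
```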

18.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204967

ABSTRACT

Task scheduling is a critical challenge in cloud computing systems, greatly impacting their performance. Task scheduling is a nondeterministic polynomial time hard (NP-Hard) problem that complicates the search for nearly optimal solutions. Five major uncertainty parameters, i.e., security, traffic, workload, availability, and price, influence task scheduling decisions. The primary rationale for selecting these uncertainty parameters lies in the challenge of accurately measuring their values, as empirical estimations often diverge from the actual values. The integral-valued Pythagorean fuzzy set (IVPFS) is a promising mathematical framework to deal with parametric uncertainties. The Dyna Q+ algorithm is the updated form of the Dyna Q agent designed specifically for dynamic computing environments by providing bonus rewards to non-exploited states. In this paper, the Dyna Q+ agent is enriched with the IVPFS mathematical framework to make intelligent task scheduling decisions. The performance of the proposed IVPFS Dyna Q+ task scheduler is tested using the CloudSim 3.3 simulator. The execution time is reduced by 90%, the makespan time is also reduced by 90%, the operation cost is below 50%, and the resource utilization rate is improved by 95%, all of these parameters meeting the desired standards or expectations. The results are also further validated using an expected value analysis methodology that confirms the good performance of the task scheduler. A better balance between exploration and exploitation through rigorous action-based learning is achieved by the Dyna Q+ agent.
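For intuition, the Dyna Q+ bonus mentioned above can be sketched as a planning update whose reward includes kappa * sqrt(tau), where tau is the time since a state-action pair was last tried, so neglected scheduling actions are revisited; the states, actions, and constants below are assumptions, and the IVPFS uncertainty handling is not reproduced.

```python
# Minimal sketch of Dyna-Q+ planning updates with an exploration bonus for
# long-unvisited state-action pairs. All states/actions are illustrative.
import math
import random
from collections import defaultdict

Q = defaultdict(float)           # Q[(state, action)]
model = {}                       # model[(state, action)] = (reward, next_state)
last_tried = defaultdict(int)    # time step of last real visit
alpha, gamma, kappa = 0.1, 0.95, 0.01

def dyna_q_plus_planning(actions, step, n_planning=10):
    for _ in range(n_planning):
        (s, a), (r, s_next) = random.choice(list(model.items()))
        bonus = kappa * math.sqrt(step - last_tried[(s, a)])
        target = r + bonus + gamma * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

# Example: one observed transition, then planning sweeps at step 50.
model[("queue_high", "vm_2")] = (1.0, "queue_low")
last_tried[("queue_high", "vm_2")] = 12
dyna_q_plus_planning(actions=["vm_1", "vm_2"], step=50)
print(Q[("queue_high", "vm_2")])
```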

19.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041913

ABSTRACT

This study describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module is designed to facilitate interactive learning of whole-genome bisulfite sequencing (WGBS) data analysis utilizing cloud-based tools in Google Cloud Platform, such as Cloud Storage, Vertex AI notebooks and Google Batch. WGBS is a powerful technique that can provide comprehensive insights into DNA methylation patterns at single cytosine resolution, essential for understanding epigenetic regulation across the genome. The designed learning module first provides step-by-step tutorials that guide learners through the two main stages of WGBS data analysis, preprocessing and the identification of differentially methylated regions. It then provides a streamlined workflow and demonstrates how to use it effectively for large datasets given the power of cloud infrastructure. The integration of these interconnected submodules progressively deepens the user's understanding of the WGBS analysis process along with the use of cloud resources. Through this module, we can enhance the accessibility and adoption of cloud computing in epigenomic research, speeding up advancements in the related field and beyond. This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
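For illustration only, the second stage described above (identifying differentially methylated regions) typically ends with a filtering step like the sketch below; the column names, thresholds, and input file are assumptions, and the module's actual workflow runs on Google Cloud.

```python
# Minimal sketch of filtering candidate regions into DMRs by methylation
# difference and adjusted p-value; all names and thresholds are illustrative.
import pandas as pd

regions = pd.read_csv("methylation_regions.csv")   # chrom, start, end,
                                                    # meth_diff, qvalue
dmrs = regions[(regions["qvalue"] < 0.05) &
               (regions["meth_diff"].abs() >= 0.25)]

dmrs = dmrs.sort_values("meth_diff", key=lambda s: s.abs(), ascending=False)
dmrs.to_csv("dmrs.csv", index=False)
print(f"{len(dmrs)} DMRs retained")
```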


Subject(s)
Cloud Computing , DNA Methylation , Software , Whole Genome Sequencing , Whole Genome Sequencing/methods , Sulfites/chemistry , Humans , Epigenesis, Genetic , Computational Biology/methods
20.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041916

ABSTRACT

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' (https://github.com/NIGMS/NIGMS-Sandbox). The module delivers learning materials on Cloud-based Consensus Pathway Analysis in an interactive format that uses appropriate cloud resources for data access and analyses. Pathway analysis is important because it allows us to gain insights into biological mechanisms underlying conditions. But the availability of many pathway analysis methods, the requirement of coding skills, and the focus of current tools on only a few species all make it very difficult for biomedical researchers to self-learn and perform pathway analysis efficiently. Furthermore, there is a lack of tools that allow researchers to compare analysis results obtained from different experiments and different analysis methods to find consensus results. To address these challenges, we have designed a cloud-based, self-learning module that provides consensus results among established, state-of-the-art pathway analysis techniques to provide students and researchers with necessary training and example materials. The training module consists of five Jupyter Notebooks that provide complete tutorials for the following tasks: (i) process expression data, (ii) perform differential analysis, visualize and compare the results obtained from four differential analysis methods (limma, t-test, edgeR, DESeq2), (iii) process three pathway databases (GO, KEGG and Reactome), (iv) perform pathway analysis using eight methods (ORA, CAMERA, KS test, Wilcoxon test, FGSEA, GSA, SAFE and PADOG) and (v) combine results of multiple analyses. We also provide examples, source code, explanations and instructional videos for trainees to complete each Jupyter Notebook. The module supports the analysis for many model (e.g. human, mouse, fruit fly, zebra fish) and non-model species. The module is publicly available at https://github.com/NIGMS/Consensus-Pathway-Analysis-in-the-Cloud. This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning' https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox [1] at the beginning of this Supplement. This module delivers learning materials on the analysis of bulk and single-cell ATAC-seq data in an interactive format that uses appropriate cloud resources for data access and analyses.
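For illustration, the consensus step in task (v) above can combine the p-values that several pathway analysis methods assign to one pathway; Fisher's method via SciPy is used here as an illustrative choice, and the module's notebooks may combine results differently.

```python
# Minimal sketch of combining per-method p-values for one pathway into a
# single consensus p-value using Fisher's method.
from scipy.stats import combine_pvalues

# p-values for one pathway from, e.g., ORA, CAMERA, the KS test, and FGSEA
pvalues_per_method = [0.012, 0.048, 0.003, 0.090]

statistic, consensus_p = combine_pvalues(pvalues_per_method, method="fisher")
print(round(consensus_p, 5))
```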


Subject(s)
Cloud Computing , Software , Humans , Computational Biology/methods , Computational Biology/education , Animals , Gene Ontology