Results 1 - 9 of 9
1.
Heliyon ; 10(5): e26969, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38455540

ABSTRACT

This article addresses the need for a lightweight software architecture evaluation framework that responds to practitioners' concerns. The proposed framework uses process mining and Petri nets to analyze security and performance in both the early and late stages of software development. Applied in six case studies, it proved a feasible and effective way to detect security and performance issues in complex, heterogeneous architectures with less time and effort. The article details the framework's features, factors, and evaluation criteria, and discusses the shortcomings of traditional architecture documentation with Unified Modeling Language diagrams, as well as the limits of code alone for building comprehensive Software Architecture models. Existing methods for extracting implicit Software Architecture from code artifacts tend to produce code-oriented diagrams rather than architecture diagrams. To bridge this model-code gap, the framework treats the architecture already present in the source code as architectural components and focuses on architectural behaviors when analyzing performance and security. It also compares the Software Architecture extracted by different Process Mining algorithms to reach consensus on architecture descriptions, using visualizations to expose differences and similarities. Finally, the article argues that analyzing the previous version of a system's Software Architecture can drive improvements, and that deviations from the planned architecture can be detected with traceability approaches, helping software architects spot inconsistencies.
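
As a rough illustration of the process-mining step described above, the sketch below discovers Petri nets from a toy execution log with two different discovery algorithms and compares their replay fitness, in line with the article's suggestion of comparing architectures extracted by different algorithms. It assumes the pm4py library; the log, case IDs, and component names are hypothetical stand-ins for instrumented call traces.

```python
import pandas as pd
import pm4py

# Hypothetical execution log: each row is one component interaction.
df = pd.DataFrame({
    "case:concept:name": ["r1", "r1", "r1", "r2", "r2", "r2", "r2"],
    "concept:name": ["Gateway", "AuthService", "OrderService",
                     "Gateway", "AuthService", "CacheService", "OrderService"],
    "time:timestamp": pd.date_range("2024-01-01", periods=7, freq="s"),
})
log = pm4py.convert_to_event_log(df)

# Two discovery algorithms can yield different architectural views.
net_a, im_a, fm_a = pm4py.discover_petri_net_inductive(log)
net_b, im_b, fm_b = pm4py.discover_petri_net_heuristics(log)

# Token-based replay fitness as one simple point of comparison.
for name, (net, im, fm) in {"inductive": (net_a, im_a, fm_a),
                            "heuristics": (net_b, im_b, fm_b)}.items():
    fit = pm4py.fitness_token_based_replay(log, net, im, fm)
    print(name, fit["log_fitness"])
```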

2.
Sensors (Basel) ; 23(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991663

ABSTRACT

Traditional parallel computing for power management systems faces major challenges in execution time, computational complexity, and efficiency, such as processing time and delays in power system condition monitoring, particularly of consumer power consumption, weather data, and power generation, and in detecting and predicting patterns through centralized parallel processing and diagnosis. Under these constraints, data management has become a critical research concern and a bottleneck, and cloud computing-based methodologies have been introduced to manage data efficiently in power management systems. This paper reviews cloud computing architectures that can meet multi-level real-time requirements to improve monitoring and performance across different power system monitoring scenarios. Cloud computing solutions are then discussed against the background of big data, and emerging parallel programming models such as Hadoop, Spark, and Storm are briefly described to analyze their advances, constraints, and innovations. Key performance metrics of cloud computing applications, such as core data sampling, modeling, and analyzing the competitiveness of big data, are modeled by applying related hypotheses. Finally, the paper introduces a new design concept based on cloud computing and closes with recommendations on cloud computing infrastructure and on methods for managing real-time big data in power management systems that address the data mining challenges.
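
As a small illustration of the parallel programming models mentioned above, here is a minimal PySpark sketch that aggregates smart-meter consumption readings per hour, the kind of consumer power-consumption monitoring the review discusses. The schema and sample readings are hypothetical, and the job is batch rather than streaming.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("power-monitoring").getOrCreate()

# Hypothetical smart-meter readings: (meter_id, timestamp, kWh used).
readings = spark.createDataFrame(
    [("m1", "2023-03-08 10:05:00", 1.2),
     ("m1", "2023-03-08 10:55:00", 0.9),
     ("m2", "2023-03-08 10:30:00", 2.4)],
    ["meter_id", "ts", "kwh"],
)

# Hourly consumption per meter, computed in parallel across partitions.
hourly = (readings
          .withColumn("hour", F.date_trunc("hour", F.to_timestamp("ts")))
          .groupBy("meter_id", "hour")
          .agg(F.sum("kwh").alias("total_kwh")))

hourly.show()
spark.stop()
```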

3.
Sensors (Basel) ; 22(20)2022 Oct 12.
Article in English | MEDLINE | ID: mdl-36298096

ABSTRACT

The Mobile Ad-Hoc Network (MANET) has received significant interest from researchers for several applications. Despite the many routing protocols developed and proposed for MANETs, some remain too inefficient in data delivery and energy consumption, which limits network lifetime in forest fire monitoring. This paper therefore develops a Location Aided Routing (LAR) protocol for forest fire detection. The new protocol, named the LAR-Based Reliable Routing Protocol (LARRR), selects routes using three criteria: the route length between nodes, the sensed temperature, and the number of packets within node buffers (i.e., route busyness). The performance of LARRR is evaluated using widely known measures: Packet Delivery Ratio (PDR), Energy Consumption (EC), End-to-End Delay (E2E Delay), and Routing Overhead (RO). Simulation results show that LARRR achieves 70% PDR, 403 joules of EC, 2.733 s of E2E delay, and 43.04 RO. It outperforms its competitors and detects forest fires efficiently.


Subjects
Computer Communication Networks , Wildfires , Wireless Technology , Algorithms , Computer Simulation
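
The abstract names three route-selection criteria but not the exact scoring rule, so the sketch below is only a plausible weighted-cost combination of route length, sensed temperature, and buffer occupancy. The weights and normalization bounds are assumptions for illustration, not the LARRR formula from the paper.

```python
from dataclasses import dataclass

@dataclass
class Route:
    hops: int           # route length in hops
    max_temp_c: float   # highest temperature sensed along the route
    buffer_load: float  # mean buffer occupancy on the route, 0..1

def route_cost(r: Route, w_len=0.4, w_temp=0.3, w_busy=0.3,
               max_hops=20, max_temp=100.0):
    # Normalize each criterion to 0..1, then combine; lower is better.
    return (w_len * r.hops / max_hops
            + w_temp * r.max_temp_c / max_temp
            + w_busy * r.buffer_load)

routes = [Route(5, 60.0, 0.2), Route(3, 85.0, 0.7), Route(8, 40.0, 0.1)]
best = min(routes, key=route_cost)  # pick the cheapest candidate route
print(best)
```
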
4.
Sensors (Basel) ; 22(3)2022 Feb 07.
Article in English | MEDLINE | ID: mdl-35161996

ABSTRACT

Software architecture (SA) evaluation processes help investigate problems and potential risks in an SA. Many studies have proposed a plethora of systematic SA evaluation methods, yet industrial practitioners currently refrain from applying them because they are heavyweight. Today, heterogeneous software architectures are organized on new infrastructure: hardware and associated software allow different systems, such as embedded, sensor-based, modern AI, and cloud-based systems, to cooperate efficiently, which adds complexity to SA evaluation. Lightweight architectural evaluation methods have been proposed as alternatives to satisfy practitioners' concerns, but practitioners still do not adopt them. This study employs a systematic literature review with a text analysis of SA definitions to propose a comparison framework for SA. It identifies lightweight features and factors that could improve the uptake of architectural evaluation methods among industrial practitioners. The features are derived from practitioners' concerns by analyzing stakeholders' definitions of architecture and reviewing architectural evaluation methods. The lightweight factors are obtained by studying the five most commonly used lightweight methods and the Architecture Tradeoff Analysis Method (ATAM), the best-known heavyweight method. The research then addresses these features and factors.


Subjects
Inguinal Hernia , Inguinal Hernia/surgery , Herniorrhaphy , Humans , Industries , Software , Surgical Mesh
5.
PeerJ Comput Sci ; 7: e344, 2021.
Article in English | MEDLINE | ID: mdl-33816995

ABSTRACT

Artificial neural networks (ANNs) perform well on real-world classification problems. In this paper, a robust ANN classification model was constructed to improve the accuracy of breast cancer classification. The Taguchi method was used to determine a suitable number of neurons for the ANN's single hidden layer; choosing this number well mitigates overfitting, which directly affects classification performance. Based on the Taguchi results, 15 neurons were selected for the hidden layer and used to train the proposed model. The model was benchmarked on the Wisconsin Diagnostic Breast Cancer dataset, popularly known as the UCI dataset, and compared with seven existing classification models. The proposed model achieved the best breast cancer classification accuracy, 98.8%, confirming a significant performance improvement.
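
A minimal reproduction sketch of the reported setup: a single hidden layer of 15 neurons on the Wisconsin Diagnostic Breast Cancer data, using scikit-learn as a stand-in for the authors' ANN implementation. The solver, feature scaling, and train/test split are assumptions, so the resulting accuracy will differ from the paper's 98.8%.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Single hidden layer of 15 neurons, the Taguchi-selected size above.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```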

6.
PLoS One ; 16(2): e0245579, 2021.
Article in English | MEDLINE | ID: mdl-33630876

ABSTRACT

Deriving biologically interpretable neural biomarkers and features from neuroimaging datasets is a challenging task in MRI-based dyslexia studies, and it becomes more pronounced when the MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method for improving the biological interpretation of dyslexia's neural biomarkers from MRI datasets sourced from publicly available open databases. The proposed system uses a modified histogram normalization (MHN) method that maps the pixel intensities of low-quality input neuroimages into the range between the low-intensity region of interest (ROIlow) and the high-intensity region of interest (ROIhigh) of a high-quality reference image. This is preceded by image smoothing with a Gaussian filter using an isotropic 4 mm kernel. The proposed smoothing and normalization methods were evaluated in three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification, using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value of σ = 1.25, with a 0.9% increase in peak signal-to-noise ratio (PSNR). Results from the three post-processing experiments confirmed the efficacy of the proposed methods: the MHN and Gaussian smoothing methods improved the comparability of image features and dyslexia neural biomarkers, with a statistically significant, high Dice similarity coefficient (DSC), low mean squared error (MSE), and improved tissue volume estimates. After ten repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI). Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram matching normalization.


Subjects
Deep Learning , Dyslexia/classification , Dyslexia/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Biomarkers , Databases, Factual , Humans , Normal Distribution , Signal-to-Noise Ratio
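
To make the smoothing-then-normalization pipeline concrete, the sketch below applies a Gaussian filter at σ = 1.25 (the value reported above) and then linearly remaps intensities into a target [ROIlow, ROIhigh] band. The synthetic image and the reference bounds are illustrative, and the authors' MHN is more involved than this simple linear map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.normal(100, 30, size=(64, 64))    # stand-in "low-quality" slice

smoothed = gaussian_filter(img, sigma=1.25)  # sigma value from the abstract

# Linearly map intensities into the reference image's ROI intensity band.
roi_low, roi_high = 40.0, 200.0              # hypothetical reference bounds
lo, hi = smoothed.min(), smoothed.max()
normalized = roi_low + (smoothed - lo) * (roi_high - roi_low) / (hi - lo)

print(normalized.min(), normalized.max())    # exactly 40.0 and 200.0
```
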
7.
Brain Sci ; 10(12)2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33297436

ABSTRACT

Autism Spectrum Disorder (ASD), according to the American Psychiatric Association's DSM-5, is a neurodevelopmental disorder marked by deficits in social communication and social interaction together with restricted and repetitive behaviors. Children with ASD have difficulties with joint attention, social reciprocity, and the use of non-verbal and verbal behavior for communication, and as a result are often socially isolated. Researchers have emphasized early identification and early intervention to improve functioning in language, communication, and well-being. However, because of limited local assessment tools for diagnosis, limited speech-language therapy services in rural areas, and similar barriers, these children often receive no rehabilitation until they enter compulsory schooling at the age of seven. Efficient approaches to early identification and intervention through speedy diagnostic procedures are therefore required. In recent years, machine learning has been used to analyze and investigate ASD, improving diagnostic accuracy, time, and quality without added complexity. The machine learning methods applied to autism datasets to construct predictive models include artificial neural networks, support vector machines, Apriori algorithms, and decision trees. Meanwhile, feature selection remains an essential step before building a predictive model for ASD classification. This review investigates and analyzes up-to-date studies on machine learning methods for ASD feature selection and classification, and recommends ways to speed up machine learning on complex data for conceptualization and implementation in ASD diagnostic research. It can significantly benefit future autism research that uses machine learning for feature selection, classification, and the handling of imbalanced data.
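
A generic instance of the feature-selection-then-classification pattern the review surveys, using scikit-learn: univariate selection followed by a class-weighted SVM on imbalanced synthetic data. The dataset and the particular method pairing are illustrative choices, not recommendations drawn from the paper itself.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced synthetic data, echoing the class-imbalance issue noted above.
X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

# Select the 10 most informative features, then classify.
pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=10),
                     SVC(class_weight="balanced"))
print("mean F1:", cross_val_score(pipe, X, y, cv=5, scoring="f1").mean())
```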

8.
Biosystems ; 114(3): 219-26, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24120990

ABSTRACT

This paper presents a method for converting the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation whose dynamics are governed by rewrite rules operating at given rates. This has the advantage of applying accurately to small systems and of expressing rates of change that are determined locally, by region, rather than necessarily globally; such spatial information augments the standard differentiable approach to give a more realistic model. A biological case study, the ligand-receptor network of the protein TGF-β, is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the system's behaviors and properties are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular.


Subjects
Computational Biology/methods , Computers, Molecular/trends , Models, Biological , Systems Biology/methods , Transforming Growth Factor beta/metabolism , Ligands
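
To make the rewrite-rules-with-rates idea concrete, here is a plain Gillespie-style stochastic simulation of reversible ligand-receptor binding. The rate constants and molecule counts are illustrative rather than the paper's TGF-β parameters, and the sketch omits the region structure that a full membrane-computing model would add.

```python
import random

random.seed(0)
state = {"L": 100, "R": 50, "LR": 0}   # discrete molecule counts
K_ON, K_OFF = 0.01, 0.1                # illustrative rate constants

# Rewrite rules as (propensity, update) pairs.
rules = [
    # L + R -> LR (binding)
    (lambda s: K_ON * s["L"] * s["R"],
     lambda s: s.update(L=s["L"] - 1, R=s["R"] - 1, LR=s["LR"] + 1)),
    # LR -> L + R (dissociation)
    (lambda s: K_OFF * s["LR"],
     lambda s: s.update(L=s["L"] + 1, R=s["R"] + 1, LR=s["LR"] - 1)),
]

t, t_end = 0.0, 10.0
while t < t_end:
    props = [p(state) for p, _ in rules]
    total = sum(props)
    if total == 0:
        break                          # no rule applicable
    t += random.expovariate(total)     # exponential waiting time to next event
    pick = random.uniform(0.0, total)  # choose a rule with probability ~ propensity
    acc = 0.0
    for prop, (_, fire) in zip(props, rules):
        acc += prop
        if pick <= acc:
            fire(state)
            break

print(f"t={t:.2f}, state={state}")
```
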
9.
Pak J Biol Sci ; 14(24): 1100-8, 2011 Dec 15.
Article in English | MEDLINE | ID: mdl-22335049

ABSTRACT

The ligand-receptor network of TGF-beta plays an essential role in transmitting a wide range of extracellular signals that affect many cellular processes, such as cell growth. However, modeling these networks with conventional approaches such as ordinary differential equations does not take into account the spatial structure and stochastic behavior of the processes involved. Membrane computing, as an alternative approach, provides a spatial structure for molecular computation in which processes are evaluated in a non-deterministic and maximally parallel way. This study evaluates a membrane computing model of the TGF-beta ligand-receptor network with a model checking approach. The results show that the membrane computing model sustains the behaviors and properties of the network, reinforcing that membrane computing can analyze processes and behaviors in the hierarchical structure of the cell, such as the TGF-beta ligand-receptor network, better than the deterministic approach of conventional mathematical models.


Subjects
Cell Membrane/metabolism , Computer Simulation , Ligands , Receptors, Transforming Growth Factor beta/metabolism , Transforming Growth Factor beta/metabolism , Endocytosis/physiology , Endosomes/metabolism , Models, Biological