Results 1 - 20 of 30
1.
Sci Rep ; 14(1): 20595, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232132

ABSTRACT

The Internet of Things (IoT) generates substantial data through sensors for diverse applications, such as healthcare services. This article addresses the challenge of efficiently utilizing resources in resource-scarce IoT-enabled sensors to enhance data collection, transmission, and storage. Redundant data transmission from sensors covering overlapping areas incurs additional communication and storage costs. Existing schemes, namely Asymmetric Extremum (AE) and Rapid Asymmetric Maximum (RAM), employ fixed- and variable-sized windows during chunking. However, these schemes face issues when selecting the index value that decides the variable window size, which may remain zero or very low, resulting in poor deduplication. This article resolves this issue with the proposed Controlled Cut-point Identification Algorithm (CCIA), designed to restrict the variable-sized window to a certain threshold. The index value that decides the threshold is always larger than half the size of the fixed window, which helps to find more duplicates; an upper-limit offset is also applied to avoid an unnecessarily large window, which would incur extensive computation costs. Extensive simulations are performed by deploying Windows Communication Foundation services in the Azure cloud. The results demonstrate the superiority of CCIA across various metrics, including chunk number, average chunk size, minimum and maximum chunk number, variable chunking size, and probability of failure for cut-point identification. In comparison to its competitors, RAM and AE, CCIA exhibits better performance across key parameters: it outperforms them in total number of chunks (6.81%, 14.17%), average number of chunks (4.39%, 18.45%), and minimum chunk size (153%, 190%). These results highlight the effectiveness of CCIA in optimizing data transmission and storage within IoT systems, showcasing its potential for improved resource utilization and reduced operational costs.
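The bounded-window idea behind CCIA can be illustrated with a short sketch (illustrative Python only, not the paper's implementation; the byte-maximum cut rule and the default parameters are assumptions, loosely following the AE/RAM family of content-defined chunking):

```python
def find_cutpoints(data, fixed_w=64, max_offset=256):
    """Bounded cut-point search: every chunk boundary must lie at least
    fixed_w + fixed_w // 2 bytes and at most fixed_w + max_offset bytes
    past the previous boundary, so the variable window never collapses
    to zero (the failure mode described for AE/RAM)."""
    cuts, i, n = [], 0, len(data)
    while True:
        lo = i + fixed_w + fixed_w // 2        # lower threshold on window size
        hi = min(i + fixed_w + max_offset, n)  # upper-limit offset
        if lo >= hi:
            break                              # tail too short for another chunk
        region = data[lo:hi]
        # declare the cut at the maximum byte value inside the bounded region
        cut = lo + max(range(len(region)), key=region.__getitem__)
        cuts.append(cut)
        i = cut
    return cuts
```

Every chunk size then falls in [fixed_w + fixed_w // 2, fixed_w + max_offset], the bounded-threshold property the abstract describes.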

2.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066138

ABSTRACT

Without a well-defined energy management plan, achieving meaningful improvements in human lifestyle becomes challenging. Adequate energy resources are essential for development, but they are both limited and costly. Several energy-management solutions have been proposed in the literature, but they either minimize energy consumption or improve the occupant comfort index, not both. Energy management is a multi-objective problem in which the user wants to reduce energy consumption while keeping the occupant comfort index intact. To address this multi-objective problem, this paper proposes an energy control system for a green environment called PMC (Power Management and Control). The system is based on hybrid energy optimization, energy prediction, and multi-preprocessing. A Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are combined into a fusion methodology to improve the occupant comfort index (OCI) and decrease energy utilization. The proposed framework gives a better OCI than its counterparts: the Ant Bee Colony Knowledge Base framework (ABCKB), the GA-based prediction framework (GAP), the Hybrid Prediction with Single Optimization framework (SOHP), and the PSO-based power consumption framework. Compared with the existing AEO framework, PMC gives practically the same OCI but consumes less energy. The PMC framework also achieves the ideal OCI (i.e., 1), compared with 0.98 for the existing FA-GA model. The PMC model consumed less energy than existing models such as ABCKB, GAP, PSO, and AEO; it consumed slightly more energy than SOHP but provided a better OCI. These comparative outcomes show the capability of the PMC framework to reduce energy utilization and improve the OCI. Unlike the other existing methodologies except the AEO framework, the PMC technique is additionally validated through a simulation that controls the indoor environment using actuators such as a fan, lights, AC, and a boiler.
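One way such a GA/PSO fusion can be organized (a minimal sketch under assumed operators and coefficients; the paper's actual fusion methodology may differ) is to let a GA explore first and then hand its final population to PSO for refinement:

```python
import random

def hybrid_ga_pso(objective, dim, iters=50, pop=20, seed=0):
    """GA -> PSO fusion sketch: a GA (tournament selection, uniform
    crossover, Gaussian mutation, elitism) explores first, then PSO
    refines the GA's final population. `objective` is minimized."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    # --- GA phase ---
    for _ in range(iters):
        X.sort(key=objective)
        nxt = X[:2]                                   # elitism
        while len(nxt) < pop:
            a = min(rng.sample(X, 3), key=objective)  # tournament
            b = min(rng.sample(X, 3), key=objective)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:                    # mutation
                child[rng.randrange(dim)] += rng.gauss(0, 0.1)
            nxt.append(child)
        X = nxt
    # --- PSO phase, seeded with the GA population ---
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    gbest = min(X, key=objective)[:]
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - x[d])
                           + 1.5 * rng.random() * (gbest[d] - x[d]))
                x[d] += V[i][d]
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
                if objective(x) < objective(gbest):
                    gbest = x[:]
    return gbest, objective(gbest)
```

The design choice is the standard one for such hybrids: the GA supplies diversity, and PSO's velocity update supplies fast local convergence around the GA's best region.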

3.
PeerJ Comput Sci ; 10: e1914, 2024.
Article in English | MEDLINE | ID: mdl-38660179

ABSTRACT

Elevated blood sugar can harm individuals and their vital organs, potentially leading to blindness, renal illness, and kidney and heart diseases. Globally, diabetic patients face an average annual mortality rate of 38%. This study employs Chi-square, mutual information, and sequential feature selection (SFS) to choose features for training multiple classifiers: an artificial neural network (ANN), a random forest (RF), a gradient boosting (GB) algorithm, TabNet, and a support vector machine (SVM). The goal is to predict the onset of diabetes at an earlier age. The classifier, developed from the selected features, aims to enable early diagnosis of diabetes. The PIMA and early-risk diabetes datasets serve as test subjects for the developed system, and the feature selection technique is applied to focus model training on the most important and relevant features. The experimental findings show that the ANN performed best in terms of accuracy on the PIMA dataset, achieving a rate of 99.35%. The second experiment, conducted on the early diabetes risk dataset using selected features, showed that RF achieved an accuracy of 99.36%. Based on these experimental results, the suggested method significantly outperforms the baseline machine learning algorithms already employed for diabetes prediction on both datasets.
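Of the three selection criteria, mutual information is the easiest to sketch from scratch (illustrative Python for discrete features; the study itself presumably uses library implementations):

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Empirical mutual information between a discrete feature column
    and the class labels (natural-log units)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_k_best(X, y, k):
    """Rank feature columns by MI with the label and keep the top k."""
    scores = [mutual_information([row[j] for row in X], y)
              for j in range(len(X[0]))]
    keep = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    return sorted(keep), scores
```

A feature that copies the label scores the label's full entropy, while a constant feature scores exactly zero, which is why MI ranking isolates informative clinical attributes.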

4.
Materials (Basel) ; 16(23)2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38068066

ABSTRACT

The scientific community has raised increasing concerns over the transparency and interpretability of machine learning models employed in various domains, particularly in materials science. The intrinsic intricacy of these models frequently leads to their characterization as "black boxes", which makes it difficult to produce lucid and readily understandable model outputs. In addition, the assessment of model performance requires careful deliberation over several essential factors. The objective of this study is to use a deep learning framework called TabNet to predict the dielectric constant of lead zirconate titanate (PZT) ceramics from their composition and processing parameters. Recognizing the crucial importance of predicting PZT properties, this research seeks to improve the comprehension of the model's outputs and to gain insight into the association between the model and the predictor variables across various input parameters. To achieve this, we undertake a thorough analysis with Shapley additive explanations (SHAP), and a variety of cross-validation procedures are utilized to enhance the reliability of the prediction model. The study demonstrates that the TabNet model significantly outperforms traditional machine learning models in predicting the ceramic characteristics of PZT components, achieving a mean squared error (MSE) of 0.047 and a mean absolute error (MAE) of 0.042. Key contributing factors, such as d33, tangent loss, and chemical formula, are identified using SHAP plots, highlighting their importance in predictive analysis. Interestingly, process time is less effective in predicting the dielectric constant. This research holds considerable potential for advancing materials discovery and predictive systems in PZT ceramics, offering deep insights into the roles of various parameters.
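The SHAP attributions used here approximate classical Shapley values; for a handful of features they can be computed exactly, which makes the idea concrete (illustrative Python; real SHAP libraries use far more efficient approximations):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley attribution for a small feature count: each
    feature gets its average marginal contribution over all feature
    subsets, with absent features replaced by baseline values."""
    n = len(instance)
    def f(present):
        x = [instance[i] if i in present else baseline[i] for i in range(n)]
        return predict(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # weight of this subset in the Shapley average
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi
```

For a linear model, each feature's Shapley value reduces to its weight times its deviation from the baseline, and the attributions sum to the prediction minus the baseline prediction, a useful sanity check.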

5.
Sensors (Basel) ; 23(19)2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37837112

ABSTRACT

The paradigm of the Internet of Things (IoT) and edge computing brings a number of heterogeneous devices to the network edge for monitoring and controlling the environment. To react to events dynamically and automatically, rule-enabled IoT edge platforms operate the deployed service scenarios at the network edge by filtering events to perform control actions. However, due to the heterogeneity of IoT edge networks, deploying a consistent rule context that operates a consistent rule scenario on multiple heterogeneous IoT edge platforms is difficult because of differences in protocols and data formats. In this paper, we propose transparent rule enablement, based on a commonization approach, for enabling a consistent rule scenario in heterogeneous IoT edge networks. The proposed IoT Edge Rule Agent Platform (IERAP) deploys device proxies to share consistent rules with IoT edge platforms regardless of their protocols and data formats, so each device proxy only handles the translation between its platform-specific format and the common format. Rules are deployed by the corresponding device proxy, which allows them to reach heterogeneous IoT edge platforms and perform a consistent rule scenario without considering the format and underlying protocols of the destination platform.
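The commonization approach can be pictured as follows (a hypothetical sketch; the rule fields and the two platform formats are invented for illustration, not taken from IERAP):

```python
class DeviceProxy:
    """Per-platform proxy from the commonization idea: it alone knows
    how to translate between its platform's native rule format and a
    shared common rule format, so the rule service never touches
    platform specifics. Field names here are illustrative."""
    def __init__(self, to_common, to_native):
        self.to_common = to_common
        self.to_native = to_native

    def deploy(self, common_rule):
        # the rule service only ever hands over common-format rules
        assert set(common_rule) == {"event", "condition", "action"}
        return self.to_native(common_rule)

# two hypothetical platforms with different native field names
proxy_a = DeviceProxy(
    to_common=lambda r: {"event": r["ev"], "condition": r["cond"], "action": r["act"]},
    to_native=lambda r: {"ev": r["event"], "cond": r["condition"], "act": r["action"]},
)
proxy_b = DeviceProxy(
    to_common=lambda r: {"event": r["trigger"], "condition": r["when"], "action": r["do"]},
    to_native=lambda r: {"trigger": r["event"], "when": r["condition"], "do": r["action"]},
)
```

One common rule can then be handed to every proxy, and each emits its own platform's native representation, which is the consistency property the abstract claims.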

6.
PeerJ Comput Sci ; 9: e1186, 2023.
Article in English | MEDLINE | ID: mdl-37346539

ABSTRACT

A sketch is a black-and-white, 2-D graphical representation of an object and contains fewer visual details than a colored image. Despite the fewer details, humans can recognize a sketch and its context very efficiently and consistently across languages, cultures, and age groups, but it is a difficult task for computers to recognize such low-detail sketches and extract context from them. With the tremendous increase in popularity of IoT devices such as smartphones and smart cameras, recognizing free hand-drawn sketches has become critical in computer vision and human-computer interaction for building a successful artificial intelligence of things (AIoT) system that can first recognize the sketches and then understand the context of multiple drawings. Earlier models that addressed this problem are the scale-invariant feature transform (SIFT) and bag-of-words (BoW), both of which rely on hand-crafted features and scale-invariant algorithms. These models are complex and time-consuming because of the manual feature-engineering process. Deep neural networks (DNNs) have performed well at object recognition on many large-scale datasets such as ImageNet and CIFAR-10. However, the DNN approach cannot be carried over directly to hand-drawn sketch recognition, because the source data are natural images whose labels are generic (e.g., 'bird') rather than the sketch-specific categories needed (e.g., 'sparrow'). Some deep learning approaches to sketch recognition exist in the literature, but their results leave room for improvement. This article proposes a convolutional neural network (CNN) architecture called Sketch-DeepNet for the sketch recognition task, using the TU-Berlin dataset for classification. The experimental results show that the proposed method beats the state-of-the-art sketch classification methods: Sketch-DeepNet achieved 95.05% accuracy on the TU-Berlin dataset, compared with DeformNet (62.6%), Sketch-DNN (72.2%), Sketch-a-Net (77.95%), SketchNet (80.42%), Thinning-DNN (74.3%), CNN-PCA-SVM (72.5%), Hybrid-CNN (84.42%), and human recognition accuracy of 73%.
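The core operation a CNN such as Sketch-DeepNet stacks into layers is the 2-D convolution; a minimal version makes clear why it responds to the strokes and edges that define a sketch (illustrative Python, unrelated to the actual Sketch-DeepNet weights or architecture):

```python
def conv2d(img, kernel):
    """Minimal valid-mode 2-D convolution (strictly, cross-correlation),
    the basic operation a sketch-recognition CNN stacks with pooling
    and nonlinearities to build up stroke and shape detectors."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]
```

Sliding a horizontal-difference kernel over a binary image responds exactly at vertical strokes, the kind of low-level feature that hand-crafted SIFT pipelines had to engineer manually.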

7.
Sensors (Basel) ; 23(7)2023 Mar 31.
Article in English | MEDLINE | ID: mdl-37050700

ABSTRACT

Home appliances account for a large portion of smart homes' energy consumption, owing to the abundant use of IoT devices. Various home appliances, such as heaters, dishwashers, and vacuum cleaners, are used every day, and proper control of them can save significant amounts of energy. For this purpose, optimization techniques focusing mainly on energy reduction are used. Current optimization techniques reduce energy use somewhat but overlook user convenience, which was the main goal of introducing home appliances in the first place. There is therefore a need for an optimization method that effectively addresses the trade-off between energy saving and user convenience, and that includes weather metrics beyond temperature and humidity when optimizing the energy cost of maintaining the user's desired indoor setting. This research presents an optimization technique that addresses this trade-off while incorporating air pressure, dew point, and wind speed. To test the optimization, a hybrid approach utilizing GWO and PSO was modeled, and appliance energy prediction enables proactive optimization: an LSTM model was designed to predict the appliances' energy use. Through prediction and optimized control, smart home appliances can be proactively and effectively controlled. First, we evaluated the RMSE of the predictive model and found that the proposed model yields low RMSE values. Second, several simulations showed that the proposed optimization provides energy cost savings when controlling appliances to maintain the desired indoor setting. Energy-cost-reduction goals were evaluated on seasonal and monthly data patterns for verification. Hence, the proposed work is a strong candidate solution for proactively optimizing the energy of smart homes.
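Of the two metaheuristics in the hybrid, GWO is the less widely known; a compact sketch shows its leader-following update (illustrative Python with assumed coefficients and bounds, not the paper's configuration):

```python
import random

def grey_wolf_optimize(objective, dim, n_wolves=12, iters=60, seed=1):
    """Grey Wolf Optimization sketch: every wolf moves toward the
    average of three leader-guided positions (alpha, beta, delta),
    with the exploration coefficient `a` decaying from 2 to 0."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=objective)
        leaders = [w[:] for w in wolves[:3]]   # alpha, beta, delta (copies)
        a = 2 - 2 * t / iters
        for w in wolves:
            for d in range(dim):
                guided = []
                for leader in leaders:
                    A = a * (2 * rng.random() - 1)   # exploration step
                    C = 2 * rng.random()
                    guided.append(leader[d] - A * abs(C * leader[d] - w[d]))
                w[d] = sum(guided) / 3               # average of the three pulls
    best = min(wolves, key=objective)
    return best, objective(best)
```

Early iterations (large `a`) explore widely; late iterations collapse the pack onto the three best solutions, which is why GWO pairs well with PSO in a hybrid.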

8.
Big Data ; 11(3): 225-238, 2023 06.
Article in English | MEDLINE | ID: mdl-37036805

ABSTRACT

With the spread of automatic electrical devices in smart grids, the data generated and transmitted over time are so vast that humans cannot monitor consumption manually. Detecting abnormal power consumption is therefore crucial for monitoring and controlling smart grids. This article proposes detecting electrical-meter anomalies by identifying abnormal patterns and learning from unlabeled data, and introduces a big data and machine learning-based anomaly detection framework. The experimental results show that the proposed time-series anomaly detection for electric meters outperforms expert alternatives in both accuracy and time.
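A simple unsupervised baseline for this kind of meter anomaly detection is a rolling z-score over recent readings (an illustrative sketch, not the framework from the article):

```python
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=24, z=3.0):
    """Flag readings whose z-score against the previous `window`
    readings exceeds `z`. Unsupervised: no labeled anomalies needed,
    matching the unlabeled-data setting described in the abstract."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        spread = stdev(hist) or 1e-9   # guard against a perfectly flat history
        flags.append(abs(series[i] - mean(hist)) / spread > z)
    return flags
```

A single spike in an otherwise steady load profile is flagged, while the readings that merely follow the spike are not, because the spike itself inflates the rolling spread.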


Subject(s)
Big Data , Computer Systems , Humans , Intelligence , Machine Learning , Time Factors
9.
Sensors (Basel) ; 23(4)2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36850491

ABSTRACT

Rule-enabled Internet of Things (IoT) systems operate autonomous and dynamic service scenarios through real-time events and actions based on deployed rules. To handle the growing number of events and actions in IoT networks, computation can be distributed and deployed to the edge of the network. However, operating a consistent rule that provides the same service scenario in heterogeneous IoT networks is difficult because of differences in protocols and rule models. In this paper, we propose a transparent rule deployment approach based on a rule translator, integrating an interworking proxy into IoT platforms to operate consistent service scenarios across heterogeneous IoT networks. The proposed rule-enabled IoT architecture provides functional blocks in the client, rule service, IoT service, and device layers. The interworking proxy translates and transfers rules between IoT platforms in different IoT networks, so that, through interactions between the platforms, the same service scenarios operate across the IoT environment. Moreover, the integrated interworking proxy accommodates the heterogeneity of IoT frameworks within the IoT platform. Rules are therefore deployed to IoT platforms transparently, and consistent rules operate in heterogeneous IoT networks without regard to the underlying IoT frameworks.

10.
Sensors (Basel) ; 22(17)2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36080861

ABSTRACT

The world's shift towards renewable energy sources (RES) over the past two decades, driven by continuously decreasing fossil fuel reserves and their harmful environmental impact, has attracted researchers worldwide to improve RES efficiency and eliminate problems arising at the point of common coupling (PCC). Harmonics and unbalance in three-phase voltages caused by dynamic and nonlinear loads produce a lagging power factor due to inductive loading, active power losses, and instability at the PCC; this is aggravated by the lack of system inertia in micro-grids. Passive filters are used to eliminate harmonics at both the input and output sides of the electrical converter and to improve the system's power factor. A Synchronous Reference Frame (SRF) control method is used to overcome the grid-synchronization problem, and the sine pulse width modulation (SPWM) technique provides gating signals to the switches of the multilevel inverter. A multi-layer feed-forward neural network (ML-FFNN) is employed at the output of the system to minimize mean squared error (MSE) by removing the errors between the target voltages and the reference voltages produced at the output of the trained model. Simulations were performed in MATLAB Simulink to highlight the significance of the proposed study. The results show that the proposed intelligent control scheme suppresses harmonics and compensates reactive power more effectively than SRF-based control methods: the proposed ML-FFNN-based harmonic and reactive power control technique improves MAE by 0.752, MSE by 0.52, and RMSE by 0.222.
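The SPWM gating rule itself is compact: a switch conducts whenever the sinusoidal reference exceeds a triangular carrier (illustrative Python with assumed frequencies and modulation index, not the paper's inverter parameters):

```python
import math

def spwm_gate(t, f_ref=50.0, f_carrier=2000.0, m=0.8):
    """Sine PWM gating signal at time t (seconds): ON whenever the
    sinusoidal reference (modulation index m) exceeds a triangular
    carrier spanning [-1, 1] at the carrier frequency."""
    ref = m * math.sin(2 * math.pi * f_ref * t)
    phase = (t * f_carrier) % 1.0
    carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
    return ref > carrier
```

Averaged over a full reference period, the ON fraction tracks the sine wave (about 0.5 overall for a symmetric reference), which is what lets the filtered switch train reconstruct a sinusoidal output voltage.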

11.
Tomography ; 8(4): 1905-1927, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35894026

ABSTRACT

A brain tumor is a growth of abnormal cells in brain tissue with a high mortality rate; diagnosis therefore requires high precision, as a minor human misjudgment can have severe consequences. Magnetic Resonance Imaging (MRI) serves as a non-invasive tool to detect the presence of a tumor, but Rician noise is inevitably instilled during image acquisition, which degrades observation and interferes with treatment. Computer-Aided Diagnosis (CAD) systems can diagnose the disease early, potentially increasing the chances of survival and lessening the need for an expert to analyze the MRIs. Convolutional Neural Networks (CNNs) have proven very effective at tumor detection in brain MRIs. Multiple studies have addressed brain tumor classification; however, these techniques neither evaluate the impact of Rician noise on state-of-the-art deep learning methods nor consider the impact of scale, even though the size and location of tumors vary from image to image with irregular shapes and boundaries. Moreover, transfer learning-based pre-trained models such as AlexNet and ResNet have been used for brain tumor detection, but these architectures have many trainable parameters and hence a high computational cost. This study proposes a two-fold solution: (a) a Multi-Scale CNN (MSCNN) architecture for a robust brain tumor classification model, and (b) minimizing the impact of Rician noise on the MSCNN's performance. The proposed model is a multi-class classifier that categorizes MRIs into glioma, meningioma, pituitary, and non-tumor. The core objective is a robust model that improves the accuracy and efficiency of existing tumor detection systems. Furthermore, MRIs are denoised using a Fuzzy Similarity-based Non-Local Means (FSNLM) filter to improve the classification results. Accuracy, precision, recall, specificity, and F1-score are employed to evaluate and compare the proposed multi-scale CNN against state-of-the-art techniques such as AlexNet and ResNet, and the trainable and non-trainable parameter counts of the models are compared to assess computational efficiency. The experimental results show that the proposed multi-scale CNN outperforms AlexNet and ResNet in accuracy and efficiency at a lower computational cost: the proposed MCNN2 achieved an accuracy of 91.2% and an F1-score of 91%, significantly higher than the existing AlexNet and ResNet techniques. These findings suggest that the proposed model can more effectively and efficiently support clinical research and practice in MRI classification.


Subject(s)
Brain Neoplasms , Deep Learning , Meningeal Neoplasms , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer
12.
Materials (Basel) ; 15(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35207968

ABSTRACT

Research has become increasingly interdisciplinary over the past few years. Artificial intelligence and its sub-fields have proven valuable for interdisciplinary research applications, especially in the physical sciences. Recently, machine learning-based mechanisms have been adapted for materials science applications, addressing the challenges of traditional experiments in a time- and cost-efficient manner. The scientific community focuses on harnessing varying mechanisms to process big data sets extracted from material databases and derive hidden knowledge that can be employed in technical frameworks for material screening, selection, and recommendation. However, a plethora of underlying aspects of the existing material discovery methods needs to be critically assessed to obtain a precise and collective analysis that can serve as a baseline for forthcoming material discovery problems. This study presents a comprehensive survey of state-of-the-art benchmark data sets, detailed pre-processing and analysis, appropriate learning model mechanisms, and simulation techniques for material discovery. We believe that such an in-depth analysis of these aspects provides promising directions to young interdisciplinary researchers from the computing and materials science fields. This study will help devise useful models for materials discovery that contribute positively to the materials industry by reducing the manual effort involved in traditional material discovery. Moreover, we also present a detailed analysis of the experimental and computation-based artificial intelligence mechanisms suggested by the existing literature.

13.
Sensors (Basel) ; 23(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36616725

ABSTRACT

Energy consumption is increasing daily, and with it comes a continuous increase in energy costs. Predicting future energy consumption and building an effective energy management system for smart homes has become essential for many industrialists to solve the problem of energy wastage. Machine learning has shown significant results in the field of energy management systems. This paper presents a comprehensive predictive-learning-based framework for smart home energy management systems with five modules: classification, prediction, optimization, scheduling, and control. In the classification module, we classify the categories of users and appliances using k-means clustering and support vector machine-based classification. In the prediction module, we predict the future energy consumption and energy cost for each user category using long short-term memory (LSTM). We define objective functions for optimization and use grey wolf optimization and particle swarm optimization to schedule appliances, in each case giving priority to user preferences and to indoor and outdoor environmental conditions. We define control rules to govern appliance usage according to the schedule while prioritizing user preferences and minimizing energy consumption and cost. Experiments evaluating the proposed methodology show that it significantly reduces energy cost while providing an optimized solution for energy consumption that prioritizes user preferences and considers both indoor and outdoor environmental factors.
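The classification module's k-means step can be sketched in a few lines for 1-D consumption profiles (illustrative Python; the paper's feature space and implementation are richer):

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal 1-D k-means: alternate assigning each point to its
    nearest center and recomputing centers as cluster means. This is
    the kind of clustering a classification module could use to group
    users or appliances by average consumption."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Two well-separated usage groups (for example, light users around 1 kWh and heavy users around 10 kWh) are recovered as two cluster centers regardless of which points seed the iteration.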


Subject(s)
Algorithms , Machine Learning
14.
Sensors (Basel) ; 21(23)2021 Nov 27.
Article in English | MEDLINE | ID: mdl-34883933

ABSTRACT

The Internet of Vehicles (IoV) has emerged as an advancement over traditional Vehicular Ad-hoc Networks (VANETs) towards a more efficient intelligent transportation system capable of providing various intelligent services and supporting different applications for drivers and passengers on the road. For IoV and VANET environments to offer such beneficial road services, huge amounts of data are generated and exchanged among the communicating entities wirelessly via open channels, which can attract adversaries and threaten the network with several types of security attacks. In this survey, we target the authentication part of the security system while highlighting the efficiency of blockchains in IoV and VANET environments. First, a detailed background on IoV and blockchain is provided, followed by a wide range of security requirements, challenges, and possible attacks in vehicular networks. Then, a more focused review is provided of recent blockchain-based authentication schemes in IoV and VANETs, with a detailed comparative study in terms of techniques used, network models, evaluation tools, and attacks counteracted. Lastly, some future challenges for IoV security that need to be addressed in upcoming research are discussed.

15.
Sensors (Basel) ; 21(23)2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34884165

ABSTRACT

Pakistan receives Direct Normal Irradiation (DNI) exceeding 2000 kWh/m²/annum on approximately 83% of its land, which is very suitable for photovoltaic production. This energy can easily be utilized in conjunction with other renewable energy resources to meet the country's energy demands and reduce its carbon footprint. In this research, a hybrid renewable energy solution based on a nearly Zero Energy Building (nZEB) model is proposed for a university facility. The building in question has a continuous flow of water through its vertical water-delivery pipelines. A horizontal-axis spherical helical turbine is designed in SolidWorks and analyzed through a computational fluid dynamics (CFD) analysis in ANSYS Fluent 18.1 based on the K-epsilon turbulence model. Results obtained from ANSYS Fluent show that a 24-foot vertical channel with a water flow of 0.2309 m³/s and a velocity of 12.66 m/s can run the designed hydroelectric turbine, delivering 168 W of mechanical power at 250 rpm. Based on the turbine, a hybrid renewable energy system (HRES) comprising photovoltaic and hydroelectric power is modelled and analyzed in the HOMER Pro software. Among the different architectures, the one combining hydroelectric and photovoltaic energy provided the best cost of energy (COE) of $0.09418.

16.
Sensors (Basel) ; 21(21)2021 Oct 21.
Article in English | MEDLINE | ID: mdl-34770279

ABSTRACT

This paper presents an enhanced PDR-BLE compensation mechanism for improving indoor localization that is considerably resilient against various uncertainties. The proposed ePDR-BLE compensation mechanism (EPBCM) takes advantage of the unscented Kalman filter (UKF), which does not require linearizing the system around its current state, and of the Kalman filter (KF) for smoothing received signal strength indicator (RSSI) values. A fusion of conflicting information and an activity-detection approach based on the hidden Markov model (HMM) handles the varying magnitudes of accelerometer values for an object in an indoor environment. On the estimated orientation, the proposed approach compensates for inadvertent body acceleration and magnetic distortion in the sensor data. Moreover, EPBCM can precisely calculate velocity and position by reducing the position drift that gives rise to zero-velocity and heading errors. The developed EPBCM localization algorithm using Bluetooth low energy (BLE) beacons was applied and analyzed in an indoor environment. Experiments conducted in an indoor scenario show the results of various activities performed by the object, achieving better orientation estimation, zero-velocity measurements, and higher position accuracy than other methods in the literature.
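The RSSI-smoothing step can be illustrated with a scalar Kalman filter (a minimal sketch; the noise parameters are assumptions, and the paper's UKF additionally handles the nonlinear state, which this sketch does not):

```python
def kalman_smooth_rssi(rssi, q=0.05, r=4.0):
    """Scalar Kalman filter over a sequence of RSSI readings (dBm).
    q is the assumed process-noise variance, r the assumed
    measurement-noise variance; both are tuning assumptions."""
    x, p = rssi[0], 1.0          # state estimate and its variance
    out = [x]
    for z in rssi[1:]:
        p += q                   # predict: uncertainty grows
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update toward the measurement
        p *= 1 - k               # uncertainty shrinks after the update
        out.append(x)
    return out
```

On readings that bounce several dB around a true value, the filtered trace settles close to that value within a few samples, which is what makes the downstream distance estimation usable.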

17.
Sensors (Basel) ; 21(16)2021 Aug 11.
Article in English | MEDLINE | ID: mdl-34450872

ABSTRACT

Over the past years, numerous Internet of Things (IoT)-based healthcare systems have been developed to monitor patient health conditions, but these traditional systems do not adapt to the constraints imposed by revolutionized IoT technology. IoT-based healthcare systems are considered mission-critical applications in which missed deadlines can cause critical situations; for example, in patients with chronic or other fatal diseases, a missed task could lead to fatalities. This study presents a smart patient health monitoring system (PHMS) based on an optimized scheduling mechanism using an IoT-task orchestration architecture to monitor the vital signs data of remote patients. The proposed smart PHMS consists of two core modules: healthcare task scheduling based on optimization, and optimization of healthcare services using a real-time IoT-based task orchestration architecture. First, an optimized time-constraint-aware scheduling mechanism using a real-time IoT-based task orchestration architecture is developed to generate autonomous healthcare tasks and effectively handle the deployment of emergent healthcare tasks. Second, an optimization module is developed to optimize the services of the e-Health industry based on objective functions. Furthermore, our study uses the Libelium e-Health toolkit to continuously monitor the physiological data of remote patients. The experimental results reveal that the optimized scheduling mechanism reduces task starvation by 14% and task failure by 17% compared to a conventional fair emergency first (FEF) scheduling mechanism. The performance analysis demonstrates the effectiveness of the proposed system and suggests that it can be an effective and sustainable solution for monitoring patients' vital signs data in the IoT-based e-Health domain.
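A time-constraint-aware ordering of healthcare tasks can be sketched with a priority heap (illustrative Python; the task fields and the emergency-first tie-break are assumptions, not the paper's full scheduling mechanism):

```python
import heapq

def schedule(tasks):
    """Order tasks emergency-first, then earliest-deadline-first.
    Each task is (name, deadline, is_emergency) -- illustrative fields.
    A heap keeps insertion and extraction logarithmic, which matters
    when emergent tasks arrive continuously."""
    heap = [((0 if emergency else 1), deadline, name)
            for name, deadline, emergency in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap)[2])
    return order
```

Routine tasks still drain in deadline order behind any emergency, which is the property that curbs both starvation and deadline misses relative to a purely fair queue.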


Subject(s)
Internet of Things , Delivery of Health Care , Humans , Monitoring, Physiologic , Vital Signs
18.
Biomed Res Int ; 2021: 5554487, 2021.
Article in English | MEDLINE | ID: mdl-34368352

ABSTRACT

Clinical research faces numerous challenges, from patient enrollment and data privacy concerns to regulatory requirements and spiraling costs. Blockchain technology has the potential to overcome these challenges: its distinct features, such as data immutability and transparency, can make clinical trials transparent and enhance public trust in a fair and open process among all stakeholders. This paper proposes a permissioned blockchain platform to ensure clinical data transparency and provide secure clinical trial-related solutions. We explore the core functionalities of blockchain applied to clinical trials and concretely illustrate its general principles. Clinical trial operations are automated using smart contracts, which ensure traceability, prevent a posteriori reconstruction, and securely automate the trial. A web-based user interface is also implemented to visualize the data from the blockchain and ease interaction with the blockchain network. A proof of concept is implemented on Hyperledger Fabric in a case study of clinical management for multiple clinical trials to demonstrate the feasibility of the designed approach. Lastly, the experimental results demonstrate the efficiency and usability of the proposed platform.
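The tamper-evidence that motivates a blockchain for trial records comes from hash chaining, which a toy ledger makes concrete (illustrative Python; real deployments use Hyperledger Fabric chaincode, not this class):

```python
import hashlib
import json

class TrialLedger:
    """Toy append-only, hash-chained record log: each block commits to
    the previous block's hash, so editing any past record invalidates
    the chain. This illustrates tamper-evidence only; it has none of
    Fabric's consensus, endorsement, or permissioning."""
    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        prev = "0" * 64
        for b in self.blocks:
            body = json.dumps({"record": b["record"], "prev": b["prev"]},
                              sort_keys=True)
            if b["prev"] != prev:
                return False
            if hashlib.sha256(body.encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True
```

Any a-posteriori edit to an enrollment or consent record breaks verification, which is precisely the "prevents a posteriori reconstruction" property the abstract attributes to the smart contract.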


Subject(s)
Blockchain , Clinical Trials as Topic , Computer Security , Humans , Reproducibility of Results
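The tamper evidence that the platform obtains from Hyperledger Fabric rests on hash chaining: each record embeds the hash of its predecessor, so a posteriori edits break the chain. The sketch below is plain Python, not Fabric chaincode, and the block fields and event payloads are illustrative assumptions:

```python
import hashlib
import json

class TrialLedger:
    """Append-only, hash-chained log of clinical-trial events (sketch)."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis", "prev": "0" * 64}]

    def _hash(self, block):
        # Canonical JSON so the digest is stable across dict orderings.
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def record(self, event):
        block = {"index": len(self.chain), "event": event,
                 "prev": self._hash(self.chain[-1])}
        self.chain.append(block)
        return block

    def verify(self):
        # Every block must point at the hash of its predecessor.
        return all(self.chain[i]["prev"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = TrialLedger()
ledger.record({"trial": "T-001", "action": "enroll", "patient": "anon-42"})
ledger.record({"trial": "T-001", "action": "consent_signed"})
assert ledger.verify()                              # chain is intact
ledger.chain[1]["event"]["action"] = "withdraw"     # a posteriori edit
assert not ledger.verify()                          # tampering detected
```

A permissioned platform adds endorsement and access control on top of this property, but the detectability of retroactive edits comes from the chaining itself.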
19.
Sensors (Basel) ; 21(5)2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33652773

ABSTRACT

Blockchain technology has recently attracted remarkable attention due to its unique features, such as privacy, accountability, immutability, and anonymity, to name a few. In contrast, the core functionalities of most Internet of Things (IoT) resources make them vulnerable to security threats. IoT devices, such as smartphones and tablets, have limited network, computing, and storage capacity, which makes them more susceptible to such threats. Furthermore, IoT devices produce a massive amount of data that existing platforms still struggle to process and analyze in order to unearth underlying patterns and provide convenient services. Therefore, a new solution is required to ensure data accountability, improve data privacy and accessibility, and extract hidden patterns and useful knowledge to provide adequate services. In this paper, we present a secure fitness framework based on an IoT-enabled blockchain network integrated with machine learning approaches. The proposed framework consists of two modules: a blockchain-based IoT network that provides security and integrity for sensing data, and an enhanced smart-contract-enabled relationship and inference engine that discovers hidden insights and useful knowledge from IoT and user device network data. The enhanced smart contract aims to support users with a practical application that provides real-time monitoring, control, easy access, and immutable logs for multiple devices deployed in several domains. The inference engine module aims to unearth underlying patterns and useful knowledge from IoT environment data, which supports effective decision making for providing convenient services. For experimental analysis, we implement an intelligent fitness service based on the enhanced smart-contract-enabled relationship and inference engine as a case study, in which several IoT fitness devices are used to securely acquire personalized user fitness data. Furthermore, a real-time inference engine investigates the personalized data to discover useful knowledge and hidden insights. Based on this knowledge, a recommendation model is developed to recommend daily and monthly diet and workout plans for a better and improved body shape. The recommendation model aims to help a trainer formulate effective future decisions about a trainee's health in terms of diet and workout plans. Lastly, for performance analysis, we use Hyperledger Caliper to assess system performance in terms of latency, throughput, resource utilization, and varying numbers of orderer and peer nodes. The analysis results imply that the designed architecture is applicable to resource-constrained IoT blockchain platforms and is extensible to different IoT scenarios.
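The recommendation model layered on the inference engine can be sketched as a simple rule-based mapping from inferred fitness state to a plan. The thresholds, field names, and plan labels below are hypothetical placeholders, not the paper's trained values:

```python
def recommend(profile):
    """Map daily averages from IoT fitness devices to a diet/workout plan.

    profile: dict such as
    {"steps": 4000, "calories_in": 2600, "resting_hr": 78}.
    All thresholds are illustrative assumptions.
    """
    plan = {"diet": "maintain", "workout": "light"}
    if profile["calories_in"] > 2500:
        plan["diet"] = "reduce intake"          # hypothetical cutoff
    if profile["steps"] < 6000:
        plan["workout"] = "add 30 min cardio"   # activity below target
    if profile["resting_hr"] > 75 and plan["workout"] == "light":
        plan["workout"] = "low-intensity only"  # keep exertion moderate
    return plan

plan = recommend({"steps": 4000, "calories_in": 2600, "resting_hr": 78})
```

A trainer-facing service would feed the inference engine's per-user aggregates into such a mapping; the paper's engine additionally mines the data for patterns rather than relying on fixed cutoffs.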

20.
Sensors (Basel) ; 21(2)2021 Jan 18.
Article in English | MEDLINE | ID: mdl-33477481

ABSTRACT

Computation offloading enables computationally intensive tasks in edge computing to be distributed across multiple server computing resources to overcome hardware limitations. Deep learning derives inference models from large volumes of data, given sufficient computing resources, and deploying domain-specific inference approaches in edge computing provides intelligent services close to the network edge. In this paper, we propose intelligent edge computing that provides a dynamic inference approach for building-environment control. The dynamic inference approach is based on a rules engine deployed on the edge gateway, which selects an inference function according to the triggered rule. The edge gateway is deployed at the entry of the network edge and provides comprehensive functions, including device management, device proxy, client service, intelligent service, and a rules engine. These functions are provided by microservice provider modules, which make offloading domain-specific solutions to the edge gateway flexible, extensible, and lightweight. Additionally, the intelligent services can be updated by offloading the microservice provider module together with the inference models. Using the rules engine, the edge gateway then operates an intelligent scenario based on the deployed rule profile by requesting the inference model from the intelligent service provider. The inference models are derived by training building-user data with a deep learning model on the edge server, which provides a high-performance computing resource. The intelligent service provider includes the inference models and provides intelligent functions in the edge gateway on constrained hardware resources based on microservices. Moreover, to bridge the Internet of Things (IoT) device network to the Internet, the gateway provides device management and a proxy that enable device access for web clients.
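The core dispatch pattern, a rules engine that selects an inference function by the triggered rule, can be sketched as follows. The rule conditions, sensor fields, and function names are illustrative assumptions, and real inference models would replace the hard-coded return values:

```python
# Registry of (trigger condition, inference function) pairs, standing in
# for the rule profile deployed on the edge gateway.
RULES = []

def rule(condition):
    """Decorator: register an inference function under a trigger condition."""
    def wrap(fn):
        RULES.append((condition, fn))
        return fn
    return wrap

@rule(lambda env: env["co2_ppm"] > 1000)   # hypothetical air-quality rule
def ventilate(env):
    return "increase_ventilation"

@rule(lambda env: env["temp_c"] > 26)      # hypothetical comfort rule
def cool(env):
    return "lower_setpoint"

def dispatch(env):
    """Run every registered inference function whose rule the reading triggers."""
    return [fn(env) for cond, fn in RULES if cond(env)]

actions = dispatch({"co2_ppm": 1200, "temp_c": 24})
```

In the paper's architecture the functions behind the rules live in a microservice provider module, so updating an inference model means redeploying that module rather than the gateway itself.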
