Results 1 - 20 of 29
1.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991755

ABSTRACT

The exponentially growing concern over cyber-attacks on extremely dense underwater sensor networks (UWSNs) and the evolution of the UWSN digital threat landscape have brought novel research challenges and issues. In particular, evaluating varied protocols under advanced persistent threats is now indispensable yet very challenging. This research implements an active attack on the Adaptive Mobility of Courier Nodes in Threshold-optimized Depth-based Routing (AMCTD) protocol. A variety of attacker nodes were employed in diverse scenarios to thoroughly assess the performance of the AMCTD protocol. The protocol was exhaustively evaluated both with and without active attacks using benchmark evaluation metrics such as end-to-end delay, throughput, transmission loss, number of active nodes, and energy tax. The preliminary findings show that an active attack drastically lowers the AMCTD protocol's performance: it reduces the number of active nodes by up to 10%, reduces throughput by up to 6%, increases transmission loss by 7%, raises the energy tax by 25%, and increases end-to-end delay by 20%.

2.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991903

ABSTRACT

The exponential growth in the number of smart devices connected to the Internet of Things (IoT) and associated with various IoT-based smart applications and services raises interoperability challenges. Service-oriented architecture for IoT (SOA-IoT) solutions have been introduced to deal with these interoperability challenges by integrating web services into sensor networks via IoT-optimized gateways, filling the gap between devices, networks, and access terminals. The main aim of service composition is to transform user requirements into a composite service execution. Different methods have been used to perform service composition and are classified as trust-based and non-trust-based; existing studies in this field report that trust-based approaches outperform non-trust-based ones. Trust-based service composition approaches use a trust and reputation system as the decision core to select appropriate service providers (SPs) for the service composition plan. The trust and reputation system computes each candidate SP's trust value and selects the SP with the highest trust value for the service composition plan. The trust system computes the trust value from the self-observation of the service requestor (SR) and the recommendations of other service consumers (SCs). Several experimental solutions have been proposed to deal with trust-based service composition in the IoT; however, a formal method for trust-based service composition in the IoT is lacking. In this study, we used formal methods to represent the components of trust-based service management in the IoT, employing higher-order logic (HOL) and verifying the different behaviors of the trust system and the trust value computation processes. Our findings showed that the presence of malicious nodes performing trust attacks leads to biased trust value computation, which results in inappropriate SP selection during the service composition. The formal analysis has given us clear insight and a complete understanding, which will assist in the development of a robust trust system.
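
The paper does not state its exact aggregation formula; the following minimal Python sketch only illustrates the general shape of a trust computation that blends self-observation with peer recommendations, and how biased recommendations skew the result. The weighting parameter and values are assumptions, not the paper's model.

```python
# Sketch of a trust value combining direct observation with peer recommendations.
# The weighting scheme (alpha) is an assumption, not the paper's actual model.

def trust_value(direct_trust, recommendations, alpha=0.6):
    """Combine the SR's self-observation with other SCs' recommendations.

    direct_trust    -- trust from self-observation, in [0, 1]
    recommendations -- trust scores reported by other service consumers
    alpha           -- weight given to direct observation (assumed value)
    """
    if not recommendations:
        return direct_trust
    indirect = sum(recommendations) / len(recommendations)
    return alpha * direct_trust + (1 - alpha) * indirect

# Biased recommendations (e.g., a bad-mouthing attack) drag the score down:
honest = trust_value(0.9, [0.85, 0.88, 0.90])
attacked = trust_value(0.9, [0.85, 0.88, 0.90, 0.05, 0.05])  # two malicious reports
print(round(honest, 3), round(attacked, 3))
```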

3.
Sensors (Basel) ; 22(17)2022 Aug 27.
Article in English | MEDLINE | ID: mdl-36080922

ABSTRACT

Nowadays, Human Activity Recognition (HAR) is widely used in a variety of domains, and vision- and sensor-based data enable cutting-edge technologies to detect, recognize, and monitor human activities. Several reviews and surveys on HAR have already been published, but because the literature is constantly growing, its status needs to be updated. Hence, this review aims to provide insights into the current state of the HAR literature published since 2018. The ninety-five articles reviewed in this study are classified to highlight application areas, data sources, techniques, and open research challenges in HAR. The majority of existing research appears to have concentrated on daily living activities, followed by user activities at the individual and group level. However, there is little literature on detecting real-time activities such as suspicious activity, surveillance, and healthcare. A major portion of existing studies has used Closed-Circuit Television (CCTV) videos and mobile sensor data. Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Support Vector Machines (SVMs) are the most prominent techniques used for HAR in the reviewed literature. Lastly, the limitations and open challenges that need to be addressed are discussed.


Subject(s)
Human Activities , Neural Networks, Computer , Activities of Daily Living , Humans , Monitoring, Physiologic , Support Vector Machine
4.
Sensors (Basel) ; 22(24)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36560104

ABSTRACT

Travel time prediction is essential to intelligent transportation systems and directly affects smart cities and autonomous vehicles. Accurately predicting traffic based on heterogeneous factors is highly beneficial but remains a challenging problem. The literature shows significant performance improvements when traditional machine learning and deep learning models are combined using an ensemble learning approach. The main contribution of this research is an ensemble learning model based on hybridized feature spaces obtained from a bidirectional long short-term memory module and a bidirectional gated recurrent unit, followed by support vector regression to produce the final travel time prediction. The proposed approach consists of three stages. First, six state-of-the-art deep learning models are applied to traffic data obtained from sensors. Then, the feature spaces and decision scores (outputs) of the best-performing model are fused to obtain hybridized deep feature spaces. Finally, a support vector regressor is applied to the hybridized feature spaces to produce the final travel time prediction. On test data, the proposed heterogeneous ensemble showed significant improvements over the baseline techniques in terms of root mean square error (53.87±3.50), mean absolute error (12.22±1.35), and coefficient of determination (0.99784±0.00019). The results demonstrate that the hybridized deep feature space concept can produce more stable and superior results than the other baseline techniques.
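
To make the hybridized-feature-space idea concrete, here is a minimal Python sketch: deep features from a bidirectional LSTM and a bidirectional GRU are concatenated and passed to a support vector regressor. Layer sizes, window length, and the synthetic data are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12, 1)).astype("float32")    # 200 windows of 12 steps
y = X.mean(axis=(1, 2)) + 0.1 * rng.normal(size=200)   # toy travel-time target

def recurrent_extractor(rnn_layer):
    """Return (trainable model, sub-model exposing the deep feature space)."""
    inp = layers.Input(shape=(12, 1))
    feats = layers.Dense(16, activation="relu")(layers.Bidirectional(rnn_layer)(inp))
    full = models.Model(inp, layers.Dense(1)(feats))
    full.compile(optimizer="adam", loss="mse")
    return full, models.Model(inp, feats)

bilstm, bilstm_feats = recurrent_extractor(layers.LSTM(32))
bigru, bigru_feats = recurrent_extractor(layers.GRU(32))
for m in (bilstm, bigru):
    m.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Hybridized deep feature space: concatenation of both extractors' features.
H = np.hstack([bilstm_feats.predict(X, verbose=0), bigru_feats.predict(X, verbose=0)])
svr = SVR(kernel="rbf").fit(H, y)
print("train RMSE:", float(np.sqrt(np.mean((svr.predict(H) - y) ** 2))))
```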


Subject(s)
Machine Learning , Time Factors
5.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236583

ABSTRACT

Automatic modulation recognition (AMR) is used in various domains, from general-purpose communication to many military applications, thanks to the growing popularity of the Internet of Things (IoT) and related communication technologies. In this research article, we propose an innovative idea: combining the classical mathematical technique of computing linear combinations (LCs) of cumulants with a genetic algorithm (GA) to create super-cumulants. These super-cumulants are then used to classify five digital modulation schemes on fading channels with a K-nearest neighbor (KNN) classifier. The proposed classifier significantly improves percentage recognition accuracy at lower SNRs and with smaller sample sizes. A comparison with existing techniques demonstrates the superiority of the proposed classifier.
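
A minimal Python sketch of the underlying idea, fourth-order cumulant features fed to a KNN classifier, is shown below for two modulation schemes (BPSK vs. QPSK). The GA-optimized "super-cumulant" linear combinations are not reproduced; plain cumulants are used instead, and the channel is simple AWGN rather than fading, so this is only an illustrative stand-in.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def symbols(mod, n=512):
    if mod == "bpsk":
        return rng.choice([-1, 1], n).astype(complex)
    phases = rng.choice([np.pi / 4, 3 * np.pi / 4, 5 * np.pi / 4, 7 * np.pi / 4], n)
    return np.exp(1j * phases)

def cumulant_features(x):
    # Standard fourth-order cumulants C40 and C42 of a zero-mean complex signal.
    m20, m21 = np.mean(x ** 2), np.mean(np.abs(x) ** 2)
    m40, m42 = np.mean(x ** 4), np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return [np.abs(c40), np.abs(c42)]

def sample(mod, snr_db=10):
    s = symbols(mod)
    noise = (rng.normal(size=s.size) + 1j * rng.normal(size=s.size)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20)

X = [cumulant_features(sample(m)) for m in ["bpsk", "qpsk"] for _ in range(100)]
y = ["bpsk"] * 100 + ["qpsk"] * 100

clf = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```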


Subject(s)
Algorithms , Cluster Analysis , Mathematics
6.
Expert Syst ; 39(3): e12823, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34898799

ABSTRACT

Currently, many deep learning models are being used to classify COVID-19 and normal cases from chest X-rays. However, the X-ray data available for COVID-19 are too limited to train a robust deep learning model. Researchers have tackled this issue with data augmentation, increasing the number of samples through flipping, translation, and rotation. However, this strategy compromises the learning of high-dimensional features for a given problem, so the chances of overfitting are high. In this paper, we used a deep convolutional generative adversarial network (DCGAN) to address this issue, generating synthetic images for all classes (Normal, Pneumonia, and COVID-19). To validate whether the generated images are accurate, we used k-means clustering with three clusters (Normal, Pneumonia, and COVID-19) and selected only the X-ray images assigned to the correct clusters for training. In this way, we formed a synthetic dataset with three classes. The generated dataset was then fed to EfficientNetB4 for training. The experiments achieved promising results of 95% in terms of area under the curve (AUC). To validate that our network has learned discriminative features associated with the lungs in the X-rays, we used the Grad-CAM technique to visualize the underlying patterns that lead the network to its final decision.
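
The k-means filtering step can be sketched in a few lines: generated images are kept only if they fall into the cluster associated with their intended class. In this hypothetical example, random arrays stand in for real and DCGAN-generated X-rays, and the DCGAN and EfficientNetB4 training stages are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
classes = ["Normal", "Pneumonia", "COVID-19"]

# Stand-ins: 30 "real" and 50 "generated" images per class, flattened to vectors.
real = {c: rng.normal(loc=i, size=(30, 64 * 64)) for i, c in enumerate(classes)}
generated = {c: rng.normal(loc=i, size=(50, 64 * 64)) for i, c in enumerate(classes)}

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(np.vstack(list(real.values())))

# Map each cluster to the class whose real images dominate it.
cluster_to_class = {}
for c in classes:
    labels, counts = np.unique(km.predict(real[c]), return_counts=True)
    cluster_to_class[labels[np.argmax(counts)]] = c

# Keep only generated images whose predicted cluster matches their intended class.
kept = {
    c: imgs[np.array([cluster_to_class.get(k) == c for k in km.predict(imgs)])]
    for c, imgs in generated.items()
}
for c in classes:
    print(c, "kept", len(kept[c]), "of", len(generated[c]))
```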

7.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833507

ABSTRACT

Effective communication in vehicular networks depends on the scheduling of wireless channel resources. Release 14 of the 3GPP specifies two types of channel resource scheduling: (1) scheduling controlled by the eNodeB and (2) distributed scheduling carried out by every vehicle, known as Autonomous Resource Selection (ARS). The ARS mechanism is the most suitable resource scheduling for vehicle safety applications. ARS includes (a) counter selection (i.e., specifying the number of subsequent transmissions) and (b) resource reselection (specifying the reuse of the same resource after counter expiry). Because ARS is a decentralized approach to resource selection, resource collisions can occur during the initial selection, where multiple vehicles might select the same resource, resulting in packet loss. ARS is not adaptive to vehicle density and employs a uniform random selection probability for counter selection and reselection. As a result, it can prevent some vehicles from transmitting in a congested vehicular network. To this end, the paper presents Truly Autonomous Resource Selection (TARS) for vehicular networks. TARS treats resource allocation as a problem of locally detecting the resources selected by neighboring vehicles in order to avoid resource collisions. The paper also models the effect of counter selection and resource block reselection on resource collisions using a Discrete Time Markov Chain (DTMC). Observations from the model are used to propose a fair policy for counter selection and resource reselection in ARS. Simulations of the proposed TARS mechanism showed better performance in terms of resource collision probability and packet delivery ratio when compared with the LTE Mode 4 standard and with a competing approach proposed by Jianhua He et al.
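
The collision problem the paper targets can be illustrated with a small Monte Carlo sketch (not the paper's DTMC model): vehicles that pick resources uniformly at random collide more often as density grows. Vehicle and resource counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def collision_probability(n_vehicles, n_resources, trials=10_000):
    """Estimate the probability that a vehicle shares its chosen resource."""
    collided = 0
    for _ in range(trials):
        picks = rng.integers(0, n_resources, size=n_vehicles)
        _, counts = np.unique(picks, return_counts=True)
        collided += counts[counts > 1].sum()   # vehicles on a shared resource
    return collided / (trials * n_vehicles)

for n in (10, 30, 50):
    print(f"{n} vehicles, 50 resources -> collision prob ~ "
          f"{collision_probability(n, 50):.3f}")
```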


Subject(s)
Computer Simulation
8.
Sensors (Basel) ; 20(18)2020 Sep 21.
Article in English | MEDLINE | ID: mdl-32967124

ABSTRACT

The domain of underwater wireless sensor networks (UWSNs) has received a lot of attention recently due to its advanced capabilities in ocean surveillance, marine monitoring, and the deployment of applications for detecting underwater targets. However, the literature has not compiled the state of the art in this direction to capture the recent advancements fuelled by underwater sensor technologies. Hence, this paper offers an up-to-date analysis of the available evidence by reviewing studies from the past five years on various aspects that support network activities and applications in UWSN environments. This work was motivated by the need for robust and flexible solutions that can satisfy the requirements for the rapid development of underwater wireless sensor networks. This paper identifies the key requirements for achieving essential services as well as common platforms for UWSNs. It also contributes a taxonomy of the critical elements in UWSNs by devising a classification of architectural elements, communications, routing protocols and standards, security, and applications. Finally, the major challenges that remain open are presented as a guide for future research directions.

9.
Sensors (Basel) ; 19(1)2019 Jan 04.
Article in English | MEDLINE | ID: mdl-30621241

ABSTRACT

Multivariate data sets are common in various application areas, such as wireless sensor networks (WSNs) and DNA analysis. A robust mechanism is required to compute their similarity indexes regardless of the environment and problem domain. This study describes the usefulness of a non-metric-based approach (i.e., the longest common subsequence) in computing similarity indexes. Several non-metric-based algorithms are available in the literature; the most robust and reliable is the dynamic programming-based technique. However, dynamic programming-based techniques are considered inefficient, particularly in the context of multivariate data sets. Furthermore, the classical approaches are not powerful enough for multivariate data sets and sensor data, or when the similarity indexes are extremely high or low. To address this issue, we propose an efficient algorithm to measure the similarity indexes of multivariate data sets using a non-metric-based methodology. The proposed algorithm performs exceptionally well on numerous multivariate data sets compared with the classical dynamic programming-based algorithms. The performance of the algorithms is evaluated on several benchmark data sets and a dynamic multivariate data set obtained from a WSN deployed in the Ghulam Ishaq Khan (GIK) Institute of Engineering Sciences and Technology. Our evaluation suggests that the proposed algorithm can be approximately 39.9% more efficient than its counterparts in terms of computational time across various data sets.
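
For reference, here is the classical dynamic-programming LCS baseline that such work improves upon, used as a non-metric similarity index. The paper's own, more efficient algorithm is not reproduced; normalizing by the shorter sequence and comparing multivariate rows as tuples are illustrative choices.

```python
def lcs_length(a, b):
    """O(m*n) dynamic-programming longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity_index(a, b):
    """LCS length normalized by the shorter sequence (one possible index)."""
    return lcs_length(a, b) / min(len(a), len(b))

x = [(21.0, 0.4), (21.5, 0.4), (22.0, 0.5), (22.5, 0.6)]   # e.g. (temp, humidity)
y = [(21.0, 0.4), (22.0, 0.5), (22.5, 0.6), (23.0, 0.7)]
print(similarity_index(x, y))  # 0.75: three of four rows match in order
```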

10.
Appl Opt ; 54(1): 37-45, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25967004

ABSTRACT

Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful; it gives an indication of the image of a point object. In this paper, the spot size radius is used as the optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by the simulation results. The proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
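
The SVM-FFA coupling can be sketched as SVR hyperparameters (C, gamma) tuned by a small firefly search. The objective, synthetic data, and all firefly constants below are illustrative assumptions, not the paper's lens-design setup.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(120, 4))                      # stand-in design parameters
y = np.sum(X ** 2, axis=1) + 0.05 * rng.normal(size=120)   # stand-in spot radius

def brightness(pos):
    """Higher is better: negative cross-validated MSE of an SVR at (log C, log gamma)."""
    c, g = np.exp(pos)
    return cross_val_score(SVR(C=c, gamma=g), X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

n_fireflies, n_iter = 6, 10
beta0, gamma_ff, alpha = 1.0, 1.0, 0.2
pos = rng.uniform(-2, 2, size=(n_fireflies, 2))            # positions in log-space
light = np.array([brightness(p) for p in pos])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:                         # move i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                pos[i] += (beta0 * np.exp(-gamma_ff * r2) * (pos[j] - pos[i])
                           + alpha * rng.uniform(-0.5, 0.5, size=2))
                light[i] = brightness(pos[i])

best = pos[np.argmax(light)]
print("best C, gamma:", np.exp(best), " cv MSE:", -light.max())
```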

11.
Int J Med Sci ; 11(5): 508-14, 2014.
Article in English | MEDLINE | ID: mdl-24688316

ABSTRACT

BACKGROUND: Diagnosing tuberculosis (TB) with conventional methods carries a high risk of error. OBJECTIVES: This study aims to diagnose TB using hybrid machine learning approaches. MATERIALS AND METHODS: Patient epicrisis reports obtained from the Pasteur Laboratory in the north of Iran were used. Each of the 175 samples has twenty features. The features are classified using an approach that incorporates a fuzzy logic controller and an artificial immune recognition system: the features are normalized through a fuzzy rule-based labeling system, and the labeled features are then categorized into normal and tuberculosis classes using the artificial immune recognition algorithm. RESULTS: Overall, the highest classification accuracy was reached for a learning rate (α) value of 0.8. The artificial immune recognition system (AIRS) classification approach using fuzzy logic also yielded better diagnosis results in terms of detection accuracy compared to other empirical methods. Classification accuracy was 99.14%, sensitivity 87.00%, and specificity 86.12%.
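
For readers unfamiliar with the reported metrics, a minimal sketch of how accuracy, sensitivity, and specificity follow from a binary confusion matrix is given below. The counts are made up for illustration; they are not the paper's data.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: TB cases correctly detected
    specificity = tn / (tn + fp)   # true negative rate: healthy cases correctly cleared
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=87, tn=80, fp=13, fn=13)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```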


Subject(s)
Algorithms , Immune System , Tuberculosis/diagnosis , Artificial Intelligence , Fuzzy Logic , Humans , Tuberculosis/immunology
12.
ScientificWorldJournal ; 2014: 269357, 2014.
Article in English | MEDLINE | ID: mdl-25121114

ABSTRACT

Cloud computing is a significant shift in the computational paradigm, in which computing as a utility and remote data storage have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy because data owners lack control and physical possession of the data. To address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effective RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure makes our method applicable to large-scale data with minimal computation cost. A comparative analysis with state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of computation and communication overhead on the auditor and the server.
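
The primitive such schemes build on, an algebraic signature over a Galois field, can be sketched as follows. This is a generic illustration of the concept (signature of a block as a polynomial in a field element alpha), not the paper's specific construction; the block contents and alpha are illustrative.

```python
def gf_mul(a, b, poly=0x11B):
    """Carry-less multiplication in GF(2^8) modulo the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def algebraic_signature(block, alpha=0x02):
    """sig(block) = XOR_i block[i] * alpha^i, evaluated Horner-style."""
    sig = 0
    for byte in reversed(block):
        sig = gf_mul(sig, alpha) ^ byte
    return sig

block = b"hello cloud block"
print(hex(algebraic_signature(block)))
# A single corrupted byte changes the signature, so an auditor can detect it:
tampered = bytearray(block); tampered[3] ^= 0x01
print(hex(algebraic_signature(bytes(tampered))))
```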


Subject(s)
Algorithms , Computer Security , Information Management/methods , Information Storage and Retrieval/methods , Models, Theoretical , Research Design , Computer Simulation
13.
ScientificWorldJournal ; 2014: 894362, 2014.
Article in English | MEDLINE | ID: mdl-25032243

ABSTRACT

Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied through service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. Successful service provisioning can guarantee the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics. Hence, continuous service provisioning that satisfies user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.


Subject(s)
Information Storage and Retrieval/methods , Information Storage and Retrieval/standards , Internet/standards
14.
ScientificWorldJournal ; 2014: 459375, 2014.
Article in English | MEDLINE | ID: mdl-24696645

ABSTRACT

Cloud computing (CC) has recently been receiving tremendous attention from the IT industry and academic researchers. CC delivers its unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. Cloud services are dynamically scalable and provided through the Internet on demand. Therefore, service provisioning plays a key role in CC, and the cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve the service selection problem, including multicriteria decision analysis (MCDA), which enables the user to choose from among a number of available choices. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy through an extensive literature review, and (d) analyzing and summarizing cloud computing service selection in different scenarios.
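
One of the simplest MCDA methods covered in such surveys, the weighted-sum model, can be sketched for cloud service selection as follows. Criteria, weights, and scores are invented for illustration and do not come from the paper.

```python
criteria_weights = {"price": 0.4, "availability": 0.35, "security": 0.25}

# Scores normalized to [0, 1]; for "price", higher means cheaper.
services = {
    "ServiceA": {"price": 0.9, "availability": 0.7, "security": 0.6},
    "ServiceB": {"price": 0.6, "availability": 0.9, "security": 0.9},
    "ServiceC": {"price": 0.8, "availability": 0.8, "security": 0.5},
}

def weighted_sum(scores, weights):
    """Aggregate criterion scores into a single utility value."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(services, key=lambda s: weighted_sum(services[s], criteria_weights),
                 reverse=True)
for s in ranking:
    print(s, round(weighted_sum(services[s], criteria_weights), 3))
```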


Subject(s)
Algorithms , Computing Methodologies , Decision Making, Computer-Assisted , Decision Support Techniques , Information Storage and Retrieval/methods , Internet
15.
ScientificWorldJournal ; 2014: 547062, 2014.
Article in English | MEDLINE | ID: mdl-25097880

ABSTRACT

Network forensics enables the investigation and identification of network attacks through retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud has made Mobile Cloud Computing (MCC) a natural target for network attacks. However, carrying out forensics in MCC is constrained by the autonomous cloud hosting companies and their policies restricting access to the digital content in back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to the MCC. Specifically, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.


Subject(s)
Computer Systems , Forensic Sciences/methods , Information Storage and Retrieval/methods
16.
ScientificWorldJournal ; 2014: 712826, 2014.
Article in English | MEDLINE | ID: mdl-25136682

ABSTRACT

Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds manageable limits. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet, at which point predicted data production will be 44 times greater than in 2009. As information is transferred and shared at light speed over optical fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth of such large data generates numerous challenges, including data volume, transfer speed, data diversity, and security. Nonetheless, Big Data is still in its infancy, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.


Subject(s)
Electronic Data Processing , Internet , Access to Information , Information Dissemination , Information Storage and Retrieval
17.
Multimed Tools Appl ; : 1-51, 2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36855614

ABSTRACT

Because mobile technology and the widespread use of mobile devices have evolved swiftly and radically, several training centers have started to offer mobile training (m-training) via mobile devices. Thus, designing suitable m-training course content for training employees via mobile device applications has become an important professional development issue, allowing employees to obtain knowledge and improve their skills in the rapidly changing mobile environment. Previous studies have identified challenges in this domain. One important challenge is that no solid theoretical framework serves as a foundation for instructional design guidelines for interactive m-training course content that motivates and attracts trainees to the training process via mobile devices. This study proposes a framework for designing interactive m-training course content using mobile augmented reality (MAR). A mixed-methods approach was adopted: key elements were extracted from the literature to create an initial framework, the framework was then validated through expert interviews, and it was tested by trainees. This integration allowed us to evaluate and confirm the validity of the proposed framework. The framework follows a systematic approach guided by six key elements and offers a clear instructional design guideline checklist to ensure the design quality of interactive m-training course content. This study contributes to knowledge by establishing a framework as a theoretical foundation for designing interactive m-training course content. Additionally, it supports the m-training domain by assisting trainers and designers in creating interactive m-training courses to train employees, thus increasing their engagement in m-training. Recommendations for future studies are proposed.

18.
PeerJ Comput Sci ; 9: e1656, 2023.
Article in English | MEDLINE | ID: mdl-38077568

ABSTRACT

Background: Software process improvement (SPI) is indispensable to the evolution of a software development company, whether it adopts global software development (GSD) or in-house development. Many software development companies not only adhere to in-house development but also adopt the GSD paradigm; both development approaches are of paramount significance because of their respective advantages. Many studies have been conducted to find the SPI success factors for companies that opt for in-house development, but less attention has been paid to SPI success factors in the GSD environment for large-scale software companies. Factors that contribute to the SPI success of small and medium-sized companies have been identified, but large-scale companies have been overlooked. This research aims to identify the success factors of SPI for both development approaches (GSD and in-house) in the case of large-scale software companies. Methods: Two systematic literature reviews were performed, and an industrial survey was conducted to detect additional SPI success factors for both development environments. In the subsequent step, a comparison was made to find similar SPI success factors in both development environments. Lastly, another industrial survey was conducted to compare the common SPI success factors of GSD and in-house software development in large-scale companies, to determine which SPI success factors carry more value in which development environment. For this purpose, parametric (Pearson correlation) and non-parametric (Kendall's tau and Spearman correlation) tests were performed. Results: Seventeen common SPI success factors were identified. These common success factors expedite and contribute to SPI in both environments in the case of large-scale companies.
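
The three correlation tests named above are available in SciPy; a minimal sketch on toy survey ratings (invented for illustration, not the study's data) is shown below.

```python
from scipy.stats import pearsonr, kendalltau, spearmanr

gsd_scores     = [4.2, 3.8, 4.5, 3.1, 4.0, 3.6, 4.4, 2.9]  # e.g. survey ratings
inhouse_scores = [4.0, 3.5, 4.6, 3.3, 3.9, 3.4, 4.2, 3.0]

for name, test in [("Pearson", pearsonr), ("Kendall's tau", kendalltau),
                   ("Spearman", spearmanr)]:
    stat, p = test(gsd_scores, inhouse_scores)
    print(f"{name}: statistic={stat:.3f}, p-value={p:.4f}")
```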

19.
IEEE Access ; 10: 35094-35105, 2022.
Article in English | MEDLINE | ID: mdl-35582498

ABSTRACT

In the current era, data are growing exponentially due to advancements in smart devices. Data scientists apply a variety of learning-based techniques to identify underlying patterns in medical data to address various health-related issues. In this context, automated disease detection has become a central concern in medical science, as such approaches can reduce the mortality rate through accurate and timely diagnosis. COVID-19 is a recently emerged disease that has spread all over the world and affects millions of people. Many countries are facing a shortage of testing kits, vaccines, and other resources due to the significant and rapid growth in cases. To accelerate the testing process, scientists around the world have sought to create novel methods for the detection of the virus. In this paper, we propose a hybrid deep learning model based on a convolutional neural network (CNN) and a gated recurrent unit (GRU) to detect the viral disease from chest X-rays (CXRs). In the proposed model, the CNN is used to extract features, and the GRU is used as a classifier. The model was trained on 424 CXR images with 3 classes (COVID-19, Pneumonia, and Normal) and achieves encouraging results of 0.96, 0.96, and 0.95 in terms of precision, recall, and F1-score, respectively. These findings indicate how deep learning can significantly contribute to the early detection of COVID-19 in patients through the analysis of X-ray scans, which can help mitigate the impact of the disease. We believe that this model can be an effective tool for medical practitioners for early diagnosis.
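
A minimal Keras sketch of the general CNN-feature-extractor plus GRU-classifier wiring is shown below. Input size, layer widths, and the random toy data are assumptions, not the paper's architecture or dataset.

```python
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    # Treat the feature-map rows as a sequence so the GRU can act as classifier.
    layers.Reshape((30, 30 * 32)),
    layers.GRU(64),
    layers.Dense(3, activation="softmax"),  # COVID-19 / Pneumonia / Normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy run on random data just to show the wiring.
X = np.random.rand(8, 128, 128, 1).astype("float32")
y = np.random.randint(0, 3, size=8)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0).shape)  # (1, 3) class probabilities
```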

20.
JMIR Res Protoc ; 11(1): e27935, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35089146

ABSTRACT

BACKGROUND: Walking recovery post stroke can be slow and incomplete. Determining an effective stroke rehabilitation frequency requires the assessment of neuroplasticity changes. Neurobiological signals from electroencephalography (EEG) can measure neuroplasticity through incremental changes in these signals after rehabilitation. However, the changes seen with different rehabilitation frequencies require further investigation. It is hypothesized that the association between the incremental changes in EEG signals and the improved functional outcome measure scores is greater at a higher rehabilitation frequency, implying enhanced neuroplasticity changes. OBJECTIVE: The purpose of this study is to identify the changes in the neurobiological signals from EEG, to associate these with functional outcome measure scores, and to compare their associations across different therapy frequencies for gait rehabilitation among subacute stroke individuals. METHODS: A randomized, single-blinded, controlled study among patients with subacute stroke will be conducted with two groups: an intervention group (IG) and a control group (CG). Participants in the IG and CG will receive therapy sessions three times a week (high frequency) and once a week (low frequency), respectively, for a total of 12 consecutive weeks. Each session will last for an hour and include strengthening, balance, and gait training. The main variables to be assessed are the 6-Minute Walk Test (6MWT), Motor Assessment Scale (MAS), Berg Balance Scale (BBS), Modified Barthel Index (MBI), and quantitative EEG indices in the form of the delta-to-alpha ratio (DAR) and the delta-plus-theta to alpha-plus-beta ratio (DTABR). These will be measured at preintervention (R0) and postintervention (R1). Key analyses will determine the changes in the 6MWT, MAS, BBS, MBI, DAR, and DTABR between R0 and R1 for the CG and IG. The changes in the DAR and DTABR will be analyzed for association with the changes in the 6MWT, MAS, BBS, and MBI to measure neuroplasticity changes for both the CG and IG. RESULTS: We have recruited 18 participants so far and expect to publish our results in early 2023. CONCLUSIONS: These associations are expected to be positive in both groups, with a higher correlation in the IG than in the CG, reflecting enhanced neuroplasticity changes and providing an objective evaluation of the dose-response relationship. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/27935.
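
For clarity, the two quantitative EEG indices named in the protocol, DAR and DTABR, are ratios of spectral band powers. A minimal Python sketch using Welch's method on a synthetic single-channel signal is given below; the signal, sampling rate, and band edges are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 2 * t)            # delta component
       + 0.5 * np.sin(2 * np.pi * 10 * t)   # alpha component
       + 0.2 * rng.normal(size=t.size))     # noise

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(low, high):
    mask = (f >= low) & (f < high)
    return np.sum(psd[mask]) * (f[1] - f[0])    # simple rectangle integration

delta, theta = band_power(0.5, 4), band_power(4, 8)
alpha, beta = band_power(8, 13), band_power(13, 30)

dar = delta / alpha                          # delta-to-alpha ratio
dtabr = (delta + theta) / (alpha + beta)     # (delta+theta)/(alpha+beta) ratio
print(f"DAR={dar:.2f}  DTABR={dtabr:.2f}")
```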
