ABSTRACT
The integration of cutting-edge technologies such as the Internet of Things (IoT), robotics, and machine learning (ML) has the potential to significantly enhance the productivity and profitability of traditional fish farming. Farmers using traditional fish farming methods incur enormous economic costs owing to labor-intensive schedule monitoring and care, illnesses, and sudden fish deaths. Another ongoing issue is automated fish species recommendation based on water quality. Effective monitoring of abrupt changes in water quality can minimize daily operating costs and boost fish productivity, while an accurate automatic fish recommender can aid the farmer in selecting profitable fish species for farming. In this paper, we present AquaBot, an IoT-based system that can automatically collect, monitor, and evaluate water quality and recommend appropriate fish to farm depending on the values of various water quality indicators. A mobile robot has been designed to collect parameter values such as pH, temperature, and turbidity from all around the pond. To facilitate monitoring, we have developed web and mobile interfaces. For the analysis and recommendation of suitable fish based on water quality, we have trained and tested several ML algorithms, including the proposed custom ensemble model, random forest (RF), support vector machine (SVM), decision tree (DT), K-nearest neighbor (KNN), logistic regression (LR), bagging, boosting, and stacking, on a real-time pond water dataset. The dataset has been preprocessed with feature scaling and dataset balancing. We have evaluated the algorithms based on several performance metrics. In our experiment, our proposed ensemble model delivered the best result, with 94% accuracy, 94% precision, 94% recall, a 94% F1-score, a 93% MCC, and the best AUC score for multi-class classification.
Finally, we have deployed the best-performing model in a web interface to provide cultivators with recommendations for suitable fish farming. Our proposed system is projected to not only boost production and save money but also reduce the time and intensity of the producer's manual labor.
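The abstract names the combination strategies (bagging, boosting, stacking) but not the exact rule inside the proposed custom ensemble; as a minimal, hedged sketch of decision-level combination, a plain majority vote over hypothetical base-model fish predictions could look like this (the species labels and model outputs below are invented for illustration):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    predictions: one list per base model, each holding the predicted
    class label for every sample.
    """
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(model_preds[i] for model_preds in predictions)
        combined.append(votes.most_common(1)[0][0])  # most frequent label wins
    return combined

# Three hypothetical base models predicting fish species for four ponds
preds = [
    ["tilapia", "carp", "catfish", "carp"],
    ["tilapia", "carp", "carp",    "carp"],
    ["catfish", "carp", "catfish", "carp"],
]
print(majority_vote(preds))  # ['tilapia', 'carp', 'catfish', 'carp']
```

In practice an ensemble of this kind usually votes on probability outputs rather than hard labels, but the hard-vote version shows the decision-level idea in the fewest lines.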
Subject(s)
Machine Learning, Ponds, Water Quality, Animals, Fishes, Algorithms, Environmental Monitoring/methods, Support Vector Machine, Aquaculture/methods, Internet of Things, Fisheries
ABSTRACT
Connected and autonomous vehicles (CAVs) have attracted significant attention from industry and academia for research and development towards the on-road realisation of the technology. State-of-the-art CAVs utilise existing navigation systems for mobility and travel path planning. However, reliable connectivity to navigation systems is not guaranteed, particularly in urban road traffic environments with high-rise buildings, nearby roads and multi-level flyovers. To this end, this paper presents TAKEN (Traffic Knowledge-based Navigation) for enabling CAVs in urban road traffic environments. A traffic analysis model is proposed for mining sensor-oriented traffic data to generate a precise navigation path for the vehicle. A knowledge-sharing method is developed for collecting and generating new traffic knowledge from on-road vehicles. CAV navigation is executed using the information enabled by traffic knowledge and analysis. The experimental performance evaluation results attest to the benefits of TAKEN in the precise navigation of CAVs in urban traffic environments.
Subject(s)
Autonomous Vehicles, Motor Vehicles, Travel, Traffic Accidents
ABSTRACT
Effective resource allocation is crucial in operating systems to prevent deadlocks, especially when resources are limited and non-shareable. Traditional methods like the Banker's algorithm provide solutions but suffer from limitations such as static process handling, high time complexity, and a lack of real-time adaptability. To address these challenges, we propose the Dynamic Banker's Deadlock Avoidance Algorithm (DBDAA). The DBDAA introduces real-time processing for safety checks, significantly improving system efficiency and reducing the risk of deadlocks. Unlike conventional methods, the DBDAA dynamically includes processes in safety checks, considerably decreasing the number of comparisons required to determine safe states. This optimization reduces the time complexity to O(n) in the best case and O(nd) in the average and worst cases, compared to the O(n²d) complexity of the original Banker's algorithm. The integration of real-time processing ensures that all processes can immediately engage in safety checks, improving system responsiveness and making the DBDAA suitable for dynamic and time-sensitive applications. Additionally, the DBDAA introduces a primary unsafe sequence mechanism that enhances the acceptability and efficiency of the algorithm by allowing processes to participate in safety checks repeatedly after a predetermined amount of system-defined time. Experimental comparisons with existing algorithms demonstrate the superiority of the DBDAA in terms of reduced safe-state prediction time and increased efficiency, making it a robust solution for deadlock avoidance in real-time systems.
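The DBDAA's dynamic process inclusion and primary unsafe sequence mechanism are not detailed in the abstract; the sketch below shows only the classic Banker's safety check that the DBDAA builds upon, with hypothetical allocation and need matrices:

```python
def is_safe(available, allocation, need):
    """Classic Banker's safety check: return a safe sequence or None.

    available:  free units per resource type
    allocation: allocation[i][j] = units of resource j held by process i
    need:       need[i][j] = remaining demand of process i for resource j
    """
    work = list(available)
    n = len(allocation)
    finished = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can run to completion and release what it holds
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finished) else None

# Hypothetical example: 3 processes, 1 resource type, 3 free units
alloc = [[1], [4], [5]]
need  = [[4], [2], [6]]
print(is_safe([3], alloc, need))  # [1, 2, 0]
```

The repeated rescans over all n processes are what give the original algorithm its higher complexity; the DBDAA's contribution, per the abstract, is to cut down these comparisons dynamically.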
Subject(s)
Algorithms, Time Factors, Humans, Resource Allocation
ABSTRACT
Cloud computing is a popular, flexible, scalable, and cost-effective technology in the modern world that provides on-demand services dynamically. The dynamic execution of user requests and resource-sharing facilities require proper task scheduling among the available virtual machines, which is a significant issue and plays a crucial role in developing an optimal cloud computing environment. Round Robin is a prevalent scheduling algorithm for fair distribution of resources with a balanced contribution to minimized response time and turnaround time. This paper introduces a new enhanced round-robin approach for task scheduling in cloud computing systems. The proposed algorithm generates and keeps updating a dynamic quantum time for process execution, considering the number of processes in the system and their burst lengths. Since our method runs processes dynamically, it is appropriate for a real-time environment like cloud computing. A notable feature of this approach is its capability to schedule tasks with an asymmetric distribution of burst times while avoiding the convoy effect. The experimental results indicate that the proposed algorithm outperforms existing improved round-robin task scheduling approaches in terms of minimized average waiting time, average turnaround time, and number of context switches. Compared against five other enhanced round-robin approaches, the method reduced average waiting time by 15.77% and context switching by 20.68% on average. After executing the experiment and comparative study, it can be concluded that the proposed enhanced round-robin scheduling algorithm is optimal, acceptable, and relatively better suited for cloud computing environments.
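The abstract does not give the exact dynamic-quantum formula; the sketch below assumes one plausible rule, recomputing the quantum each cycle as the mean of the queued processes' remaining bursts, purely to illustrate how a dynamic quantum behaves under an asymmetric burst distribution:

```python
from collections import deque

def dynamic_round_robin(bursts):
    """Simulate round-robin with a quantum recomputed every dispatch as the
    mean remaining burst of the queued processes (an assumed rule, not the
    paper's). Returns per-process waiting times and context-switch count."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    waiting = [0] * len(bursts)
    switches = 0
    while queue:
        # Dynamic quantum: average remaining burst over the current queue
        quantum = max(1, sum(remaining[p] for p in queue) // len(queue))
        p = queue.popleft()
        run = min(quantum, remaining[p])
        for q in queue:          # everyone still queued waits this long
            waiting[q] += run
        remaining[p] -= run
        if remaining[p] > 0:     # unfinished process goes to the back
            queue.append(p)
        switches += 1
    return waiting, switches

print(dynamic_round_robin([5, 3, 8]))  # ([0, 5, 8], 3)
```

With this rule each process in the example finishes within one dispatch, so no process is parked behind a long burst for multiple cycles, which is the convoy effect a fixed small quantum can aggravate.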
Subject(s)
Algorithms, Cloud Computing, Time Factors
ABSTRACT
BACKGROUND: Pulmonary Tuberculosis (PTB) is a significant global health issue due to its high incidence, drug resistance, contagious nature, and impact on people with compromised immune systems. As noted by the World Health Organization (WHO), TB is responsible for more global fatalities than any other infectious illness. On the other hand, the WHO also reports that noncommunicable diseases (NCDs) kill 41 million people yearly worldwide. In this regard, several studies suggest that PTB and NCDs are linked in various ways and that people with PTB are more likely to acquire NCDs, while NCDs can increase susceptibility to active TB infection. Furthermore, because of potential drug interactions and therapeutic challenges, treating individuals with both PTB and NCDs can be difficult. This study focuses on seven NCDs (lung cancer (LC), diabetes mellitus (DM), Parkinson's disease (PD), silicosis (SI), chronic kidney disease (CKD), cardiovascular disease (CVD), and rheumatoid arthritis (RA)), rigorously presents their genetic relationship with PTB in terms of shared genes, and outlines possible treatment plans. OBJECTIVES: This study aims to identify the drug components that can regulate abnormal gene expression in NCDs. The study reveals hub genes, potential biomarkers, and drug components associated with hub genes through statistical measures. This will contribute to targeted therapeutic interventions. METHODS: Numerous investigations, including protein-protein interaction (PPI), gene regulatory network (GRN), enrichment analysis, physical interaction, and protein-chemical interaction, have been carried out to demonstrate the genetic correlation between PTB and NCDs. During the study, nine shared genes, namely TNF, IL10, NLRP3, IL18, IFNG, HMGB1, CXCL8, IL17A, and NFKB1, were discovered between TB and the above-mentioned NCDs, and five hub genes (NFKB1, TNF, CXCL8, NLRP3, and IL10) were selected based on degree values.
RESULTS AND CONCLUSION: In this study, we found that all of the hub genes are linked with the 10 drug components, and aspirin (CTD 00005447) was observed to be the most strongly associated with the hub genes. This bioinformatics study may help researchers better understand the cause of PTB and its relationship with NCDs, and eventually this can lead to exploring effective treatment plans.
Subject(s)
Noncommunicable Diseases, Pulmonary Tuberculosis, Humans, Pulmonary Tuberculosis/genetics, Pulmonary Tuberculosis/drug therapy, Gene Regulatory Networks, Lung Neoplasms/genetics, Lung Neoplasms/drug therapy, Silicosis/genetics, Protein Interaction Maps, Chronic Renal Insufficiency/genetics
ABSTRACT
The 'Learning Meta-Learning' (LML) dataset presented in this paper contains both categorical and continuous data on adult learners for 7 meta-learning parameters: age, gender, degree of illusion of competence, sleep duration, chronotype, experience of the imposter phenomenon, and multiple intelligences. Convenience sampling and simple random sampling methods were used to structure the voluntary, anonymous online survey data collection for LML dataset creation. The responses to the 54 survey questions contain raw data from 1021 current students from 11 universities in Bangladesh. The entire dataset is stored in an Excel file, and the entire questionnaire is accessible at 10.5281/zenodo.8112213. In this article, the mean and standard deviation of the participants' baseline attributes are given for scale parameters, and frequency and percentage are calculated for categorical parameters. Academic curricula, courses, and professional training materials can be reviewed and redesigned with a focus on the diversity of learners. How designed courses will be learned by learners, along with how they will be taught, is a significant point for education in any discipline. As the survey questionnaires were set for adult learners and only current university students participated in this survey, this dataset is appropriate for studying andragogy and heutagogy, but not pedagogy.
ABSTRACT
In the domain of vision-based applications, the importance of text cannot be underestimated due to its natural capacity to provide accurate and comprehensive information. Scene text editing systems enable the modification and enhancement of textual material in natural images while maintaining the integrity of the overall visual layout. Preserving the original background context and font styles during editing is, however, extremely challenging, since the modified image must blend perfectly with the original. This article contains significant simulated data on the dynamic features of digital image editing, advertising, content development, and related fields. The system comprises key components such as 2D simulated text on the styled image (i_s), text image (i_t), text mask (mask_t), real background image (t_b), real sample image (t_f), text skeleton (t_sk), and text-styled image (t_t). The source dataset contains diverse components such as background images, color variations, fonts, and text content, while the synthetic dataset consists of 49,000 randomly generated images. The dataset provides both researchers and practitioners with a rich resource for identifying and evaluating these dynamic features. The dataset is publicly accessible via the link: https://data.mendeley.com/datasets/h9kry9y46s/3.
ABSTRACT
BACKGROUND: According to the World Health Organization (WHO), dementia is the seventh leading cause of death among all illnesses and one of the leading causes of disability among the world's elderly people. The number of Alzheimer's patients is rising day by day. Considering the increasing rate and the dangers, Alzheimer's disease should be diagnosed carefully. Machine learning is a potential technique for Alzheimer's diagnosis, but general users do not trust machine learning models due to their black-box nature. Moreover, some of these models do not provide the best performance because they use only neuroimaging data. OBJECTIVE: To address these issues, this paper proposes a novel explainable Alzheimer's disease prediction model using a multimodal dataset. This approach performs data-level fusion of clinical data, MRI segmentation data, and psychological data. Currently, however, there is very little understanding of multimodal five-class classification of Alzheimer's disease. METHOD: For the five-class classification, the 9 most popular machine learning models are used: Random Forest (RF), Logistic Regression (LR), Decision Tree (DT), Multi-Layer Perceptron (MLP), K-Nearest Neighbor (KNN), Gradient Boosting (GB), Adaptive Boosting (AdaB), Support Vector Machine (SVM), and Naive Bayes (NB). Among these models, RF scored the highest. For explainability, SHapley Additive exPlanations (SHAP) is used in this research work. RESULTS AND CONCLUSIONS: The performance evaluation demonstrates that the RF classifier has a 10-fold cross-validation accuracy of 98.81% for predicting Alzheimer's disease, cognitively normal, non-Alzheimer's dementia, uncertain dementia, and others. In addition, the study utilized Explainable Artificial Intelligence based on the SHAP model and analyzed the causes of prediction.
To the best of our knowledge, we are the first to present this multimodal (clinical, psychological, and MRI segmentation data) five-class classification of Alzheimer's disease using the Open Access Series of Imaging Studies (OASIS-3) dataset. In addition, a novel Alzheimer's patient management architecture is proposed in this work.
Subject(s)
Alzheimer Disease, Aged, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/therapy, Artificial Intelligence, Bayes Theorem, Cluster Analysis, Knowledge
ABSTRACT
Deceleration is considered a commonly practised means to assess Foetal Heart Rate (FHR) through visual inspection and interpretation of patterns in Cardiotocography (CTG). The precision of deceleration classification relies on the accurate estimation of corresponding event points (EP) from the FHR and the Uterine Contraction Pressure (UCP). This work proposes a deceleration classification pipeline by comparing four machine learning (ML) models, namely, Multilayer Perceptron (MLP), Random Forest (RF), Naïve Bayes (NB), and Simple Logistic Regression. Towards automated classification of deceleration from EP using the pipeline, it systematically compares three approaches to creating feature sets from the detected EP: (1) a novel fuzzy logic (FL)-based approach, (2) expert annotation by clinicians, and (3) calculation using the National Institute of Child Health and Human Development guidelines. The classification results were validated using several popular statistical metrics, including the receiver operating characteristic curve, intra-class correlation coefficient, Deming regression, and Bland-Altman plot. The highest classification accuracy (97.94%) was obtained with MLP when the EP were annotated with the proposed FL approach, compared to RF, which obtained 63.92% with the clinician-annotated EP. The results indicate that the FL-annotated feature set is the optimal one for classifying deceleration from FHR.
Subject(s)
Deceleration, Fetal Heart Rate, Pregnancy, Female, Child, Humans, Fetal Heart Rate/physiology, Bayes Theorem, Cardiotocography/methods, Machine Learning
ABSTRACT
Brain signals are recorded using different techniques to aid an accurate understanding of brain function and to treat its disorders. Untargeted internal and external sources contaminate the acquired signals during the recording process. Often termed artefacts, these contaminations cause serious hindrances in decoding the recorded signals; hence, they must be removed to facilitate unbiased decision-making for a given investigation. Due to the complex and elusive manifestation of artefacts in neuronal signals, computational techniques serve as powerful tools for their detection and removal. Machine learning (ML) based methods have been successfully applied to this task. Due to ML's popularity, many articles are published every year, making it challenging to find, compare and select the most appropriate method for a given experiment. To this end, this paper presents ABOT (Artefact removal Benchmarking Online Tool), an online benchmarking tool which allows users to compare existing ML-driven artefact detection and removal methods from the literature. The characteristics and related information about the existing methods have been compiled as a knowledgebase (KB) and presented through a user-friendly interface with interactive plots and tables that users can search using several criteria. Key characteristics extracted from over 120 articles have been used in the KB to help compare specific ML models. To comply with the FAIR (Findable, Accessible, Interoperable and Reusable) principles, the source code and documentation of the toolbox have been made available via an open-access repository.
ABSTRACT
Novel Coronavirus 2019 disease, or COVID-19, is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The use of chest X-rays (CXRs) has become an important practice to assist in the diagnosis of COVID-19, as they can be used to detect the abnormalities developed in infected patients' lungs. With the fast spread of the disease, many researchers across the world are striving to use deep learning-based systems to identify COVID-19 from such CXR images. To this end, we propose an inverted bell-curve-based ensemble of deep learning models for the detection of COVID-19 from CXR images. We first take a selection of models pretrained on the ImageNet dataset and use the concept of transfer learning to retrain them on CXR datasets. The trained models are then combined with the proposed inverted bell curve weighted ensemble method, where the output of each classifier is assigned a weight, and the final prediction is made by taking a weighted average of those outputs. We evaluate the proposed method on two publicly available datasets: the COVID-19 Radiography Database and the IEEE COVID Chest X-ray Dataset. The accuracy, F1 score and AUC ROC achieved by the proposed method are 99.66%, 99.75% and 99.99%, respectively, on the first dataset, and 99.84%, 99.81% and 99.99%, respectively, on the other. Experimental results confirm that the use of transfer learning-based models and their combination using the proposed ensemble method result in improved predictions of COVID-19 in CXRs.
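The inverted bell curve weight function itself is not given in the abstract; the sketch below illustrates only the generic decision-level step it feeds into, a weighted average of per-classifier probability outputs (the weights and probability values here are invented for illustration):

```python
def weighted_ensemble(prob_outputs, weights):
    """Fuse per-classifier class-probability vectors by weighted average.

    prob_outputs: one [p_class0, p_class1, ...] vector per classifier
    weights:      one weight per classifier (supplied directly here; the
                  paper derives them from an inverted bell curve)
    Returns (predicted_class_index, fused_probability_vector).
    """
    total = sum(weights)
    n_classes = len(prob_outputs[0])
    fused = [
        sum(w * probs[c] for w, probs in zip(weights, prob_outputs)) / total
        for c in range(n_classes)
    ]
    return fused.index(max(fused)), fused

# Two hypothetical classifiers, two classes; the second model gets 3x weight
label, fused = weighted_ensemble([[0.6, 0.4], [0.3, 0.7]], [1, 3])
print(label, fused)  # 1 [0.375, 0.625]
```

Dividing by the weight sum keeps the fused vector a valid probability distribution, so a higher-weighted classifier pulls the decision toward its own prediction without ever dominating outright.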
ABSTRACT
For the sake of self-development and the country's economic evolution, people invest their youth and money in different cultivation and sustainable production business sectors. Crops and fruits receive most of the attention for this purpose, but commercial flower cultivation is now becoming a highly beneficial investment, and the rose (genus Rosa) is one of the most beautiful and commercially in-demand flowers. However, insecticide resistance is considered one of the major issues facing agricultural production of roses, decreasing plant growth as well as the quality and quantity of healthy-looking flowers. In addition, various natural and environmental issues are eroding the quality and production level of roses. Furthermore, cultivators in this sector are often not trained to identify the initial signs of different leaf diseases with the naked eye, and the lack of means to consult an agriculturist in time can worsen losses beyond all production estimates. With this concern, early detection of diseases that affect different parts of the rose, such as its leaves, is crucial. Recently, image processing techniques and machine learning classifiers have been widely applied to recognize such diseases. This article presents an extensive dataset of rose leaf images, both disease-affected and disease-free, classified into three classes (Blackspot, Downy Mildew, and Fresh Leaf). The dataset is composed of images captured during the seasonal period of disease occurrence, in consultation with a domain expert, and is accessible at https://data.mendeley.com/datasets/7z67nyc57w/2.
ABSTRACT
Recent technological advancements in data acquisition tools have allowed life scientists to acquire multimodal data from different biological application domains. Categorized into three broad types (i.e. images, signals, and sequences), these data are huge in amount and complex in nature. Mining such an enormous amount of data for pattern recognition is a big challenge and requires sophisticated data-intensive machine learning techniques. Artificial neural network-based learning systems are well known for their pattern recognition capabilities, and lately their deep architectures, known as deep learning (DL), have been successfully applied to solve many complex pattern recognition problems. To investigate how DL, especially its different architectures, has contributed to and been utilized in the mining of biological data pertaining to those three types, a meta-analysis has been performed and the resulting resources have been critically analysed. Focusing on the use of DL to analyse patterns in data from diverse biological domains, this work investigates different DL architectures' applications to these data. This is followed by an exploration of available open-access data sources pertaining to the three data types, along with popular open-source DL tools applicable to these data. Comparative investigations of these tools from qualitative, quantitative, and benchmarking perspectives are also provided. Finally, some open research challenges in using DL to mine biological data are outlined and a number of possible future perspectives are put forward.
ABSTRACT
Coronavirus disease (COVID-19) had infected more than 28.3 million people around the globe and killed 913K people worldwide as of 11 September 2020. To combat the spread of COVID-19 in this pandemic, effective testing methodologies and immediate medical treatments are much needed. Chest X-rays are a widely available modality for immediate diagnosis of COVID-19. Hence, automated detection of COVID-19 from chest X-ray images using machine learning approaches is in great demand. A model for detecting COVID-19 from chest X-ray images is proposed in this paper. A novel concept of cluster-based one-shot learning is introduced in this work. The introduced concept has the advantage of learning from a few samples, as opposed to learning from many samples as in deep learning architectures. The proposed model is a multi-class classification model, as it classifies images of four classes: bacterial pneumonia, viral pneumonia, normal, and COVID-19. The proposed model is based on an ensemble of Generalized Regression Neural Network (GRNN) and Probabilistic Neural Network (PNN) classifiers at the decision level. The effectiveness of the proposed model has been demonstrated through extensive experimentation on a publicly available dataset consisting of 306 images. The proposed cluster-based one-shot learning has been found to be more effective with the GRNN and PNN ensembled model in distinguishing COVID-19 images from those of the other three classes. It has also been experimentally observed that the model has superior performance over contemporary deep learning architectures. The concept of cluster-based one-shot learning is the first of its kind in the literature and is expected to open up several new dimensions in the field of machine learning that require further research for various applications.
ABSTRACT
BACKGROUND: Over the last decade, mobile health applications (mHealth apps) have evolved exponentially to assess and support our health and well-being. OBJECTIVE: This paper presents an Artificial Intelligence (AI)-enabled mHealth app rating tool, called ACCU3RATE, which takes multidimensional measures such as user star rating, user reviews and features declared by the developer to generate an app's rating. Currently, however, there is very little conceptual understanding of how user reviews affect app rating from a multi-dimensional perspective. This study applies AI-based text mining techniques to develop a more comprehensive understanding of user feedback based on several important factors determining mHealth app ratings. METHOD: Based on the literature, six variables were identified that influence the mHealth app rating scale. These factors are user star rating, user text review, user interface (UI) design, functionality, security and privacy, and clinical approval. The Natural Language Toolkit package is used for interpreting text and identifying app users' sentiment. Additional considerations were accessibility, protection and privacy, and UI design for people living with physical disability. Moreover, the details of clinical approval, where they exist, were taken from the developer's statement. Finally, we fused all the inputs using fuzzy logic to calculate the new app rating score. RESULTS AND CONCLUSIONS: ACCU3RATE concentrates on heart-related apps found in the Play Store and App Gallery. The findings indicate the efficacy of the proposed method as opposed to the current device scale. This study has implications for both app developers and consumers who use mHealth apps to monitor and track their health. The performance evaluation shows that the proposed mHealth scale has excellent reliability and internal consistency, and a high inter-rater reliability index.
It has also been noticed that the fuzzy based rating scale, as in ACCU3RATE, matches more closely to the rating performed by experts.
Subject(s)
Artificial Intelligence, Mobile Applications, Telemedicine, Humans
ABSTRACT
A novel strain of coronavirus, identified as Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), broke out in December 2019, causing the novel Coronavirus Disease (COVID-19). Since its emergence, the virus has spread rapidly and has been declared a global pandemic. As of the end of January 2021, there were almost 100 million cases worldwide with over 2 million confirmed deaths. Widespread testing is essential to reduce further spread of the disease, but due to a shortage of testing kits and limited supply, alternative testing methods are being evaluated. Recently, researchers have found that chest X-ray (CXR) images provide salient information about COVID-19. An intelligent system can help radiologists detect COVID-19 from these CXR images, which can come in handy at remote locations in many developing nations. In this work, we propose a pipeline that uses CXR images to detect COVID-19 infection. Features were extracted from the CXR images and the relevant ones were then selected using a Hybrid Social Group Optimization algorithm. The selected features were then used to classify the CXR images using a number of classifiers. The proposed pipeline achieves a classification accuracy of 99.65% using a support vector classifier, which outperforms other state-of-the-art deep learning algorithms for binary and multi-class classification.
ABSTRACT
Neuronal signals generally represent activation of the neuronal networks and give insights into brain functionalities. They are considered as fingerprints of actions and their processing across different structures of the brain. These recordings generate a large volume of data that are susceptible to noise and artifacts. Therefore, the review of these data to ensure high quality by automatically detecting and removing the artifacts is imperative. Toward this aim, this work proposes a custom-developed automatic artifact removal toolbox named, SANTIA (SigMate Advanced: a Novel Tool for Identification of Artifacts in Neuronal Signals). Developed in Matlab, SANTIA is an open-source toolbox that applies neural network-based machine learning techniques to label and train models to detect artifacts from the invasive neuronal signals known as local field potentials.
ABSTRACT
The use of deoxyribonucleic acid (DNA) hybridization to detect disease-related gene expression is a valuable diagnostic tool. An ion-sensitive field-effect transistor (ISFET) with a graphene layer has been utilized for detecting DNA hybridization. Silicene is a two-dimensional silicon allotrope with structural properties similar to graphene; it has therefore recently attracted intensive scientific research interest due to its unique electrical, mechanical, and sensing characteristics. In this paper, we propose an ISFET structure with silicene and electrolyte layers for the label-free detection of DNA hybridization. When DNA hybridization occurs, it changes the ion concentration in the surface layer of the silicene and the pH level of the electrolyte solution. The process also changes the quantum capacitance of the silicene layer and the electrical properties of the ISFET device. The quantum capacitance and the corresponding resonant frequency readouts of silicene and graphene are compared. The performance evaluation found that the changes in quantum capacitance, resonant frequency, and tuning ratio indicate that silicene is considerably more sensitive than graphene.
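The resonant-frequency readout mentioned above follows the standard LC relation f = 1 / (2π√(LC)): a shift in the sensing layer's quantum capacitance C shifts the resonant frequency, which is what the silicene-versus-graphene comparison measures. A minimal sketch (the inductance and capacitance values below are hypothetical, not taken from the paper):

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC readout circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A hybridization-induced drop in quantum capacitance raises the readout frequency
f_before = resonant_frequency(1e-3, 1.0e-9)  # L = 1 mH, C = 1 nF -> ~159.2 kHz
f_after = resonant_frequency(1e-3, 0.8e-9)   # capacitance reduced by 20%
print(f_after > f_before)  # True
```

Because f scales as 1/√C, even a modest capacitance change produces a measurable frequency shift, which is why the abstract reports sensitivity in terms of resonant frequency and tuning ratio.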
Subject(s)
DNA Probes, Biosensing Techniques, Computer Simulation, DNA/chemistry, Electric Capacitance, Graphite/chemistry, Silicon/chemistry, Electronic Transistors
ABSTRACT
The coronavirus disease (COVID-19), caused by a novel coronavirus, SARS-CoV-2, has been declared a global pandemic. Due to its infection rate and severity, it has emerged as one of the major global threats of the current generation. To support the current combat against the disease, this research aims to propose a machine learning-based pipeline to detect COVID-19 infection using lung computed tomography scan images (CTI). The implemented pipeline consists of a number of sub-procedures, ranging from segmenting the COVID-19 infection to classifying the segmented regions. The initial part of the pipeline implements the segmentation of the COVID-19-affected CTI using social group optimization-based Kapur's entropy thresholding, followed by k-means clustering and morphology-based segmentation. The next part of the pipeline implements feature extraction, selection, and fusion to classify the infection. A principal component analysis-based serial fusion technique is used to fuse the features, and the fused feature vector is then employed to train, test, and validate four different classifiers, namely Random Forest, K-Nearest Neighbors (KNN), Support Vector Machine with Radial Basis Function, and Decision Tree. Experimental results using benchmark datasets show high accuracy (> 91%) for the morphology-based segmentation task; for the classification task, KNN offers the highest accuracy among the compared classifiers (> 87%). However, it should be noted that this method still awaits clinical validation and therefore should not be used to clinically diagnose ongoing COVID-19 infection.
ABSTRACT
Here we provide evidence, through an exploratory pilot study, that a Gamma 40 Hz entrainment frequency can improve mood, memory and cognition in a 9-participant cohort. Participants were assigned to three binaural entrainment frequency groups: 40 Hz, 25 Hz and 100 Hz. Participants attended a total of eight entrainment sessions, twice a week over a 4-week period. Additionally, participants were assessed on their cognitive abilities, mood and memory, where the cognitive and memory assessments occurred before and after a 5-min binaural beat stimulation. The mood assessment scores were collected from sessions 1, 4 and 8, respectively. For the Gamma 40 Hz entrainment group, we observed a mean improvement in cognitive scores, rising from 75% to 85% on average upon conclusion of the experiment, at weak statistical significance (α = 0.10, p = 0.076). Similarly, memory score improvements at greater significance (α = 0.05, p = 0.0027) were noted, rising from an average of 87% to 95%. Regarding the mood scores, a negative correlation across all groups was noted, inferring an overall increase in mood, as lower scores correlate with elevated mood. Finally, correlation analysis revealed a stronger R² value (0.9838) within the 40 Hz group between session and mood score when compared across the entire frequency-group cohort.