Results 1 - 17 of 17
1.
Sci Rep ; 14(1): 6420, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38494519

ABSTRACT

In the ongoing battle against adversarial attacks, adopting a suitable strategy to enhance model efficiency, bolster resistance to adversarial threats, and ensure practical deployment is crucial. To achieve this goal, a novel four-component methodology is introduced. First, a pioneering batch-cumulative approach is introduced: the exponential particle swarm optimization (ExPSO) algorithm, developed for meticulous parameter fine-tuning within each batch. A cumulative updating loss function is employed for overall optimization, demonstrating remarkable superiority over traditional optimization techniques. Second, weight compression is applied to streamline the deep neural network (DNN) parameters, boosting storage efficiency and accelerating inference. It also introduces complexity that deters potential attackers and enhances model accuracy in adversarial settings. This study compresses the generative pre-trained transformer (GPT) by 65%, saving time and memory without causing performance loss. Compared to state-of-the-art methods, the proposed method achieves the lowest perplexity (14.28), the highest accuracy (93.72%), and an 8× speedup on the central processing unit. The integration of the preceding two components involves the simultaneous training of multiple versions of the compressed GPT. This training occurs across various compression rates and different segments of a dataset and is ultimately associated with a novel multi-expert architecture. This enhancement significantly fortifies the model's resistance to adversarial attacks by complicating attackers' attempts to anticipate the model's prediction-integration process. Consequently, it leads to a remarkable average performance improvement of 25% across 14 different attack scenarios and various datasets, surpassing current state-of-the-art methods.
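
As an illustration of the batch-cumulative optimization idea, the sketch below shows a PSO loop in which each particle's fitness is the running average of losses accumulated over batches and the inertia weight decays exponentially. The exact ExPSO update rule is not given in the abstract, so the decay schedule, hyperparameters, and bookkeeping here are assumptions, not the authors' formulation.

```python
import numpy as np

def cumulative_batch_pso(loss_fn, batches, n_particles=20, dim=10,
                         c1=1.5, c2=1.5, w0=0.9, decay=0.05, seed=0):
    """Hypothetical batch-wise PSO with a cumulative (running-average) loss."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_particles, dim))            # candidate parameter vectors
    v = np.zeros_like(x)                               # particle velocities
    cum_loss = np.zeros(n_particles)                   # loss accumulated over batches
    pbest, pbest_fit = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_fit = x[0].copy(), np.inf

    for t, batch in enumerate(batches):
        cum_loss += np.array([loss_fn(p, batch) for p in x])
        fit = cum_loss / (t + 1)                        # running-average cumulative loss
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        if pbest_fit.min() < gbest_fit:
            gbest_fit, gbest = pbest_fit.min(), pbest[pbest_fit.argmin()].copy()
        w = w0 * np.exp(-decay * t)                     # exponential inertia decay (assumed)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
    return gbest, gbest_fit
```
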

2.
Sci Rep ; 14(1): 4217, 2024 02 20.
Article in English | MEDLINE | ID: mdl-38378760

ABSTRACT

Brain disorders pose a substantial global health challenge, persisting as a leading cause of mortality worldwide. Electroencephalogram (EEG) analysis is crucial for diagnosing brain disorders, but it can be challenging for medical practitioners to interpret complex EEG signals and make accurate diagnoses. To address this, our study focuses on visualizing complex EEG signals in a format easily understandable by medical professionals and deep learning algorithms. We propose a novel time-frequency (TF) transform called the Forward-Backward Fourier transform (FBFT) and utilize convolutional neural networks (CNNs) to extract meaningful features from TF images and classify brain disorders. We also introduce the concept of naked-eye classification, which integrates domain-specific knowledge and clinical expertise into the classification process. Our study demonstrates the effectiveness of the FBFT method, achieving impressive accuracies across multiple brain disorders. Specifically, CNN-based classification achieves accuracies of 99.82% for epilepsy, 95.91% for Alzheimer's disease (AD), 85.1% for murmur, and 100% for mental stress. Furthermore, in the context of naked-eye classification, we achieve accuracies of 78.6%, 71.9%, 82.7%, and 91.0% for epilepsy, AD, murmur, and mental stress, respectively. Additionally, we incorporate a mean correlation coefficient (mCC)-based channel selection method to further enhance classification accuracy. By combining these innovative approaches, our study enhances the visualization of EEG signals, providing medical professionals with a deeper understanding of TF medical images. This research has the potential to bridge the gap between image classification and visual medical interpretation, leading to better disease detection and improved patient care in the field of neuroscience.


Subject(s)
Alzheimer Disease, Epilepsy, Humans, Neural Networks, Computer, Algorithms, Brain, Epilepsy/diagnosis, Electroencephalography/methods
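
A minimal sketch of the time-frequency-image-to-CNN pipeline described above. The FBFT itself is not specified in the abstract, so a standard short-time Fourier transform stands in for it, and the small Keras CNN is a generic placeholder rather than the authors' architecture.

```python
import numpy as np
from scipy import signal
import tensorflow as tf

def eeg_to_tf_image(eeg, fs=256, size=(64, 64)):
    # Standard STFT spectrogram as a stand-in for the FBFT time-frequency image.
    f, t, Zxx = signal.stft(eeg, fs=fs, nperseg=128)
    img = np.abs(Zxx)
    img = tf.image.resize(img[..., None], size).numpy()      # (64, 64, 1)
    return img / (img.max() + 1e-9)

def build_cnn(n_classes, input_shape=(64, 64, 1)):
    # Generic small CNN classifier over TF images (placeholder architecture).
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
```
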
3.
Sci Rep ; 14(1): 482, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38177624

ABSTRACT

Regular monitoring of glycated hemoglobin (HbA1c) levels is important for the proper management of diabetes. Studies have demonstrated that lower HbA1c levels play an essential role in reducing or delaying the microvascular complications that arise from diabetes. In addition, there is an association between elevated HbA1c levels and the development of diabetes-related comorbidities. Advance prediction of HbA1c enables patients and physicians to adjust treatment plans and lifestyle to avoid elevated HbA1c levels, which can otherwise lead to irreversible health complications. Despite the impact of such prediction capabilities, no work in the literature or industry has investigated the long-term prediction of HbA1c from current blood glucose (BG) measurements. For the first time in the literature, this work proposes a novel few-shot learning (FSL)-derived algorithm for the long-term prediction of clinical HbA1c measures. More importantly, the study specifically targets the pediatric Type-1 diabetic population, as early prediction of elevated HbA1c levels could help avert severe, life-threatening complications in these young children. Short-term continuous glucose monitoring (CGM) time-series data are processed using both novel image transformation approaches and conventional signal processing methods. The derived images are then fed into a convolutional neural network (CNN) adapted from an FSL model for feature extraction, and all the derived features are fused together. A novel normalized FSL-distance (FSLD) metric is proposed for accurately separating the features of different HbA1c levels. Finally, a K-nearest neighbor (KNN) model with majority voting is implemented for the final classification task. The proposed FSL-derived algorithm provides a prediction accuracy of 93.2%.


Subject(s)
Diabetes Mellitus, Type 1, Child, Humans, Child, Preschool, Glycated Hemoglobin, Blood Glucose, Blood Glucose Self-Monitoring/methods, Time Factors
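
A minimal sketch of the final classification stage described above: fused features, a normalization step standing in for the (unspecified) normalized FSL-distance, and a K-nearest-neighbor classifier with majority voting. Feature extraction from the CGM-derived images is assumed to have happened upstream.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def classify_hba1c(train_feats, train_labels, test_feats, k=5):
    # Standardization is a crude stand-in for the paper's normalized FSLD metric.
    scaler = StandardScaler().fit(train_feats)
    # KNeighborsClassifier predicts by majority vote over the k nearest neighbors.
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(scaler.transform(train_feats), train_labels)
    return knn.predict(scaler.transform(test_feats))
```
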
4.
Sci Rep ; 13(1): 22181, 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38092811

ABSTRACT

Urban activities, particularly vehicle traffic, contribute significantly to environmental pollution, with detrimental effects on public health. The ability to anticipate air quality in advance is critical for public authorities and the general public to plan and manage these activities, which ultimately helps minimize the adverse impact on the environment and public health. Thanks to recent advancements in artificial intelligence and sensor technology, forecasting air quality is possible through the consideration of various environmental factors. This paper presents our novel solution for air quality prediction and its correlation with different environmental factors and urban activities, such as traffic density. To this end, we propose a multi-modal framework that integrates real-time data from different environmental sensors with traffic density extracted from closed-circuit television footage. The framework effectively addresses data inconsistencies arising from sensor and camera malfunctions within a streaming dataset. The dataset exhibits real-world complexities, including abrupt camera or station activations/deactivations, noise interference, and outliers. The proposed system tackles the challenge of predicting air quality at locations that have no sensors or experience sensor failures by training a joint model on data obtained from nearby stations/sensors using a Particle Swarm Optimization (PSO)-based merit fusion of the sensor data. The proposed methodology is evaluated using several variants of the LSTM model, including bidirectional LSTM, CNN-LSTM, and convolutional LSTM (ConvLSTM), obtaining improvements of 48%, 67%, and 173% for short-term, medium-term, and long-term periods, respectively, over the ARIMA model.
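
A minimal sketch of one of the forecasting variants named above (a bidirectional LSTM over windows of multivariate sensor readings). The window length, layer sizes, and loss are assumptions; the PSO-based merit fusion of nearby stations is not reproduced here.

```python
import tensorflow as tf

def build_bilstm(n_steps=24, n_features=6):
    # Bidirectional LSTM over a window of n_steps readings from n_features sensors.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_steps, n_features)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),          # next-step air-quality value
    ])

model = build_bilstm()
model.compile(optimizer="adam", loss="mae")
```
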

5.
Heliyon ; 9(11): e21621, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37954292

ABSTRACT

Among the many types of wearable sensors, MOF-based wearable sensors have recently been explored in both commercialization and research. Much effort has gone into various aspects of the development of MOF-based wearable sensors, including but not limited to miniaturization, size control, safety, improvements in conformal and flexible features, improvements in analytical performance, and the long-term storage of these devices. Recent progress in the design and deployment of MOF-based wearable sensors is covered in this paper, as are the remaining obstacles and prospects. This work also highlights the enormous potential for synergistic effects of MOFs used in combination with other nanomaterials for healthcare applications and draws attention to the economic aspects and market diffusion of MOF-based wearable sensors.

6.
Sci Rep ; 13(1): 13303, 2023 08 16.
Article in English | MEDLINE | ID: mdl-37587137

ABSTRACT

In machine learning, an informative dataset is crucial for accurate predictions. However, high-dimensional data often contain irrelevant features, outliers, and noise, which can negatively impact model performance and consume computational resources. To tackle this challenge, the Bird's Eye View (BEV) feature selection technique is introduced. This approach is inspired by the natural world, where a bird searches for important features in a sparse dataset, similar to how a bird searches for sustenance in a sprawling jungle. BEV incorporates elements of evolutionary algorithms, using a Genetic Algorithm to maintain a population of top-performing agents, a Dynamic Markov Chain to steer the movement of agents in the search space, and Reinforcement Learning to reward and penalize agents based on their progress. The proposed strategy leads to improved classification performance and a reduced number of features compared to conventional methods, as demonstrated by outperforming state-of-the-art feature selection techniques across multiple benchmark datasets.


Subject(s)
Algorithms, Benchmarking, Biological Evolution, Machine Learning, Markov Chains
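
A sketch of only the genetic-algorithm component of BEV: binary masks over features evolved by selection, crossover, and mutation, with cross-validated accuracy as fitness. The dynamic Markov chain and reinforcement-learning reward/penalty mechanisms described in the abstract are omitted, and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, pop=20, gens=15, mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    masks = rng.random((pop, X.shape[1])) < 0.5           # population of feature masks

    def fitness(m):
        if not m.any():
            return 0.0
        return cross_val_score(LogisticRegression(max_iter=500), X[:, m], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]    # keep the top half
        cut = X.shape[1] // 2
        kids = np.concatenate([parents[:, :cut], parents[::-1, cut:]], axis=1)  # crossover
        kids ^= rng.random(kids.shape) < mut               # bit-flip mutation
        masks = np.concatenate([parents, kids])
    best = masks[np.argmax([fitness(m) for m in masks])]
    return np.flatnonzero(best)                            # indices of selected features
```
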
7.
Heliyon ; 9(5): e15745, 2023 May.
Article in English | MEDLINE | ID: mdl-37159716

ABSTRACT

Objective: The study aims to identify typical interplay between the use of social media apps on smartphones and Problematic Internet Usage (PIU). Method: Our study utilizes data from a smartphone app that objectively monitors user usage, including the apps used and the start and finish times of each app session. This study included 334 participants who declared a need to be aware of their smartphone usage and control it. PIU was measured using the Problematic Internet Use Questionnaire-Short Form-6 (PIUQ-SF6). The total PIU score can range from 6 to 30, with a score above 15 indicating that a person is at risk of PIU. Time spent on the social media (SM) apps Facebook, WhatsApp, and Instagram, and whether people used each of these apps, were studied along with the total PIU score. K-prototypes clustering was utilized for the analysis. Results: Four distinct clusters, typifying the relationship between social media use and PIU, were identified. All the individuals in Cluster 1 (Light SM Use Cluster; cluster size = 270, 80.84% of the total dataset) spent between 0 and 109.01 min on Instagram, between 0 and 69.84 min on Facebook, and between 0 and 86.42 min on WhatsApp, and the cluster's median PIU score was 17. Those in Cluster 2 (Highly Visual SM Cluster; cluster size = 23, 6.89% of the total dataset) all used Instagram, and each member spent between 110 and 307.63 min on Instagram daily. The cluster's median PIU score and average daily usage of Instagram were 20 and 159.66 min, respectively. Those in Cluster 3 (Conversational SM Cluster; cluster size = 19, 5.69% of the total dataset) all used WhatsApp and spent between 76.68 and 225.22 min on WhatsApp daily. The cluster's median PIU score and average time spent per day on WhatsApp were 20 and 132.65 min, respectively. Those in Cluster 4 (Social Networking Cluster; cluster size = 22, 6.59% of the total dataset) all used Facebook, and each spent between 73.09 and 272.85 min daily on Facebook. The cluster's median PIU score and average time spent per day on Facebook were 18 and 133.61 min, respectively. Conclusion: The clusters indicate that those who use a particular social media app spend significantly less time on other social media apps. This suggests that problematic attachment to social media occurs primarily for one of three reasons: visual content and reels, conversations with peers, or surfing network content and news. This finding will help tailor interventions to fit each cluster, for example by strengthening interpersonal skills and resistance to peer pressure in the case of Cluster 3 and increasing impulse control in the case of Cluster 2.
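
A minimal sketch of the k-prototypes step on mixed numeric (minutes per app, PIU score) and categorical (app used or not) columns, assuming the third-party kmodes package is available. The synthetic values below are placeholders for shape only, not the study's data.

```python
import numpy as np
from kmodes.kprototypes import KPrototypes   # third-party 'kmodes' package (assumed)

rng = np.random.default_rng(0)
minutes = rng.uniform(0, 300, size=(50, 3))       # daily minutes on Instagram/Facebook/WhatsApp
piu = rng.integers(6, 31, size=(50, 1))           # PIUQ-SF6 total score (range 6-30)
uses = (minutes > 60).astype(int)                 # whether each app is used (categorical 0/1)
X = np.hstack([minutes, piu, uses])

kp = KPrototypes(n_clusters=4)                    # four clusters, as in the abstract
labels = kp.fit_predict(X, categorical=[4, 5, 6]) # last three columns treated as categorical
print(labels)
```
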

8.
Digit Health ; 9: 20552076231152175, 2023.
Article in English | MEDLINE | ID: mdl-36714545

ABSTRACT

Objective: This study aims to explore the user archetypes of health apps based on average usage and psychometrics. Methods: The study utilized a dataset collected through a dedicated smartphone application containing usage data, i.e., the timestamps of each app session from October 2020 to April 2021. The dataset had 129 participants for mental health app usage and 224 participants for physical health app usage. Average daily launches, extraversion, neuroticism, and satisfaction with life were the determinants of the mental health app clusters, whereas average daily launches, conscientiousness, neuroticism, and satisfaction with life were the determinants for physical health apps. Results: Two clusters of mental health app users were identified using k-prototypes clustering: help-seeking and maintenance users; three clusters of physical health app users were identified: happy conscious occasional, happy neurotic occasional, and unhappy neurotic frequent users. Conclusion: The findings from this study helped to understand the users of health apps based on frequency of usage, personality, and satisfaction with life. Further, with these findings, apps can be tailored to optimize user experience and satisfaction, which may help to increase user retention. Policymakers may also benefit from these findings, since understanding the population's needs may help to better invest in effective health technology.

9.
Environ Sci Pollut Res Int ; 29(55): 82709-82728, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36223015

ABSTRACT

Coronavirus disease 2019 (COVID-19) has slowed global economic growth and affected economic life worldwide. Numerous elements in the environment impact the transmission of this new coronavirus. Every country in the Middle East and North Africa (MENA) area has a different population density, air quality and contaminants, and water- and land-related conditions, all of which influence coronavirus transmission. The World Health Organization (WHO) has advocated rapid evaluations to guide policymakers with timely evidence to respond to the situation. This review makes four unique contributions. First, it reviews extensive data about the transmission of the new coronavirus in various types of settings to provide clear answers to the current dispute over the virus's transmission. Second, it highlights the most significant applications of machine learning to forecast and diagnose severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Third, our insights provide timely and accurate information along with compelling suggestions and methodical directions for investigators. Fourth, the present study provides decision-makers and community leaders with information on the effectiveness of environmental controls for COVID-19 dissemination.


Asunto(s)
COVID-19 , Humanos , COVID-19/epidemiología , SARS-CoV-2 , Aprendizaje Automático , Organización Mundial de la Salud , África del Norte/epidemiología
10.
Big Data ; 10(1): 65-80, 2022 02.
Article in English | MEDLINE | ID: mdl-34227852

ABSTRACT

In image registration, the search space used to compute the optimal transformation between images depends on the group of pixels in the vicinity. Favorable results can be achieved by significantly increasing the number of neighboring pixels in the search space; however, this strategy increases the computational load, making it challenging to reach the most desirable solution in a reasonable amount of time. To address this problem, a genetic algorithm is used to find the optimum solution, which lies in finding the best chromosomes. In the rigid image registration problem, each chromosome contains a set of three parameters: x-translation, y-translation, and rotation. The genetic algorithm iteratively improves chromosomes from generation to generation and selects the one with the highest fitness value. Chromosomes with high fitness values correspond to solutions where the template image best aligns with the reference image. The fitness function in the genetic algorithm for image registration uses a similarity index to measure the amount of similarity between two images; the best fitness value is the one with the highest similarity, indicating the best-aligned template and reference images. Here we use the structural similarity index measure in the fitness function, which helps in evaluating the best chromosome even for compressed images with low quality, intensity nonuniformity (INU), and noise degradation. Building on the genetic algorithm, we propose a novel approach called the multistage forward path regenerative genetic algorithm (MFRGA), which reduces the search space at each stage. Compared with a single stage of the genetic algorithm, our approach proved more reliable and accurate in finding the true rigid transformation for alignment. At each successive stage of MFRGA, results are computed with a smaller search space and a higher precision level. Moreover, to prove the robustness of our algorithm, we utilized compressed brain magnetic resonance images with compression qualities ranging from 10 to 100. Furthermore, we added noise levels of 1%, 3%, 5%, 7%, and 9% with INU of 20% and 40%, respectively, provided by the online BrainWeb simulator. MFRGA achieves successful monomodal rigid image registration even when the noise is critical, the compression quality is lowest, and the intensity is nonuniform.


Subject(s)
Algorithms, Brain, Brain/diagnostic imaging, Magnetic Phenomena
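
A sketch of the SSIM-based fitness evaluation and the shrinking-search-space idea behind MFRGA, using a simple random search in place of the full genetic operators. Parameter ranges and the number of stages are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from skimage.metrics import structural_similarity

def fitness(params, template, reference):
    # Apply the rigid transform (rotation then translation) and score with SSIM.
    tx, ty, angle = params
    moved = shift(rotate(template, angle, reshape=False, order=1), (ty, tx), order=1)
    return structural_similarity(moved, reference,
                                 data_range=reference.max() - reference.min())

def multistage_search(template, reference, stages=3, samples=200, seed=0):
    rng = np.random.default_rng(seed)
    center = np.zeros(3)                               # (tx, ty, angle) estimate
    spans = np.array([20.0, 20.0, 30.0])               # initial search ranges (assumed)
    for _ in range(stages):
        cands = center + rng.uniform(-1, 1, (samples, 3)) * spans
        scores = [fitness(c, template, reference) for c in cands]
        center = cands[int(np.argmax(scores))]
        spans /= 2                                     # reduce the search space each stage
    return center
```
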
11.
Front Public Health ; 10: 970694, 2022.
Article in English | MEDLINE | ID: mdl-36726636

ABSTRACT

Qatar is a peninsular country with predominantly hot and humid weather, and 88% of its total population are immigrants. This leaves the country susceptible to the introduction and dissemination of vector-borne diseases, in part due to the presence of native arthropod vectors. Qatar's weather is expected to become warmer with the changing climatic conditions across the globe. Environmental factors such as humidity and temperature contribute to the breeding and distribution of different mosquito species in a given region. If proper and timely precautions are not taken, a high abundance of particular mosquito species can result in the transmission of various vector-borne diseases. In this study, we analyzed the environmental impact on the probability of occurrence of different mosquito species collected from several sites in Qatar. A Naive Bayes model was used to calculate the posterior probability for the various mosquito species, and the resulting predictions were used to define the favorable environmental circumstances for the identified species. The findings of this study will help in the planning and implementation of an active surveillance system and preventive measures to curb the spread of mosquitoes in Qatar.


Subject(s)
Culicidae, Vector Borne Diseases, Animals, Mosquito Vectors, Bayes Theorem, Qatar, Weather
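
A minimal sketch of the posterior-probability computation described above, using a Gaussian Naive Bayes over two environmental variables. The features, species labels, and values below are synthetic placeholders, not the study's data or model choices.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic placeholder observations: temperature (°C) and relative humidity (%).
X = np.array([[34, 60], [36, 55], [28, 80], [30, 75], [33, 65], [29, 78]])
y = np.array(["species_A", "species_A", "species_B", "species_B", "species_A", "species_B"])

nb = GaussianNB().fit(X, y)
print(nb.classes_)
print(nb.predict_proba([[31, 70]]))   # posterior probability of each species
```
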
12.
Biology (Basel) ; 10(6)2021 May 24.
Article in English | MEDLINE | ID: mdl-34073810

ABSTRACT

Epidemiological modeling supports the evaluation of various disease management activities. The value of epidemiological models lies in their ability to study various scenarios and to provide governments with a priori knowledge of the consequences of disease incursions and the impact of preventive strategies. A prevalent method of modeling the spread of pandemics is to categorize individuals in the population as belonging to one of several distinct compartments that represent their health status with regard to the pandemic. In this work, a modified SIR epidemic model is proposed and analyzed with respect to the identification of its parameters and initial values from stated or recorded case data from public health sources, in order to estimate the unreported cases and the effectiveness of public health policies such as social distancing in slowing the spread of the epidemic. The analysis aims to highlight the importance of unreported cases for correcting the underestimated basic reproduction number. In many epidemic outbreaks, the number of reported infections is likely much lower than the actual number of infections, which can be calculated from the model's parameters derived from reported case data. The analysis is applied to the COVID-19 pandemic for several countries in the Gulf region and Europe.
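
A minimal sketch of the classic SIR core that the modified model builds on, integrated with SciPy; the basic reproduction number follows as beta/gamma. The authors' extension for unreported cases and their parameter-identification procedure are not reproduced, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    # Standard SIR compartments: susceptible, infectious, removed.
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

N, beta, gamma = 1e6, 0.35, 0.1                      # illustrative values
sol = solve_ivp(sir, (0, 180), [N - 10, 10, 0], args=(beta, gamma, N),
                t_eval=np.linspace(0, 180, 181))
print("basic reproduction number R0 =", beta / gamma)
```
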

13.
PLoS One ; 15(10): e0238746, 2020.
Article in English | MEDLINE | ID: mdl-33002015

ABSTRACT

This paper investigates a new scheme for generating lifetime probability distributions, called the Exponential-H family of distributions. The paper presents an application of this family using the Weibull distribution; the resulting distribution is called the New Flexible Exponential distribution, or NFE for short. Various statistical properties are derived, such as the quantile function, order statistics, and moments. Two real-life data sets and a simulation study are used to assess the flexibility of the proposed model. The results indicate that the proposed distribution offers better results than the Exponential, Weibull Exponential, and Exponentiated Exponential distributions.


Subject(s)
Probability Theory, Statistical Distributions, Accidents, Traffic/statistics & numerical data, Aircraft, Computer Simulation, Equipment Failure Analysis/statistics & numerical data, Humans, Life Tables, Likelihood Functions, Models, Statistical, Proportional Hazards Models
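
The NFE distribution itself cannot be sketched from the abstract (its CDF is not given), but the kind of baseline comparison described, fitting candidate lifetime distributions by maximum likelihood and ranking them, can be illustrated with SciPy as below; the synthetic data and the use of AIC are assumptions.

```python
import numpy as np
from scipy import stats

data = stats.weibull_min.rvs(1.5, scale=2.0, size=200, random_state=0)  # synthetic lifetimes

for name, dist in [("Exponential", stats.expon), ("Weibull", stats.weibull_min)]:
    params = dist.fit(data, floc=0)                   # MLE with location fixed at 0
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik          # loc is fixed, so one fewer free parameter
    print(f"{name}: AIC = {aic:.1f}")
```
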
14.
PLoS One ; 15(9): e0239746, 2020.
Article in English | MEDLINE | ID: mdl-32986785

ABSTRACT

This research work aims to develop a deep learning-based crop classification framework for remotely sensed time-series data. Tobacco is a major revenue-generating crop of the Khyber Pakhtunkhwa (KP) province of Pakistan, which accounts for over 90% of the country's tobacco production. To analyze the performance of the developed classification framework, a pilot sub-region named Yar Hussain was selected for the experimentation work. Yar Hussain is a tehsil of district Swabi, within KP province of Pakistan, with the highest contribution to the gross production of the KP tobacco crop. KP generally consists of diverse cropland with different varieties of vegetation that have similar phenology, which makes crop classification a challenging task. In this study, a temporal convolutional neural network (TempCNN) model is implemented for crop classification, using remotely sensed imagery of the selected pilot region with a specific focus on the tobacco crop. To improve the performance of the proposed classification framework, instead of using a single satellite imagery source, both Sentinel-2 and PlanetScope imageries are stacked together to provide more diverse features to the classification framework. Furthermore, instead of using single-date satellite imagery, multiple satellite images covering the phenological cycle of the tobacco crop are temporally stacked, resulting in a higher temporal resolution of the employed imagery. The developed framework is trained using ground truth data, and the final output is obtained from the softmax layer of the model as probabilistic values for the selected classes. The proposed deep learning-based crop classification framework, utilizing multi-satellite temporally stacked imagery, achieves an overall classification accuracy of 98.15%. Furthermore, because the framework was developed with a specific focus on the tobacco crop, it achieves a best tobacco classification accuracy of 99%.


Subject(s)
Agriculture/methods, Deep Learning, Nicotiana/classification, Satellite Imagery/methods, Vegetables/classification, Data Accuracy, Humans, Pakistan, Triticum/classification
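
An illustrative temporal-CNN sketch in the spirit of the framework above: 1-D convolutions over a pixel's time series of stacked Sentinel-2 and PlanetScope bands with a softmax output over crop classes. The numbers of dates, bands, filters, and classes are assumptions, not the authors' configuration.

```python
import tensorflow as tf

def build_tempcnn(n_dates=12, n_bands=8, n_classes=5):
    # Each sample is one pixel's time series: n_dates acquisitions of n_bands stacked bands.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_dates, n_bands)),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),   # class probabilities
    ])

model = build_tempcnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```
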
15.
Stud Health Technol Inform ; 262: 232-235, 2019 Jul 04.
Article in English | MEDLINE | ID: mdl-31349310

ABSTRACT

The promoter regions of protein-coding genes are gradually becoming well understood, yet no comparable studies exist for the promoters of long non-coding RNA (lncRNA) genes, which have emerged as potential global regulators in multiple cellular processes and various human diseases. To understand the difference in the transcriptional regulation patterns of these genes, we previously proposed a machine learning-based model to classify the promoters of protein-coding genes and lncRNA genes. In this study, we present DeepCNPP (deep coding non-coding promoter predictor), an improved model based on a deep learning (DL) framework to classify the promoters of lncRNA genes and protein-coding genes. We used a convolutional neural network (CNN)-based deep network to classify the promoters of these two broad categories of human genes. Our computational model, built upon sequence information only, was able to classify these two groups of human promoters with 83.34% accuracy and outperformed the existing model. Further analysis and interpretation of the output from the DeepCNPP architecture will enable us to understand the differences in transcriptional regulatory patterns between these two groups of genes.


Subject(s)
Deep Learning, Promoter Regions, Genetic, RNA, Long Noncoding, Computational Biology, Humans, Machine Learning, Models, Theoretical
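
A sketch of a sequence-only promoter classifier in the spirit of DeepCNPP: one-hot-encoded DNA fed to a 1-D CNN with a sigmoid output separating lncRNA from protein-coding promoters. The actual DeepCNPP architecture and input length are not given in the abstract, so every size here is an assumption.

```python
import numpy as np
import tensorflow as tf

def one_hot_dna(seq):
    # Encode A/C/G/T as one-hot rows; unknown bases stay all-zero.
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in table:
            out[i, table[base]] = 1.0
    return out

def build_promoter_cnn(seq_len=1000):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, 4)),
        tf.keras.layers.Conv1D(64, 9, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # lncRNA vs protein-coding promoter
    ])
```
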
16.
Comput Biol Med ; 42(1): 123-8, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22115076

ABSTRACT

This paper presents a method for breast cancer diagnosis in digital mammogram images. Multi-resolution representations, wavelet or curvelet, are used to transform the mammogram images into a long vector of coefficients. A matrix is constructed by putting the wavelet or curvelet coefficients of each image in a row vector, where the number of rows is the number of images and the number of columns is the number of coefficients. A feature extraction method is developed based on the statistical t-test. The method ranks the features (columns) according to their capability to differentiate between the classes. Then, a dynamic threshold is applied to optimize the number of features that achieves the maximum classification accuracy rate. The method depends on extracting the features that maximize the ability to discriminate between different classes; thus, the dimensionality of the data features is reduced and the classification accuracy rate is improved. A support vector machine (SVM) is used to classify between normal and abnormal tissues and to distinguish between benign and malignant tumors. The proposed method is validated using 5-fold cross-validation. The obtained classification accuracy rates demonstrate that the proposed method could contribute to the successful detection of breast cancer.


Subject(s)
Breast Neoplasms/diagnostic imaging, Mammography/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Female, Humans, Reproducibility of Results, Support Vector Machine, Wavelet Analysis
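
A sketch of the t-test ranking and SVM stages described above, where each row of X would hold the wavelet or curvelet coefficients of one mammogram. The dynamic threshold on the number of kept features is simplified to a fixed top-k here, so the exact selection rule is an assumption.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ttest_rank(X, y):
    # Rank coefficient columns by how strongly they separate the two classes.
    t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
    return np.argsort(-np.abs(t))

def evaluate(X, y, k=200):
    top = ttest_rank(X, y)[:k]                        # keep the k most discriminative coefficients
    return cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=5).mean()
```
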
17.
Comput Biol Med ; 40(4): 384-91, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20163793

ABSTRACT

This paper presents a comparative study between the wavelet and curvelet transforms for breast cancer diagnosis in digital mammograms. Using multiresolution analysis, mammogram images are decomposed into different resolution levels, which are sensitive to different frequency bands. A set of the biggest coefficients from each decomposition level is extracted. Then a supervised classifier based on Euclidean distance is constructed. The performance of the classifier is evaluated using 2 × 5-fold cross-validation followed by a statistical analysis. The experimental results suggest that the curvelet transform outperforms the wavelet transform and that the difference is statistically significant.


Subject(s)
Breast Neoplasms/diagnosis, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Mammography, Algorithms, Female, Humans