Results 1 - 8 of 8
1.
BMC Med Inform Decis Mak ; 24(1): 179, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38915001

ABSTRACT

With the outbreak of COVID-19 in 2020, countries worldwide faced significant concerns and challenges. Various studies have emerged utilizing Artificial Intelligence (AI) and Data Science techniques for disease detection. Although COVID-19 cases have declined, cases and deaths still occur around the world. Early detection of COVID-19 before the onset of symptoms has therefore become crucial in reducing its extensive impact. Fortunately, wearable devices such as smartwatches have proven to be valuable sources of physiological data, including Heart Rate (HR) and sleep quality, enabling the detection of inflammatory diseases. In this study, we utilize an existing dataset that includes individual step counts and heart rate data to predict the probability of COVID-19 infection before the onset of symptoms. We train three main model architectures: the Gradient Boosting classifier (GB), CatBoost trees, and the TabNet classifier to analyze the physiological data and compare their respective performances. We also add an interpretability layer to our best-performing model, which clarifies prediction results and allows a detailed assessment of effectiveness. Moreover, we created a private dataset by gathering physiological data from Fitbit devices to guarantee reliability and avoid bias. The same pre-trained models were then applied to this private dataset, and the results were documented. Using the CatBoost tree-based method, our best-performing model outperformed previous studies with an accuracy rate of 85% on the publicly available dataset. Furthermore, the same pre-trained CatBoost model produced an accuracy of 81% when applied to the private dataset. The source code is available at: https://github.com/OpenUAE-LAB/Covid-19-detection-using-Wearable-data.git .
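A minimal sketch of the kind of pipeline the abstract describes, using scikit-learn's gradient-boosting classifier (the GB baseline named above) on fully synthetic heart-rate and step-count features; the data, labeling rule, and variable names here are our own illustrative assumptions, not the authors' dataset or code.

```python
# Illustrative sketch: gradient boosting on synthetic wearable features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical daily aggregates: resting heart rate (bpm) and step count.
resting_hr = rng.normal(65, 5, n)
steps = rng.normal(8000, 2000, n)
# Toy label: elevated resting HR plus reduced activity flags a risky day.
label = ((resting_hr > 68) & (steps < 7500)).astype(int)

X = np.column_stack([resting_hr, steps])
X_train, X_test, y_train, y_test = train_test_split(
    X, label, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)  # held-out accuracy
```

The same fit/score pattern applies when swapping in CatBoost or TabNet, which is how the abstract's three architectures can be compared on identical splits.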


Subjects
Artificial Intelligence , COVID-19 , Early Diagnosis , Humans , COVID-19/diagnosis , Heart Rate/physiology , Wearable Electronic Devices
2.
Sci Rep ; 13(1): 18885, 2023 Nov 02.
Article in English | MEDLINE | ID: mdl-37919406

ABSTRACT

Software defect prediction (SDP) plays a significant role in detecting the most likely defective software modules and optimizing the allocation of testing resources. In practice, though, project managers must not only identify defective modules but also rank them in a specific order to optimize resource allocation and minimize testing costs, especially for projects with limited budgets. This vital task can be accomplished using the Learning to Rank (LTR) algorithm, a machine learning methodology that pursues two important tasks: prediction and learning. Although this algorithm is commonly used in information retrieval, it is also highly effective for other problems, like SDP. The LTR approach is mainly used in defect prediction to predict and rank the most likely buggy modules based on their bug count or bug density. This research paper conducts a comprehensive comparison study of the behavior of eight selected LTR models using two target variables: bug count and bug density. It also studies the effect of using imbalance learning and feature selection on the employed LTR models. The models are empirically evaluated using the Fault Percentile Average. Our results show that using bug count as the ranking criterion produces higher scores and more stable results across multiple experiment settings. Moreover, imbalance learning has a positive impact for bug density but a negative impact for bug count. Lastly, feature selection does not show significant improvement for bug density and has no impact when bug count is used. Therefore, we conclude that using feature selection and imbalance learning with LTR does not yield superior or significant results.
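The Fault Percentile Average mentioned above has a compact standard definition: rank modules from most to least predicted-defective, then average the cumulative fraction of faults covered at each rank position. A sketch under that standard definition (variable names are ours, not the paper's):

```python
# Fault Percentile Average (FPA) for evaluating a defect ranking.
def fault_percentile_average(faults_in_ranked_order):
    """faults_in_ranked_order: actual defect counts of modules, listed from
    the module predicted most defective to the one predicted least."""
    total = sum(faults_in_ranked_order)
    k = len(faults_in_ranked_order)
    if total == 0 or k == 0:
        return 0.0
    cumulative, acc = 0, 0.0
    for n_faults in faults_in_ranked_order:
        cumulative += n_faults        # faults covered so far
        acc += cumulative / total     # fraction covered at this rank
    return acc / k

# A perfect ranking scores higher than a reversed one:
best = fault_percentile_average([3, 2, 1])   # 7/9
worst = fault_percentile_average([1, 2, 3])  # 5/9
```

Higher FPA means the ranking front-loads the faulty modules, which is exactly what a budget-limited testing team wants.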

3.
Diagnostics (Basel) ; 13(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37835814

ABSTRACT

Despite the declining COVID-19 cases, global healthcare systems still face significant challenges due to ongoing infections, especially among fully vaccinated individuals, including adolescents and young adults (AYA). To tackle this issue, cost-effective alternatives utilizing technologies like Artificial Intelligence (AI) and wearable devices have emerged for disease screening, diagnosis, and monitoring. However, many AI solutions in this context heavily rely on supervised learning techniques, which pose challenges such as human labeling reliability and time-consuming data annotation. In this study, we propose an innovative unsupervised framework that leverages smartwatch data to detect and monitor COVID-19 infections. We utilize longitudinal data, including heart rate (HR), heart rate variability (HRV), and physical activity measured via step count, collected through the continuous monitoring of volunteers. Our goal is to offer effective and affordable solutions for COVID-19 detection and monitoring. Our unsupervised framework employs interpretable clusters of normal and abnormal measures, facilitating disease progression detection. Additionally, we enhance result interpretation by leveraging the language model Davinci GPT-3 to gain deeper insights into the underlying data patterns and relationships. Our results demonstrate the effectiveness of unsupervised learning, achieving a Silhouette score of 0.55. Furthermore, validation using supervised learning techniques yields high accuracy (0.884 ± 0.005), precision (0.80 ± 0.112), and recall (0.817 ± 0.037). These promising findings indicate the potential of unsupervised techniques for identifying inflammatory markers, contributing to the development of efficient and reliable COVID-19 detection and monitoring methods. 
Our study shows the capabilities of AI and wearables, reflecting the pursuit of low-cost, accessible solutions for addressing health challenges related to inflammatory diseases, thereby opening new avenues for scalable and widely applicable health monitoring solutions.
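A small stand-in for the unsupervised pipeline described above: clustering synthetic smartwatch-style features into normal and abnormal groups and scoring separation with the Silhouette coefficient. The features, cluster centers, and k-means choice are illustrative assumptions, not the paper's actual method or data.

```python
# Illustrative sketch: unsupervised clustering of synthetic wearable features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# Hypothetical z-scored (HR, HRV, steps) vectors: a "normal" cluster and an
# "abnormal" cluster with raised HR and lowered HRV and activity.
normal = rng.normal([0.0, 0.0, 0.0], 0.3, size=(150, 3))
abnormal = rng.normal([2.0, -2.0, -1.5], 0.3, size=(50, 3))
X = np.vstack([normal, abnormal])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
score = silhouette_score(X, kmeans.labels_)  # in (-1, 1]; higher is better
```

The Silhouette score of 0.55 reported in the abstract sits in the same "moderately well-separated" regime this kind of two-cluster structure produces.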

4.
Heliyon ; 9(1): e12859, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36704292

ABSTRACT

In recent years, high entropy alloys (HEAs) have attracted great interest because of their superior properties. Phase prediction using machine learning (ML) methods has been one of the main research themes in HEAs over the past three years. Although various ML-based phase prediction works exhibited high accuracy, only a few studied the variables that drive phase formation in HEAs, and those few did so by incorporating domain knowledge into the feature engineering part of the ML framework. In this work, we tackle this problem from a different direction by predicting the phase of HEAs based only on the concentrations of the alloy's constituent elements. Then, pruned tree models and linear correlation are used to develop simple primitive prediction rules, which are combined with self-organizing maps (SOMs) and constructed Euclidean spaces to formulate the discovery of phase formation drivers as an optimization problem. Genetic algorithm (GA) optimization results reveal that phase formation is affected by the electron affinity, molar volume, and resistivity of the constituent elements. Moreover, one of the primitive prediction rules reveals that FCC phase formation in the AlCoCrFeNiTiCu family of high entropy alloys can be predicted with 87% accuracy by knowing only the concentrations of Al and Cu.
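The "pruned tree" step above can be sketched as a depth-limited decision tree over element concentrations, from which a two-threshold rule can be read off. The data and the Al/Cu decision boundary below are synthetic assumptions for illustration only; they are not the paper's 87%-accuracy rule.

```python
# Illustrative sketch: a pruned (depth-limited) tree over element fractions,
# mirroring how primitive phase-prediction rules can be extracted.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 300
# Hypothetical atomic fractions of Al and Cu in AlCoCrFeNiTiCu-type alloys.
al = rng.uniform(0.0, 0.3, n)
cu = rng.uniform(0.0, 0.3, n)
# Toy labeling rule (assumed): low Al with appreciable Cu favors FCC (= 1).
fcc = ((al < 0.1) & (cu > 0.05)).astype(int)

X = np.column_stack([al, cu])
# max_depth=2 keeps the tree readable: at most two thresholds per path.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, fcc)
train_acc = tree.score(X, fcc)
```

Because the tree is capped at depth two, each root-to-leaf path is itself a primitive rule of the "if Al < a and Cu > b then FCC" form.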

5.
Neural Comput Appl ; 34(18): 16019-16032, 2022.
Article in English | MEDLINE | ID: mdl-35529091

ABSTRACT

Social media is becoming a source of news for many people due to its ease and freedom of use. As a result, fake news has been spreading quickly and easily regardless of its credibility, especially in the last decade. Fake news publishers take advantage of critical situations such as the COVID-19 pandemic and the American presidential elections to affect societies negatively. Fake news can seriously impact society in many fields, including politics, finance, and sports. Many studies have been conducted to help detect fake news in English, but research on fake news detection in the Arabic language is scarce. Our contribution is twofold: first, we constructed a large and diverse Arabic fake news dataset; second, we developed and evaluated transformer-based classifiers to identify fake news while utilizing eight state-of-the-art Arabic contextualized embedding models, the majority of which had not previously been used for Arabic fake news detection. We conduct a thorough analysis of these state-of-the-art Arabic contextualized embedding models as well as a comparison with similar fake news detection systems. Experimental results confirm that these state-of-the-art models are robust, with accuracy exceeding 98%.
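This is not the paper's transformer approach: as a much simpler stand-in, the end-to-end setup (vectorize text, fit a classifier, predict credibility) can be shown with a TF-IDF plus logistic-regression baseline. The toy corpus below is invented and in English for readability; the paper works on Arabic text with contextualized embeddings.

```python
# Baseline sketch of the fake-news classification setup (not the paper's
# transformer models): TF-IDF features + logistic regression on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "official health ministry confirms vaccination schedule",
    "ministry publishes verified election results",
    "miracle cure hidden by doctors shocking truth",
    "secret plot revealed celebrities hiding cure",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = fake (invented toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["shocking hidden truth about miracle cure"])
```

A transformer-based classifier replaces the TF-IDF step with contextual embeddings and fine-tuning, but the train/predict interface stays the same shape.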

6.
Sensors (Basel) ; 22(7)2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35408114

ABSTRACT

Creating deepfake multimedia, and especially deepfake videos, has become much easier these days due to the availability of deepfake tools and the virtually unlimited number of face images found online. Research and industry communities have dedicated time and resources to developing detection methods that expose these fake videos. Although detection methods have improved over the past few years, synthesis methods have also made progress, allowing for the production of deepfake videos that are harder and harder to differentiate from real videos. This research paper proposes an improved optical flow estimation-based method to detect and expose the discrepancies between video frames. Augmentation and modification are experimented with in an attempt to improve the system's overall accuracy. Furthermore, the system is trained on graphics processing units (GPUs) and tensor processing units (TPUs) to explore the effects and benefits of each type of hardware in deepfake detection. TPUs were found to have shorter training times than GPUs. VGG-16 is the best-performing model when used as a backbone for the system, achieving around 82.0% detection accuracy when trained on GPUs and 71.34% accuracy on TPUs.
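The optical-flow idea behind the detector can be illustrated in miniature: a single-window Lucas-Kanade estimate of the global motion between two frames. The paper's pipeline (dense flow fields fed to a VGG-16 backbone) is far richer; this sketch only shows flow estimation itself, on synthetic frames.

```python
# Simplified optical-flow sketch: single-window Lucas-Kanade motion estimate.
import numpy as np

def global_flow(frame1, frame2):
    """Least-squares solution of Ix*u + Iy*v = -It over the whole frame."""
    ix = np.gradient(frame1, axis=1)   # horizontal intensity gradient
    iy = np.gradient(frame1, axis=0)   # vertical intensity gradient
    it = frame2 - frame1               # temporal gradient
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(a, -it.ravel(), rcond=None)
    return flow  # (u, v): horizontal and vertical motion in pixels

# A smooth bump shifted one pixel to the right should yield u near 1, v near 0.
y, x = np.mgrid[0:64, 0:64]
bump = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
shifted = np.roll(bump, 1, axis=1)
u, v = global_flow(bump, shifted)
```

Deepfake detectors exploit the fact that synthesized faces produce flow fields whose frame-to-frame discrepancies differ statistically from those of genuine video.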


Subjects
Optical Flow , Computers , Deception
7.
Artif Intell Med ; 127: 102276, 2022 05.
Article in English | MEDLINE | ID: mdl-35430037

ABSTRACT

Cancer is one of the most dangerous diseases to humans, and yet no permanent cure has been developed for it. Breast cancer is one of the most common cancer types. According to the National Breast Cancer Foundation, in 2020 alone, more than 276,000 new cases of invasive breast cancer and more than 48,000 non-invasive cases were diagnosed in the US. To put these figures in perspective, 64% of these cases are diagnosed early in the disease's cycle, giving patients a 99% chance of survival. Artificial intelligence and machine learning have been used effectively in the detection and treatment of several dangerous diseases, helping in early diagnosis and treatment and thus increasing patients' chances of survival. Deep learning models have been designed to analyze the most important features affecting the detection and treatment of serious diseases. For example, breast cancer can be detected using genes or histopathological imaging. Analysis at the genetic level is very expensive, so histopathological imaging is the most common approach used to detect breast cancer. In this research work, we systematically reviewed previous work on the detection and treatment of breast cancer using genetic sequencing or histopathological imaging with the help of deep learning and machine learning, and we provide recommendations to researchers working in this field.


Subjects
Artificial Intelligence , Breast Neoplasms , Breast Neoplasms/diagnosis , Breast Neoplasms/genetics , Female , Humans , Machine Learning
8.
Comput Intell Neurosci ; 2019: 8367214, 2019.
Article in English | MEDLINE | ID: mdl-30915110

ABSTRACT

Software effort estimation plays a critical role in project management. Erroneous results may lead to overestimating or underestimating effort, which can have catastrophic consequences on project resources. Machine-learning techniques are increasingly popular in the field. Fuzzy logic models, in particular, are widely used to deal with imprecise and inaccurate data. The main goal of this research was to design and compare three different fuzzy logic models for software effort prediction: Mamdani, Sugeno with constant output, and Sugeno with linear output. To assist in the design of the fuzzy logic models, we conducted regression analysis, an approach we call "regression fuzzy logic." State-of-the-art and unbiased performance evaluation criteria such as standardized accuracy, effect size, and mean balanced relative error were used to evaluate the models, alongside statistical tests. Models were trained and tested using industrial projects from the International Software Benchmarking Standards Group (ISBSG) dataset. Results showed that data heteroscedasticity affected model performance, and fuzzy logic models were found to be very sensitive to outliers. We concluded that when regression analysis was used to design the model, the Sugeno fuzzy inference system with linear output outperformed the other models.
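Two of the evaluation criteria named above have compact standard definitions that can be sketched directly: standardized accuracy (SA) compares a model's mean absolute residual against an uninformed baseline, and mean balanced relative error (MBRE) penalizes over- and under-estimation symmetrically. This sketch approximates the SA baseline by guessing the mean of the actuals; it is our simplification, not the authors' exact implementation.

```python
# Sketch of standardized accuracy (SA) and mean balanced relative error
# (MBRE) for effort-estimation models, under their standard definitions.
import numpy as np

def mar(actual, predicted):
    """Mean absolute residual."""
    return float(np.mean(np.abs(actual - predicted)))

def standardized_accuracy(actual, predicted):
    """SA = (1 - MAR / MAR_p0) * 100; MAR_p0 here approximated by the
    baseline that always guesses the mean actual effort."""
    baseline = np.full_like(actual, np.mean(actual), dtype=float)
    return (1 - mar(actual, predicted) / mar(actual, baseline)) * 100

def mean_balanced_relative_error(actual, predicted):
    """MBRE = mean(|actual - predicted| / min(actual, predicted))."""
    return float(np.mean(np.abs(actual - predicted)
                         / np.minimum(actual, predicted)))

actual = np.array([100.0, 200.0])      # toy effort values (person-hours)
predicted = np.array([110.0, 180.0])
sa = standardized_accuracy(actual, predicted)          # 70.0
mbre = mean_balanced_relative_error(actual, predicted)  # 19/180
```

SA near 0 means the model barely beats uninformed guessing; values well above 0 (here 70) indicate genuine predictive signal, which is why the paper favors it over biased criteria like MMRE.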


Subjects
Fuzzy Logic , Machine Learning , Neural Networks, Computer , Regression Analysis , Software , Algorithms