Results 1 - 20 of 37
1.
PeerJ Comput Sci ; 10: e2031, 2024.
Article in English | MEDLINE | ID: mdl-38855236

ABSTRACT

Neurodegenerative conditions significantly impact patient quality of life. Many conditions do not have a cure, but with appropriate and timely treatment the advance of the disease could be diminished. However, many patients only seek a diagnosis once the condition progresses to a point at which the quality of life is significantly impacted. Effective non-invasive and readily accessible methods for early diagnosis can considerably enhance the quality of life of patients affected by neurodegenerative conditions. This work explores the potential of convolutional neural networks (CNNs) for detecting gait freezing associated with Parkinson's disease. Sensor data collected from wearable gyroscopes located at the sole of the patient's shoe record walking patterns. These patterns are further analyzed using convolutional networks to accurately detect abnormal walking patterns. The suggested method is assessed on a public real-world dataset collected from patients affected by Parkinson's as well as individuals from a control group. To improve the accuracy of the classification, an altered variant of the recent crayfish optimization algorithm is introduced and compared to contemporary optimization metaheuristics. Our findings reveal that the modified algorithm (MSCHO) significantly outperforms other methods in accuracy, demonstrated by low error rates and high Cohen's Kappa, precision, sensitivity, and F1-measures across three datasets. These results suggest the potential of CNNs, combined with advanced optimization techniques, for early, non-invasive diagnosis of neurodegenerative conditions, offering a path to improve patient quality of life.
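As an illustration of the kind of classifier described above, the following is a minimal sketch (not the authors' implementation) of a 1D convolutional network that labels fixed-length gyroscope windows as normal walking or gait freezing; the window length, channel count, and synthetic data are assumptions made purely for the example.

import numpy as np
import tensorflow as tf

# Assumed input: 256-sample windows from a 3-axis gyroscope at the sole of the shoe.
window_len, n_channels = 256, 3
model = tf.keras.Sequential([
    tf.keras.Input(shape=(window_len, n_channels)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = freezing-of-gait episode
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data; real experiments would use the public gait dataset.
X = np.random.randn(128, window_len, n_channels).astype("float32")
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

In the study, hyperparameters such as layer sizes and the learning rate would be selected by the modified metaheuristic rather than fixed by hand as they are here.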

2.
PeerJ Comput Sci ; 10: e1979, 2024.
Article in English | MEDLINE | ID: mdl-38855242

ABSTRACT

This article uses the Aczel-Alsina t-norm and t-conorm to construct several new linguistic interval-valued intuitionistic fuzzy aggregation operators. First, we devised operational rules for linguistic interval-valued intuitionistic fuzzy numbers. Then, using these rules as a guide, we created a set of operators: the linguistic interval-valued intuitionistic fuzzy Aczel-Alsina weighted averaging (LIVIFAAWA), weighted geometric (LIVIFAAWG), ordered weighted averaging (LIVIFAAOWA), ordered weighted geometric (LIVIFAAOWG), hybrid weighted averaging (LIVIFAAHWA), and hybrid weighted geometric (LIVIFAAHWG) operators. Several desirable properties of the newly created operators are thoroughly studied. Moreover, a multi-criteria group decision-making (MCGDM) method is proposed based on the developed operators. The proposed operators are then applied to real-world decision-making situations to demonstrate their applicability and validity to the reader. Finally, the suggested model is contrasted with currently employed methods.
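For readers unfamiliar with the underlying operations, the following is a minimal sketch of the Aczel-Alsina t-norm and its dual t-conorm on membership grades in [0, 1]; the parameter value lam = 2.0 is an arbitrary assumption for illustration.

import math

def aa_tnorm(a: float, b: float, lam: float = 2.0) -> float:
    # Aczel-Alsina t-norm: exp(-(((-ln a)^lam + (-ln b)^lam))^(1/lam)), lam > 0.
    if a == 0.0 or b == 0.0:
        return 0.0
    return math.exp(-(((-math.log(a)) ** lam + (-math.log(b)) ** lam) ** (1.0 / lam)))

def aa_tconorm(a: float, b: float, lam: float = 2.0) -> float:
    # Dual t-conorm: S(a, b) = 1 - T(1 - a, 1 - b).
    return 1.0 - aa_tnorm(1.0 - a, 1.0 - b, lam)

print(aa_tnorm(0.6, 0.7), aa_tconorm(0.6, 0.7))

The weighted averaging and geometric operators in the article are built on these two operations, applied to linguistic interval-valued membership and non-membership degrees together with the criterion weights.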

4.
Sci Total Environ ; 929: 172195, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38631643

ABSTRACT

Toluene is a neurotoxic aromatic hydrocarbon and one of the major representatives of volatile organic compounds, known for its abundance, adverse health effects, and role in the formation of other atmospheric pollutants such as ozone. This research introduces an enhanced version of the reptile search metaheuristic algorithm, which has been utilized to tune extreme gradient boosting hyperparameters in order to investigate toluene's atmospheric behavior patterns and its interactions with other polluting species under defined environmental conditions. The study is based on a two-year database encompassing hourly concentrations of inorganic gaseous contaminants (NO, NO2, NOx, and O3), particulate matter fractions (PM1, PM2.5, and PM10), m,p-xylene, toluene, benzene, total non-methane hydrocarbons, and meteorological data. The experimental outcomes were validated against the results of extreme gradient boosting models optimized by seven other recent powerful metaheuristic algorithms. The best-performing model has been interpreted by employing the Shapley additive explanations method. In the study, we have focused on the relationship between toluene and benzene, its most important predictor, and provided a detailed description of the environmental conditions that directed their interactions.
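As a rough, hedged illustration of the tuning step (using a plain randomized search as a stand-in for the enhanced reptile search algorithm, and synthetic data with assumed feature names), an XGBoost regressor could be tuned as follows.

import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))     # placeholder predictors, e.g., NOx, O3, PM10, benzene, temperature, humidity
y = rng.normal(size=500)          # placeholder toluene concentrations

search = RandomizedSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_distributions={
        "n_estimators": [100, 300, 500],
        "max_depth": [3, 5, 7],
        "learning_rate": [0.01, 0.05, 0.1, 0.3],
        "subsample": [0.6, 0.8, 1.0],
    },
    n_iter=10, cv=3, scoring="neg_mean_squared_error", random_state=42,
)
search.fit(X, y)
print(search.best_params_)

A metaheuristic plays the same role as the randomized search here: it proposes hyperparameter vectors, scores them by cross-validated error, and keeps the best-performing configuration.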

5.
Sci Rep ; 14(1): 3666, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38351176

ABSTRACT

EDXRF spectrometry is a well-established and often-used analytical technique for examining materials from which cultural heritage objects are made. For archaeometry studies, the analytical results are traditionally subjected to additional multivariate analysis to reduce the initial data's dimensionality based on informative features. Nowadays, artificial intelligence (AI) techniques are increasingly used for this purpose. Different soft computing techniques are used to improve speed and accuracy, and choosing the most suitable AI method can increase the sustainability of the analytical process and postprocessing activities. An autoencoder neural network has been designed and used as a dimension-reduction tool for the initial [Formula: see text] data collected in the raw EDXRF spectra, which contain information about the elemental composition of selected points on the surface of canvas paintings. The autoencoder network design enables the best possible reconstruction of the original EDXRF spectrum and the extraction of the most informative features, which are used for dimension reduction. Such a configuration allows for efficient classification algorithms and good classification performance. The autoencoder neural network approach is more sustainable, especially in terms of processing time and the amount of experts' manual work required.
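The following is a minimal sketch of an autoencoder used this way, assuming spectra with 2048 channels and a 16-dimensional bottleneck (both figures are illustrative, not taken from the study).

import numpy as np
import tensorflow as tf

n_channels, latent_dim = 2048, 16
inputs = tf.keras.Input(shape=(n_channels,))
encoded = tf.keras.layers.Dense(256, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(latent_dim, activation="relu")(encoded)   # informative features
decoded = tf.keras.layers.Dense(256, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(n_channels, activation="linear")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)          # reused as the dimension-reduction tool
autoencoder.compile(optimizer="adam", loss="mse")

spectra = np.abs(np.random.randn(64, n_channels)).astype("float32")   # synthetic stand-in spectra
autoencoder.fit(spectra, spectra, epochs=2, batch_size=16, verbose=0)
features = encoder.predict(spectra)                # low-dimensional input for classifiers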

6.
Sci Rep ; 14(1): 4309, 2024 02 21.
Article in English | MEDLINE | ID: mdl-38383690

ABSTRACT

Parkinson's disease (PD) is a progressively debilitating neurodegenerative disorder that primarily affects the dopaminergic system in the basal ganglia, impacting millions of individuals globally. The clinical manifestations of the disease include resting tremors, muscle rigidity, bradykinesia, and postural instability. Diagnosis relies mainly on clinical evaluation, lacking reliable diagnostic tests and being inherently imprecise and subjective. Early detection of PD is crucial for initiating treatments that, while unable to cure the chronic condition, can enhance the quality of life of patients and alleviate symptoms. This study explores the potential of utilizing long short-term memory (LSTM) neural networks with attention mechanisms to detect Parkinson's disease based on dual-task walking test data. Given that network performance is significantly influenced by architecture and training parameter choices, a modified version of the recently introduced crayfish optimization algorithm (COA) is proposed, specifically tailored to the requirements of this investigation. The proposed optimizer is assessed on a publicly accessible real-world clinical gait in Parkinson's disease dataset, and the results demonstrate its promise, achieving an accuracy of 87.4187% for the best-constructed models.
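A minimal sketch of an LSTM with a simple self-attention layer for this kind of binary classification is shown below; the sequence length, feature count, and synthetic data are assumptions, and the authors' tuned architecture will differ.

import numpy as np
import tensorflow as tf

seq_len, n_features = 100, 2     # assumed length and channel count of dual-task walking signals
inputs = tf.keras.Input(shape=(seq_len, n_features))
h = tf.keras.layers.LSTM(64, return_sequences=True)(inputs)
attended = tf.keras.layers.Attention()([h, h])       # self-attention over the LSTM states
pooled = tf.keras.layers.GlobalAveragePooling1D()(attended)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)   # PD vs. control

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
X = np.random.randn(64, seq_len, n_features).astype("float32")
y = np.random.randint(0, 2, size=(64,))
model.fit(X, y, epochs=2, verbose=0)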


Subject(s)
Parkinson Disease , Humans , Parkinson Disease/diagnosis , Short-Term Memory , Neural Networks (Computer) , Basal Ganglia , Gait
7.
PeerJ Comput Sci ; 10: e1795, 2024.
Article in English | MEDLINE | ID: mdl-38259888

ABSTRACT

Renewable energy plays an increasingly important role in our future. As fossil fuels become more difficult to extract and effectively process, renewables offer a solution to the ever-increasing energy demands of the world. However, the shift toward renewable energy is not without challenges. While fossil fuels offer a more reliable means of energy storage that can be converted into usable energy, renewables are more dependent on external factors used for generation. Efficient storage of renewable energy is more difficult, often relying on batteries that have a limited number of charge cycles. A robust and efficient system for forecasting power generation from renewable sources can help alleviate some of the difficulties associated with the transition toward renewable energy. Therefore, this study proposes an attention-based recurrent neural network approach for forecasting power generated from renewable sources. To help the networks make more accurate forecasts, decomposition techniques are applied to the time series, and a modified metaheuristic is introduced to optimize the hyperparameter values of the utilized networks. This approach has been tested on two real-world renewable energy datasets covering both solar and wind farms. The models generated by the introduced metaheuristic were compared with those produced by other state-of-the-art optimizers in terms of standard regression metrics and statistical analysis. Finally, the best-performing model was interpreted using SHapley Additive exPlanations (SHAP).
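A minimal sketch of the decomposition step is given below, assuming hourly power data with a daily cycle (period=24); the authors' exact decomposition technique may differ.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

hours = pd.date_range("2023-01-01", periods=24 * 30, freq="h")
power = pd.Series(
    50 + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24) + np.random.randn(len(hours)),
    index=hours,
)   # synthetic generation series
parts = seasonal_decompose(power, model="additive", period=24)
components = pd.DataFrame({"trend": parts.trend, "seasonal": parts.seasonal, "resid": parts.resid})
# After dropping the NaN edges, each component can be forecast by a separate recurrent network.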

8.
Sensors (Basel) ; 23(24)2023 Dec 17.
Article in English | MEDLINE | ID: mdl-38139724

ABSTRACT

Monitoring the electrical activity of the heart is an effective way of detecting existing and developing conditions. This is usually performed as a non-invasive test using a network of up to 12 sensors (electrodes) on the chest and limbs to create an electrocardiogram (ECG). By visually observing these readings, experienced professionals can make accurate diagnoses and, if needed, request further testing. However, the training and experience needed to make accurate diagnoses are significant. This work explores the potential of recurrent neural networks for anomaly detection in ECG readings. Furthermore, to attain the best possible performance for these networks, training parameters and network architectures are optimized using a modified version of the well-established particle swarm optimization algorithm. The performance of the optimized models is compared to models created by other contemporary optimizers, and the results show significant potential for real-world applications. Further analyses are carried out on the best-performing models to determine feature importance.
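For readers unfamiliar with the optimizer family, the following is a minimal particle swarm optimization sketch over a continuous search space; the objective is a toy placeholder, whereas in the study each evaluation would correspond to training and validating a recurrent network.

import numpy as np

def objective(x):                       # placeholder: would be the validation error of an RNN
    return np.sum(x ** 2)

dim, n_particles, iters = 4, 20, 50
lb, ub = -5.0, 5.0
rng = np.random.default_rng(0)
pos = rng.uniform(lb, ub, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
print(gbest, objective(gbest))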


Subject(s)
Algorithms , Neural Networks (Computer) , Electrocardiography/methods
9.
Sci Rep ; 13(1): 22470, 2023 12 18.
Article in English | MEDLINE | ID: mdl-38110422

ABSTRACT

A drop in physical activity and a deterioration in the capacity to undertake daily life activities are both connected with ageing and have negative effects on physical and mental health. An Elderly and Visually Impaired Human Activity Monitoring (EVHAM) system that keeps track of a person's routine and steps in when a change in behaviour or a crisis occurs could greatly help an elderly or visually impaired person. These individuals may find greater freedom with the help of an EVHAM system. As the backbone of human-centric applications like actively assisted living and in-home monitoring for the elderly and visually impaired, an EVHAM system is essential. Big data-driven product design is flourishing in this age of 5G and the IoT. Recent advancements in processing power and software architectures have also contributed to the emergence and development of artificial intelligence (AI). In this context, the digital twin has emerged as a state-of-the-art technology that bridges the gap between the real and virtual worlds by evaluating data from several sensors using artificial intelligence algorithms. Although promising findings have been reported by Wi-Fi-based human activity identification techniques so far, their effectiveness is vulnerable to environmental variations. Using the environment-independent fingerprints generated from the Wi-Fi channel state information (CSI), we introduce Wi-Sense, a human activity identification system that employs a deep hybrid convolutional neural network (DHCNN). The proposed system begins by collecting the CSI with a regular Wi-Fi network interface controller. Wi-Sense uses the CSI ratio technique to lessen the effect of noise and the phase offset. t-Distributed Stochastic Neighbor Embedding (t-SNE) is used to further eliminate unnecessary data. In this process, the data dimension is decreased and the negative effects of the environment are eliminated. The resulting spectrogram of the processed data exposes the activity's micro-Doppler fingerprints as a function of both time and location. These spectrograms are used to train a DHCNN. Based on our findings, EVHAM can accurately identify these actions 99% of the time.
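A minimal sketch of the t-SNE step is shown below; the dimensionality of the CSI-derived feature vectors and the sample count are assumptions for illustration only.

import numpy as np
from sklearn.manifold import TSNE

csi_features = np.random.randn(300, 128)     # synthetic stand-in for CSI ratio features
embedded = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(csi_features)
print(embedded.shape)                        # (300, 2) low-dimensional representation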


Subject(s)
Artificial Intelligence , Neural Networks (Computer) , Aged , Humans , Algorithms , Aging , Big Data
10.
Front Physiol ; 14: 1267011, 2023.
Article in English | MEDLINE | ID: mdl-38033337

ABSTRACT

Electroencephalography (EEG) serves as a diagnostic technique for measuring brain waves and brain activity. Despite its precision in capturing brain electrical activity, certain factors, such as environmental influences during the test, can affect the objectivity and accuracy of EEG interpretations. Challenges associated with interpretation, even with advanced techniques to minimize artifact influences, can significantly impact the accurate interpretation of EEG findings. To address this issue, artificial intelligence (AI) has been utilized in this study to analyze anomalies in EEG signals for epilepsy detection. Recurrent neural networks (RNNs) are AI techniques specifically designed to handle sequential data, making them well-suited for precise time-series tasks. While AI methods, including RNNs and artificial neural networks (ANNs), hold great promise, their effectiveness heavily relies on the initial values assigned to hyperparameters, which are crucial for performance on a concrete assignment. To tune RNN performance, the selection of hyperparameters is approached as a typical optimization problem, and metaheuristic algorithms are employed to further enhance the process. A modified hybrid sine cosine algorithm has been developed and used to further improve hyperparameter optimization. To facilitate testing, publicly available real-world EEG data is utilized. A dataset is constructed using data captured from healthy individuals and archived data from patients confirmed to be affected by epilepsy, as well as data captured during an active seizure. Two experiments have been conducted using the generated dataset. In the first experiment, models were tasked with the detection of anomalous EEG activity. The second experiment required models to segment normal and anomalous activity, as well as to detect occurrences of seizures, from EEG data. Considering the modest sample size used for classification (one second of data, 158 data points), the models demonstrated decent outcomes. The obtained outcomes are compared with those generated by other cutting-edge metaheuristics, rigorous statistical validation is carried out, and the results are interpreted.
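The basic sine cosine algorithm that the modified hybrid variant builds on can be sketched as follows; the objective is a toy placeholder rather than an actual RNN hyperparameter evaluation.

import numpy as np

def objective(x):                  # placeholder fitness, e.g., validation loss of an RNN
    return np.sum(x ** 2)

dim, n_agents, iters, a = 5, 20, 100, 2.0
lb, ub = -10.0, 10.0
rng = np.random.default_rng(1)
X = rng.uniform(lb, ub, (n_agents, dim))
best = X[np.argmin([objective(x) for x in X])].copy()

for t in range(iters):
    r1 = a - t * (a / iters)                 # shrinking exploration radius
    for i in range(n_agents):
        r2 = rng.uniform(0, 2 * np.pi, dim)
        r3 = rng.uniform(0, 2, dim)
        r4 = rng.random(dim)
        step = r1 * np.where(r4 < 0.5, np.sin(r2), np.cos(r2)) * np.abs(r3 * best - X[i])
        X[i] = np.clip(X[i] + step, lb, ub)
    cand = X[np.argmin([objective(x) for x in X])]
    if objective(cand) < objective(best):
        best = cand.copy()
print(best, objective(best))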

11.
PeerJ Comput Sci ; 9: e1565, 2023.
Article in English | MEDLINE | ID: mdl-37810356

ABSTRACT

Wall segmentation is a special case of semantic segmentation, and the task is to classify each pixel into one of two classes: wall and no-wall. The segmentation model returns a mask showing where objects like windows and furniture are located, as well as walls. This article proposes the structure of a module for semantic segmentation of walls in 2D images, which can effectively address the wall segmentation problem. The proposed model achieved higher accuracy and faster execution than other solutions. An encoder-decoder architecture was used for the segmentation module. A dilated ResNet50/101 network was used as the encoder, i.e., a ResNet50/101 network in which the last convolutional layers are replaced by dilated convolutional layers. The subset of the ADE20K dataset containing only interior images was used for model training, while a further subset of it was used for model evaluation. Three different approaches to model training were analyzed in the research. On the validation dataset, the best approach, based on the proposed structure with the ResNet101 network, resulted in an average pixel-level accuracy of 92.13% and an intersection over union (IoU) of 72.58%. Moreover, all proposed approaches can be applied to recognize other objects in the image to solve specific tasks.
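The encoder idea can be sketched with torchvision as follows; the 1x1-convolution head and bilinear upsampling are deliberately simplified placeholders, not the decoder proposed in the article.

import torch
import torchvision

# ResNet-101 backbone whose last two stages use dilation instead of striding,
# so the feature map keeps 1/8 of the input resolution.
backbone = torchvision.models.resnet101(
    weights=None, replace_stride_with_dilation=[False, True, True]
)
encoder = torch.nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool and fc

head = torch.nn.Conv2d(2048, 2, kernel_size=1)                   # wall / no-wall logits
x = torch.randn(1, 3, 512, 512)
feats = encoder(x)                                               # (1, 2048, 64, 64)
logits = torch.nn.functional.interpolate(
    head(feats), size=x.shape[2:], mode="bilinear", align_corners=False
)
print(logits.shape)                                              # (1, 2, 512, 512)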

12.
Environ Sci Pollut Res Int ; 30(37): 87286-87299, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37422560

ABSTRACT

Effective end-of-life vehicle (ELV) management is crucial for minimizing the environmental and health impacts of Indonesia's growing automotive industry. However, proper ELV management has received limited attention. To bridge this gap, we conducted a qualitative study to identify barriers to effective ELV management in Indonesia's automotive sector. Through in-depth interviews with key stakeholders and a strengths, weaknesses, opportunities, and threats analysis, we identified internal and external factors influencing ELV management. Our findings reveal major barriers, including inadequate government regulation and enforcement, insufficient infrastructure and technology, low education and awareness, and a lack of financial incentives. We also identified internal factors such as limited infrastructure, inadequate strategic planning, and challenges in waste management and cost collection methods. Based on these findings, we recommend a comprehensive and integrated approach to ELV management involving enhanced coordination among government, industry, and stakeholders. The government should enforce regulations and provide financial incentives to encourage proper ELV management practices. Industry players should invest in technology and infrastructure to support effective ELV treatment. By addressing these barriers and implementing our recommendations, policymakers can develop sustainable ELV management policies and decisions in Indonesia's fast-paced automotive sector. Our study contributes valuable insights to guide the development of effective strategies for ELV management and sustainability in Indonesia.


Subject(s)
Recycling , Waste Management , Indonesia , Recycling/methods , Technology , Industries
13.
PeerJ Comput Sci ; 9: e1405, 2023.
Article in English | MEDLINE | ID: mdl-37409075

ABSTRACT

An ever-increasing number of electronic devices integrated into the Internet of Things (IoT) generates vast amounts of data, which gets transported via networks and stored for further analysis. However, besides the undisputed advantages of this technology, it also brings risks of unauthorized access and data compromise, situations in which machine learning (ML) and artificial intelligence (AI) can help with the detection of potential threats and intrusions and with the automation of the diagnostic process. The effectiveness of the applied algorithms largely depends on the previously performed optimization, i.e., the predetermined values of hyperparameters and the training conducted to achieve the desired result. Therefore, to address the very important issue of IoT security, this article proposes an AI framework based on a simple convolutional neural network (CNN) and an extreme learning machine (ELM) tuned by a modified sine cosine algorithm (SCA). Notwithstanding that many methods for addressing security issues have been developed, there is always room for further improvement, and the proposed research attempts to fill this gap. The introduced framework was evaluated on two ToN IoT intrusion detection datasets, which consist of network traffic data generated in Windows 7 and Windows 10 environments. The analysis of the results suggests that the proposed model achieved a superior level of classification performance for the observed datasets. Additionally, besides conducting rigorous statistical tests, the best derived model is interpreted by SHapley Additive exPlanations (SHAP) analysis, and the findings can be used by security experts to further enhance the security of IoT systems.
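The extreme learning machine component can be sketched in a few lines: a random, untrained hidden layer followed by output weights solved in closed form. The feature and class counts below are illustrative assumptions, not the ToN IoT schema.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_hidden, n_classes = 400, 20, 100, 2
X = rng.normal(size=(n_samples, n_features))          # stand-in for CNN-extracted features
y = rng.integers(0, n_classes, size=n_samples)
Y = np.eye(n_classes)[y]                               # one-hot targets

W = rng.normal(size=(n_features, n_hidden))            # random input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                                 # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                           # output weights via pseudo-inverse

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("train accuracy:", (pred == y).mean())

In the proposed framework, choices such as the number of hidden neurons would be among the values tuned by the modified sine cosine algorithm.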

14.
Sci Rep ; 13(1): 9725, 2023 06 15.
Article in English | MEDLINE | ID: mdl-37322046

ABSTRACT

Pancreatic cancer is associated with high mortality rates due to insufficient diagnostic techniques; it is often diagnosed at an advanced stage, when effective treatment is no longer possible. Therefore, automated systems that can detect cancer early are crucial to improving diagnosis and treatment outcomes. Several algorithms have been put into use in the medical field, and valid and interpretable data are essential for effective diagnosis and therapy, leaving much room for cutting-edge computer systems to develop. The main objective of this research is to predict pancreatic cancer early using deep learning and metaheuristic techniques. The research aims to create a system that predicts pancreatic cancer early by analyzing medical imaging data, mainly CT scans, and identifying vital features and cancerous growths in the pancreas using a Convolutional Neural Network (CNN) and a YOLO model-based CNN (YCNN). Once diagnosed, the disease cannot be effectively treated and its progression is unpredictable, which is why there has been a push in recent years to implement fully automated systems that can sense cancer at an earlier stage and improve diagnosis and treatment. The paper evaluates the effectiveness of the novel YCNN approach compared to other modern methods in predicting pancreatic cancer. Vital features are predicted from the CT scans, and the proportion of cancerous growth in the pancreas is estimated using threshold parameters as markers. A deep learning approach based on a Convolutional Neural Network (CNN) model is employed to predict pancreatic cancer from images, and the YOLO model-based CNN (YCNN) is used to aid in the categorization process. Both a biomarker dataset and a CT image dataset are used for testing. In a thorough review of comparative findings, the YCNN method was shown to perform well, reaching 100% accuracy compared to other modern techniques.


Asunto(s)
Aprendizaje Profundo , Neoplasias Pancreáticas , Humanos , Redes Neurales de la Computación , Algoritmos , Tomografía Computarizada por Rayos X/métodos , Neoplasias Pancreáticas/diagnóstico por imagen , Neoplasias Pancreáticas
15.
Toxics ; 11(4)2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37112620

ABSTRACT

Polycyclic aromatic hydrocarbons (PAHs) refer to a group of several hundred compounds, among which 16 are identified as priority pollutants due to their adverse health effects, frequency of occurrence, and potential for human exposure. This study is focused on benzo(a)pyrene, which is considered an indicator of exposure to a carcinogenic PAH mixture. For this purpose, we have applied the XGBoost model to a two-year database of pollutant concentrations and meteorological parameters, with the aim of identifying the factors most strongly associated with the observed benzo(a)pyrene concentrations and of describing the types of environments that supported the interactions between benzo(a)pyrene and other polluting species. The pollutant data were collected at the energy industry center in Serbia, in the vicinity of coal mining areas and power stations, where the observed benzo(a)pyrene maximum concentration for the study period reached 43.7 ng/m3. A metaheuristic algorithm has been used to optimize the XGBoost hyperparameters, and the results have been compared to the results of XGBoost models tuned by eight other cutting-edge metaheuristic algorithms. The best-performing model was then interpreted by applying Shapley Additive exPlanations (SHAP). As indicated by the mean absolute SHAP values, the temperature at the surface and the arsenic, PM10, and total nitrogen oxide (NOx) concentrations appear to be the major factors affecting benzo(a)pyrene concentrations and its environmental fate.
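A minimal sketch of the interpretation step, with synthetic data and assumed feature names, is shown below.

import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 4)), columns=["temperature", "arsenic", "PM10", "NOx"])
y = rng.normal(size=300)                         # placeholder benzo(a)pyrene concentrations

model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)      # mean absolute SHAP value per predictor
print(dict(zip(X.columns, mean_abs.round(3))))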

16.
Heliyon ; 9(4): e15378, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37101631

ABSTRACT

With the whirlwind evolution of technology, the quantity of stored data within datasets is rapidly expanding. As a result, extracting crucial and relevant information from these datasets is a gruelling task. Feature selection is a critical preprocessing task for machine learning, used to reduce the excess data in a set. This research presents a novel quasi-reflection learning arithmetic optimization algorithm - firefly search, an enhanced version of the original arithmetic optimization algorithm. A quasi-reflection learning mechanism was implemented to enhance population diversity, while the firefly algorithm metaheuristic was used to improve the exploitation abilities of the original arithmetic optimization algorithm. The aim of this wrapper-based method is to tackle a specific classification problem by selecting an optimal feature subset. The proposed algorithm is tested and compared with various well-known methods on ten unconstrained benchmark functions, and then on twenty-one standard datasets gathered from the University of California, Irvine Repository and Arizona State University. Additionally, the proposed approach is applied to the Corona disease dataset. The experimental results verify the improvements of the presented method and their statistical significance.
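A minimal sketch of the quasi-reflection step, as it is commonly described, is given below: each candidate is mapped to a random point between the search-space midpoint and its current position, and the better of the two is kept. The bounds, population size, and fitness are toy assumptions.

import numpy as np

def fitness(x):                      # placeholder objective (minimization)
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
lb, ub = -10.0, 10.0
pop = rng.uniform(lb, ub, size=(20, 5))
center = (lb + ub) / 2.0

quasi = center + rng.random(pop.shape) * (pop - center)   # random point between the midpoint and x
keep = np.array([fitness(q) < fitness(p) for q, p in zip(quasi, pop)])
pop[keep] = quasi[keep]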

17.
Microprocess Microsyst ; 98: 104778, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36785847

ABSTRACT

Feature selection is one of the most important challenges in machine learning and data science. This process is usually performed in the data preprocessing phase, where the data is transformed into a proper format for further processing by machine learning algorithms. Many real-world datasets are highly dimensional, with many irrelevant or even redundant features. These kinds of features do not improve classification accuracy and can even degrade the performance of a classifier. The goal of feature selection is to find an optimal (or sub-optimal) subset of features that contains relevant information about the dataset, from which machine learning algorithms can derive useful conclusions. In this manuscript, a novel version of the firefly algorithm (FA) is proposed and adapted for the feature selection challenge. The proposed method significantly improves the performance of the basic FA and also outperforms other state-of-the-art metaheuristics on both benchmark bound-constrained and practical feature selection tasks. The method was first validated on standard unconstrained benchmarks and later applied to feature selection using 21 standard University of California, Irvine (UCI) datasets. Moreover, the presented approach was also tested on a relatively novel COVID-19 dataset for predicting patients' health, and on one microcontroller microarray dataset. The results obtained in all practical simulations attest to the robustness and efficiency of the proposed algorithm in terms of convergence, solution quality, and classification accuracy. More precisely, the proposed approach obtained the best classification accuracy on 13 out of 21 datasets, significantly outperforming the other competitor methods.
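The wrapper principle can be sketched with a binary feature mask whose fitness combines classifier error and subset size; the KNN classifier and the weighting factor alpha below are illustrative choices, not the paper's exact setup.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

def fitness(mask: np.ndarray, alpha: float = 0.99) -> float:
    if mask.sum() == 0:                              # empty subsets are invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(5), X[:, mask.astype(bool)], y, cv=3).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / X.shape[1]

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=X.shape[1])           # one candidate that the metaheuristic would evolve
print("fitness:", round(fitness(mask), 4))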

18.
Article in English | MEDLINE | ID: mdl-36591535

ABSTRACT

The coronavirus disease known as COVID-19, which appeared in China in 2019, has significantly affected global health and become a huge burden on health institutions all over the world. These effects are continuing today. One strategy for limiting the virus's transmission is to have an early diagnosis of suspected cases and take appropriate measures before the disease spreads further. This work aims to diagnose and show the probability of getting infected by the disease according to textual clinical data. In this work, we used five machine learning techniques (GWO_MLP, GWO_CMLP, MGWO_MLP, FDO_MLP, FDO_CMLP), all of which aim to classify COVID-19 patients into two categories (positive and negative). Experiments showed promising results for all of the models used. The applied methods showed very similar performance, typically in terms of accuracy. However, in each tested dataset, FDO_MLP and FDO_CMLP produced the best results, with 100% accuracy. The results of the other models varied from one experiment to the other. It is concluded that models that use the FDO algorithm as the learning algorithm have the possibility of obtaining higher accuracy. However, FDO was found to have the longest runtime compared to the other algorithms. The COVID-19 models are available at: https://github.com/Tarik4Rashid4/covid19models.

19.
Sci Rep ; 13(1): 1004, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36653424

ABSTRACT

Industrial Internet of Things (IIoT)-based systems have become an important part of industry consortium systems because of their rapid growth and wide-ranging application. Various physical objects that are interconnected in the IIoT network communicate with each other and simplify the process of decision-making by observing and analyzing the surrounding environment. While making such intelligent decisions, devices need to transfer and communicate data with each other. However, as the number of devices involved in IIoT networks grows and the methods of connection diversify, traditional security frameworks face many shortcomings, including vulnerability to attack, data lags, inefficient data sharing, and lack of proper authentication. Blockchain technology has the potential to enable safe distribution of the big data generated by the IIoT. Prevailing blockchain data-sharing methods concentrate only on the data interchanged among parties, not on the efficiency of sharing and storage. Hence, an element-based K-harmonic means clustering algorithm (CA) is proposed for the effective sharing of data among entities, along with an algorithm named underweight data block (UDB) for overcoming the obstacle of storage space. The performance metrics considered for the evaluation of the proposed framework are the sum of squared error (SSE), time complexity with respect to different m values, and storage complexity with CPU utilization. The results were obtained in the MATLAB 2018a simulation environment. The proposed model offers better blockchain-based sharing and storing, which is appropriate for the IIoT.
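For reference, the K-harmonic means performance measure that such clustering variants optimize can be sketched as the harmonic average of the distances from each point to all k centers; the exponent p and the data below are illustrative assumptions.

import numpy as np

def khm_objective(X: np.ndarray, centers: np.ndarray, p: float = 2.0) -> float:
    # distances from every point to every center, kept strictly positive
    d = np.maximum(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), 1e-12)
    k = centers.shape[0]
    return float(np.sum(k / np.sum(d ** (-p), axis=1)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-in feature vectors for IIoT data blocks
centers = X[rng.choice(len(X), size=4, replace=False)]
print(khm_objective(X, centers))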


Asunto(s)
Cadena de Bloques , Industrias , Algoritmos , Benchmarking , Macrodatos , Seguridad Computacional
20.
PeerJ Comput Sci ; 8: e1086, 2022.
Article in English | MEDLINE | ID: mdl-36262154

ABSTRACT

Recently, deepfake technology has become a popular technique for swapping faces in images or videos, creating forged data that can mislead society. Detecting the originality of a video is a critical process due to the harmful effects of such forged imagery. Various image processing techniques have been implemented for the detection of forged images or videos, but existing methods are ineffective in detecting new threats or false images. This article proposes You Only Look Once-Local Binary Pattern Histogram (YOLO-LBPH) to detect fake videos. YOLO is used to detect the face in an image or a frame of a video. The spatial features are extracted from the face image using an EfficientNet-B5 method. The extracted spatial features are fed as input to the Local Binary Pattern Histogram to extract temporal features. The proposed YOLO-LBPH is implemented using the large-scale deepfake forensics (DF) dataset known as CelebDF-FaceForensics++(c23), which is a combination of FaceForensics++(c23) and Celeb-DF. As a result, the precision score is 86.88% on the CelebDF-FaceForensics++(c23) dataset, 88.9% on the DFFD dataset, and 91.35% on the CASIA-WebFace dataset. Similarly, the recall is 92.45% on the CelebDF-FaceForensics++(c23) dataset, 93.76% on the DFFD dataset, and 94.35% on the CASIA-WebFace dataset.
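A minimal sketch of the local binary pattern histogram step on a single grayscale face crop is shown below; the neighborhood parameters and the uniform-pattern bin count are common defaults assumed for illustration, not the paper's exact configuration.

import numpy as np
from skimage.feature import local_binary_pattern

face = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in for a YOLO-cropped face frame
P, R = 8, 1                                                 # 8 neighbors at radius 1
lbp = local_binary_pattern(face, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist)                                                 # per-frame texture descriptor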
