Results 1 - 20 of 722
1.
Sci Rep ; 14(1): 22105, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333306

ABSTRACT

Due to the high cost of equipment and the constraints of shooting conditions, obtaining aerial infrared images of specific targets is very challenging. Most methods that use Generative Adversarial Networks to translate visible images to infrared depend heavily on registered data and struggle to handle the diversity and complexity of scenes containing aerial infrared targets. This paper proposes a one-sided, end-to-end, unpaired aerial visible-to-infrared image translation algorithm, termed AerialIRGAN. AerialIRGAN introduces a dual-encoder structure, where one encoder is designed based on the Segment Anything Model to extract deep semantic features from visible images, and the other is designed based on UniRepLKNet to capture small-scale and sparse patterns from visible images. Subsequently, AerialIRGAN constructs a bridging module to deeply integrate the features of both encoders with their corresponding decoders. Finally, AerialIRGAN proposes a structural appearance consistency loss to guide the synthetic infrared images to maintain the structure of the source image while exhibiting distinct infrared characteristics. The experimental results show that, compared with existing typical infrared image generation algorithms, the proposed method generates higher-quality infrared images and achieves better performance in both subjective visual assessment and objective metric evaluation.
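
The abstract does not spell out the structural appearance consistency loss; the sketch below shows one plausible, edge-based reading of it in PyTorch. The Sobel operator and the function names are assumptions for illustration, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def sobel_edges(x):
    # x: (N, 1, H, W) grayscale tensor; returns a gradient-magnitude (edge) map
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def structural_appearance_consistency_loss(visible, fake_ir):
    # Encourage the synthetic IR image to keep the source structure (edges)
    # while leaving its appearance (intensities) free to differ.
    vis_gray = visible.mean(dim=1, keepdim=True)
    ir_gray = fake_ir.mean(dim=1, keepdim=True)
    return F.l1_loss(sobel_edges(vis_gray), sobel_edges(ir_gray))
```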

2.
Heliyon ; 10(17): e37163, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39296212

ABSTRACT

As facial modification technology advances rapidly, it challenges the methods used to detect fake faces. The advent of deep learning and AI-based technologies has led to counterfeit photographs that are more difficult to distinguish from real ones. Existing deepfake detection systems excel at spotting fake content of low visual quality that is easily recognized through visual artifacts. This study employs an active forensic strategy, a Compact Ensemble-based Discriminator architecture using Deep Conditional Generative Adversarial Networks (CED-DCGAN), to identify real-time deepfakes in video conferencing. Because technologies for creating convincing fakes are improving rapidly, DCGAN focuses on feature-level video deepfake detection. As a first step towards recognizing DCGAN-generated images, real-time video is split into frames containing the essential elements, and these frames are used to train an ensemble-based discriminator as a classifier. The up-sampling operations that are standard in GAN pipelines for producing large amounts of fake video data leave spectral anomalies. The Compact Ensemble Discriminator (CED) concentrates on the most distinguishing features between natural and synthetic images, giving the generators a robust training signal. Empirical results on publicly available datasets show that the proposed CED-DCGAN technique outperforms state-of-the-art methods, successfully detects high-fidelity deepfakes in video conferencing, and generalizes well compared with other techniques. The proposed approach is implemented in Python and achieves an accuracy of 98.23%.

3.
PeerJ Comput Sci ; 10: e2181, 2024.
Article in English | MEDLINE | ID: mdl-39314737

ABSTRACT

Synthetic images, referred to as deepfakes, are created using computer graphics modeling and artificial intelligence techniques. They modify human features by using generative models and deep learning algorithms, posing risks of violating social media regulations and spreading false information. To address these concerns, the study proposes an improved generative adversarial network (GAN) model that increases accuracy in differentiating between real and fake images, focusing on data augmentation and label smoothing strategies for GAN training. The study utilizes a dataset containing human faces and employs DCGAN (deep convolutional generative adversarial network) as the base model. Compared with traditional GANs, the proposed GAN performs better on commonly used metrics, i.e., Fréchet Inception Distance (FID) and accuracy. The model's effectiveness is demonstrated through evaluation on the Flickr-Faces Nvidia dataset and the Fakefaces dataset, achieving an FID score of 55.67, an accuracy of 98.82%, and an F1-score of 0.99 in detection. The study fine-tunes the model parameters to reach optimal settings, thereby reducing risks in synthetic image generation. The article introduces an effective framework for both image manipulation and detection.
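
To make the two training strategies named above concrete, here is a minimal sketch of a discriminator update with one-sided label smoothing and a simple flip augmentation; the 0.9 target, the augmentation choice, and the DCGAN-style `D`/`G` interfaces are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def augment(x):
    # Lightweight augmentation: randomly flip the whole batch horizontally
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[3])
    return x

def discriminator_step(D, G, real_images, z_dim=100, real_label=0.9):
    """One discriminator update with one-sided label smoothing (real targets = 0.9)."""
    real = augment(real_images)                       # augmented real batch
    noise = torch.randn(real.size(0), z_dim, 1, 1, device=real.device)
    fake = G(noise).detach()                          # keep the generator fixed here

    real_logits, fake_logits = D(real), D(fake)
    loss_real = bce(real_logits, torch.full_like(real_logits, real_label))  # smoothed target
    loss_fake = bce(fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake
```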

4.
Sensors (Basel) ; 24(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39338901

ABSTRACT

Advancements in wireless communication and automation have revolutionized mobility systems, notably through autonomous vehicles and unmanned aerial vehicles (UAVs). UAV spatial coordinates, determined via Global Positioning System (GPS) signals, are susceptible to cyberattacks due to unencrypted and unauthenticated transmissions with GPS spoofing being a significant threat. To mitigate these vulnerabilities, intrusion detection systems (IDSs) for UAVs have been developed and enhanced using machine learning (ML) algorithms. However, Adversarial Machine Learning (AML) has introduced new risks by exploiting ML models. This study presents a UAV-IDS employing AML methodology to enhance the detection and classification of GPS spoofing attacks. The key contribution is the development of an AML detection model that significantly improves UAV system robustness and security. Our findings indicate that the model achieves a detection accuracy of 98%, demonstrating its effectiveness in managing large-scale datasets and complex tasks. This study emphasizes the importance of physical layer security for enhancing IDSs in UAVs by introducing a novel detection model centered on an adversarial training defense method and advanced deep learning techniques.
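
The abstract names an adversarial-training defense without detailing it; a generic FGSM-style adversarial training step, shown here only as an assumed illustration of the idea rather than the authors' method, might look like this.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.05):
    """Craft FGSM adversarial examples for GPS/UAV feature vectors."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    """Train the IDS classifier on a 50/50 mix of clean and adversarial samples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * nn.functional.cross_entropy(model(x), y) \
         + 0.5 * nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```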

5.
Front Physiol ; 15: 1408832, 2024.
Article in English | MEDLINE | ID: mdl-39219839

ABSTRACT

Introduction: Lung image segmentation plays an important role in computer-aided pulmonary disease diagnosis and treatment. Methods: This paper explores lung CT image segmentation with generative adversarial networks. We employ a variety of generative adversarial networks and use their image-translation capability to perform segmentation: the generative adversarial network translates the original lung image into the segmented image. Results: The generative adversarial network-based segmentation method is tested on a real lung image dataset. Experimental results show that the proposed method outperforms the state-of-the-art method. Discussion: The generative adversarial network-based method is effective for lung image segmentation.

6.
Heliyon ; 10(16): e36665, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39262956

ABSTRACT

In the evolving landscape of deep learning technologies, the emergence of Deepfakes and synthetic media is becoming increasingly prominent within digital media production. This research addresses the limitations inherent in existing face image generation algorithms based on Generative Adversarial Networks (GAN), particularly the challenges of domain irrelevancy and inadequate facial detail representation. The study introduces an enhanced face image generation algorithm, aiming to refine the CycleGAN framework. The enhancement involves a two-fold strategy: firstly, the generator's architecture is refined through the integration of an attention mechanism and adaptive residual blocks, enabling the extraction of more nuanced facial features. Secondly, the discriminator's accuracy in distinguishing real from synthetic images is improved by incorporating a relative loss concept into the loss function. Additionally, this study presents a novel model training approach that incorporates age constraints, thereby mitigating the effects of age variations on the synthesized images. The effectiveness of the proposed algorithm is empirically validated through comparative analysis with existing methodologies, utilizing the CelebA dataset. The results demonstrate that the proposed algorithm significantly enhances the realism of generated face images, outperforming current methods in terms of Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), while also achieving notable improvements in subjective visual quality. The implementation of this advanced method is anticipated to substantially elevate the efficiency and quality of digital media production, contributing positively to the broader field of digital media creation.
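
The "relative loss concept" is not specified further in the abstract; one common instantiation, the relativistic average discriminator and generator losses, is sketched below as an assumed example of how such a term can be incorporated.

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits, fake_logits):
    """Discriminator: real samples should look *more* realistic than the average fake."""
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.zeros_like(fake_logits))
    return (loss_real + loss_fake) / 2

def relativistic_g_loss(real_logits, fake_logits):
    """Generator: fake samples should look more realistic than the average real."""
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits - fake_logits.mean(), torch.zeros_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits - real_logits.mean(), torch.ones_like(fake_logits))
    return (loss_real + loss_fake) / 2
```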

7.
Neural Netw ; 180: 106665, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39241437

ABSTRACT

In brain-computer interface (BCI), building accurate electroencephalogram (EEG) classifiers for specific mental tasks is critical for BCI performance. The classifiers are developed by machine learning (ML) and deep learning (DL) techniques, requiring a large dataset for training to build reliable and accurate models. However, collecting large enough EEG datasets is difficult due to intra-/inter-subject variabilities and experimental costs. This leads to the data scarcity problem, which causes overfitting issues to training samples, resulting in reducing generalization performance. To solve the EEG data scarcity problem and improve the performance of the EEG classifiers, we propose a novel EEG data augmentation (DA) framework using conditional generative adversarial networks (cGANs). An experimental study is implemented with two public EEG datasets, including motor imagery (MI) tasks (BCI competition IV IIa and III IVa), to validate the effectiveness of the proposed EEG DA method for the EEG classifiers. To evaluate the proposed cGAN-based DA method, we tested eight EEG classifiers for the experiment, including traditional MLs and state-of-the-art DLs with three existing EEG DA methods. Experimental results showed that most DA methods with proper DA proportion in the training dataset had higher classification performances than without DA. Moreover, applying the proposed DA method showed superior classification performance improvement than the other DA methods. This shows that the proposed method is a promising EEG DA method for enhancing the performances of the EEG classifiers in MI-based BCIs.
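
As a concrete illustration of mixing synthetic data at a chosen "DA proportion", the sketch below appends class-conditional epochs from a trained cGAN generator to the real training set; the array shapes and the `G(z, labels)` interface are assumptions, not the paper's implementation.

```python
import numpy as np
import torch

def augment_with_cgan(G, X_train, y_train, da_proportion=0.5, z_dim=100, device="cpu"):
    """Append synthetic epochs so that the synthetic/real ratio equals `da_proportion`.

    X_train: (n_trials, n_channels, n_samples) real EEG epochs
    y_train: (n_trials,) integer class labels (e.g., MI classes)
    G(z, labels) -> synthetic epochs with the same shape as the real ones (assumed API)
    """
    n_synthetic = int(da_proportion * len(X_train))
    labels = np.random.choice(np.unique(y_train), size=n_synthetic)
    z = torch.randn(n_synthetic, z_dim, device=device)
    with torch.no_grad():
        X_fake = G(z, torch.as_tensor(labels, device=device)).cpu().numpy()
    X_aug = np.concatenate([X_train, X_fake], axis=0)
    y_aug = np.concatenate([y_train, labels], axis=0)
    return X_aug, y_aug
```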

8.
Digit Health ; 10: 20552076241277440, 2024.
Article in English | MEDLINE | ID: mdl-39229464

ABSTRACT

Objective: Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target dataset follow the same probability distribution and when this assumption is not satisfied their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods: StAC-DA implements an image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, where images from the source domain are translated to the target domain in pixel space by implementing a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and target domain images in an adversarial manner to produce probable segmentations for the target domain. Results: The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, being ranked first in the segmentation of the ascending aorta when adapting from Magnetic Resonance Imaging (MRI) to Computed Tomography (CT) domain and from CT to MRI domain. Conclusions: The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.

9.
Front Microbiol ; 15: 1453870, 2024.
Article in English | MEDLINE | ID: mdl-39224212

ABSTRACT

The synthesis of pseudo-healthy images, involving the generation of healthy counterparts for pathological images, is crucial for data augmentation, clinical disease diagnosis, and understanding pathology-induced changes. Recently, Generative Adversarial Networks (GANs) have shown substantial promise in this domain. However, the heterogeneity of intracranial infection symptoms caused by various infections complicates the model's ability to accurately differentiate between pathological and healthy regions, leading to the loss of critical information in healthy areas and impairing the precise preservation of the subject's identity. Moreover, for images with extensive lesion areas, the pseudo-healthy images generated by these methods often lack distinct organ and tissue structures. To address these challenges, we propose a three-stage method (localization, inpainting, synthesis) that achieves nearly perfect preservation of the subject's identity through precise pseudo-healthy synthesis of the lesion region and its surroundings. The process begins with a Segmentor, which identifies the lesion areas and differentiates them from healthy regions. Subsequently, a Vague-Filler fills the lesion areas to construct a healthy outline, thereby preventing structural loss in cases of extensive lesions. Finally, leveraging this healthy outline, a Generative Adversarial Network integrated with a contextual residual attention module generates a more realistic and clearer image. Our method was validated through extensive experiments across different modalities within the BraTS2021 dataset, achieving a healthiness score of 0.957. The visual quality of the generated images markedly exceeded those produced by competing methods, with enhanced capabilities in repairing large lesion areas. Further testing on the COVID-19-20 dataset showed that our model could effectively partially reconstruct images of other organs.

10.
EClinicalMedicine ; 75: 102779, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39252864

ABSTRACT

Background: Adolescent idiopathic scoliosis (AIS) is the most common spinal disorder in children, characterized by insidious onset and rapid progression, which can lead to severe consequences if not detected in a timely manner. Currently, the diagnosis of AIS primarily relies on X-ray imaging. However, due to limitations in healthcare access and concerns over radiation exposure, this diagnostic method cannot be widely adopted. Therefore, we developed and validated a screening system using deep learning technology, capable of generating virtual X-ray images (VXI) from two-dimensional Red Green Blue (2D-RGB) images captured by a smartphone or camera to assist spine surgeons in the rapid, accurate, and non-invasive assessment of AIS. Methods: We included 2397 patients with AIS and 48 potential patients with AIS who visited four medical institutions in mainland China from June 11th 2014 to November 28th 2023. Participants' data included standing full-spine X-ray images captured by radiology technicians and 2D-RGB images taken by spine surgeons using a camera. We developed a deep learning model based on conditional generative adversarial networks (cGAN), called Swin-pix2pix, to generate VXI on retrospective training (n = 1842) and validation (n = 100) datasets, then validated the performance of VXI in quantifying the curve type and severity of AIS on retrospective internal (n = 100), external (n = 135), and prospective test datasets (n = 268). The prospective test dataset included 268 participants treated in Nanjing, China, from April 19th, 2023, to November 28th, 2023, comprising 220 patients with AIS and 48 potential patients with AIS. Their data underwent strict quality control to ensure optimal data quality and consistency. Findings: Our Swin-pix2pix model generated realistic VXI, with the mean absolute error (MAE) for predicting the main and secondary Cobb angles of AIS significantly lower than that of other baseline cGAN models, at 3.2° and 3.1° on the prospective test dataset. The diagnostic accuracy for scoliosis severity grading exceeded that of two spine surgery experts, with accuracy of 0.93 (95% CI [0.91, 0.95]) for the main curve and 0.89 (95% CI [0.87, 0.91]) for the secondary curve. For main curve position and curve classification, the predictive accuracy of the Swin-pix2pix model also surpassed that of the baseline cGAN models, with accuracy of 0.93 (95% CI [0.90, 0.95]) for the thoracic curve and 0.97 (95% CI [0.96, 0.98]), achieving satisfactory results on three external datasets as well. Interpretation: Our Swin-pix2pix model holds promise for using a single photo taken with a smartphone or camera to rapidly assess AIS curve type and severity without radiation, enabling large-scale screening. However, limited data quality and quantity, a homogeneous participant population, and rotational errors during imaging may affect the applicability and accuracy of the system, requiring further improvement in the future. Funding: National Key R&D Program of China, Natural Science Foundation of Jiangsu Province, China Postdoctoral Science Foundation, Nanjing Medical Science and Technology Development Foundation, Jiangsu Provincial Key Research and Development Program, and Jiangsu Provincial Medical Innovation Centre of Orthopedic Surgery.

11.
Article in English | MEDLINE | ID: mdl-39268356

ABSTRACT

The reconstruction kernel used in computed tomography (CT) image generation determines the texture of the image. Consistency in reconstruction kernels is important because the underlying CT texture can affect measurements during quantitative image analysis. Harmonization (i.e., kernel conversion) minimizes differences in measurements due to inconsistent reconstruction kernels. Existing methods investigate harmonization of CT scans within a single manufacturer or across multiple manufacturers. However, these methods require paired scans of hard and soft reconstruction kernels that are spatially and anatomically aligned, and a large number of models must be trained across different kernel pairs within manufacturers. In this study, we adopt an unpaired image translation approach to investigate harmonization between and across reconstruction kernels from different manufacturers by constructing a multipath cycle generative adversarial network (GAN). We use hard and soft reconstruction kernels from the Siemens and GE vendors in the National Lung Screening Trial dataset, taking 50 scans from each reconstruction kernel to train the multipath cycle GAN. To evaluate the effect of harmonization, we harmonize 50 scans each from the Siemens hard kernel, GE soft kernel, and GE hard kernel to a reference Siemens soft kernel (B30f) and evaluate percent emphysema. We fit a linear model considering age, smoking status, sex, and vendor, and perform an analysis of variance (ANOVA) on the emphysema scores. Our approach minimizes differences in emphysema measurement and highlights the impact of age, sex, smoking status, and vendor on emphysema quantification.
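
The emphysema analysis described above, a linear model over age, smoking status, sex, and vendor followed by ANOVA, can be outlined with statsmodels as below; the column names and file name are placeholders, not the study's actual data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per harmonized scan with these placeholder columns:
# emphysema_pct, age, sex, smoking_status, vendor
df = pd.read_csv("emphysema_scores.csv")  # hypothetical file

model = smf.ols(
    "emphysema_pct ~ age + C(sex) + C(smoking_status) + C(vendor)", data=df
).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type-II ANOVA on the fitted linear model
print(anova_table)
```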

12.
Diagnostics (Basel) ; 14(17)2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39272741

ABSTRACT

The current study proposed and evaluated the "residual squeeze and excitation attention gate" (rSEAG), a novel network that can improve image quality by reducing distortion attributed to artifacts. The method was established by modifying the Cycle Generative Adversarial Network (cycleGAN)-based generator network, using projection data for pre-reconstruction processing in digital breast tomosynthesis. Residual squeeze and excitation was installed in the bridge of the generator network, and the attention gate was installed in the skip connection between the encoder and decoder. Based on the radiation dose index (exposure index and deviation index) incident on the detector, the cases approved by the ethics committee and used for the study were classified as reference (675 projection images) and object (675 projection images). Unsupervised data containing a mixture of cases with and without masses were used. The cases were trained using cycleGAN with rSEAG and with the conventional networks (ResUNet and U-Net). For testing, predictive processing was performed on cases (60 projection images) that were not used for learning. Images were generated using filtered backprojection reconstruction (kernel: Ramachandran and Lakshminarayanan) from the test projection data, both with and without pre-reconstruction processing (evaluation: in-focus plane). Distortion was evaluated using perception-based image quality evaluation (PIQE) analysis, texture analysis (features: "Homogeneity" and "Contrast"), and a statistical model with a Gumbel distribution. rSEAG yielded a low PIQE value. Texture analysis showed that rSEAG and the network without cycleGAN were similar in terms of the "Contrast" feature. In dense breasts, ResUNet had the lowest "Contrast" feature and U-Net showed differences between cases. In the analysis of maximal variations with the Gumbel plot, rSEAG reduced the high-frequency ripple artifacts. In this study, rSEAG could improve distortion and reduce ripple artifacts.
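
A standard squeeze-and-excitation block with a residual connection, the kind of component the rSEAG bridge builds on, can be sketched as follows; the reduction ratio and module layout are generic assumptions rather than the paper's exact design.

```python
import torch.nn as nn

class ResidualSEBlock(nn.Module):
    """Residual squeeze-and-excitation: channel-wise recalibration plus identity skip."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x + x * w                             # residual connection
```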

13.
ACS Appl Mater Interfaces ; 16(37): 49673-49686, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39231373

ABSTRACT

In this paper, a multineural network fusion freestyle metasurface on-demand design method is proposed. The on-demand design method involves rapidly generating corresponding metasurface patterns based on the user-defined spectrum. The generated patterns are then input into a simulator to predict their corresponding S-parameter spectrogram, which is subsequently analyzed against the real S-parameter spectrogram to verify whether the generated metasurface patterns meet the desired requirements. The methodology is based on three neural network models: a Wasserstein Generative Adversarial Network model with a U-net architecture (U-WGAN) for inverse structural design, a Variational Autoencoder (VAE) model for compression, and an LSTM + Attention model for forward S-parameter spectrum prediction validation. The U-WGAN is utilized for on-demand reverse structural design, aiming to rapidly discover high-fidelity metasurface patterns that meet specific electromagnetic spectrum responses. The VAE, as a probabilistic generation model, serves as a bridge, mapping input data to latent space and transforming it into latent variable data, providing crucial input for a forward S-parameter spectrum prediction model. The LSTM + Attention network, acting as a forward S-parameter spectrum prediction model, can accurately and efficiently predict the S-parameter spectrum corresponding to the latent variable data and compare it with the real spectrum. In addition, the digits "0" and "1" are used in the design to represent vacuum and metallic materials, respectively, and a 10 × 10 cell array of freestyle metasurface patterns is constructed. The significance of the research method proposed in this paper lies in the following: (1) The freestyle metasurface design significantly expands the possibility of metamaterial design, enabling the creation of diverse metasurface structures that are difficult to achieve with traditional methods. (2) The on-demand design approach can generate high-fidelity metasurface patterns that meet the expected electromagnetic characteristics and responses. (3) The fusion of multiple neural networks demonstrates high flexibility, allowing for the adjustment of network structures and training methods based on specific design requirements and data characteristics, thus better accommodating different design problems and optimization objectives.

14.
Comput Methods Programs Biomed ; 256: 108363, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39182250

ABSTRACT

BACKGROUND AND OBJECTIVE: Generative Deep Learning has emerged in recent years as a significant player in the Artificial Intelligence field. Synthesizing new data while maintaining the features of reality has revolutionized the field of Deep Learning, proving to be particularly useful in contexts where obtaining data is challenging. The objective of this study is to employ the DoppelGANger algorithm, a cutting-edge approach based on Generative Adversarial Networks for time series, to enhance patient admissions forecasting in a hospital Emergency Department. METHODS: We employed the DoppelGANger algorithm in a sequential methodology, conditioning generated time series with unique attributes to optimize data utilization. After confirming the successful creation of synthetic data with new attribute values, we adopted the Train-Synthetic-Test-Real framework to ensure the reliability of our synthetic data validation. We then augmented the original series with synthetic data to enhance the Prophet model's performance. This process was applied to two datasets derived from the original: one with four years of training followed by one year of testing, and another with three years of training and two years of testing. RESULTS: The experimental results show that the generative model outperformed Prophet on the forecasting task, improving the SMAPE from 7.30 to 6.99 with the four-year training set, and from 22.84 to 7.41 for the three-year training set, all in daily aggregations. For the data replacement task, the Prophet SMAPE values decreased to 6.84 and 7.18 for four and three-year sets on the same aggregation. Additionally, data augmentation reduced the SMAPE to 6.79 for a one-year test set and achieved 8.56 for the two-year test set, surpassing the performance achieved by the same Prophet model when trained only on real data. Results for the remaining aggregations were consistent. CONCLUSIONS: The findings of this study suggest that employing a generative algorithm to extend a training dataset can effectively enhance predictive models within the domain of Emergency Department admissions. The improvement can lead to more efficient resource allocation and patient management.
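
The SMAPE figures quoted above can be reproduced in outline with the usual symmetric-MAPE formula; the helper below and the toy admission counts are illustrative only, and the paper's exact SMAPE variant may differ.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)

# Hypothetical example: one week of daily ED admissions vs. a forecast
actual = [112, 98, 105, 120, 131, 97, 88]
forecast = [108, 101, 110, 115, 126, 102, 92]
print(round(smape(actual, forecast), 2))  # a single-digit SMAPE, as reported in the study
```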


Subjects
Algorithms; Artificial Intelligence; Emergency Service, Hospital; Forecasting; Humans; Deep Learning; Neural Networks, Computer; Patient Admission/statistics & numerical data; Reproducibility of Results
15.
Food Chem ; 461: 140919, 2024 Dec 15.
Article in English | MEDLINE | ID: mdl-39181057

ABSTRACT

The authenticity of salted goose products is a concern for consumers. This study describes an integrated deep-learning framework based on a generative adversarial network and combines it with data from headspace solid-phase microextraction/gas chromatography-mass spectrometry, headspace gas chromatography-ion mobility spectrometry, E-nose, E-tongue, quantitative descriptive analysis, and free amino acid and 5'-nucleotide analyses to achieve reliable discrimination of four salted goose breeds. Volatile and non-volatile compounds, sensory characteristics, and intelligent-sensory characteristics were analyzed. A preliminary composite dataset was generated with InfoGAN and provided to several base classifiers for training. The prediction results were fused via dynamic weighting to produce an integrated model prediction. An ablation study demonstrated that ensemble learning was indispensable for improving the generalization capability of the model. The framework achieves an accuracy of 95%, a root mean square error (RMSE) of 0.080, a precision of 0.9450, a recall of 0.9470, and an F1-score of 0.9460.
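
As an illustration of the dynamic-weighting fusion step, the sketch below weights each base classifier's class probabilities by its validation accuracy; this weighting rule and the toy data are assumptions, not the paper's exact scheme.

```python
import numpy as np

def dynamic_weighted_fusion(prob_list, val_accuracies):
    """Fuse per-classifier probability matrices (n_samples, n_classes) into one prediction.

    prob_list: list of probability arrays from the base classifiers
    val_accuracies: validation accuracy of each base classifier, used as its weight
    """
    weights = np.asarray(val_accuracies, float)
    weights = weights / weights.sum()
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused.argmax(axis=1)

# Hypothetical example with three base classifiers and four goose breeds
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
print(dynamic_weighted_fusion(probs, val_accuracies=[0.92, 0.88, 0.95]))
```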


Subjects
Deep Learning; Gas Chromatography-Mass Spectrometry; Geese; Taste; Animals; Electronic Nose; Volatile Organic Compounds/chemistry; Humans; Chemometrics; Solid Phase Microextraction; Breeding
16.
PeerJ Comput Sci ; 10: e2162, 2024.
Article in English | MEDLINE | ID: mdl-39145212

ABSTRACT

In order to analyze the influence of deep learning models on detecting denial-of-service (DoS) attacks, this article first examines the concepts and attack strategies of DoS attacks before looking into current detection methodologies. In response to the limitations identified in this investigation, a distributed DoS attack detection system based on deep learning is established. This system can quickly and accurately identify distributed DoS attack traffic in the network under inspection and then promptly raise an alarm. Then, a model called the Improved Conditional Wasserstein Generative Adversarial Network with Inverter (ICWGANInverter) is proposed to address the incompleteness of network traffic in DoS attacks. This model automatically learns high-level abstract information from the original data and then uses the reconstruction error to identify the best classification label. It is then tested on the intrusion detection dataset NSL-KDD. The findings demonstrate that the mean squared error of continuous feature reconstruction in the sub-datasets KDDTest+ and KDDTest-21 steadily increases as the noise factor increases. All of the receiver operating characteristic (ROC) curves lie above the diagonal, and the overall area under the ROC curve (AUC) values of the macro-average and micro-average are above 0.8, which demonstrates that the ICWGANInverter model has excellent detection performance in both single-category attack detection and overall attack detection. This model achieves a higher detection accuracy than other models, reaching 87.79%. This demonstrates that the approach suggested in this article offers greater benefits for detecting DoS attacks.
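
The reconstruction-error idea behind ICWGANInverter, scoring a sample by how poorly it is reproduced from its latent representation, can be sketched generically as below; the `encode`/`decode` interfaces and the threshold rule are assumptions, not the paper's implementation.

```python
import numpy as np

def reconstruction_scores(encode, decode, X):
    """Anomaly score = mean squared reconstruction error per sample.

    encode(X) -> latent codes, decode(codes) -> reconstructions (assumed interfaces)
    """
    X_hat = decode(encode(X))
    return np.mean((X - X_hat) ** 2, axis=1)

def classify_by_reconstruction(encode, decode, X, threshold):
    """Label a sample as an attack (1) when its reconstruction error exceeds the threshold."""
    return (reconstruction_scores(encode, decode, X) > threshold).astype(int)
```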

17.
PeerJ Comput Sci ; 10: e2009, 2024.
Article in English | MEDLINE | ID: mdl-39145230

ABSTRACT

With the popularity of Internet applications, a large amount of Internet behavior log data is generated. Abnormal behaviors of corporate employees may lead to Internet security issues and data leakage incidents. To ensure the safety of information systems, it is important to research anomaly prediction of Internet behaviors. Because labeling big data manually is costly, an unsupervised generative model, Anomaly Prediction of Internet Behavior based on Generative Adversarial Networks (APIBGAN), which works with only a small amount of labeled data, is proposed to predict anomalies in Internet behaviors. After the input Internet behavior data is preprocessed by the proposed method, the data-generating generative adversarial network (DGGAN) in APIBGAN learns the distribution of real Internet behavior data, leveraging the powerful feature extraction of neural networks to generate Internet behavior data from random noise. APIBGAN uses these labeled generated data as a benchmark to complete distance-based anomaly prediction. Three categories of Internet behavior sampling data from corporate employees are employed to train APIBGAN: (1) online behavior data of an individual in a department; (2) online behavior data of multiple employees in the same department; and (3) online behavior data of multiple employees in different departments. The prediction scores for the three categories of Internet behavior data are 87.23%, 85.13%, and 83.47%, respectively, all above the highest score of 81.35% obtained by the comparison method based on Isolation Forests in the CCF Big Data & Computing Intelligence Contest (CCF-BDCI). The experimental results validate that APIBGAN effectively predicts outliers in Internet behaviors through a GAN composed of simple three-layer fully connected neural networks (FNNs). APIBGAN can be used not only for anomaly prediction of Internet behaviors but also for anomaly prediction in many other applications that have big data infeasible to label manually. Above all, APIBGAN has broad application prospects for anomaly prediction, and our work also provides valuable input for GAN-based anomaly prediction.
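
The abstract states that the GAN is composed of simple three-layer fully connected networks; a minimal PyTorch version of such a generator/discriminator pair is sketched below, with the noise, hidden, and feature dimensions chosen arbitrarily for illustration.

```python
import torch.nn as nn

def make_generator(noise_dim=32, hidden=64, feature_dim=16):
    """Three-layer FNN generator: noise -> synthetic Internet-behavior feature vector."""
    return nn.Sequential(
        nn.Linear(noise_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, feature_dim),
    )

def make_discriminator(feature_dim=16, hidden=64):
    """Three-layer FNN discriminator: feature vector -> real/fake logit."""
    return nn.Sequential(
        nn.Linear(feature_dim, hidden), nn.LeakyReLU(0.2),
        nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
        nn.Linear(hidden, 1),
    )
```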

18.
J Adv Res ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39111628

ABSTRACT

INTRODUCTION: The blood-brain barrier (BBB) serves as a critical structural barrier and impedes the entry of most neurotherapeutic drugs into the brain. This poses substantial challenges for central nervous system (CNS) drug development, as there is a lack of efficient drug delivery technologies to overcome this obstacle. BBB penetrating peptides (BBBPs) hold promise in overcoming the BBB and facilitating the delivery of drug molecules to the brain. Therefore, precise identification of BBBPs has become a crucial step in CNS drug development. However, most computational methods are designed based on conventional models that inadequately capture the intricate interaction between BBBPs and the BBB. Moreover, the performance of these methods was further hampered by unbalanced datasets. OBJECTIVES: This study addresses the problem of unbalanced datasets in BBBP prediction and proposes a powerful predictor for efficiently and accurately identifying BBBPs, as well as generating analogous BBBPs. METHODS: A transformer-based deep learning model, DeepB3P, was proposed for predicting BBBP. The feedback generative adversarial network (FBGAN) model was employed to effectively generate analogous BBBPs, addressing data imbalance. RESULTS: The FBGAN model possesses the ability to generate novel BBBP-like peptides, effectively mitigating the data imbalance in BBBP prediction. Extensive experiments on benchmarking datasets demonstrated that DeepB3P outperforms other BBBP prediction models by approximately 9.09%, 4.55% and 9.41% in terms of specificity, accuracy, and Matthew's correlation coefficient, respectively. For accelerating the progress in BBBP identification and CNS drug design, the proposed DeepB3P was implemented as a webserver, which is accessible at http://cbcb.cdutcm.edu.cn/deepb3p/. CONCLUSION: The interpretable analyses provided by DeepB3P offer valuable insights and enhance downstream analyses for BBBP identification. Moreover, the BBBP-like peptides generated by FBGAN hold potential as candidates for CNS drug development.

19.
Math Biosci Eng ; 21(6): 6190-6224, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-39176424

ABSTRACT

In recent years, deep learning (DL) techniques have achieved remarkable success in various fields of computer vision. This progress was attributed to the vast amounts of data utilized to train these models, as they facilitated the learning of more intricate and detailed feature information about target objects, leading to improved model performance. However, in most real-world tasks, it was challenging to gather sufficient data for model training. Insufficient datasets often resulted in models prone to overfitting. To address this issue and enhance model performance, generalization ability, and mitigate overfitting in data-limited scenarios, image data augmentation methods have been proposed. These methods generated synthetic samples to augment the original dataset, emerging as a preferred strategy to boost model performance when data was scarce. This review first introduced commonly used and highly effective image data augmentation techniques, along with a detailed analysis of their advantages and disadvantages. Second, this review presented several datasets frequently employed for evaluating the performance of image data augmentation methods and examined how advanced augmentation techniques can enhance model performance. Third, this review discussed the applications and performance of data augmentation techniques in various computer vision domains. Finally, this review provided an outlook on potential future research directions for image data augmentation methods.

20.
Sensors (Basel) ; 24(15)2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39123847

ABSTRACT

Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still have limitations in restoring satisfactory images. In addition, most existing methods can only decompose a blurred image into sharp frames at a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames at flexible frame rates. Our framework plays a min-max game among a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor provide feedback to the generator on producing realistic, sharp images for a given time code. To provide adequate feedback to the generator, we propose a critic-guided (CG) loss computed through the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competitor, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by 16.67%, 9.16%, and 36.61%, respectively. For the B-Aist++ test set, our method shows improvements of 6.99%, 2.38%, and 17.05% in PSNR, SSIM, and LPIPS, respectively, compared to the best competing method.
