Results 1 - 4 of 4
1.
Opt Express ; 31(12): 18964-18992, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37381325

ABSTRACT

Holographic tomography (HT) is a measurement technique that generates phase images, which often contain high noise levels and irregularities. Because of the nature of the phase retrieval algorithms used in HT data processing, the phase has to be unwrapped before tomographic reconstruction. Conventional unwrapping algorithms lack noise robustness, reliability, speed, and the potential for automation. To address these problems, this work proposes a convolutional neural network-based pipeline consisting of two steps: denoising and unwrapping. Both steps are carried out with a U-Net architecture; for unwrapping, the architecture is additionally aided by Attention Gates (AG) and Residual Blocks (RB). Experiments show that the proposed pipeline enables the unwrapping of highly irregular, noisy, and complex experimental phase images captured in HT. This work thus formulates phase unwrapping as segmentation with a U-Net, aided by a pre-processing denoising step, and discusses the implementation of the AGs and RBs in an ablation study. Moreover, this is the first deep-learning-based solution trained solely on real images acquired with HT.
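Below is a minimal, hypothetical PyTorch sketch of the two modules named in the abstract above: a Residual Block and an additive Attention Gate of the kind often added to a U-Net decoder. It is not the authors' code; the module layouts, channel counts, and the assumption that the gating signal has already been resized to match the skip features are illustrative choices.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection (one common RB variant)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder signal g re-weights the encoder skip features x."""
    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, 1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, 1)
        self.psi = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(inter_channels, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, g, x):
        alpha = self.psi(self.w_g(g) + self.w_x(x))  # attention coefficients in [0, 1]
        return x * alpha                             # suppress irrelevant skip activations

if __name__ == "__main__":
    x = torch.rand(1, 64, 128, 128)    # skip-connection features
    g = torch.rand(1, 128, 128, 128)   # gating signal, assumed upsampled to x's spatial size
    print(ResidualBlock(64)(x).shape)              # torch.Size([1, 64, 128, 128])
    print(AttentionGate(128, 64, 32)(g, x).shape)  # torch.Size([1, 64, 128, 128])

In an unwrapping-by-segmentation pipeline of this kind, the network typically predicts an integer fringe-order map k so that the unwrapped phase equals the wrapped phase plus 2*pi*k, while the denoising U-Net runs beforehand as pre-processing.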

2.
Sensors (Basel) ; 22(21)2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36365830

ABSTRACT

Image super-resolution (ISR) technology aims to enhance resolution and improve image quality. It is widely applied to real-world image-processing applications, especially medical images, but has seen relatively little use in anime image production. Furthermore, contemporary ISR tools are mostly based on convolutional neural networks (CNNs), while few methods attempt to use transformers, which perform well in other advanced vision tasks. In this work, we propose an anime image super-resolution (AISR) method based on the Swin Transformer. The work was carried out in several stages. First, a shallow feature extraction step was employed to obtain a feature map of the input image's low-frequency information, which mainly captures the spatial distribution of detail (shallow features). Next, deep feature extraction was applied to extract the image's semantic information (deep features). Finally, the reconstruction stage combines the shallow and deep features and performs sub-pixel convolution to upsample the feature maps. The novelty of the proposal lies in the enhancement of the low-frequency information using a Gaussian filter and the introduction of different window sizes to replace the patch merging operations in the Swin Transformer. A high-quality anime dataset was constructed to support model robustness in the online regime. We trained our model on this dataset and tested its quality, performing anime image super-resolution at different magnifications (2×, 4×, 8×). The results were compared numerically and graphically with those delivered by conventional CNN-based and transformer-based methods, using the standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. The experiments and ablation study show that our proposal outperforms the alternatives.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Neural Networks, Computer , Electric Power Supplies
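The reconstruction stage described in the abstract above (shallow and deep features are fused, then upsampled by sub-pixel convolution) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the Swin Transformer deep extractor is stubbed out with plain convolutions, and the channel count and scale factor are assumptions.

import torch
import torch.nn as nn

class PixelShuffleUpsampler(nn.Module):
    """Sub-pixel convolution: a conv produces out_channels * scale**2 maps,
    and PixelShuffle rearranges them into a scale-times-larger image."""
    def __init__(self, channels, scale, out_channels=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, out_channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

class SRReconstruction(nn.Module):
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, 3, padding=1)  # shallow (low-frequency) features
        self.deep = nn.Sequential(                           # placeholder for Swin Transformer blocks
            *[nn.Conv2d(channels, channels, 3, padding=1) for _ in range(4)]
        )
        self.upsample = PixelShuffleUpsampler(channels, scale)

    def forward(self, lr_image):
        shallow = self.shallow(lr_image)
        deep = self.deep(shallow)
        return self.upsample(shallow + deep)  # fuse shallow and deep features, then upsample

if __name__ == "__main__":
    hr = SRReconstruction(scale=4)(torch.rand(1, 3, 64, 64))
    print(hr.shape)  # torch.Size([1, 3, 256, 256])

Sub-pixel convolution keeps all computation at the low input resolution and defers upsampling to the final layer, which is why it is a common choice for 2x/4x/8x magnification heads.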
3.
Sensors (Basel) ; 22(20)2022 Oct 13.
Article in English | MEDLINE | ID: mdl-36298117

ABSTRACT

Recently, the dangers associated with face generation technology have been attracting much attention in image processing and forensic science. Current face anti-spoofing methods based on Generative Adversarial Networks (GANs) suffer from defects such as overfitting and poor generalization. This paper proposes a new method that uses a one-class classification model to judge the authenticity of facial images, with the aim of producing a model that is as compatible as possible with other datasets and new data, rather than strongly depending on the dataset used for training. The proposed method has the following features: (a) we adopted various filter enhancement methods as basic pseudo-image generation methods for data enhancement; (b) an improved Multi-Channel Convolutional Neural Network (MCCNN) was adopted as the main network, making it possible to accept multiple preprocessed inputs individually, obtain feature maps, and extract attention maps; (c) as a first refinement in training the main network, we augmented the data using weakly supervised learning methods that add attention cropping and dropping; (d) as a second refinement, we trained the main network in two steps (a sketch of the two objectives follows this record). In the first step, we used a binary classification loss function to ensure that fake facial features generated by known GAN networks were filtered out. In the second step, we used a one-class classification loss function to deal with the various types of GAN networks and unknown fake face generation methods. We compared our proposed method with four recent methods. Our experiments demonstrate that the proposed method improves cross-domain detection efficiency while maintaining source-domain accuracy. These studies show one possible direction for improving the accuracy of facial image authenticity judgments, making a contribution both academically and practically.


Subject(s)
Deep Learning , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
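The two-step training described in the abstract above can be illustrated with the following hypothetical loss functions; they are not the authors' implementation. Stage 1 is an ordinary binary real/fake loss, and the Deep-SVDD-style center loss used here for stage 2 is just one common way to realize a one-class objective; the variable names and shapes are assumptions.

import torch
import torch.nn.functional as F

def stage1_binary_loss(logits, labels):
    # Step 1: filter out fakes produced by known GANs with binary cross-entropy.
    return F.binary_cross_entropy_with_logits(logits, labels.float())

def stage2_one_class_loss(embeddings, center):
    # Step 2 (illustrative one-class objective): pull embeddings of genuine faces
    # toward a fixed center so that fakes from unseen generators fall outside the
    # learned region at test time.
    return ((embeddings - center) ** 2).sum(dim=1).mean()

if __name__ == "__main__":
    logits, labels = torch.randn(8), torch.randint(0, 2, (8,))
    z, center = torch.randn(8, 128), torch.zeros(128)
    print(stage1_binary_loss(logits, labels).item())
    print(stage2_one_class_loss(z, center).item())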
4.
Sensors (Basel) ; 20(16)2020 Aug 15.
Article in English | MEDLINE | ID: mdl-32824187

ABSTRACT

Currently, expert systems and applied machine learning algorithms are widely used to automate network intrusion detection. In critical infrastructure applications of communication technologies, the interaction among various industrial control systems and the Internet environment intrinsic to IoT technology makes them susceptible to cyber-attacks. Given the enormous volume of network traffic in critical Cyber-Physical Systems (CPSs), traditional machine learning methods for network anomaly detection are inefficient. Therefore, recently developed machine learning techniques, with an emphasis on deep learning, are finding successful implementations in the detection and classification of anomalies at both the network and host levels. This paper presents an ensemble method that leverages deep models such as the Deep Neural Network (DNN) and Long Short-Term Memory (LSTM) together with a meta-classifier (i.e., logistic regression), following the principle of stacked generalization. To enhance the capabilities of the proposed approach, the method uses a two-step process for detecting network anomalies. In the first stage, data pre-processing, a Deep Sparse AutoEncoder (DSAE) is employed for feature engineering. In the second stage, a stacking ensemble learning approach is used for classification. The efficiency of the proposed method is evaluated on heterogeneous datasets, including data gathered in the IoT environment, namely IoT-23, LITNET-2020, and NetML-2020. The evaluation results are discussed, their statistical significance is tested, and the method is compared with state-of-the-art approaches to network anomaly detection.
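A minimal sketch of the stacked-generalization step described above, using scikit-learn; this is not the paper's code. Scikit-learn MLPs stand in for the DNN and LSTM base learners, the DSAE feature-engineering stage is reduced to a comment, and the data here is random stand-in data.

import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X would be features produced by the DSAE encoder; y the binary anomaly labels.
X, y = np.random.rand(200, 32), np.random.randint(0, 2, 200)  # stand-in data

stack = StackingClassifier(
    estimators=[
        ("dnn", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300))),
        ("seq", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(128,), max_iter=300))),
    ],
    final_estimator=LogisticRegression(),  # meta-classifier trained on out-of-fold base predictions
    cv=5,
)
stack.fit(X, y)
print(stack.predict(X[:5]))

The key point of stacked generalization is that the meta-classifier is fit on cross-validated (out-of-fold) predictions of the base learners, which StackingClassifier handles via its cv argument.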
