ABSTRACT
Cross-channel scripting (XCS) is a common web application vulnerability and a variant of the cross-site scripting (XSS) attack. An XCS attack vector can be injected through network protocols into smart devices that have web interfaces, such as routers, photo frames, and cameras. In this attack scenario, the networked device allows the web administrator to carry out various functions related to accessing web content from the server. After malicious code is injected into the web interface, the XCS attack vector can be exploited in the client browser. Scripted content can be injected into networked devices through various protocols, such as the Network File System (NFS), the File Transfer Protocol (FTP), and the Simple Mail Transfer Protocol (SMTP). In this paper, various computational techniques deployed at the client and server sides for XCS detection and mitigation are analyzed. Web application scanners are discussed along with their specific features, as are computational tools and approaches with their respective characteristics. Finally, shortcomings of the existing computational techniques for XCS and future directions are presented.
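As an illustration of the injection channel described above, the following minimal sketch uploads a file over FTP whose filename carries a benign script payload; if a device's web administration interface later renders stored filenames without sanitization, the payload executes in the administrator's browser. The device address, credentials, and payload are hypothetical, and the snippet is illustrative only, not taken from the surveyed work.

```python
# Hypothetical illustration of an XCS injection vector via FTP (benign payload).
from ftplib import FTP
from io import BytesIO

payload_name = '<script>alert("xcs")</script>.jpg'  # script hidden in a filename

ftp = FTP("192.168.0.10")      # hypothetical device address
ftp.login("admin", "admin")    # hypothetical credentials
ftp.storbinary(f"STOR {payload_name}", BytesIO(b"not really an image"))
ftp.quit()
# If the device's web UI later lists stored files without escaping HTML,
# the script runs in the administrator's browser (the cross-channel step).
```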
Subjects
Cloud Computing; Software; Algorithms; Humans; Publications
ABSTRACT
Pneumothorax is a thoracic disease that can lead to failure of the respiratory system, cardiac arrest, or, in extreme cases, death. Chest X-ray (CXR) imaging is the primary technique for diagnosing pneumothorax. A computerized diagnosis system can detect pneumothorax in chest radiographic images, providing substantial benefits in disease diagnosis. In the present work, a deep learning neural network model is proposed to detect the regions of pneumothoraces in chest X-ray images. The model incorporates a Mask Region-based Convolutional Neural Network (Mask R-CNN) framework and transfer learning, with ResNet101 as the backbone feature pyramid network (FPN). The proposed model was trained on a pneumothorax dataset prepared by the Society for Imaging Informatics in Medicine in association with the American College of Radiology (SIIM-ACR). The present work compares the proposed Mask R-CNN model based on ResNet101 as the FPN with a conventional model based on ResNet50 as the FPN. The proposed model had lower class loss, bounding-box loss, and mask loss than the conventional ResNet50-based model. Both models were simulated with learning rates of 0.0004 and 0.0006, for 10 and 12 epochs, respectively.
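A minimal sketch of how such a model can be assembled with torchvision, assuming a two-class setup (background plus pneumothorax); the image size, target layout, and torchvision-based construction are illustrative assumptions, not the authors' exact code. The returned loss dictionary exposes the class, box, and mask losses compared in the abstract.

```python
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet101 feature pyramid backbone (swap in "resnet50" for the baseline model);
# pretrained ImageNet weights can be passed via the `weights=` argument
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights=None)
model = MaskRCNN(backbone, num_classes=2)  # background + pneumothorax

optimizer = torch.optim.SGD(model.parameters(), lr=0.0004, momentum=0.9)

# one illustrative training step on a dummy image/target pair
images = [torch.rand(3, 512, 512)]
masks = torch.zeros(1, 512, 512, dtype=torch.uint8)
masks[0, 100:300, 100:300] = 1
targets = [{"boxes": torch.tensor([[100., 100., 300., 300.]]),
            "labels": torch.tensor([1]),
            "masks": masks}]
losses = model(images, targets)  # includes loss_classifier, loss_box_reg, loss_mask
total = sum(losses.values())
total.backward()
optimizer.step()
```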
Subjects
Deep Learning; Pneumothorax; Computers; Humans; Pneumothorax/diagnostic imaging; Thorax; X-Rays
ABSTRACT
In the last decade, the proactive diagnosis of diseases with artificial intelligence and its aligned technologies has been an exciting and fruitful area. One area of medical care where constant monitoring is required is cardiovascular disease. Arrhythmia, one such cardiovascular disease, is generally diagnosed by doctors using electrocardiography (ECG), which records the heart's rhythm and electrical activity. Neural networks have been extensively adopted in the last few years to identify such abnormalities. The probability of detecting arrhythmia increases if a denoised signal is used rather than the raw input signal. This paper compares six filters applied to ECG signals to improve classification accuracy. Custom convolutional neural networks (CCNNs) are designed to classify the filtered ECG data. Extensive experiments are conducted covering the six ECG filters and the proposed custom CCNN models. Comparative analysis reveals that the proposed models outperform the competitive models in various performance metrics.
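For context, two classical denoising filters of the kind such a comparison typically includes can be sketched with SciPy as below; the cutoff frequencies and the 360 Hz sampling rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt

FS = 360.0  # assumed sampling rate in Hz

def bandpass(sig, lo=0.5, hi=40.0, order=4):
    # Butterworth band-pass: removes baseline wander (<0.5 Hz) and high-frequency noise
    b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, sig)

def median_denoise(sig, kernel=5):
    # median filtering suppresses impulsive artifacts while preserving QRS edges
    return medfilt(sig, kernel_size=kernel)

ecg = np.random.randn(int(FS) * 10)        # stand-in for a 10 s raw ECG record
denoised = median_denoise(bandpass(ecg))   # denoised input for the CNN classifier
```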
Subjects
Data Analysis; Signal Processing, Computer-Assisted; Artificial Intelligence; Electrocardiography; Neural Networks, Computer
ABSTRACT
Recognizing human emotions by machine is a complex task. Deep learning models attempt to automate this process by enabling machines to exhibit learning capabilities. However, identifying human emotions from speech with good performance remains challenging. With the advent of deep learning algorithms, this problem has recently been addressed; however, most past research has relied on a single feature-extraction method for training. In this research, we explore two different methods of extracting features for effective speech emotion recognition. A two-way feature extraction scheme is proposed that utilizes super convergence to extract two sets of potential features from the speech data. In the first approach, principal component analysis (PCA) is applied to obtain the first feature set, after which a deep neural network (DNN) with dense and dropout layers is implemented. In the second approach, mel-spectrogram images are extracted from the audio files, and the 2D images are given as input to a pre-trained VGG-16 model. Extensive experiments and an in-depth comparative analysis over both feature extraction methods, with multiple algorithms and on two datasets, are performed. On the RAVDESS dataset, the mel-spectrogram approach provided significantly better accuracy than using numeric features with a DNN.
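A minimal sketch of the second feature path (mel-spectrogram images fed to a pre-trained VGG-16), assuming librosa and Keras; the file name, mel parameters, and resizing strategy are illustrative, and the spectrogram is used without VGG-specific preprocessing for brevity.

```python
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras.applications import VGG16

y, sr = librosa.load("speech.wav", sr=22050)                # hypothetical audio file
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)  # mel power spectrogram
S_db = librosa.power_to_db(S, ref=np.max)                   # log scale "image"

# resize to the 3-channel 224x224 input VGG-16 expects
img = tf.image.resize(S_db[..., None].astype("float32"), (224, 224))
img = tf.repeat(img[None], 3, axis=-1)                      # (1, 224, 224, 3)

features = VGG16(include_top=False, weights="imagenet", pooling="avg")(img)
```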
Subjects
Deep Learning; Speech; Algorithms; Emotions; Humans; Neural Networks, Computer
ABSTRACT
In ultrasound, wave interference is an undesirable effect that degrades the resolution of the images. We have recently shown that a wavefront of random interference can be used to reconstruct high-resolution ultrasound images. In this study, we further improve the resolution of interference-based ultrasound imaging by proposing a joint image-reconstruction scheme. The proposed scheme utilizes radio-frequency (RF) signals from all elements of the sensor array in a joint optimization problem to directly reconstruct the final high-resolution image. By jointly processing the array signals, we significantly improve the resolution of interference-based imaging. We compare the proposed joint reconstruction method with popular beamforming techniques and with the previously proposed interference-based compounding method. The simulation study suggests that, among the different reconstruction methods, the joint reconstruction method has the lowest mean-squared error (MSE), the best peak signal-to-noise ratio (PSNR), and the best signal-to-noise ratio (SNR), along with an exceptional structural similarity index (SSIM) of 0.998. Experimental studies showed that image quality improved significantly compared with the other reconstruction methods. Furthermore, we share our simulation code as an open-source repository in support of reproducible research.
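The image-quality figures reported above can be computed for any pair of reference and reconstructed images with scikit-image; the snippet below is a minimal sketch of the evaluation step, with random arrays standing in for the images.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

ref = np.random.rand(256, 256)                  # stand-in ground-truth image
rec = ref + 0.01 * np.random.randn(256, 256)    # stand-in reconstruction

mse = mean_squared_error(ref, rec)
psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
ssim = structural_similarity(ref, rec, data_range=1.0)
print(f"MSE={mse:.2e}  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```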
Subjects
Image Processing, Computer-Assisted; Ultrasonography; Computer Simulation; Signal-to-Noise Ratio
ABSTRACT
Compressive sensing (CS) spectroscopy is well known for enabling a compact spectrometer and consists of two parts: compressively measuring an input spectrum and recovering the spectrum using reconstruction techniques. Our goal here is to propose a novel residual convolutional neural network (ResCNN) for reconstructing the spectrum from the compressed measurements. The proposed ResCNN comprises learnable layers and a residual connection between the input and the output of these learnable layers. The ResCNN is trained using both synthetic and measured spectral datasets. The results demonstrate that ResCNN shows better spectral recovery performance, in terms of average root mean squared error (RMSE) and peak signal-to-noise ratio (PSNR), than existing approaches such as sparse recovery methods and spectral recovery using a plain CNN. Unlike the sparse recovery methods, ResCNN requires neither a priori knowledge of a sparsifying basis nor prior information on the spectral features of the dataset. Moreover, ResCNN produces stable reconstructions under noisy conditions. Finally, ResCNN converges faster than the plain CNN.
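A minimal PyTorch sketch of the key architectural idea, a residual (skip) connection from the input of the learnable layers to their output, assuming the network refines an initial spectrum estimate; the layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResCNN(nn.Module):
    """Refines a coarse spectrum estimate; the skip connection means the
    convolutional body only has to learn the residual correction."""
    def __init__(self, channels=64, k=5):
        super().__init__()
        p = k // 2
        self.body = nn.Sequential(
            nn.Conv1d(1, channels, k, padding=p), nn.ReLU(),
            nn.Conv1d(channels, channels, k, padding=p), nn.ReLU(),
            nn.Conv1d(channels, 1, k, padding=p),
        )

    def forward(self, x):           # x: (batch, 1, spectrum_length)
        return x + self.body(x)     # residual connection, input -> output

spec0 = torch.rand(8, 1, 512)       # stand-in initial spectrum estimates
refined = ResCNN()(spec0)
```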
ABSTRACT
Dry-contact-electrode-based EEG acquisition is one of the easiest ways to obtain neural information from the human brain, providing many advantages such as rapid installation and enhanced wearability. However, high contact impedance due to insufficient electrical coupling at the electrode-scalp interface remains a critical issue. In this paper, a two-wired active dry electrode system is proposed that combines finger-shaped spring-loaded probes and active buffer circuits. The shrinkable probes and bootstrap-topology-based buffer circuitry provide reliable electrical coupling with an uneven and hairy scalp, together with effective input-impedance conversion and low input capacitance. Through analysis of the equivalent circuit model, the proposed electrode was carefully designed using off-the-shelf discrete components and a low-noise zero-drift amplifier. Several electrical evaluations, such as noise spectral density measurements and input capacitance estimation, were performed together with simple experiments on alpha rhythm detection. The experimental results showed that the proposed electrode clearly detects alpha rhythm activation and has excellent electrical characteristics, including low noise of 1.131 µVRMS and a 32.3% reduction in input capacitance.
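For the alpha-rhythm experiment, a detection step of the kind described can be sketched with a Welch power spectral density estimate; the 250 Hz sampling rate, segment length, and band edges below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250                          # assumed sampling rate (Hz)
eeg = np.random.randn(FS * 30)    # stand-in for 30 s of recorded EEG

f, psd = welch(eeg, fs=FS, nperseg=FS * 2)
alpha_power = psd[(f >= 8) & (f <= 13)].mean()   # mean PSD in the 8-13 Hz alpha band
broadband = psd[(f >= 1) & (f <= 40)].mean()
print(f"relative alpha power: {alpha_power / broadband:.2f}")
```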
Subjects
Electroencephalography; Amplifiers, Electronic; Electricity; Electrodes; Image Processing, Computer-Assisted
ABSTRACT
In nature, the compound eyes of arthropods have evolved towards a wide field of view (FOV), an infinite depth of field, and fast motion detection. However, compound eyes have inferior resolution compared with the camera-type eyes of vertebrates, owing to inherent structural constraints such as the optical performance and the number of ommatidia. To improve resolution, in this paper we propose the COMPUtational compound EYE (COMPU-EYE), a new design that increases acceptance angles and uses a modern digital signal processing (DSP) technique. We demonstrate that the proposed COMPU-EYE provides at least a four-fold improvement in resolution.
Subjects
Compound Eye, Arthropod/anatomy & histology; Computer Simulation; Animals; Image Processing, Computer-Assisted
ABSTRACT
The input numerical aperture (NA) of a multimode fiber (MMF) can be effectively increased by placing turbid media at the input end of the MMF. This provides the potential for high-resolution imaging through the MMF. While the input NA is increased, the number of propagation modes in the MMF, and hence the output NA, remains the same. This makes the image reconstruction process underdetermined and may limit the quality of the image reconstruction. In this paper, we aim to improve the signal-to-noise ratio (SNR) of the image reconstruction in imaging through an MMF. We note that turbid media placed at the input of the MMF transform the incoming waves into a better format for information transmission and information extraction; we call this transformation the holistic random (HR) encoding of turbid media. By exploiting the HR encoding, we considerably improve the SNR of the image reconstruction. For efficient utilization of the HR encoding, we employ sparse representation (SR), a relatively new signal reconstruction framework that works well when provided with an HR-encoded signal. To our knowledge, this study is the first to show the benefit of utilizing the HR encoding of turbid media for recovery in optically underdetermined systems, where the output NA is smaller than the input NA, for imaging through an MMF.
ABSTRACT
Speckle suppression is one of the most important tasks in image transmission through turbid media. Insufficient speckle suppression requires an additional procedure, such as temporal ensemble averaging over multiple exposures. In this paper, we consider the image recovery process based on the so-called transmission matrix (TM) of turbid media for image transmission through the media. We show that the speckle left unremoved in TM-based image recovery can be suppressed effectively via sparse representation (SR), a relatively new signal reconstruction framework that works well even for ill-conditioned problems. This is the first study to show the benefit of using SR compared with phase conjugation (PC), the de facto standard method to date for TM-based imaging through turbid media, including imaging a live cell through a tissue slice.
Subjects
Diagnostic Imaging; Image Processing, Computer-Assisted/methods; Nephelometry and Turbidimetry/methods; Phantoms, Imaging; Humans
ABSTRACT
Drug combination therapy is crucial in cancer treatment, but accurately predicting drug synergy remains a challenge due to the complexity of drug combinations. Machine learning and deep learning models have shown promise in drug combination prediction, but they suffer from issues such as gradient vanishing, overfitting, and parameter tuning. To address these problems, a deep drug synergy prediction network, named EDNet, is proposed that leverages a modified triangular-mutation-based differential evolution algorithm. This algorithm evolves the initial connection weights and architecture-related attributes of a deep bidirectional mixture density network, improving its performance and addressing the aforementioned issues. EDNet automatically extracts relevant features and provides conditional probability distributions of output attributes. The performance of EDNet is evaluated on two well-known drug synergy datasets, NCI-ALMANAC and DeepSynergy. The results demonstrate that EDNet outperforms the competing models. EDNet facilitates efficient prediction of drug interactions, enhancing the overall effectiveness of drug combinations for improved cancer treatment outcomes.
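As a sketch of the evolutionary core, one common triangular-mutation formulation for differential evolution is shown below: three distinct individuals are ranked by fitness, and their centroid is perturbed with fitness-ordered difference vectors. This is a generic variant for illustration; the paper's modified operator may differ.

```python
import numpy as np

def triangular_mutation(pop, fitness, F=0.5, rng=np.random.default_rng()):
    """One generic triangular mutation step (lower fitness = better)."""
    i = rng.choice(len(pop), size=3, replace=False)
    order = i[np.argsort(fitness[i])]          # best, middle, worst of the triple
    xb, xm, xw = pop[order[0]], pop[order[1]], pop[order[2]]
    centroid = (xb + xm + xw) / 3.0
    # perturb the centroid along fitness-ordered difference directions
    return centroid + F * (xb - xm) + F * (xm - xw)

pop = np.random.rand(20, 10)   # e.g., 20 candidate weight vectors of length 10
fit = np.random.rand(20)       # stand-in fitness values
mutant = triangular_mutation(pop, fit)
```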
ABSTRACT
In this paper, we introduce a method for improving the resolution of miniature spectrometers. Our method is based on filters with random transmittance. Such filters sense fine details of an input signal spectrum, which, when combined with a signal processing algorithm, aid in improving resolution. We also propose an approach for designing filters with random transmittance using optical thin-film technology. We demonstrate a 7-fold improvement in resolution when using the filters with random transmittance, compared with what was achieved in our previous work.
Subjects
Algorithms; Filtration/instrumentation; Signal Processing, Computer-Assisted; Spectrum Analysis/instrumentation; Equipment Design; Equipment Failure Analysis; Filtration/methods; Miniaturization; Sensitivity and Specificity; Spectrum Analysis/methods
ABSTRACT
One of the leading causes of cancer-related deaths among women is cervical cancer. Early diagnosis and treatment can minimize its complications. Recently, researchers have designed and implemented many deep-learning-based automated cervical cancer diagnosis models. However, the majority of these models suffer from over-fitting, parameter tuning, and gradient vanishing problems. To overcome these problems, in this paper a metaheuristics-based lightweight deep learning network (MLNet) is proposed. Initially, the hyper-parameter tuning problem of the convolutional neural network (CNN) is defined as a multi-objective problem. Particle swarm optimization (PSO) is used to optimally define the CNN architecture. Thereafter, dynamically hybrid niching differential evolution (DHDE) is utilized to optimize the hyper-parameters of the CNN layers. Each PSO particle and DHDE solution together represent a possible CNN configuration, and the F-score is used as the fitness function. The proposed MLNet is trained and validated on three benchmark cervical cancer datasets. On the Herlev dataset, MLNet outperforms the existing models in terms of accuracy, F-measure, sensitivity, specificity, and precision by 1.6254%, 1.5178%, 1.5780%, 1.7145%, and 1.4890%, respectively. On the SIPaKMeD dataset, MLNet achieves better performance than the existing models in terms of accuracy, F-measure, sensitivity, specificity, and precision by 2.1250%, 2.2455%, 1.9074%, 1.9258%, and 1.8975%, respectively. Finally, on the Mendeley LBC dataset, MLNet achieves better performance than the competitive models in terms of accuracy, F-measure, sensitivity, specificity, and precision by 1.4680%, 1.5845%, 1.3582%, 1.3926%, and 1.4125%, respectively.
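A compact sketch of the PSO stage, which searches hyper-parameter vectors that maximize a fitness function (the F-score in the paper); the bounds, swarm size, and dummy objective below are illustrative placeholders for an actual train-and-validate call.

```python
import numpy as np

def pso(fitness, bounds, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Vanilla PSO maximizing `fitness`; `bounds` holds (low, high) rows."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fitness(p) for p in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmax()].copy()
    return gbest

# hypothetical 2-D search space: log10(learning rate) and dropout rate
bounds = np.array([[-5.0, -2.0], [0.0, 0.5]])
best = pso(lambda p: -(p[0] + 3.5) ** 2 - (p[1] - 0.2) ** 2, bounds)  # dummy F-score
```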
Subjects
Deep Learning; Uterine Cervical Neoplasms; Female; Humans; Uterine Cervical Neoplasms/diagnosis; Benchmarking; Exercise; Neck
ABSTRACT
The electrocardiogram (ECG) signal is commonly used to identify heart complications. These recordings generate large volumes of data that need to be stored or transferred in telemedicine applications, requiring considerable storage space and bandwidth. Therefore, there is strong motivation to develop efficient compression algorithms for ECG signals. In this context, this work proposes a novel compression algorithm using the adaptive tunable-Q wavelet transform (TQWT) and a modified dead-zone quantizer (DZQ). The parameters of the TQWT and the threshold values of the DZQ are selected using the proposed sparse grey wolf optimization (Sparse-GWO) algorithm, which is introduced in this work to reduce the computation time of the original GWO. It is also compared with popular algorithms such as the original GWO, particle swarm optimization (PSO), hybrid PSOGWO, and Sparse-PSO. The DZQ is used to perform thresholding and quantization, and run-length encoding (RLE) is then used to encode the quantized coefficients. The proposed work is evaluated on the MIT-BIH arrhythmia database. Quality assessment performed on the reconstructed signals confirms the minimal impact of compression on the morphology of the reconstructed ECG signals. The compression performance of the proposed algorithm is measured in terms of the following evaluation metrics: percent root-mean-square difference (PRD1), compression ratio (CR), signal-to-noise ratio (SNR), and quality score (QS1). The obtained average values are 3.21%, 20.56, 30.62 dB, and 7.79, respectively.
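The quantization and encoding stages can be sketched as follows: a dead-zone quantizer collapses small wavelet coefficients to zero, and run-length encoding then exploits the resulting zero runs. The step size and dead-zone width are illustrative; in the paper they are selected by Sparse-GWO.

```python
import numpy as np

def dead_zone_quantize(coeffs, step=0.05, dz=0.1):
    """Coefficients with |c| <= dz map to 0; the rest are uniformly quantized."""
    mag = np.abs(coeffs)
    q = np.sign(coeffs) * np.ceil((mag - dz) / step)
    return np.where(mag <= dz, 0, q).astype(int)

def run_length_encode(symbols):
    """(value, run length) pairs; long zero runs compress well."""
    out, prev, n = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            n += 1
        else:
            out.append((prev, n))
            prev, n = s, 1
    out.append((prev, n))
    return out

coeffs = np.random.randn(1000) * 0.2        # stand-in for TQWT coefficients
encoded = run_length_encode(dead_zone_quantize(coeffs))
```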
ABSTRACT
Electrocardiogram (ECG) signals are frequently used in the continuous monitoring of heart patients. These recordings generate huge amounts of data, which are difficult to store or transmit in telehealth applications. In this context, this work proposes an efficient novel compression algorithm that integrates the tunable-Q wavelet transform (TQWT) with the coronavirus herd immunity optimizer (CHIO). The algorithm is also self-adaptive, regulating reconstruction quality by bounding the error parameter. CHIO is a human-perception-based algorithm, used here to select optimum TQWT parameters; the decomposition level of the TQWT is optimized for the first time in the field of ECG compression. The obtained transform coefficients are then thresholded, quantized, and encoded to further improve the compression. The proposed work is tested on the MIT-BIH arrhythmia database. The compression and optimization performance using CHIO is also compared with well-established optimization algorithms. The compression performance is measured in terms of compression ratio, signal-to-noise ratio, percent root-mean-square difference, quality score, and correlation coefficient.
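The evaluation measures listed above have simple closed forms; the sketch below uses the standard definitions, with the quality score taken as the ratio CR/PRD commonly used in the ECG-compression literature (an assumption, not a detail stated in the abstract).

```python
import numpy as np

def prd(x, x_rec):
    # percent root-mean-square difference between original and reconstruction
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def snr_db(x, x_rec):
    # signal-to-noise ratio of the reconstruction, in dB
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))

def compression_ratio(bits_original, bits_compressed):
    return bits_original / bits_compressed

def quality_score(cr, prd_value):
    # higher is better: large compression at low distortion
    return cr / prd_value
```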
ABSTRACT
With the advancement of artificial intelligence (AI) based e-healthcare applications, the role of automated diagnosis of various diseases has increased at a rapid rate. However, most existing diagnosis models provide results in a binary fashion, such as whether the patient is infected with a specific disease or not. There are many cases where suitable explanatory information is required, such as the patient being infected with a particular disease along with the infection rate. Therefore, in this paper, to provide explanatory information to doctors and patients, an efficient deep ensemble medical image captioning network (DCNet) is proposed. DCNet ensembles three well-known pre-trained models: VGG16, ResNet152V2, and DenseNet201. Ensembling these models achieves better results by preventing over-fitting. However, DCNet is sensitive to its control parameters; thus, to tune them, an evolving DCNet (EDC-Net) is proposed, in which evolution is achieved using self-adaptive parameter control-based differential evolution (SAPCDE). Experimental results show that EDC-Net can efficiently extract the potential features of biomedical images. Comparative analysis shows that, on the Open-i dataset, EDC-Net outperforms the existing models in terms of BLEU-1, BLEU-2, BLEU-3, BLEU-4, and the kappa statistic (KS) by 1.258%, 1.185%, 1.289%, 1.098%, and 1.548%, respectively.
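A minimal Keras sketch of the ensembling idea, three frozen pre-trained backbones encoding the same image, with their pooled features concatenated for a caption decoder; the input size and concatenation strategy are assumptions, not the paper's exact design, and each backbone would normally get its own input preprocessing.

```python
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG16, ResNet152V2, DenseNet201

inp = Input(shape=(224, 224, 3))
feats = []
for Backbone in (VGG16, ResNet152V2, DenseNet201):
    base = Backbone(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False            # frozen feature extractors
    feats.append(base(inp))

# concatenated image representation to be fed to a caption decoder (e.g., an LSTM)
encoder = Model(inp, layers.Concatenate()(feats))
encoder.summary()
```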
Subjects
Artificial Intelligence; Diagnostic Imaging; Image Processing, Computer-Assisted; Humans
ABSTRACT
In this paper, we present a signal processing approach to improve the resolution of a spectrometer with a fixed number of low-cost, non-ideal filters. We aim to show that the resolution can be improved beyond the limit set by the number of filters by exploiting the sparse nature of a signal spectrum. We consider an underdetermined system of linear equations as the model for signal spectrum estimation and design a non-negative L1-norm minimization algorithm for solving it. We demonstrate that the resolution can be improved severalfold using the proposed algorithm.
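Because the spectrum is constrained to be non-negative, minimizing its L1 norm subject to the measurement equations reduces to a linear program: for x >= 0, ||x||_1 = sum(x). A minimal SciPy sketch, with random data standing in for the filter response matrix:

```python
import numpy as np
from scipy.optimize import linprog

def nonneg_l1_recover(A, y):
    """min sum(x)  s.t.  Ax = y, x >= 0  (L1 minimization for non-negative x)."""
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=y,
                  bounds=[(0, None)] * n, method="highs")
    return res.x

rng = np.random.default_rng(0)
A = rng.random((20, 100))                # 20 non-ideal filters, 100 spectral bins
x_true = np.zeros(100)
x_true[[10, 47, 81]] = [1.0, 0.5, 0.8]   # sparse spectrum (three lines)
x_hat = nonneg_l1_recover(A, A @ x_true)
```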
Subjects
Algorithms; Linear Models; Signal Processing, Computer-Assisted; Spectrum Analysis/instrumentation; Spectrum Analysis/methods; Computer Simulation; Miniaturization
ABSTRACT
Multilayer thin-film (MTF) filter arrays for computational spectroscopy are fabricated using stencil lithography. The MTF filter array is a 6 × 6 square grid, and 169 identical arrays are fabricated on a single wafer. A computational spectrometer is formed by attaching the MTF filter array to a complementary metal-oxide-semiconductor (CMOS) image sensor. With a single exposure, 36 unique intensities of the incident light are collected, and the spectrum of the incident light is recovered from these intensities using numerical optimization techniques. Varied light sources in the wavelength range of 500 to 849 nm are recovered with a spacing of 1 nm, and the reconstructed spectra match well with reference spectra measured by a grating-based spectrometer. We also demonstrate computational pinhole spectral imaging using the MTF filter array: adapting a spectral scanning method, we collect 36 monochromatic filtered images and reconstruct 350 monochromatic images in the wavelength range of 500 to 849 nm with a spacing of 1 nm. These computational spectrometers could be useful for various applications that require compact size, high resolution, and a wide working range.
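One simple instance of the numerical-optimization recovery step is Tikhonov-regularized least squares over the 36 measured intensities; the sketch below uses a random stand-in for the calibrated filter response matrix, and the regularization weight is an illustrative choice rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((36, 350))     # stand-in: 36 filter responses x 350 wavelength bins
m = T @ rng.random(350)       # stand-in for the 36 intensities from one exposure

lam = 1e-3                    # illustrative regularization weight
spectrum = np.linalg.solve(T.T @ T + lam * np.eye(T.shape[1]), T.T @ m)
```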
ABSTRACT
Many people around the world today are dealing with a variety of health-related issues. One of the most common causes of health problems is depression, which arises from intellectual difficulties. However, most people are unable to recognize such occurrences in themselves, and no procedures for distinguishing them from normal individuals have been established so far. Even some advanced technologies do not support distinct classes of individuals, as language and writing skills vary greatly across regions, making the central operations cumbersome. As a result, the primary goal of the proposed research is to create a unique model that can detect a variety of such conditions in humans, thereby averting severe depression. A machine learning method, the convolutional neural network (CNN) model, is incorporated into this process to extract numerous features in three distinct units. The CNN also detects early-stage problems, since it accepts input in the form of writing and sketching, both of which are converted to images. Furthermore, with this form of image emotion analysis, ordinary reactions can be easily differentiated, resulting in more accurate prediction results. Characteristics such as reference line, tilt, length, edge, constraint, alignment, separation, and sectors are analyzed to test the usefulness of the CNN for recognizing abnormalities, and the extracted features provide an enhanced value of around 74%, higher than that of the conventional models.
Subjects
Algorithms; Machine Learning; Humans; Neural Networks, Computer; Perception
ABSTRACT
Increasing data infringement during transmission and storage has become a concern for data owners. Even digital images transmitted over the network or stored on servers are prone to unauthorized access. Several image steganography techniques have been proposed in the literature for hiding a secret image by embedding it into a cover medium, but low embedding capacity and poor reconstruction quality are significant limitations of these techniques. To overcome these limitations, deep-learning-based image steganography techniques have been proposed. The convolutional neural network (CNN) based U-Net encoder has gained significant research attention, but its performance relative to other CNN-based encoders, such as V-Net and U-Net++, has not been studied for image steganography. In this paper, V-Net and U-Net++ encoders are implemented for image steganography, and a comparative performance assessment of the U-Net, V-Net, and U-Net++ architectures is carried out. These architectures are employed to hide a secret image in a cover image, and a single robust, standard decoder is designed for all architectures to extract the secret image. Based on the experimental results, U-Net outperforms the other two architectures, as it reports higher embedding capacity and provides better-quality stego and reconstructed secret images.
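The hide/reveal pipeline common to all three encoder variants can be sketched as below, with tiny convolutional stand-ins in place of the full U-Net, V-Net, and U-Net++ encoders; the channel-wise concatenation of cover and secret and the two-term loss are typical choices, assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# tiny stand-ins for the encoder (U-Net / V-Net / U-Net++) and the shared decoder
encoder = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
decoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

cover = torch.rand(1, 3, 256, 256)
secret = torch.rand(1, 3, 256, 256)

stego = encoder(torch.cat([cover, secret], dim=1))  # hide: 6-channel input -> stego
secret_rec = decoder(stego)                         # reveal: shared decoder

# the stego image should resemble the cover while still carrying the secret
loss = F.mse_loss(stego, cover) + F.mse_loss(secret_rec, secret)
```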