1.
PLoS One ; 19(3): e0300650, 2024.
Article En | MEDLINE | ID: mdl-38527025

As the demand for high-bandwidth Internet connections continues to surge, industries are exploring innovative ways to harness this connectivity, and smart agriculture stands at the forefront of this evolution. In this paper, we delve into the challenges faced by Internet Service Providers (ISPs) in efficiently managing bandwidth and traffic within their networks. We propose a synergy between two pivotal technologies, Multi-Protocol Label Switching-Traffic Engineering (MPLS-TE) and Diffserv Quality of Service (Diffserv-QoS), which have implications beyond traditional networks and resonate strongly with the realm of smart agriculture. The increasing adoption of technology in agriculture relies heavily on real-time data, remote monitoring, and automated processes. This dynamic nature requires robust and reliable high-bandwidth connections to facilitate data flow between sensors, devices, and central management systems. By optimizing bandwidth utilization through MPLS-TE and implementing traffic control mechanisms with Diffserv-QoS, ISPs can create a resilient network foundation for smart agriculture applications. The integration of MPLS-TE and Diffserv-QoS has resulted in significant enhancements in throughput and a considerable reduction in jitter. With the IPv4 header, the scheme achieves a throughput of 5.83 Mbps and reduces jitter to 3 ms.
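The two figures of merit reported above, throughput and jitter, can be computed from a packet trace. A minimal sketch follows; the packet sizes and arrival times are illustrative placeholders, not measurements from the paper.

```python
# Sketch: throughput and inter-arrival jitter from a packet trace.
# The timestamps and 1500-byte packet size below are illustrative only.

def throughput_mbps(total_bits, duration_s):
    """Throughput in Mbps over the measurement window."""
    return total_bits / duration_s / 1e6

def mean_jitter_ms(arrival_times_s):
    """Mean jitter: average absolute deviation between consecutive
    inter-arrival gaps, in milliseconds."""
    gaps = [t2 - t1 for t1, t2 in zip(arrival_times_s, arrival_times_s[1:])]
    deltas = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]
    return 1000 * sum(deltas) / len(deltas)

arrivals = [0.000, 0.010, 0.021, 0.030, 0.042]  # seconds
print(throughput_mbps(5 * 1500 * 8, 0.042))     # 5 packets of 1500 bytes
print(mean_jitter_ms(arrivals))
```

In practice, jitter definitions vary (RFC 3550 uses a smoothed estimator); the plain average of gap deviations above is the simplest variant.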


Algorithms , Computer Communication Networks , Computer Simulation , Wireless Technology , Agriculture
2.
J Digit Imaging ; 35(5): 1308-1325, 2022 10.
Article En | MEDLINE | ID: mdl-35768753

Medical image fusion aims to merge the important information from images of different modalities of the same organ of the human body to create a more informative fused image. In recent years, deep learning (DL) methods have achieved significant breakthroughs in the field of image fusion because of their great efficiency. DL methods have become an active topic in image fusion due to their strong feature extraction and data representation abilities. In this work, the stacked sparse auto-encoder (SSAE), a general category of deep neural networks, is exploited in medical image fusion. The SSAE is an efficient technique for unsupervised feature extraction, with a high capability for complex data representation. The proposed fusion method is carried out as follows. Firstly, the source images are decomposed into low- and high-frequency coefficient sub-bands with the non-subsampled contourlet transform (NSCT). The NSCT is a flexible multi-scale decomposition technique that is superior to traditional decomposition techniques in several aspects. After that, the SSAE is applied for feature extraction to obtain a sparse and deep representation of the high-frequency coefficients. Then, the spatial frequencies of the obtained features are computed and used for high-frequency coefficient fusion. Next, a maximum-based fusion rule is applied to fuse the low-frequency sub-band coefficients. The final integrated image is acquired by applying the inverse NSCT. The proposed method has been applied and assessed on various groups of medical image modalities. Experimental results prove that the proposed method can effectively merge multimodal medical images while preserving detail information.
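Two ingredients of such a pipeline can be sketched compactly: the spatial-frequency activity measure used to compare coefficients, and a maximum-magnitude fusion rule. This is only a sketch on plain nested lists standing in for coefficient sub-bands; the NSCT decomposition and SSAE feature extraction themselves are not shown.

```python
# Sketch: spatial frequency of a coefficient block and a
# maximum-magnitude fusion rule. Nested lists stand in for
# NSCT sub-bands; this is an illustration, not the paper's code.
import math

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): row and column gradient energy."""
    m, n = len(block), len(block[0])
    rf = sum((block[i][j] - block[i][j - 1]) ** 2
             for i in range(m) for j in range(1, n)) / (m * n)
    cf = sum((block[i][j] - block[i - 1][j]) ** 2
             for i in range(1, m) for j in range(n)) / (m * n)
    return math.sqrt(rf + cf)

def fuse_max(band_a, band_b):
    """Coefficient-wise maximum-magnitude fusion rule."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(ra, rb)]
            for ra, rb in zip(band_a, band_b)]
```

The block with the higher spatial frequency is the "more active" one, which is why SF is a natural selector for high-frequency coefficients.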


Algorithms , Neural Networks, Computer , Humans
3.
Appl Opt ; 61(4): 875-883, 2022 Feb 01.
Article En | MEDLINE | ID: mdl-35201055

Two schemes for optical wireless modulation format recognition (MFR), based on the orthogonal-triangular decomposition (OTD) and the Hough transform (HT) of constellation diagrams, are proposed in this paper. Constellation diagrams are obtained as images at optical signal-to-noise ratios (OSNRs) ranging from 5 to 30 dB for seven different modulation formats (2/4/8/16-PSK and 8/16/32-QAM). The first scheme applies the HT directly to the obtained images; the second scheme decomposes each image matrix into an orthogonal matrix (Q) and an upper triangular matrix (R) before applying the HT. Different classifiers, including AlexNet, VGG16, and VGG19, are used for the MFR task. Model setups and results are provided to study the scheme efficiency at different levels of OSNR. The proposed schemes provide unique signatures for constellation diagrams. Moreover, the results reveal that the main pattern corresponding to each constellation diagram is more distinguishable with both proposed schemes at different levels of OSNR. The obtained results achieve high accuracy, even at low OSNR values.
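The OTD step is the classical QR factorization. A minimal sketch via Gram-Schmidt follows; real constellation images are far larger than the 2x2 example, and the subsequent HT and CNN classification stages are not shown.

```python
# Sketch: QR (orthogonal-triangular) decomposition of a matrix via
# classical Gram-Schmidt. Illustrative only; constellation-image
# matrices in the paper are much larger.
import math

def qr_decompose(a):
    """Return (Q, R) with A = Q R, Q having orthonormal columns
    and R upper triangular."""
    m, n = len(a), len(a[0])
    q = [[0.0] * n for _ in range(m)]
    r = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [a[i][j] for i in range(m)]
        for k in range(j):
            r[k][j] = sum(q[i][k] * a[i][j] for i in range(m))
            v = [v[i] - r[k][j] * q[i][k] for i in range(m)]
        r[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(m):
            q[i][j] = v[i] / r[j][j]
    return q, r
```

Production code would use a library routine (e.g. a Householder-based QR), which is numerically more stable than Gram-Schmidt.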

4.
Int J Numer Method Biomed Eng ; 38(1): e3530, 2022 01.
Article En | MEDLINE | ID: mdl-34506081

Deep learning is one of the most promising machine learning techniques and has revolutionized the artificial intelligence field. Traditional and convolutional neural networks (CNNs) have been utilized in medical pattern recognition applications that depend on deep learning concepts. This is attributed to the importance of anomaly detection (AD) in automatic diagnosis systems. In this paper, AD is performed on medical electroencephalography (EEG) signal spectrograms and medical corneal images for Internet of Medical Things (IoMT) systems. Deep learning based on CNN models is employed for this task with training and testing phases. Each input image passes through a series of convolution layers with different kernel filters. For the classification task, pooling and fully-connected layers are utilized. Computer simulation experiments reveal the success and superiority of the proposed models for automated medical diagnosis in IoMT systems.
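The convolution-then-pooling pattern described above can be sketched in a few lines. This is a toy forward pass on nested lists (valid cross-correlation, which is what deep-learning frameworks call "convolution"), not the paper's model.

```python
# Sketch: one convolution layer pass followed by 2x2 max pooling,
# the building blocks of the CNN described above. Illustrative only.

def conv2d_valid(image, kernel):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool2(fmap):
    """Non-overlapping 2x2 max pooling of a feature map."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]
```

Stacking several such layers, then flattening into fully-connected layers, yields the classifier architecture the abstract outlines.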


Artificial Intelligence , Neural Networks, Computer , Computer Simulation , Internet , Machine Learning
5.
Appl Opt ; 60(30): 9380-9389, 2021 Oct 20.
Article En | MEDLINE | ID: mdl-34807076

High-speed wireless communication is necessary in our personal lives, in both working and living spaces. This paper presents a scheme for wireless optical modulation format recognition (MFR) based on the Hough transform (HT). The HT is used to project constellation diagrams onto another space for efficient feature extraction. Constellation diagrams are obtained at optical signal-to-noise ratios (OSNRs) ranging from 5 to 30 dB for eight different modulation formats (2/4/8/16 phase-shift keying and 8/16/32/64 QAM). Different classifiers are used for the MFR task: AlexNet, VGG16, and VGG19. A study of the effect of varying the number of samples on the accuracy of the classifiers is provided for each modulation format. To evaluate the proposed scheme, the efficiency of the three classifiers is studied at different values of OSNR. The obtained results reveal that the proposed scheme succeeds in identifying the wireless optical modulation format blindly, with a classification accuracy of up to 100%, even at OSNR values below 10 dB.
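The HT projection works by letting each foreground pixel vote in (rho, theta) space, so that collinear structure in a constellation image produces accumulator peaks. A minimal sketch follows; the three-point input is a toy stand-in for a thresholded constellation diagram, not data from the paper.

```python
# Sketch: Hough transform voting, rho = x*cos(theta) + y*sin(theta).
# Each point votes for all (rho, theta) bins consistent with a line
# through it; collinear points share a peak. Illustrative only.
import math

def hough_lines(points, n_theta=180):
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# Three collinear points on the line y = x share an accumulator peak.
acc = hough_lines([(0, 0), (1, 1), (2, 2)])
peak = max(acc, key=acc.get)
print(peak, acc[peak])
```

The pattern of peaks, rather than the raw pixel grid, is what the CNN classifiers are trained on in the scheme above.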

6.
Microsc Res Tech ; 84(11): 2504-2516, 2021 Nov.
Article En | MEDLINE | ID: mdl-34121273

This article is mainly concerned with COVID-19 diagnosis from X-ray images. The number of COVID-19 cases increases daily, and the number of test kits available in hospitals is limited. Therefore, there is an imperative need for an efficient automatic diagnosis system to curb the spread of COVID-19. This article discusses the utilization of convolutional neural network (CNN) models with different learning strategies for automatic COVID-19 diagnosis. First, we consider the CNN-based transfer learning approach for automatic diagnosis of COVID-19 from X-ray images with different training and testing ratios. Different pre-trained deep learning models, in addition to a transfer learning model, are considered and compared for the task of COVID-19 detection from X-ray images. Confusion matrices of the studied models are presented and analyzed. Considering the performance results obtained, ResNet models (ResNet18, ResNet50, and ResNet101) provide the highest classification accuracy on the two considered datasets with different training and testing ratios, namely 80/20, 70/30, 60/40, and 50/50. The accuracies obtained using the first dataset with the 70/30 training and testing ratio are 97.67%, 98.81%, and 100% for ResNet18, ResNet50, and ResNet101, respectively. For the second dataset, the reported accuracies are 99%, 99.12%, and 99.29% for ResNet18, ResNet50, and ResNet101, respectively. The second approach is training a proposed CNN model from scratch. The results confirm that training the CNN from scratch can also identify the signs of COVID-19 disease.
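The evaluation bookkeeping behind such results can be sketched briefly: splitting a dataset by a training/testing ratio and reading accuracy off a confusion matrix. The 2x2 matrix below (COVID-19 vs. normal) uses made-up counts, not the paper's data.

```python
# Sketch: ratio-based train/test split and accuracy from a confusion
# matrix (rows = true class, columns = predicted class). Counts are
# illustrative placeholders, not results from the paper.

def split_by_ratio(items, train_ratio):
    """Split a list into train/test parts, e.g. 0.7 for the 70/30 setup."""
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

def accuracy(confusion):
    """Trace over total for a square confusion matrix."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

train, test = split_by_ratio(list(range(100)), 0.7)
print(len(train), len(test))
print(accuracy([[95, 5], [2, 98]]))
```

A real experiment would shuffle (and usually stratify) before splitting; the slice above keeps the ratio logic visible.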


COVID-19 , Deep Learning , COVID-19 Testing , Humans , Neural Networks, Computer , Radiography, Thoracic , SARS-CoV-2
7.
Wirel Pers Commun ; 120(2): 1543-1563, 2021.
Article En | MEDLINE | ID: mdl-33994667

Coronavirus Disease 2019 (COVID-19) first spread in China in December 2019 and then spread rapidly around the world. Therefore, rapid diagnosis of COVID-19 has become a very hot research topic. One possible diagnostic tool is to use a deep convolutional neural network (DCNN) to classify patient images. Chest X-ray is one of the most widely-used imaging techniques for classifying COVID-19 cases. This paper presents a proposed wireless communication and classification system for X-ray images to detect COVID-19 cases. Different modulation techniques are compared to select the most reliable one with the least required bandwidth. The proposed DCNN architecture consists of deep feature extraction and classification layers. Firstly, the DCNN hyper-parameters are adjusted in the training phase. Then, the tuned hyper-parameters are utilized in the testing phase. These hyper-parameters are the optimization algorithm, the learning rate, the mini-batch size, and the number of epochs. Simulation results show that the proposed scheme outperforms related pre-trained networks. The performance metrics are accuracy, loss, confusion matrix, sensitivity, precision, F1 score, specificity, Receiver Operating Characteristic (ROC) curve, and Area Under the Curve (AUC). The proposed scheme achieves a high accuracy of 97.8%, a specificity of 98.5%, and an AUC of 98.9%.
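Most of the scalar metrics listed above derive from the four binary confusion counts. A minimal sketch, with illustrative counts rather than the paper's results:

```python
# Sketch: sensitivity, specificity, precision, F1, and accuracy from
# binary confusion counts (TP, FP, TN, FN). Counts are illustrative.

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)        # recall / true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": acc}

print(metrics(90, 10, 85, 15))
```

ROC/AUC are the exception: they require the classifier's scores at many thresholds, not a single set of counts.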

8.
Appl Opt ; 44(34): 7349-56, 2005 Dec 01.
Article En | MEDLINE | ID: mdl-16353806

We developed an approach to the blind multichannel reconstruction of high-resolution images. This approach breaks the image reconstruction problem into three consecutive steps: blind multichannel restoration, wavelet-based image fusion, and maximum entropy image interpolation. The blind restoration step depends on estimating the two-dimensional (2-D) greatest common divisor (GCD) between each observation and a combinational image generated by a weighted averaging of the available observations. The purpose of generating this combinational image is to obtain a new image with a higher signal-to-noise ratio and a blurring operator that is coprime with all the blurring operators of the available observations. The 2-D GCD is then estimated between the new image and each observation, which reduces the effect of noise on the estimation process. The multiple outputs of the restoration step are then passed to the wavelet-based image fusion step. The objective of this step is to integrate the data obtained from each observation into a single image, which is then interpolated to give an enhanced-resolution image. A maximum entropy algorithm is derived and used to interpolate the image resulting from the fusion step. Results show that the suggested blind image reconstruction approach succeeds in estimating a high-resolution image from noisy, blurred observations in the case of relatively coprime, unknown blurring operators. The required computation time of the suggested approach is moderate.
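The motivation for the combinational image, that averaging independent observations raises the SNR, can be demonstrated directly. The sketch below uses seeded synthetic 1-D "observations" in place of blurred image channels; the 2-D GCD machinery itself is not shown.

```python
# Sketch: averaging N observations with independent zero-mean noise
# reduces the noise variance by a factor of N, which is why the
# combinational image has a higher SNR. Synthetic 1-D data, seeded
# for reproducibility; illustrative only.
import random

random.seed(7)

def noisy_observation(signal, sigma):
    return [s + random.gauss(0, sigma) for s in signal]

def weighted_average(observations, weights):
    total = sum(weights)
    return [sum(w * obs[i] for w, obs in zip(weights, observations)) / total
            for i in range(len(observations[0]))]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

signal = [float(i % 5) for i in range(1000)]
obs = [noisy_observation(signal, 1.0) for _ in range(8)]
combo = weighted_average(obs, [1.0] * 8)

print(mse(obs[0], signal), mse(combo, signal))
```

With 8 observations, the averaged signal's noise power drops to roughly one eighth of a single observation's, which makes the subsequent GCD estimation far less noise-sensitive.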
