Results 1 - 20 of 26
1.
Sensors (Basel) ; 22(13)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35808473

ABSTRACT

This paper describes the calculation of the average lateral acceleration of a vehicle of up to 3.5 t gross vehicle mass on small roundabouts, based on speed and angular velocity. The turning radius is derived from angular velocity, with events selected automatically based on the coefficient of variation of the lateral acceleration within a defined time window. The calculation of the turning radius from speed and angular velocity yields almost identical results to the calculation of the turning radius by the three-point method using GPS coordinates, as described in previous research. In other words, the turning radius derived from the speed and gyroscope data of a GNSS/INS dual-antenna sensor yields results similar to those computed from the coordinates of the same sensor. The research results can be used in the development of sensors to improve road safety.
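As a minimal sketch of the relationship the abstract relies on: with speed v and yaw rate ω from the GNSS/INS sensor, the turning radius is R = v/ω and the lateral acceleration is a = v·ω (equivalently v²/R). The variable names and sample values below are illustrative, not taken from the paper.

```python
import numpy as np

def turning_radius_and_lateral_accel(speed_mps, yaw_rate_rads):
    """Turning radius R = v / omega and lateral acceleration a = v * omega.

    speed_mps     : vehicle speed from the GNSS/INS sensor [m/s]
    yaw_rate_rads : angular velocity about the vertical axis [rad/s]
    """
    speed = np.asarray(speed_mps, dtype=float)
    yaw = np.abs(np.asarray(yaw_rate_rads, dtype=float))
    radius = np.where(yaw > 1e-6, speed / np.maximum(yaw, 1e-6), np.inf)
    lat_acc = speed * yaw                      # equals v^2 / R
    return radius, lat_acc

# Example: 30 km/h through a small roundabout at a yaw rate of 0.55 rad/s
r, a = turning_radius_and_lateral_accel(30 / 3.6, 0.55)
print(f"R = {float(r):.1f} m, a = {float(a):.2f} m/s^2 ({float(a) / 9.81:.2f} g)")
```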

2.
Sensors (Basel) ; 22(6)2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35336468

ABSTRACT

In this article, we address the determination of the turning radius and the lateral acceleration acting on a vehicle of up to 3.5 t gross vehicle mass (GVM) and its cargo in curves, based on turning radius and speed. A Global Navigation Satellite System with Inertial Navigation System (GNSS/INS) dual-antenna sensor is used to measure acceleration, speed, and vehicle position, determine the turning radius, and derive a suitable formula for the long-term average lateral acceleration acting on the vehicle and cargo. Two methods for the automatic selection of events were applied: one based on a stable lateral acceleration value and one based on the mean square error (MSE) of the turning radii. The turning-radius models are valid for radii of 5-70 m for both event-selection methods, with mean root mean square errors (RMSE) of 1.88 m and 1.32 m. The lateral-acceleration models are valid with mean RMSEs of 0.022 g and 0.016 g for the two methods. The results may be applied in the planning and implementation of packing and cargo-securing procedures, to calculate the average lateral acceleration acting on the vehicle and cargo from turning radius and speed for vehicles up to 3.5 t GVM. They can potentially also be applied in the deployment of autonomous vehicles in solutions grouped under the term Logistics 4.0.
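A sketch of one plausible form of the event-selection step, assuming non-overlapping windows and a coefficient-of-variation criterion on the lateral acceleration; the sampling rate, window length, and thresholds below are assumptions, not the paper's values.

```python
import numpy as np

def select_stable_events(lat_acc, fs_hz=10, window_s=3.0, cv_max=0.15, min_mean=0.5):
    """Flag windows in which lateral acceleration is stable enough to be
    treated as a cornering event (coefficient of variation below cv_max).

    lat_acc : 1-D array of lateral acceleration samples [m/s^2]
    All keyword defaults are assumptions for illustration.
    """
    lat_acc = np.asarray(lat_acc, dtype=float)
    n = int(fs_hz * window_s)
    events = []
    for start in range(0, len(lat_acc) - n + 1, n):      # non-overlapping windows
        w = np.abs(lat_acc[start:start + n])
        mean = w.mean()
        if mean > min_mean and w.std() / mean < cv_max:  # skip near-straight driving
            events.append((start, start + n, mean))
    return events
```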


Subject(s)
Acceleration, Radius (Anatomy), Cell Communication
3.
Sensors (Basel) ; 22(18)2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36146254

ABSTRACT

Fog computing is one of the major components of future 6G networks. It can provide fast computation of application-related tasks and improve system reliability through better decision-making. Parallel offloading, in which a task is split into several sub-tasks and transmitted to different fog nodes for parallel computation, is a promising concept in task offloading. Parallel offloading poses challenges such as sub-task splitting and the mapping of sub-tasks to fog nodes. In this paper, we propose a novel many-to-one matching-based algorithm for the allocation of sub-tasks to fog nodes. We develop preference profiles for IoT nodes and fog nodes to reduce task computation delay. We also propose a technique to address the externalities problem in the matching algorithm caused by the dynamic preference profiles. Furthermore, a detailed evaluation of the proposed technique is presented to show the benefit of each feature of the algorithm. Simulation results show that the proposed matching-based offloading technique outperforms other techniques from the literature and reduces task latency by 52% at high task loads.
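A minimal sketch of many-to-one matching by deferred acceptance, the family of algorithms the abstract builds on. The preference lists and capacities are toy data; the paper's actual delay-based preference profiles and externality handling are not reproduced here.

```python
def many_to_one_matching(task_prefs, node_prefs, capacity):
    """Deferred-acceptance matching of sub-tasks to fog nodes.

    task_prefs : {task: [nodes ordered best-first]}
    node_prefs : {node: [tasks ordered best-first]}
    capacity   : {node: max number of sub-tasks it accepts}
    """
    rank = {n: {t: i for i, t in enumerate(p)} for n, p in node_prefs.items()}
    matched = {n: [] for n in node_prefs}
    next_choice = {t: 0 for t in task_prefs}
    free = list(task_prefs)
    while free:
        t = free.pop()
        if next_choice[t] >= len(task_prefs[t]):
            continue                       # task exhausted its list, stays unmatched
        n = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        matched[n].append(t)
        if len(matched[n]) > capacity[n]:  # node evicts its least-preferred task
            worst = max(matched[n], key=lambda x: rank[n][x])
            matched[n].remove(worst)
            free.append(worst)
    return matched

# Toy instance: 3 sub-tasks, 2 fog nodes with capacities 2 and 1
tasks = {"t1": ["f1", "f2"], "t2": ["f1", "f2"], "t3": ["f1", "f2"]}
nodes = {"f1": ["t3", "t1", "t2"], "f2": ["t1", "t2", "t3"]}
print(many_to_one_matching(tasks, nodes, {"f1": 2, "f2": 1}))
```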


Subject(s)
Algorithms, Computer Simulation, Reproducibility of Results
4.
Sensors (Basel) ; 22(4)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35214219

ABSTRACT

The paradigm of dynamic shared access aims to provide flexible spectrum usage. Recently, the Federal Communications Commission (FCC) proposed a new dynamic spectrum management framework for sharing the 3.5 GHz (3550-3700 MHz) federal band, called the Citizens Broadband Radio Service (CBRS) band, which is governed by a Spectrum Access System (SAS). The SAS is responsible for managing CBRS users, who are classified into three tiers: incumbent access (IA) users, priority access license (PAL) users, and general authorized access (GAA) users. In this article, a dynamic channel assignment algorithm for PAL and GAA users is designed with the goal of maximizing the transmission rate while minimizing the total cost of GAA users accessing PAL reserved channels. We propose a new mathematical model based on multi-objective optimization for selecting PAL operators and allocating idle PAL reserved channels to GAA users, considering the diversity of the channels' attributes and of the GAA users' business needs. The proposed model is evaluated and validated on various performance metrics through extensive simulations and compared with existing algorithms such as the Hungarian algorithm, the auction algorithm, and the Gale-Shapley algorithm. The results indicate that the overall transmission rate, net cost, and data rate per unit cost remain the same as with the classical Hungarian method and the auction algorithm; however, the proposed model solves the resource allocation problem approximately four times faster and with better load management, which validates its efficiency.
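For reference, the Hungarian-method baseline the abstract compares against can be run in a few lines with SciPy. The cost matrix here is synthetic and stands in for whatever price-versus-rate objective the paper actually optimizes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_gaa_users, n_channels = 5, 7

# Synthetic cost of GAA user i accessing PAL reserved channel j
# (illustrative numbers only).
cost = rng.uniform(1.0, 10.0, size=(n_gaa_users, n_channels))

rows, cols = linear_sum_assignment(cost)   # Hungarian-method baseline
for u, c in zip(rows, cols):
    print(f"GAA user {u} -> channel {c} (cost {cost[u, c]:.2f})")
print("total cost:", cost[rows, cols].sum().round(2))
```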

5.
Sensors (Basel) ; 21(19)2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34640821

ABSTRACT

The widespread development of wireless technologies and the advancements in multimedia communication have had a positive impact on the performance of wireless transceivers. We investigate the performance of our three-stage turbo-detected system using the state-of-the-art High Efficiency Video Coding (HEVC), also known as the H.265 video standard. The system makes use of sphere packing (SP) modulation combined with the gain technique of layered steered space-time codes (LSSTC). The proposed three-stage system is simulated for the correlated Rayleigh fading channel, and the resulting bit-error rate (BER) curve is free of any error floor. The system employs low-complexity source-bit coding (SBC) for protecting the H.265 coded stream. An intermediate recursive unity-rate code (URC) with an infinite impulse response is employed as an inner precoder. More specifically, the URC assists in preventing a BER floor by distributing the information across the decoders. An observable gain in BER and peak signal-to-noise ratio (PSNR) performance is achieved with increasing minimum Hamming distance (dH,min) in the three-stage system. Convergence of the proposed system is analyzed through an extrinsic information transfer (EXIT) chart. Our proposed system demonstrates a gain of about 22 dB over the benchmark system utilizing LSSTC-SP for iterative source-channel detection but without the optimized SBC schemes.

6.
Sensors (Basel) ; 21(16)2021 Aug 13.
Article in English | MEDLINE | ID: mdl-34450901

ABSTRACT

The introduction of 5G, with very high speeds and ever-advancing cellular device capabilities, has increased the demand for high-data-rate wireless multimedia communication. Data compression, transmission robustness, and error resilience are introduced to meet today's increased demands for high data rates. An innovative approach is to design a setup of source bit codes (SBCs) that ensures the convergence of joint source-channel coding (JSCC) and correspondingly yields a lower bit error ratio (BER). The soft-bit-assisted source and channel codes are optimized jointly for optimum convergence. Source bit codes assisted by iterative detection are used with a rate-1 precoder to evaluate the performance of the above scheme for transmitting data-partitioned (DP) H.264/AVC frames over a narrowband correlated Rayleigh fading channel. A novel approach using sphere packing (SP) modulation aided differential space time spreading (DSTS) in combination with SBC is designed for video transmission to cope with channel fading. Furthermore, the effect of SBCs with different minimum Hamming distances d(H,min) but similar coding rates on objective video quality, measured as peak signal-to-noise ratio (PSNR), and on the overall bit error ratio (BER) is explored. EXtrinsic Information Transfer (EXIT) charts are used to analyze the convergence behavior of the SBC and its iterative scheme. Specifically, the experiments show that the proposed SBC error protection scheme with d(H,min) = 6 outperforms SBCs of the same code rate but with d(H,min) = 3 by 3 dB at a PSNR degradation of 1 dB. Furthermore, simulation results show that an Eb/N0 gain of 27 dB is achieved with the rate-1/3 SBC compared to the benchmark rate-1 SBC codes.
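The d(H,min) figures above refer to the minimum Hamming distance of the source bit code. For a small binary linear block code this can be checked by brute force, as sketched below; the generator matrix is a toy example, not one of the paper's SBCs.

```python
import itertools
import numpy as np

def min_hamming_distance(G):
    """Brute-force d_H,min of a binary linear block code with generator G.
    For a linear code this equals the minimum weight of a nonzero codeword."""
    k, n = G.shape
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue
        codeword = np.mod(np.array(msg) @ G, 2)
        best = min(best, int(codeword.sum()))
    return best

# Toy rate-1/2 code (illustrative generator matrix, not the paper's SBC)
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]])
print("d_H,min =", min_hamming_distance(G))
```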

7.
Sensors (Basel) ; 21(3)2021 Jan 24.
Article in English | MEDLINE | ID: mdl-33498805

ABSTRACT

The increasing popularity of using wireless devices for routine tasks has increased the demand for multiple-input multiple-output (MIMO) technology to utilize limited bandwidth efficiently. The comparatively large space available at the base station (BS) makes it straightforward to exploit the useful properties of MIMO. At the mobile handset, by contrast, limited space means that complex procedures are required to increase the number of active antenna elements. In this paper, to address such issues, a four-element dual-band, dual-diversity MIMO dipole antenna is proposed for 5G-enabled handsets. The proposed antenna design relies on space diversity as well as pattern diversity to provide acceptable MIMO performance. The proposed dipole antenna operates simultaneously in the 3.6 and 4.7 GHz sub-6 GHz bands. The usefulness of the proposed 4×4 MIMO dipole antenna has been verified by comparing the simulated and measured results using a fabricated version of the antenna. A specific absorption rate (SAR) analysis has been carried out using the CST Voxel model (a heterogeneous biological human head), which shows that the maximum SAR value for 10 g of head tissue is well below the permitted value of 2.0 W/kg. The total efficiency of each antenna element in this structure is -2.88, -3.12, -1.92, and -2.45 dB at 3.6 GHz, and -1.61, -2.19, -1.72, and -1.18 dB, respectively, at 4.7 GHz. The isolation, the envelope correlation coefficient (ECC) between adjacent ports, and the capacity loss are all within the standard margins, making the structure appropriate for MIMO applications. The effect of the handgrip and the housing box on total antenna efficiency was analyzed, and only a 5% variation was observed, which results from careful placement of the antenna elements.
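The ECC between two ports is commonly estimated from S-parameters using the standard Blanch et al. expression, sketched below. This is a generic formula valid for low-loss antennas; the port values are illustrative, not measurements from the paper.

```python
import numpy as np

def ecc_from_s_params(s11, s21, s12, s22):
    """Envelope correlation coefficient between two antenna ports from
    S-parameters (Blanch et al. formula; assumes low-loss antennas)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s12) ** 2 - abs(s22) ** 2)
    return num / den

# Illustrative values for a well-isolated MIMO antenna pair
print(ecc_from_s_params(0.1 + 0.05j, 0.08 - 0.02j, 0.08 - 0.02j, 0.12 + 0.03j))
```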

8.
Entropy (Basel) ; 23(5)2021 May 01.
Article in English | MEDLINE | ID: mdl-34062751

ABSTRACT

This article investigates the performance of various sophisticated channel coding and transmission schemes for achieving reliable transmission of a highly compressed video stream. Novel error protection schemes, including a Non-Convergent Coding (NCC) scheme, Non-Convergent Coding assisted with Differential Space Time Spreading (DSTS) and Sphere Packing (SP) modulation (NCDSTS-SP), and Convergent Coding assisted with DSTS and SP modulation (CDSTS-SP), are analyzed using the Bit Error Ratio (BER) and Peak Signal to Noise Ratio (PSNR) performance metrics. Error reduction is achieved using a sophisticated transceiver comprising the SP modulation technique assisted by DSTS. The performance of iterative Soft Bit Source Decoding (SBSD) in combination with channel codes is analyzed for various error protection setups under a consistent overall bit-rate budget. Additionally, the iterative behavior of the SBSD-assisted RSC decoder is analyzed with the aid of an Extrinsic Information Transfer (EXIT) chart in order to examine the achievable turbo cliff of the iterative decoding process. The subjective and objective video quality of the proposed error protection schemes is analyzed with the H.264 Advanced Video Coding and H.265 High Efficiency Video Coding standards, using diverse video sequences of different resolutions, motion content, and dynamism. In the presence of a noisy channel, low-resolution videos outperform their high-resolution counterparts, and sequences with low motion content and dynamism outperform those with high motion content and dynamism. More specifically, with the H.265 video coding standard, the Non-Convergent Coding scheme assisted with DSTS and SP modulation and an enhanced transmission mechanism yields an Eb/N0 gain of 20 dB relative to the plain Non-Convergent Coding and transmission mechanism at an objective PSNR of 42 dB; both schemes employ an identical code rate. Furthermore, the Convergent Coding scheme assisted with DSTS and SP modulation achieves superior performance relative to its equivalent-rate Non-Convergent counterpart, with a gain of 16 dB at an objective PSNR of 42 dB. Moreover, the maximum PSNR achievable with the H.265 video coding standard is 45 dB, a gain of 3 dB over the identical-code-rate H.264 scheme.
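PSNR, the objective metric quoted throughout these results, is simply 10·log10(MAX²/MSE) between the reference and reconstructed frames. A minimal implementation for 8-bit frames follows, with synthetic data for the demo.

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two 8-bit frames:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    ref = np.asarray(reference, dtype=float)
    dist = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative: a frame corrupted by mild Gaussian noise
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = np.clip(frame + rng.normal(0, 4, frame.shape), 0, 255)
print(f"PSNR = {psnr(frame, noisy):.1f} dB")
```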

9.
Entropy (Basel) ; 23(2)2021 Feb 18.
Article in English | MEDLINE | ID: mdl-33670499

ABSTRACT

The reliable transmission of multimedia information coded with highly compression-efficient encoders is a challenging task. This article presents the iterative convergence performance of IrRegular Convolutional Codes (IRCCs) aided by multidimensional Sphere Packing (SP) modulation assisted Differential Space Time Spreading (IRCC-SP-DSTS) for the transmission of an H.264/Advanced Video Coding (AVC) compressed video stream. Three different regular and irregular error protection schemes are presented. In the Regular Error Protection (REP) scheme, all partitions of the video sequence are protected equally with a rate-3/4 IRCC. In Irregular Error Protection scheme-1 (IREP-1), the H.264/AVC partitions are prioritized as A, B, and C, respectively, whereas in Irregular Error Protection scheme-2 (IREP-2) they are prioritized as B, A, and C. The performance of the iterative paradigm of an inner IRCC and an outer rate-1 precoder is analyzed using an EXtrinsic Information Transfer (EXIT) chart, and the Quality of Experience (QoE) of the proposed mechanism is evaluated using the Bit Error Rate (BER) metric and a Peak Signal to Noise Ratio (PSNR)-based objective quality metric. More specifically, it is concluded that the proposed IREP-2 scheme exhibits an Eb/N0 gain of 1 dB relative to IREP-1 and of 0.6 dB relative to the REP scheme at a PSNR degradation of 1 dB.

10.
Sensors (Basel) ; 20(21)2020 Oct 23.
Article in English | MEDLINE | ID: mdl-33114043

ABSTRACT

This publication describes an innovative approach to voice control of operational and technical functions in a real Smart Home (SH) environment. Voice control within SH requires robust technological systems for building automation and technology visualization, software for recognizing individual voice commands, and a robust system for additive noise cancellation. KNX technology is used for building automation and is described in the article. The LabVIEW SW tool is used for visualization, data connectivity to the speech recognizer, connection to the sound card, and the mathematical calculations within additive noise cancellation. Commands are recognized with the speech recognition tool built into the Microsoft Windows OS. The least mean squares (LMS) algorithm and independent component analysis (ICA) are used to cancel additive noise from the speech signal measured in the real SH environment. In the proposed experiments, the success rate of voice command recognition was compared for different types of additive interference (television, vacuum cleaner, washing machine, dishwasher, and fan) in the real SH environment. The recognition success rate was greater than 95% for the selected experiments.
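A minimal sketch of the LMS half of the noise-cancelling stage: a reference noise input drives an adaptive FIR filter whose output is subtracted from the microphone signal, and the error signal is the cleaned speech estimate. The filter length, step size, and toy signals are assumptions.

```python
import numpy as np

def lms_noise_canceller(primary, noise_ref, n_taps=32, mu=0.01):
    """Least-mean-squares (LMS) adaptive noise canceller.

    primary   : speech + additive noise (microphone signal)
    noise_ref : reference noise measured near the interference source
    Returns the error signal, i.e. the cleaned speech estimate.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for i in range(n_taps - 1, len(primary)):
        x = noise_ref[i - n_taps + 1:i + 1][::-1]  # newest sample first
        y = w @ x                                  # estimate of the noise in primary[i]
        e = primary[i] - y                         # noise-cancelled output sample
        w += 2 * mu * e * x                        # LMS weight update
        out[i] = e
    return out

# Toy demo: sinusoidal "speech" plus causally filtered noise
rng = np.random.default_rng(0)
n = 4000
speech = np.sin(2 * np.pi * 5 * np.arange(n) / 1000)
noise = rng.normal(0, 1, n)
primary = speech + np.convolve(noise, [0.6, 0.3, 0.1])[:n]
cleaned = lms_noise_canceller(primary, noise)
print("residual power:", np.mean((cleaned[1000:] - speech[1000:]) ** 2).round(4))
```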

11.
Sensors (Basel) ; 19(24)2019 Dec 09.
Article in English | MEDLINE | ID: mdl-31835335

ABSTRACT

At present, one of the primary tasks of the construction industry is building transport infrastructure. This concerns both the construction of new town bypasses and the repair of existing roads damaged by heavy traffic, especially freight transport. Whether for a new structure or a reconstruction, it is always very important to choose a suitable method of subsoil treatment. One of the most commonly used methods is compaction using vibratory rollers, which is very effective both in its results and in its low cost compared to other methods. When compacting the subsoil with vibratory rollers, vibration is transmitted to the surrounding rock environment. Although the intensity of these vibrations is not as pronounced as in other methods of subsoil treatment, they can have a significant effect, for example during compaction in urban areas or near historical structures. It is therefore advisable to monitor the effect of these vibrations on the environment during construction. This paper presents an original experimental comparison of standard seismic instrumentation with a newly developed interferometric sensor for monitoring the vibrations generated during subsoil compaction with vibratory rollers. The paper presents time- and frequency-domain results, as well as attenuation curves representing the real attenuation of vibrations in the given rock environment. The results show that a system operating on a different physical principle from the one currently in use has the potential to replace the existing, very expensive seismic equipment.

12.
Sensors (Basel) ; 19(23)2019 Nov 24.
Article in English | MEDLINE | ID: mdl-31771275

ABSTRACT

This paper presents a neural network approach to weather forecast improvement. Predicted parameters, such as air temperature or precipitation, play a crucial role not only in the transportation sector but also in people's everyday activities. Numerical weather models require real measured data to run a correct forecast. These data are obtained from automatic weather stations with intelligent sensors, and collecting and processing the sensor data is a necessity for finding the optimal estimate of weather conditions. The European Centre for Medium-Range Weather Forecasts (ECMWF) model serves as the main basis for medium-range predictions among the European countries. This model can provide forecasts up to 10 days ahead with a horizontal resolution of 9 km. Although ECMWF is currently the global weather system with the highest horizontal resolution, this resolution is still two times coarser than that offered by limited-area (regional) numerical models (e.g., ALADIN, which is used in many European and North African countries). Regional models use a global forecasting model and a sensor-based weather monitoring network as input parameters (the global atmospheric situation at the regional model's geographic boundaries, and a description of the atmospheric state in numerical form), and because the analyzed area is much smaller (typically one country), the available computing power allows an even higher resolution for predicting key meteorological parameters. However, forecast data from regional models are available only for a specific country, end users cannot find them all in one place, and not all members provide open access to these data. Although the ECMWF model is commercial, several web services offer it free of charge, and because it delivers forecasts for the whole of Europe (and indeed the whole world), it is more user-friendly and attractive for potential customers. The proposed novel hybrid method based on machine learning is capable of increasing the accuracy of ECMWF forecast outputs to the level provided by limited-area models, and it can deliver a more accurate forecast in real time.
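The general idea of such a hybrid correction can be sketched as a regression problem: learn a mapping from the coarse ECMWF output plus local station sensor readings to the locally observed value. A gradient-boosting regressor stands in here for the paper's neural network, and the features and data are entirely synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Illustrative inputs: coarse ECMWF 2 m temperature forecast plus local
# station sensor readings (elevation, humidity, pressure). All synthetic.
ecmwf_t2m = rng.normal(12, 8, n)
elevation = rng.uniform(100, 1500, n)
humidity = rng.uniform(30, 100, n)
pressure = rng.normal(1013, 10, n)

# Synthetic "observed" value: the forecast plus a terrain- and
# humidity-dependent bias that the corrector should learn.
observed = (ecmwf_t2m - 0.0065 * (elevation - 400)
            + 0.02 * (humidity - 60) + rng.normal(0, 0.5, n))

X = np.column_stack([ecmwf_t2m, elevation, humidity, pressure])
X_tr, X_te, y_tr, y_te = train_test_split(X, observed, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
raw_mae = np.mean(np.abs(X_te[:, 0] - y_te))        # error of the raw forecast
corrected_mae = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"raw ECMWF MAE {raw_mae:.2f} K -> corrected MAE {corrected_mae:.2f} K")
```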

13.
Data Brief ; 54: 110458, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38711739

ABSTRACT

This paper presents a dataset comprising 700 video sequences encoded in the two most popular video formats (codecs) of today, H.264 and H.265 (HEVC). Six reference sequences were encoded under different quality profiles, including several bitrates and resolutions, and were affected by various packet loss rates. Subsequently, the image quality of the encoded video sequences was assessed by both subjective and objective evaluation. The enclosed spreadsheet therefore contains the results of both assessment approaches in the form of MOS (Mean Opinion Score) values delivered by the Absolute Category Rating (ACR) procedure, SSIM (Structural Similarity Index Measure), and VMAF (Video Multimethod Assessment Fusion). All assessments are available for each test sequence. This allows a comprehensive evaluation of coding efficiency under different test scenarios without the need for real observers or a secure laboratory environment, as recommended by the ITU (International Telecommunication Union). As there is currently no standardized mapping function between the results of subjective and objective methods, this dataset can also be used to design and verify experimental machine learning algorithms that contribute to solving the relevant research issues.
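One typical first use of such a dataset is to measure how well the objective scores track the subjective MOS. A short sketch follows; the file name and column names are hypothetical, since the spreadsheet layout is not given in the abstract.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# File and column names are hypothetical; adapt to the actual spreadsheet.
df = pd.read_excel("video_quality_dataset.xlsx")
for metric in ("SSIM", "VMAF"):
    r, _ = pearsonr(df[metric], df["MOS"])
    rho, _ = spearmanr(df[metric], df["MOS"])
    print(f"{metric} vs MOS: Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```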

14.
Sci Rep ; 14(1): 69, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38167902

ABSTRACT

Pakistan falls significantly below the recommended forest coverage of 20 to 30 percent of total area, with less than 6 percent of its land under forest cover. This deficiency is primarily attributed to illicit deforestation for wood and charcoal, coupled with a failure to embrace advanced techniques for forest estimation, monitoring, and supervision. Remote sensing techniques leveraging Sentinel-2 satellite images were therefore employed. Both single-layer stacked images and temporal layer stacked images from various dates were used for forest classification. An artificial neural network (ANN) supervised classification algorithm yielded notable results: using a single-layer stacked Sentinel-2 image, it achieved an impressive 91.37% training overall accuracy with a 0.865 kappa coefficient, and 93.77% testing overall accuracy with a 0.902 kappa coefficient. The temporal layer stacked image approach performed even better, yielding 98.07% overall training accuracy, 97.75% overall testing accuracy, and kappa coefficients of 0.970 and 0.965, respectively. The random forest (RF) algorithm achieved 99.12% overall training accuracy, 92.90% testing accuracy, and kappa coefficients of 0.986 and 0.882 on the single-layer image. Notably, with the temporal layer stacked Sentinel-2 image, the RF algorithm reached exceptional performance, with 99.79% training accuracy, 96.98% validation accuracy, and kappa coefficients of 0.996 and 0.954. In terms of forest cover estimation, the ANN algorithm identified 31.07% total forest coverage in the District Abbottabad region, while the RF algorithm recorded a slightly higher 31.17%. This research highlights the potential of advanced remote sensing techniques and machine learning algorithms to improve forest cover assessment and monitoring strategies.
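Overall accuracy and the kappa coefficient quoted above are standard agreement measures computed from a confusion matrix; a minimal computation with scikit-learn on synthetic forest/non-forest labels follows.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Illustrative labels: 0 = non-forest, 1 = forest (synthetic, not the paper's data)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_pred = np.where(rng.random(500) < 0.95, y_true, 1 - y_true)  # ~95% correct

print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("overall accuracy:", accuracy_score(y_true, y_pred))
print("kappa coefficient:", round(cohen_kappa_score(y_true, y_pred), 3))
```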

15.
Data Brief ; 53: 110125, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38370917

ABSTRACT

The Cattle Biometrics Dataset is the result of a rigorous data collection process encompassing a wide range of cattle photographs obtained from publicly accessible cattle markets and farms. The dataset contains a comprehensive collection of more than 8,000 annotated samples from several cattle breeds. It represents a valuable asset for conducting research in the field of biometric recognition. The cattle vary in age, gender, breed, and environmental conditions. Every photograph, taken with cameras of varying quality, is thoroughly annotated, with special attention given to the muzzle of the cattle, which is considered an excellent biometric characteristic. In addition to its obvious practical benefits, this dataset possesses significant potential for extensive reuse. Within the domain of computer vision, it serves as a catalyst for algorithmic advancements, whereas in the agricultural sector it augments practices related to cattle management. Machine learning practitioners will find it valuable for constructing and experimenting with models, especially in the context of transfer learning. Interdisciplinary collaboration is actively encouraged, facilitating the advancement of knowledge at the intersections of agriculture, computer science, and data science. The Cattle Biometrics Dataset represents a valuable resource with the potential to stimulate significant advancements across academic disciplines, fostering groundbreaking research and innovation.

16.
PLoS One ; 19(3): e0299127, 2024.
Article in English | MEDLINE | ID: mdl-38536782

ABSTRACT

Depression is a serious mental health disorder affecting millions of individuals worldwide. Timely and precise recognition of depression is vital for appropriate mediation and effective treatment. Electroencephalography (EEG) has surfaced as a promising tool for inspecting the neural correlates of depression and therefore, has the potential to contribute to the diagnosis of depression effectively. This study presents an EEG-based mental depressive disorder detection mechanism using a publicly available EEG dataset called Multi-modal Open Dataset for Mental-disorder Analysis (MODMA). This study uses EEG data acquired from 55 participants using 3 electrodes in the resting-state condition. Twelve temporal domain features are extracted from the EEG data by creating a non-overlapping window of 10 seconds, which is presented to a novel feature selection mechanism. The feature selection algorithm selects the optimum chunk of attributes with the highest discriminative power to classify the mental depressive disorders patients and healthy controls. The selected EEG attributes are classified using three different classification algorithms i.e., Best- First (BF) Tree, k-nearest neighbor (KNN), and AdaBoost. The highest classification accuracy of 96.36% is achieved using BF-Tree using a feature vector length of 12. The proposed mental depressive classification scheme outperforms the existing state-of-the-art depression classification schemes in terms of the number of electrodes used for EEG recording, feature vector length, and the achieved classification accuracy. The proposed framework could be used in psychiatric settings, providing valuable support to psychiatrists.
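A sketch of the windowing step the abstract describes: split the resting-state recording into non-overlapping 10 s windows and extract temporal-domain features per channel. The abstract does not list the twelve features, so the features and sampling rate below are illustrative.

```python
import numpy as np

def temporal_features(window):
    """A few illustrative temporal-domain features per EEG window."""
    diff = np.diff(window)
    return np.array([
        window.mean(), window.std(), np.abs(window).max(),
        np.mean(diff ** 2),                       # mean squared derivative
        ((window[:-1] * window[1:]) < 0).mean(),  # zero-crossing rate
    ])

def windowed_features(eeg, fs_hz=250, window_s=10):
    """Non-overlapping windows of `window_s` seconds, one feature row each.
    eeg : (n_channels, n_samples) array; fs_hz is an assumed sampling rate."""
    step = fs_hz * window_s
    rows = []
    for start in range(0, eeg.shape[1] - step + 1, step):
        w = eeg[:, start:start + step]
        rows.append(np.concatenate([temporal_features(ch) for ch in w]))
    return np.array(rows)

# 3 electrodes, 60 s of synthetic signal -> 6 windows of features
rng = np.random.default_rng(0)
print(windowed_features(rng.normal(size=(3, 250 * 60))).shape)
```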


Subject(s)
Depression, Support Vector Machine, Humans, Depression/diagnosis, Algorithms, Electroencephalography, Machine Learning
17.
PLoS One ; 19(9): e0307825, 2024.
Article in English | MEDLINE | ID: mdl-39241003

ABSTRACT

Brain tumors pose significant global health concerns due to their high mortality rates and limited treatment options. These tumors, arising from abnormal cell growth within the brain, exhibit various sizes and shapes, making their manual detection from magnetic resonance imaging (MRI) scans a subjective and challenging task for healthcare professionals and necessitating automated solutions. This study investigates the potential of deep learning, specifically the DenseNet architecture, to automate brain tumor classification, aiming to enhance accuracy and generalizability for clinical applications. We utilized the Figshare brain tumor dataset, comprising 3,064 T1-weighted contrast-enhanced MRI images from 233 patients with three prevalent tumor types: meningioma, glioma, and pituitary tumor. Four pre-trained deep learning models (ResNet, EfficientNet, MobileNet, and DenseNet) were evaluated using transfer learning from ImageNet. DenseNet achieved the highest test set accuracy of 96%, outperforming ResNet (91%), EfficientNet (91%), and MobileNet (93%), so we focused on improving DenseNet as the base model. To enhance its generalizability, we implemented a fine-tuning approach with regularization techniques, including data augmentation, dropout, batch normalization, and global average pooling, coupled with hyperparameter optimization. The enhanced DenseNet model achieved an accuracy of 97.1%. Our findings demonstrate the effectiveness of DenseNet with transfer learning and fine-tuning for brain tumor classification, highlighting its potential to improve diagnostic accuracy and reliability in clinical settings.
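A minimal PyTorch sketch of the transfer-learning setup described: DenseNet with ImageNet weights, a frozen backbone, and a new dropout-regularized 3-class head. The dropout rate, input size, and choice of the 121-layer variant are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet-121 pre-trained on ImageNet, adapted to 3 tumor classes.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

for p in model.features.parameters():
    p.requires_grad = False            # freeze the backbone for transfer learning

model.classifier = nn.Sequential(      # dropout + new 3-way classification head
    nn.Dropout(0.3),                   # dropout rate is an assumption
    nn.Linear(1024, 3),                # densenet121 features have 1024 channels
)

# DenseNet already applies global average pooling before the classifier,
# so the head above receives a (batch, 1024) feature vector.
x = torch.randn(2, 3, 224, 224)        # dummy batch of MRI slices
print(model(x).shape)                  # torch.Size([2, 3])
```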


Subject(s)
Brain Neoplasms, Deep Learning, Magnetic Resonance Imaging, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Brain Neoplasms/classification, Magnetic Resonance Imaging/methods, Meningioma/diagnostic imaging, Meningioma/pathology, Glioma/diagnostic imaging, Glioma/pathology, Glioma/classification, Male, Female, Pituitary Neoplasms/diagnostic imaging, Pituitary Neoplasms/pathology, Pituitary Neoplasms/classification
18.
Sci Rep ; 14(1): 14976, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38951646

ABSTRACT

Software-defined networking (SDN) is a pioneering network paradigm that strategically decouples the control plane from the data and management planes, thereby streamlining network administration. SDN's centralized network management makes configuring access control list (ACL) policies easier, which is important because these policies frequently change with network application needs and topology modifications. Such changes trigger modifications at the SDN controller: the controller computes updated flow rules in accordance with the modified ACL policies and installs them at the data plane. Existing research on reactive flow rule installation has shown that changes in ACL policies result in packet violations and network inefficiencies, and network management becomes difficult because inconsistent flow rules must be deleted and new flow rules computed for the modified policies. The proposed solution handles ACL policy changes efficiently by automatically detecting a policy change, detecting and deleting the resulting inconsistent flow rules, caching rules at the controller, and adding new flow rules at the data plane. To achieve this, a comprehensive analysis of both proactive and reactive mechanisms in SDN is carried out. To facilitate the evaluation of these mechanisms, the ACL policies are modeled using a 5-tuple structure comprising Source, Destination, Protocol, Ports, and Action. The resulting policies are translated into a policy implementation file and transmitted to the controller. The controller then uses the network topology and the ACL policies to compute the necessary flow rules, caches them in a hash table, and installs them on the switches. The proposed solution is simulated in the Mininet emulator using a set of ACL policies, hosts, and switches, with results obtained by varying the ACL policy at different time instances, the inter-packet delay, and the flow timeout value. The simulation results show that reactive flow rule installation performs better than the proactive mechanism with respect to network throughput, packet violations, successful packet delivery, normalized overhead, policy change detection time, and end-to-end delay. The proposed solution, designed to run directly on SDN controllers that support the Pyretic language, provides a flexible and efficient approach to flow rule installation. The mechanism can help network administrators implement ACL policies and may also be integrated with network monitoring and debugging tools to analyze the effectiveness of the policy change mechanism.
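A sketch of the 5-tuple policy model and the controller-side hash-table cache the abstract mentions. The flow-rule fields and the digest-based change detection are illustrative stand-ins, not the paper's actual Pyretic implementation.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class AclPolicy:
    """5-tuple ACL policy: Source, Destination, Protocol, Port, Action."""
    src: str
    dst: str
    protocol: str
    port: int
    action: str          # "allow" or "deny"

flow_cache: dict[AclPolicy, dict] = {}   # controller-side hash table

def compile_flow_rule(policy: AclPolicy) -> dict:
    """Translate one ACL policy into an (illustrative) flow-rule dict."""
    if policy not in flow_cache:         # reuse cached rules, recompute on miss only
        flow_cache[policy] = {
            "match": {"nw_src": policy.src, "nw_dst": policy.dst,
                      "nw_proto": policy.protocol, "tp_dst": policy.port},
            "actions": [] if policy.action == "deny" else ["forward"],
        }
    return flow_cache[policy]

def policy_file_digest(policies: list[AclPolicy]) -> str:
    """Digest of the whole policy set; a changed digest signals that stale
    flow rules must be found and deleted."""
    body = "\n".join(map(str, sorted(policies, key=str)))
    return hashlib.sha256(body.encode()).hexdigest()

p = AclPolicy("10.0.0.1", "10.0.0.2", "tcp", 80, "allow")
print(compile_flow_rule(p))
print(policy_file_digest([p])[:16])
```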

19.
Heliyon ; 10(8): e29410, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38644823

ABSTRACT

Currently, the Internet of Things (IoT) generates a huge amount of traffic data in communication and information technology. The diversification and integration of IoT applications and terminals make IoT vulnerable to intrusion attacks, so it is necessary to develop an efficient Intrusion Detection System (IDS) that guarantees the reliability, integrity, and security of IoT systems. Intrusion detection is a challenging task because of inappropriate features in the input data and slow training processes. To address these issues, an effective metaheuristic-based feature selection method and deep learning techniques are developed to enhance the IDS. Feature selection based on the Osprey Optimization Algorithm (OOA) is proposed to select the most informative features from the input, leading to effective differentiation between normal and attack network traffic. Moreover, the traditional sigmoid and tangent activation functions are replaced with the Exponential Linear Unit (ELU) activation function to propose a modified Bi-directional Long Short-Term Memory (Bi-LSTM), which is used to classify the types of intrusion attacks. The ELU activation function keeps gradients from vanishing during back-propagation and leads to faster learning. The approach is analyzed on three different datasets: N-BaIoT, the Canadian Institute for Cybersecurity Intrusion Detection Dataset 2017 (CICIDS-2017), and ToN-IoT. The empirical investigation shows that the proposed framework obtains impressive detection accuracies of 99.98%, 99.97%, and 99.88% on the N-BaIoT, CICIDS-2017, and ToN-IoT datasets, respectively. Compared to peer frameworks, it achieves high detection accuracy with better interpretability and reduced processing time.
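A sketch of what "replacing tanh with ELU in a Bi-LSTM" can look like at the gate level. This variant keeps the sigmoid gates (so they stay in [0, 1]) and swaps ELU in for the two tanh activations; whether the paper also modifies the gates is not stated in the abstract, so this is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EluLstmCell(nn.Module):
    """LSTM cell with ELU in place of the usual tanh activations."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, state):
        h, c = state
        gates = self.linear(torch.cat([x, h], dim=-1))
        i, f, g, o = gates.chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * F.elu(g)           # ELU instead of tanh
        h = o * F.elu(c)                   # ELU instead of tanh
        return h, c

def bi_elu_lstm(cell_fw, cell_bw, seq):    # seq: (T, batch, features)
    """Bidirectional wrapper: run the cell forward and backward in time."""
    T, B, _ = seq.shape
    zeros = lambda: (seq.new_zeros(B, cell_fw.hidden_size),
                     seq.new_zeros(B, cell_fw.hidden_size))
    out_fw, state = [], zeros()
    for t in range(T):
        state = cell_fw(seq[t], state)
        out_fw.append(state[0])
    out_bw, state = [], zeros()
    for t in reversed(range(T)):
        state = cell_bw(seq[t], state)
        out_bw.append(state[0])
    return torch.cat([torch.stack(out_fw), torch.stack(out_bw[::-1])], dim=-1)

x = torch.randn(20, 8, 32)                 # 20 timesteps, batch 8, 32 features
out = bi_elu_lstm(EluLstmCell(32, 64), EluLstmCell(32, 64), x)
print(out.shape)                           # torch.Size([20, 8, 128])
```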

20.
PLoS One ; 18(2): e0275653, 2023.
Article in English | MEDLINE | ID: mdl-36758037

ABSTRACT

Deep learning-based, data-driven methods using multi-sensor spectro-temporal data are widely used for pattern identification and land-cover classification in the remote sensing domain. However, finding the right tuning for deep learning models is extremely important, as different parameter settings can alter the performance of a model. In our research work, we evaluated the performance of Convolutional Long Short-Term Memory (ConvLSTM) deep learning models over various hyper-parameter settings on an imbalanced dataset, and the configuration with the highest performance was utilized for land-cover classification. The parameters considered for experimentation were the batch size, the number of layers in the ConvLSTM model, and the number of filters in each layer. Experiments were also conducted on an LSTM model for comparison, using the same hyper-parameters. The two-layer ConvLSTM model with 16 filters and a batch size of 128 outperformed the other settings, with an overall validation accuracy of 97.71%. The accuracy achieved by the LSTM was 93.9% for training and 92.7% for testing.
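A minimal Keras sketch of the best-performing configuration reported (two ConvLSTM layers, 16 filters each, batch size 128). The input shape, number of classes, and pooling head are assumptions, since the abstract does not describe them.

```python
import tensorflow as tf

# Two ConvLSTM layers with 16 filters each, per the reported best setting.
# Input shape (time steps, H, W, bands) is an assumption for patch time series.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6, 32, 32, 4)),            # 6 dates, 32x32 patch, 4 bands
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                               return_sequences=True),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(8, activation="softmax"),  # 8 land-cover classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, batch_size=128, ...)   # batch size from the paper
```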


Subject(s)
Long-Term Memory, Neural Networks (Computer)