Results 1 - 9 of 9
1.
Sensors (Basel); 23(17), 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37687776

ABSTRACT

Unmanned underwater vehicles (UUVs) are becoming increasingly important for a variety of applications, including ocean exploration, mine detection, and military surveillance. This paper aims to provide a comprehensive examination of the technologies that enable the operation of UUVs. We begin by introducing various types of unmanned vehicles capable of functioning in diverse environments. Subsequently, we delve into the underlying technologies necessary for unmanned vehicles operating in underwater environments. These technologies encompass communication, propulsion, dive systems, control systems, sensing, localization, energy resources, and supply. We also address general technical approaches and research contributions within this domain. Furthermore, we present a comprehensive overview of related work, survey methodologies employed, research inquiries, statistical trends, relevant keywords, and supporting articles that substantiate both broad and specific assertions. Expanding on this, we provide a detailed and coherent explanation of the operational framework of UUVs and their corresponding supporting technologies, with an emphasis on technical descriptions. We then evaluate the existing gaps in the performance of supporting technologies and explore the recent challenges associated with implementing the Thorp model for the distribution of shared resources, specifically in communication and energy domains. We also address the joint design of operations involving unmanned surface vehicles (USVs), unmanned aerial vehicles (UAVs), and UUVs, which necessitate collaborative research endeavors to accomplish mission objectives. This analysis highlights the need for future research efforts in these areas. Finally, we outline several critical research questions that warrant exploration in future studies.
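For reference, the Thorp model mentioned above is a widely used empirical formula for acoustic absorption in seawater, which constrains the range and bandwidth of underwater communication links. A minimal sketch of the formula as it is commonly stated (frequency in kHz, absorption in dB/km):

```python
def thorp_absorption_db_per_km(f_khz: float) -> float:
    """Thorp's empirical absorption formula for seawater (f in kHz).

    Valid roughly for frequencies above a few hundred Hz; returns
    attenuation in dB/km. Higher frequencies are absorbed far more
    strongly, which is why UUV acoustic links trade bandwidth for range.
    """
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

# Example: a 10 kHz acoustic link loses about 1.2 dB/km to absorption alone.
print(thorp_absorption_db_per_km(10.0))
```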

2.
Sensors (Basel); 21(10), 2021 May 19.
Article in English | MEDLINE | ID: mdl-34069364

ABSTRACT

In mobile edge computing (MEC), partial computational offloading can reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into components, some of which execute locally on the UE while the rest are offloaded to a mobile edge server (MES). In this paper, we investigate partial offloading in MEC using a supervised deep learning approach. The proposed comprehensive and energy-efficient deep-learning-based offloading technique (CEDOT) intelligently selects both the offloading policy and the size of each task component so as to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task and the best offloading policy. A deep neural network (DNN) is trained on a comprehensive dataset generated from our mathematical model; although evaluating the mathematical model directly is computationally expensive, the trained DNN makes the online decision process fast and lightweight. We propose a comprehensive cost function that depends on various delays, energy consumption, radio resources, and computation resources, and that additionally accounts for the energy and delay introduced by the task-division process in partial offloading. Existing work does not consider task partitioning jointly with the computational offloading policy, and hence ignores the time and energy consumed by the task-division process in the cost function. The proposed work includes all these parameters in the cost function and uses them to generate a comprehensive training dataset; once the dataset is available, the trained DNN delivers fast decision making with low energy consumption. Simulation results demonstrate the superior performance of the proposed technique, with the DNN deciding the offloading policy and the task partitioning accurately and with minimum delay and energy consumption for the UE. The trained DNN achieves more than 70% accuracy on the comprehensive training dataset. The simulation results also show that the DNN's accuracy remains constant while UEs move, meaning that offloading and partitioning decisions are unaffected by UE mobility.
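As a rough illustration of the kind of supervised decision network the abstract describes (the architecture, feature set, and output encoding below are our assumptions, not the paper's), a minimal sketch in Python/PyTorch:

```python
import torch
import torch.nn as nn

# Hypothetical input features: task size, CPU cycles required, channel
# gain, MES load, UE battery level. Outputs: a binary offloading policy
# and the fraction of the task to offload. All sizes are illustrative.
class OffloadingNet(nn.Module):
    def __init__(self, n_features=5):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.policy = nn.Linear(64, 2)    # offload vs. local (logits)
        self.fraction = nn.Linear(64, 1)  # share of the task to offload

    def forward(self, x):
        h = self.body(x)
        return self.policy(h), torch.sigmoid(self.fraction(h))

# Training pairs (features -> labels) would come from minimizing the
# paper's delay/energy cost function offline, then supervising the DNN.
net = OffloadingNet()
policy_logits, frac = net(torch.randn(1, 5))  # toy forward pass
```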

3.
Sensors (Basel); 19(24), 2019 Dec 12.
Article in English | MEDLINE | ID: mdl-31842268

ABSTRACT

Software-Defined Networking (SDN) has opened a promising approach for future networks, which otherwise mostly require low-level configuration to implement different controls. By decoupling the network control plane from the data plane, SDN has become a crucial platform for implementing Internet of Things (IoT) services. However, a static SDN controller placement cannot provide an efficient solution in distributed and dynamic IoT networks. In this paper, we investigate an optimization framework based on a well-known theory, submodular optimization, to formulate and address different aspects of the controller placement problem in a distributed network, specifically in an IoT scenario. Concretely, we develop a framework that deals with a series of controller placement problems, from basic to complicated use cases. For each use case, we provide a discussion and a heuristic algorithm based on the submodularity concept. Finally, we present extensive simulations conducted on our framework. The simulation results show that our proposed algorithms outperform the considered baseline methods in terms of execution time, number of controllers, and network latency.
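For context, heuristics built on submodularity typically instantiate the classic greedy algorithm of Nemhauser, Wolsey, and Fisher, which guarantees a (1 - 1/e) approximation for monotone submodular objectives under a cardinality constraint. A generic sketch follows; the objective here is a placeholder, not the paper's per-use-case objectives:

```python
def greedy_placement(candidates, objective, k):
    """Pick k controller sites by greedy marginal gain.

    `objective(S)` must be monotone and submodular over sets of sites
    (e.g., switch coverage); the greedy result is then within
    (1 - 1/e) of the optimum.
    """
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: objective(chosen | {c}) - objective(chosen))
        chosen.add(best)
    return chosen

# Toy usage: maximize the number of switches covered by chosen sites.
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {5}}
covered = lambda S: len(set().union(*(coverage[c] for c in S)) if S else set())
print(greedy_placement(coverage.keys(), covered, k=2))
```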

4.
Sensors (Basel); 15(4): 7658-90, 2015 Mar 30.
Article in English | MEDLINE | ID: mdl-25831084

ABSTRACT

Cognitive radio (CR) has emerged as a promising technology for solving spectrum scarcity and providing a ubiquitous wireless access environment. CR-enabled secondary users (SUs) exploit spectrum white spaces opportunistically and must immediately vacate the acquired licensed channels when primary users (PUs) arrive. Accessing licensed channels without prior knowledge of PU traffic patterns causes severe throughput degradation due to excessive channel switching and PU-to-SU collisions. It is therefore critically important to design a PU activity-aware medium access control (MAC) protocol for cognitive radio networks (CRNs). In this paper, we first propose a licensed-channel usage-pattern identification scheme based on a two-state Markov model, and estimate future idle slots from previous observations of the channels. Based on these past observations, we also compute a rank for each available licensed channel that assesses the likelihood of successful SU transmission during the estimated idle slot. Second, we propose a PU activity-aware distributed MAC (PAD-MAC) protocol for heterogeneous multi-channel CRNs that selects the best channel for each SU to enhance its throughput. PAD-MAC controls SU activities by allowing them to exploit licensed channels only for the duration of the estimated idle slots, and enables predictive, fast channel switching. To evaluate the performance of the proposed PAD-MAC, we compare it with the distributed QoS-aware MAC (QC-MAC) and listen-before-talk MAC schemes. Extensive numerical results show significant improvements of PAD-MAC in terms of SU throughput, SU channel-switching rate, and PU-to-SU collision rate.
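A two-state (idle/busy) Markov model of this kind can be fitted directly from channel-sensing history. The sketch below shows one plausible estimator; the paper's actual scheme may differ in details:

```python
import numpy as np

def transition_probs(history):
    """Estimate the 2x2 Markov transition matrix from a 0/1 sequence
    of sensing observations (0 = idle, 1 = busy)."""
    counts = np.zeros((2, 2))
    for a, b in zip(history[:-1], history[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def expected_idle_run(P):
    """Expected length of an idle period: with stay-idle probability
    P[0, 0], the idle run length is geometric with mean 1/(1 - P[0, 0])."""
    return 1.0 / (1.0 - P[0, 0]) if P[0, 0] < 1 else float("inf")

history = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]  # toy sensing observations
P = transition_probs(history)
print(expected_idle_run(P))               # one way to rank channels
```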

5.
Sensors (Basel); 14(8): 14500-25, 2014 Aug 08.
Article in English | MEDLINE | ID: mdl-25111241

ABSTRACT

Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality-of-service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which in turn cause a network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Resource optimization is therefore one of the most critical design issues in multi-hop, multi-radio WMNs carrying multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in a wireless mesh network is an NP-complete problem. In this paper, we propose a novel integer linear programming (ILP) optimization model that reconstructs an optimal connected mesh backbone topology with a minimum number of links and relay nodes, satisfies the given end-to-end QoS demands of multimedia traffic, and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment.
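To give the ILP a concrete shape, the toy model below minimizes the number of active links subject to a single aggregate bandwidth demand, using the PuLP library. It is only a sketch: the paper's actual formulation also covers relay-node selection, end-to-end delay, jitter, packet loss, and redundancy constraints, and the link data here is invented.

```python
import pulp

# Invented example data: candidate links with residual capacities (Mb/s).
capacity = {("a", "b"): 54, ("b", "c"): 54, ("a", "c"): 24}
demand = 60  # aggregate multimedia bandwidth demand (Mb/s), also invented

prob = pulp.LpProblem("mesh_backbone_link_minimization", pulp.LpMinimize)
use = {e: pulp.LpVariable(f"use_{e[0]}_{e[1]}", cat="Binary") for e in capacity}

prob += pulp.lpSum(use.values())  # objective: fewest active links
prob += pulp.lpSum(capacity[e] * use[e] for e in capacity) >= demand

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([e for e in capacity if use[e].value() == 1])
```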


Subject(s)
Computer Communication Networks/instrumentation, Multimedia, Wireless Technology/instrumentation, Algorithms, Computer Simulation, Theoretical Models, Linear Programming
6.
IEEE J Biomed Health Inform; 28(3): 1261-1272, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37043319

ABSTRACT

The abnormal growth of malignant or nonmalignant tissues in the brain causes long-term damage to the brain. Magnetic resonance imaging (MRI) is one of the most common methods of detecting brain tumors. To determine whether a patient has a brain tumor, MRI scans are examined manually by experts once they are acquired. MRI images examined by different specialists may yield inconsistent results, since professionals formulate their evaluations differently. Furthermore, merely identifying a tumor is not enough; to begin treatment as soon as possible, it is equally important to determine the type of tumor the patient has. In this paper, we consider the multiclass classification of brain tumors, since significant work has already been done on binary classification. To detect tumors faster, more objectively, and more reliably, we investigated the performance of several deep learning (DL) architectures, including Visual Geometry Group 16 (VGG16), InceptionV3, VGG19, ResNet50, InceptionResNetV2, and Xception. We then propose a transfer learning (TL)-based multiclass classification model called IVX16, built on the three best-performing TL models. We use a dataset consisting of 3264 images in total. Through extensive experiments, we achieve peak accuracies of 95.11%, 93.88%, 94.19%, 93.88%, 93.58%, 94.5%, and 96.94% for VGG16, InceptionV3, VGG19, ResNet50, InceptionResNetV2, Xception, and IVX16, respectively. Furthermore, we use explainable AI to evaluate the performance and validity of each DL model, implement recently introduced Vision Transformer (ViT) models, and compare their output with the TL and ensemble models.
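As a hedged sketch of what a TL ensemble like IVX16 might look like (the head size, input resolution, number of classes, and averaging scheme below are assumptions, not the paper's published design), in Python with TensorFlow/Keras:

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4  # assumed, e.g., glioma / meningioma / pituitary / no tumor

def tl_model(app_fn):
    """Frozen ImageNet backbone plus a small trainable softmax head."""
    base = app_fn(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # transfer learning: reuse ImageNet features
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Three strong backbones as stand-ins for the best-performing TL models.
members = [tl_model(f) for f in (tf.keras.applications.VGG16,
                                 tf.keras.applications.InceptionV3,
                                 tf.keras.applications.Xception)]
# ...train each member on the labeled MRI dataset, then ensemble:

def ensemble_predict(x):
    """Average the members' softmax outputs (one plausible ensembling)."""
    return np.mean([m.predict(x, verbose=0) for m in members], axis=0)
```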


Subject(s)
Brain Neoplasms, Brain, Humans, Brain Neoplasms/diagnostic imaging, Electric Power Supplies, Machine Learning
7.
Article in English | MEDLINE | ID: mdl-39325615

ABSTRACT

Brain tumor detection has advanced significantly with the development of deep learning technology. Although multimodal data, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), offers potential diagnostic advantages, most existing studies rely on a single modality, because common fusion methods can lose critical information when attempting multimodal fusion. Effectively integrating multimodal data has therefore become a significant challenge. Additionally, medical image analysis requires large amounts of annotated data, and labeling images is a resource-intensive task that demands considerable time from experienced professionals. To address these challenges, this paper introduces a new unsupervised learning framework named Double-SimCLR. The framework builds on contrastive learning and features a dual-branch structure, enabling direct and simultaneous processing of MRI and CT images for multimodal feature fusion. Given the "weak feature" characteristics of CT images (e.g., low soft-tissue contrast and low resolution), we incorporate adaptive weight masking to enhance CT feature extraction. Moreover, we introduce a multimodal attention mechanism that keeps the model focused on salient information, improving the precision and robustness of brain tumor detection. Even without substantial labeled data, experimental results demonstrate that Double-SimCLR achieves 93.458% accuracy, 92.463% precision, and a 93.058% F1-score, outperforming state-of-the-art (SOTA) models by 2.871%, 2.643%, and 3.098%, respectively.
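The contrastive core of a SimCLR-style framework is the NT-Xent loss, which pulls the two views of the same sample together and pushes all other pairs apart. Below is a standard PyTorch rendering; treating the MRI and CT branches' projections as the two views is our reading of the dual-branch design, and details such as the temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def nt_xent(z_mri, z_ct, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z_mri, z_ct: (N, d) projection-head outputs for the two branches;
    row i of each is assumed to come from the same subject.
    """
    z = F.normalize(torch.cat([z_mri, z_ct]), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / tau                             # cosine sim / temperature
    n = z_mri.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool,
                               device=z.device), float("-inf"))  # drop self-pairs
    # The positive for row i is its counterpart in the other branch.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))  # toy check
```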

8.
Front Plant Sci; 14: 1224709, 2023.
Article in English | MEDLINE | ID: mdl-37600194

ABSTRACT

Plant diseases pose a major threat to agricultural production and the food supply chain, exposing plants to potentially disruptive pathogens that can affect the livelihoods of those who depend on agriculture. Deep learning has been applied in a range of fields such as object detection, autonomous vehicles, and fraud detection, and several researchers have tried to bring deep learning techniques to precision agriculture; however, the approaches they have adopted for disease detection and identification have both pros and cons. In this survey, we attempt to capture the significant advancements in machine-learning-based disease detection. We discuss the prevalent datasets and techniques that have been employed, and highlight emerging approaches to plant disease detection. By exploring these advancements, we aim to present a comprehensive overview of the prominent approaches in precision agriculture, along with their associated challenges and potential improvements. The paper delves into the challenges associated with implementation and briefly discusses future trends. Overall, it presents a bird's-eye view of plant disease datasets, deep learning techniques, their accuracies, and the challenges associated with them. Our insights will serve as a valuable resource for researchers and practitioners in the field, and we hope this survey will inform and inspire future research efforts, ultimately leading to improved precision agriculture practices and enhanced crop health management.

9.
Biomedicines; 9(11), 2021 Nov 20.
Article in English | MEDLINE | ID: mdl-34829962

ABSTRACT

Deep learning (DL) is a distinct class of machine learning that has achieved first-class performance in many fields of study. For epigenomics, the application of DL to assist physicians and scientists in human disease-relevant prediction tasks has been relatively unexplored until very recently. In this article, we critically review published studies that employed DL models to predict disease detection, subtype classification, and treatment responses using epigenomic data. A comprehensive search of PubMed, Scopus, Web of Science, Google Scholar, and arXiv.org was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Among 1140 initially identified publications, we included 22 articles in our review. DNA methylation and RNA-sequencing data are the most frequently used inputs for training the predictive models. The reviewed models achieved high accuracy, ranging from 88.3% to 100.0% for disease detection tasks, from 69.5% to 97.8% for subtype classification tasks, and from 80.0% to 93.0% for treatment response prediction tasks. We also present a workflow for developing a predictive model that encompasses all steps, from first defining human disease-related tasks to finally evaluating model performance. DL holds promise for transforming epigenomic big data into valuable knowledge that will enhance the development of translational epigenomics.
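To make the reviewed workflow concrete, here is a minimal illustrative classifier on DNA methylation data, the most common input modality among the reviewed studies. Everything here (feature count, architecture, binary disease-vs-control setup) is an assumption for illustration, written in Python with TensorFlow/Keras:

```python
import tensorflow as tf

N_CPG = 5000  # assumed: beta values for 5000 pre-selected CpG sites

# Task definition -> data -> model -> evaluation, mirroring the workflow.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_CPG,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),                    # wide data, so regularize
    tf.keras.layers.Dense(1, activation="sigmoid"),  # disease vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(X_train, y_train, validation_split=0.2, epochs=50)
# model.evaluate(X_test, y_test)  # held-out evaluation, per the workflow
```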
