Results 1 - 20 of 52
1.
Sensors (Basel) ; 23(21)2023 Nov 02.
Article in English | MEDLINE | ID: mdl-37960626

ABSTRACT

The Internet of Things (IoT) is one of the most pervasive technologies in manufacturing, automation, transportation, robotics, and agriculture, leveraging the sensing capability of IoT sensors. It plays a vital role in digital transformation and smart revolutions in critical infrastructure environments. However, handling heterogeneous data from different IoT devices is challenging from the perspective of security and privacy. An attacker can target the sensor communication between two IoT devices to jeopardize the regular operation of IoT-based critical infrastructure. In this paper, we propose an artificial intelligence (AI) and blockchain-driven secure data dissemination architecture to address critical infrastructure security and privacy issues. First, we reduce dimensionality using principal component analysis (PCA) and explainable AI (XAI) approaches. Then, we apply different AI classifiers, such as random forest (RF), decision tree (DT), support vector machine (SVM), perceptron, and Gaussian naive Bayes (GaussianNB), to classify the data as malicious or non-malicious. Furthermore, we employ an interplanetary file system (IPFS)-driven blockchain network that secures the non-malicious data. In addition, to strengthen the security of the AI classifiers, we analyze data poisoning attacks on the dataset, which manipulate sensitive data and mislead the classifiers into producing inaccurate results. To overcome this issue, we provide an anomaly detection approach that identifies malicious instances and removes the poisoned data from the dataset. The proposed architecture is evaluated using performance metrics such as accuracy, precision, recall, F1 score, and the receiver operating characteristic (ROC) curve. The findings show that the RF classifier outperforms the other AI classifiers, with an accuracy of 98.46%.
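A rough idea of how such a pipeline fits together can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, not the authors' implementation: it uses a synthetic stand-in dataset, scikit-learn's PCA and random forest, and an isolation forest standing in for the (unspecified) anomaly-detection step.

```python
# Minimal sketch (not the authors' code) of the classification-plus-poison-filtering
# pipeline described above, using scikit-learn on a synthetic stand-in dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic IoT traffic features standing in for the real sensor dataset.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12, random_state=0)

# Simulate a data-poisoning attack: a small fraction of samples get corrupted features
# and flipped labels.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y), size=100, replace=False)
X[poison_idx] += rng.normal(6.0, 1.0, size=(100, X.shape[1]))  # pushed far from the data cloud
y[poison_idx] = 1 - y[poison_idx]

# Anomaly-detection step: drop instances that IsolationForest flags as outliers.
mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
X_clean, y_clean = X[mask], y[mask]

# Dimensionality reduction followed by an RF classifier (the best performer in the paper).
X_pca = PCA(n_components=10).fit_transform(X_clean)
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y_clean, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```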

2.
Sensors (Basel) ; 23(15)2023 Jul 27.
Article in English | MEDLINE | ID: mdl-37571505

ABSTRACT

With the onset of 5G technology, the number of users is increasing drastically, and these users demand better service from the network. This study examines working frequencies in the millimeter-wave bands, where a key disadvantage is interference. The aim is to analyze the impact of different interference conditions on unmanned aerial vehicle (UAV) use scenarios, such as open-air gatherings and indoor and outdoor sports stadiums. Performance analysis was carried out in terms of received power and path loss readings.
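For a sense of the quantities reported (received power and path loss), the following sketch computes a free-space path-loss estimate for an assumed 28 GHz UAV link; the transmit power, antenna gains, and distance are illustrative values, not the study's configuration.

```python
# Illustrative received-power estimate (not from the paper) using the free-space
# path-loss model at a millimeter-wave carrier; the link parameters are assumptions.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 3e8
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) + 20 * math.log10(4 * math.pi / c)

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, distance_m, freq_hz):
    return pt_dbm + gt_dbi + gr_dbi - fspl_db(distance_m, freq_hz)

# Example: 28 GHz UAV link at 200 m with 23 dBm transmit power and directional antennas.
print(received_power_dbm(pt_dbm=23, gt_dbi=15, gr_dbi=10, distance_m=200, freq_hz=28e9))
```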

3.
Sensors (Basel) ; 23(16)2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37631830

ABSTRACT

Hybrid beamforming is a viable method for lowering the complexity and expense of massive multiple-input multiple-output (MIMO) systems while achieving data rates on par with digital beamforming. To this end, the research reported in this paper assesses the effectiveness of three beamforming architectures (analog, digital, and hybrid) in massive MIMO systems, with a focus on hybrid beamforming. In hybrid beamforming, groups of antennas share a single radio frequency chain, unlike digital beamforming, where each antenna has its own radio frequency chain. The beam formed toward a particular angle depends on the channel state information. Massive MIMO is then discussed in detail along with performance parameters such as bit error rate, signal-to-noise ratio, achievable sum rate, power consumption, and energy efficiency. Finally, the three beamforming techniques are compared.
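The gain difference between full digital control and phase-only analog control can be illustrated with a toy single-user example; the snippet below uses assumed values (64 antennas, a Rayleigh channel, fixed SNR) and is not the paper's simulation setup.

```python
# Toy comparison (assumed parameters, not the paper's simulation) of digital
# (per-antenna amplitude-and-phase) versus analog (phase-only) beamforming gain
# for a single-user MISO link with N transmit antennas.
import numpy as np

rng = np.random.default_rng(1)
N, snr_linear = 64, 10.0                                   # antennas, transmit SNR
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Rayleigh channel

w_digital = h / np.linalg.norm(h)                          # matched filter: amplitude + phase control
w_analog = np.exp(1j * np.angle(h)) / np.sqrt(N)           # phase shifters only, constant modulus

for name, w in [("digital", w_digital), ("analog", w_analog)]:
    gain = np.abs(np.vdot(w, h)) ** 2
    rate = np.log2(1 + snr_linear * gain)                  # achievable rate in bit/s/Hz
    print(f"{name:7s} beamforming gain={gain:6.2f}, rate={rate:5.2f} bit/s/Hz")
```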

4.
Sensors (Basel) ; 23(14)2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37514719

ABSTRACT

With the development of the Internet of Things (IoT), the number of devices will increase tremendously, placing greater demand on wireless communication resources. It has been shown in the literature that non-orthogonal multiple access (NOMA) offers high multiplexing gains due to the simultaneous transfer of signals, and massive multiple-input multiple-output (mMIMO) offers high spectral efficiency due to high antenna gain and high multiplexing gains. Therefore, a downlink mMIMO NOMA cooperative system is considered in this paper. Cell-edge users in a 5G cellular system generally suffer from poor signal quality, as they are far from the base station (BS) and expend high battery power to decode signals superimposed through NOMA. Thus, this paper uses a cooperative relay system and proposes an mMIMO NOMA double-mode model to reduce battery expenditure and increase the cell-edge user's energy efficiency and sum rate. In the double-mode model, two modes of operation are defined and selected depending on the relay's battery level to improve the system's energy efficiency. Comprehensive numerical results show the improvement in the proposed system's average sum rate and average energy efficiency compared with a conventional system. In a cooperative NOMA system, the BS transmits a signal to a relay, and the relay forwards the signal to a cluster of users. Cluster formation depends on the user positions and geographical restrictions concerning the relay equipment, so it is vital to form user clusters for efficient and simultaneous transmission. This paper therefore also presents a novel method for efficient cluster formation.
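The sum-rate bookkeeping for a basic two-user downlink NOMA pair can be sketched as follows; the power-allocation coefficients and channel gains are assumed values, not the paper's system parameters.

```python
# Back-of-the-envelope two-user downlink NOMA sum-rate check (assumed values,
# not the paper's system model): the far (weak) user gets more power, the near
# (strong) user applies successive interference cancellation (SIC).
import numpy as np

p_total = 1.0                     # normalized BS transmit power
a_far, a_near = 0.8, 0.2          # power-allocation coefficients (a_far + a_near = 1)
g_far, g_near = 0.05, 1.5         # channel gain over noise for far and near users

# Far user decodes its own signal, treating the near user's signal as interference.
sinr_far = (a_far * p_total * g_far) / (a_near * p_total * g_far + 1)
# Near user removes the far user's signal via SIC, then decodes interference-free.
sinr_near = a_near * p_total * g_near

rate_far, rate_near = np.log2(1 + sinr_far), np.log2(1 + sinr_near)
print(f"far-user rate = {rate_far:.3f}, near-user rate = {rate_near:.3f}, "
      f"sum rate = {rate_far + rate_near:.3f} bit/s/Hz")
```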

5.
Artif Intell Rev ; : 1-93, 2023 Mar 12.
Article in English | MEDLINE | ID: mdl-37362891

ABSTRACT

Machine learning (ML) and deep learning (DL) models are popular in many areas, from business and medicine to industry, healthcare, transportation, and smart cities. However, conventional centralized training techniques may not suit upcoming distributed applications, which require high accuracy and quick response times, mainly because of limited storage and performance bottlenecks on centralized servers during the execution of various ML- and DL-based models. Federated learning (FL), in contrast, is a developing approach for training ML models in a collaborative and distributed manner. It allows these models to be exploited to their full potential with abundant data and distributed computing power. In FL, edge computing devices collaborate to train a global model on their private data and computational power without sharing that data on the network, thereby offering privacy preservation by default. However, the distributed nature of FL faces various challenges related to data heterogeneity, client mobility, scalability, and seamless data aggregation. Moreover, the communication channels, clients, and central servers are also vulnerable to attacks, which may lead to various security threats. Thus, a structured vulnerability and risk assessment is needed to deploy FL successfully in real-life scenarios. Furthermore, the scope of FL is expanding in terms of its application areas, with each area facing different threats. In this paper, we analyze various vulnerabilities present in the FL environment and present a literature survey of possible threats from the perspective of different application areas. We also review the most recent defensive algorithms and strategies used to guard against security and privacy threats in those areas. For systematic coverage of the topic, we consider various applications under four main categories: space, air, ground, and underwater communications. We also compare the surveyed methodologies with regard to the underlying approach, base model, datasets, evaluation metrics, and achievements. Lastly, the future directions and existing drawbacks of the various approaches are discussed in detail.
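The collaborative training loop described above is essentially federated averaging; a compact, assumption-laden sketch (linear model, three synthetic clients, one local gradient step per round) is shown below.

```python
# Minimal federated-averaging (FedAvg) round in NumPy, illustrating the FL training
# loop described above; the linear model, client data, and single local step are
# assumptions made for the sake of a compact example.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

def make_client(n):                       # each client holds private local data
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]
global_w = np.zeros(3)

for rnd in range(20):                     # communication rounds
    updates, sizes = [], []
    for X, y in clients:                  # local training: one gradient step on local data only
        grad = 2 * X.T @ (X @ global_w - y) / len(y)
        updates.append(global_w - 0.1 * grad)
        sizes.append(len(y))
    # Server aggregates client models weighted by local dataset size (FedAvg).
    global_w = np.average(updates, axis=0, weights=sizes)

print("learned weights:", np.round(global_w, 3))
```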

6.
Sensors (Basel) ; 23(9)2023 Apr 22.
Article in English | MEDLINE | ID: mdl-37177403

ABSTRACT

The aim of the peer-to-peer (P2P) decentralized gaming industry has shifted towards realistic gaming environment (GE) support for game players (GPs). Recent innovations in the metaverse have motivated the gaming industry to look beyond augmented reality and virtual reality engines, which improve the realism of virtual game worlds. In gaming metaverses (GMs), GPs can play, socialize, and trade virtual objects in the GE. On game servers (GSs), the collected GM data are analyzed by artificial intelligence models to personalize the GE for each GP. However, communication with GSs suffers from high latency, bandwidth concerns, and issues regarding the security and privacy of GP data, which pose a severe threat to the emerging GM landscape. Thus, we propose a scheme, Game-o-Meta, that integrates federated learning in the GE, with GP data being trained on local devices only. We envision the GE over a sixth-generation tactile internet service to address the bandwidth and latency issues and assure real-time haptic control. In the GM, the GP's game tasks are collected and trained on the GS, and then the GP downloads the pre-trained model and trains it further using local data. The proposed scheme is compared against traditional schemes based on parameters such as GP task offloading, GP avatar rendering latency, and GS availability. The results indicate the viability of the proposed scheme.

7.
Sensors (Basel) ; 23(5)2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36904915

ABSTRACT

Topic modeling is a statistical, unsupervised machine learning technique for mapping a high-dimensional corpus to a low-dimensional topical subspace, but its output quality leaves room for improvement. A topic model's topic is expected to be interpretable as a concept, i.e., to correspond to human understanding of a topic occurring in texts. When discovering corpus themes, inference constantly uses the vocabulary, whose size impacts topic quality, and corpora typically contain many inflectional forms. Since words that frequently appear in the same sentence are likely to share a latent topic, practically all topic models rely on co-occurrence signals between terms in the corpus. Topics become weaker because of the abundance of distinct tokens in languages with extensive inflectional morphology, and lemmatization is often used to preempt this problem. Gujarati is one such morphologically rich language, as a word may have several inflectional forms. This paper proposes a deterministic finite automaton (DFA)-based lemmatization technique for the Gujarati language that transforms inflected word forms into their root words (lemmas). The set of topics is then inferred from this lemmatized corpus of Gujarati text. We employ statistical divergence measurements to identify semantically less coherent (overly general) topics. The results show that the lemmatized Gujarati corpus yields more interpretable and meaningful topics than unlemmatized text. Finally, lemmatization reduces the vocabulary size by 16% and improves the semantic coherence for all three measurements (log conditional probability, pointwise mutual information, and normalized pointwise mutual information) from -9.39 to -7.49, -6.79 to -5.18, and -0.23 to -0.17, respectively.
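The coherence measurements mentioned above can be computed from document co-occurrence counts; the toy sketch below illustrates PMI and normalized PMI for a topic's top words and is not the authors' evaluation code.

```python
# Sketch (not the authors' implementation) of pointwise mutual information (PMI) and
# normalized PMI coherence for a topic's top words, computed from document-level
# co-occurrence counts on a toy corpus.
import math
from itertools import combinations

docs = [
    {"rain", "cloud", "storm"},
    {"rain", "cloud", "umbrella"},
    {"stock", "market", "price"},
    {"rain", "storm", "wind"},
]
topic_top_words = ["rain", "cloud", "storm"]
eps, n_docs = 1e-12, len(docs)

def doc_prob(*words):
    # Fraction of documents containing all the given words.
    return sum(all(w in d for w in words) for d in docs) / n_docs

pmi_scores, npmi_scores = [], []
for w1, w2 in combinations(topic_top_words, 2):
    p1, p2, p12 = doc_prob(w1), doc_prob(w2), doc_prob(w1, w2)
    pmi = math.log((p12 + eps) / (p1 * p2))
    pmi_scores.append(pmi)
    npmi_scores.append(pmi / -math.log(p12 + eps))   # normalized to [-1, 1]

print("avg PMI :", sum(pmi_scores) / len(pmi_scores))
print("avg NPMI:", sum(npmi_scores) / len(npmi_scores))
```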

8.
IEEE Sens J ; 23(2): 955-968, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36913217

ABSTRACT

Recently, unmanned aerial vehicles (UAVs) have been deployed in the Novel Coronavirus Disease 2019 (COVID-19) vaccine distribution process. To address the issues of fake vaccine distribution and real-time massive UAV monitoring and control at nodal centers (NCs), the authors propose SanJeeVni, a blockchain (BC)-assisted UAV vaccine distribution scheme built on sixth-generation enhanced ultra-reliable low-latency communication (6G-eRLLC). The scheme considers user registration, vaccine requests, and distribution through a public Solana BC setup, which assures a scalable transaction rate. Based on vaccine requests at production setups, UAV swarms are triggered to deliver vaccines to NCs. An intelligent edge offloading scheme is proposed to support UAV coordinate and routing path setups. The scheme is compared against fifth-generation (5G) uRLLC communication. In the simulation, we achieve an 86% improvement in service latency, a 12.2% reduction in UAV energy consumption with 76.25% more UAV coverage in 6G-eRLLC, and a significant improvement of [Formula: see text]% in storage cost against the Ethereum network, which indicates the scheme's efficacy in practical setups.

9.
Procedia Comput Sci ; 218: 1506-1515, 2023.
Article in English | MEDLINE | ID: mdl-36743795

ABSTRACT

The global Novel Coronavirus Disease 2019 (COVID-19) pandemic forced social distancing norms that have been followed worldwide. Thus, traditional biometric-based attendance marking systems have been replaced with contactless attendance marking schemes. However, such schemes suffer from limitations of manufacturing cost, spoofing attacks, and security vulnerabilities. This paper therefore proposes a contactless camera-based attendance system with built-in anti-spoofing functionality. The proposed scheme can detect liveness, so fake attendance marking is eliminated. The scheme is also scalable and cost-effective, with a generic solution adaptable to schools, colleges, or other places where attendance is required. The system also eliminates the one-entry-at-a-time limitation by detecting multiple faces, allowing simultaneous attendance marking. In the performance analysis, parameters such as image precision, storage cost, retrieval latency, and the anti-spoofing module are evaluated against existing schemes. An accuracy of 95.85% is reported for the model, with a significant improvement of 33.52% in storage cost through the Firebase database, which outperforms existing state-of-the-art schemes.

10.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850606

ABSTRACT

A cognitive radio network (CRN) is an intelligent network that can detect unoccupied spectrum space without interfering with the primary user (PU). Spectrum scarcity arises due to static channel allocation, which the CRN mitigates. Spectrum handoff (SHO) management is a critical problem that must be addressed in the CRN to ensure uninterrupted connectivity and profitable use of unallocated spectrum space for secondary users (SUs). However, SHO has drawbacks, namely communication delay and power consumption, so reducing the number of handoffs should be a priority. This study proposes the use of dynamic spectrum access (DSA) to check for available channels for the SU during handoff using a machine learning-based metaheuristic algorithm. The simulation results show that the proposed support vector machine-based red deer algorithm (SVM-RDA) is resilient and has low complexity. The experimental evaluation of the suggested algorithm reports the number of handoffs, unsuccessful handoffs, handoff delay, throughput, signal-to-noise ratio (SNR), SU bandwidth, and total spectrum bandwidth. The study demonstrates improved system performance during SHO: the inferred technique anticipates handoff delay and minimizes the number of handoffs. The results show that the recommended method makes better predictions with fewer handoffs compared to the other three approaches considered.
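Only the SVM portion of SVM-RDA is easy to sketch compactly; the snippet below trains an SVM to predict channel availability from assumed sensing features, while the red deer metaheuristic used in the paper alongside the SVM is omitted.

```python
# Illustrative sketch of the SVM component only (the red deer metaheuristic is omitted):
# predict whether a channel will be free for a secondary user from simple sensing
# features; the features and labels below are synthetic assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
# Features per channel observation: sensed PU energy, PU duty cycle, recent idle time.
X = np.column_stack([rng.exponential(1.0, n), rng.uniform(0, 1, n), rng.exponential(2.0, n)])
# A channel is "available" when PU energy and duty cycle are low (synthetic ground truth).
y = ((X[:, 0] < 1.0) & (X[:, 1] < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("channel-availability prediction accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```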

11.
Sensors (Basel) ; 23(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36679767

ABSTRACT

Mobile applications have grown rapidly over the past few decades to offer futuristic services, such as autonomous vehicles, smart farming, and smart cities. Such applications require ubiquitous, real-time, and secure communications to deliver services quickly. Toward this aim, sixth-generation (6G) wireless technology offers superior performance with high reliability, enhanced transmission rates, and low latency. However, managing the resources of the aforementioned applications is highly complex in such a precarious network environment. An adversary can perform various network-related attacks (i.e., data injection or modification) to jeopardize the regular operation of smart applications. Therefore, incorporating blockchain technology into smart applications can be a prominent solution to tackle security, reliability, and data-sharing privacy concerns. Motivated by this, we present a case study on public safety applications that utilizes the essential characteristics of artificial intelligence (AI), blockchain, and a 6G network to handle data integrity attacks on crime data. The case study is assessed using various performance parameters, considering blockchain scalability, packet drop ratio, and training accuracy. Lastly, we explore different research challenges of adopting blockchain in the 6G wireless network.


Subject(s)
Artificial Intelligence , Blockchain , Reproducibility of Results , Intelligence , Agriculture , Computer Security
12.
IEEE Access ; 10: 74131-74151, 2022.
Article in English | MEDLINE | ID: mdl-36345376

ABSTRACT

Recently, healthcare stakeholders have taken steps to strengthen healthcare resources and curb the COVID-19 wave. There has been a surge in vaccinations to curb the virus wave, but it is crucial to strengthen our healthcare resources to fight COVID-19 and similar pandemics. Recent studies have suggested effective forecasting models for the COVID-19 transmission rate, spread, and number of positive cases, but forecasting of the healthcare resources needed to meet the current spread has not been addressed. Motivated by this gap, in this paper, we propose a scheme, ABV-CoViD (Availability of Beds and Ventilators for COVID-19 patients), that forms an ensemble forecasting model to predict the availability of beds and ventilators (ABV) for COVID-19 patients. The scheme considers a region-wise demarcation for the allotment of beds and ventilators (BV), termed resources, based on region-wise ABV and COVID-19-positive patients (inside the hospitals occupying the BV resources). We consider an integration of an artificial neural network (ANN) and an autoregressive integrated moving average (ARIMA) model to address both the linear and non-linear dependencies. We also consider the effective wave spread of COVID-19 among external patients (not occupying the BV resources) through a [Formula: see text]-ARNN model, which gives us long-term complex dependencies of BV resources in a future time window. We considered a COVID-19 healthcare dataset covering three US regions (Illinois, Michigan, and Indiana) to test our ensemble forecasting scheme from January 2021 to May 2022. We evaluated the scheme in terms of statistical performance metrics and validated that ensemble methods have higher accuracy. In the simulation, we considered the [Formula: see text] model for linear modelling and the [Formula: see text] model for ANN modelling, with [Formula: see text](12,6) forecasting. On a population of 29,390,897, the mean absolute error (MAE) obtained, averaged over the three regions, is 170.5514. The average root mean square error (RMSE) of [Formula: see text]-ARNN is 333.18, with an accuracy of 98.876%, which shows the scheme's efficacy in ABV measurement over conventional and manual resource allocation schemes.
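One common way to combine ARIMA and an ANN, similar in spirit to the ensemble described, is to fit ARIMA for the linear part and a small neural network on its residuals; the sketch below does this on a synthetic bed-availability series with assumed model orders and lag length, and is not the paper's exact configuration.

```python
# Hedged sketch of an ARIMA + ANN hybrid (ARIMA captures the linear part, an MLP
# learns the residuals); the synthetic series and hyperparameters are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(300)
y = 500 + 2 * t + 40 * np.sin(t / 10) + rng.normal(0, 10, t.size)  # synthetic ABV series

# 1) Linear component with ARIMA.
arima_fit = ARIMA(y, order=(2, 1, 2)).fit()
residuals = y - arima_fit.fittedvalues

# 2) Non-linear component: MLP trained on lagged residuals.
lags = 7
R = np.array([residuals[i - lags:i] for i in range(lags, len(residuals))])
r_next = residuals[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(R, r_next)

# 3) One-step-ahead ensemble forecast = ARIMA forecast + predicted residual.
arima_next = float(np.asarray(arima_fit.forecast(steps=1))[0])
resid_next = float(mlp.predict(residuals[-lags:].reshape(1, -1))[0])
print("hybrid forecast for next step:", arima_next + resid_next)
```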

13.
Comput Electr Eng ; 103: 108352, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36068837

ABSTRACT

The proliferating outbreak of COVID-19 raised global health concerns and brought many countries to a standstill. Several restraint strategies were imposed to suppress and flatten the mortality curve, such as lockdowns and quarantines. Artificial intelligence (AI) techniques could be a promising way to leverage these restraint strategies. However, real-time decision-making necessitates a cloud-oriented AI solution to control the pandemic. Though many cloud-oriented solutions exist, they have not been fully exploited for real-time data accessibility and high prediction accuracy. Motivated by these facts, this paper proposes a cloud-oriented AI-based scheme, referred to as D-espy (i.e., Disease-espy), for disease detection and prevention. The proposed D-espy scheme performs a comparative analysis between autoregressive integrated moving average (ARIMA), vanilla long short-term memory (LSTM), and stacked LSTM techniques, which shows the dominance of stacked LSTM in terms of prediction accuracy. Then, a Medical Resource Distribution (MRD) mechanism is proposed for the optimal distribution of medical resources. Next, a three-phase analysis of the COVID-19 spread is presented, which can help governing bodies decide on lockdown relaxation. Results show the efficacy of the D-espy scheme, with 96.2% prediction accuracy compared to the existing approaches.
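A stacked LSTM of the kind compared in the study can be sketched in a few lines of Keras; the window length, layer sizes, and synthetic case series below are assumptions for illustration only, not the D-espy configuration.

```python
# Minimal stacked-LSTM forecaster in Keras; all data and hyperparameters are assumed.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.cumsum(rng.poisson(50, 400)).astype("float32")   # synthetic cumulative cases
series = (series - series.mean()) / series.std()             # simple normalization

window = 14
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer emits full sequences
    tf.keras.layers.LSTM(32),                           # second (stacked) LSTM layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_val = model.predict(series[-window:].reshape(1, window, 1), verbose=0)
print("next-step (normalized) forecast:", float(next_val[0, 0]))
```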

14.
Biosensors (Basel) ; 12(8)2022 Jul 29.
Article in English | MEDLINE | ID: mdl-36004975

ABSTRACT

Parkinson's disease (PSD) is a neurological disorder of the brain in which loss of nigrostriatal integrity leads to motor and non-motor symptoms. Doctors can assess a patient based on history and symptoms; however, the symptoms are similar across various neurodegenerative diseases, such as progressive supranuclear palsy (PSP), multiple system atrophy-parkinsonian type (MSA), essential tremor, and Parkinson's tremor, so it is sometimes difficult to identify a patient's disease from symptoms alone. To address this issue, we used neuroimaging biomarkers to analyze dopamine deficiency in the brains of subjects. We generated different patterns of dopamine levels inside the brain, which identified the severity of the disease and helped us measure disease progression. For the classification of subjects, we used machine learning (ML) algorithms for multivariate classification based on neuroimaging biomarker data. In this paper, we propose a stacked ML-based classification model to identify healthy control (HC) and PSD subjects. In this stacked model, a meta learner learns to combine the predictions from various ML algorithms, such as K-nearest neighbor (KNN), random forest algorithm (RFA), and Gaussian naive Bayes (GANB), to achieve a high-performance model. The proposed model showed 92.5% accuracy, outperforming traditional schemes.
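The stacking arrangement maps directly onto scikit-learn's StackingClassifier; the sketch below uses the base learners named in the abstract and assumes a logistic-regression meta learner and synthetic features in place of the neuroimaging biomarkers.

```python
# Quick sketch of a stacked classifier with KNN, random forest, and Gaussian naive
# Bayes base learners; the meta learner and the synthetic data are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gnb", GaussianNB()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta learner combines base predictions
    cv=5,
)
stack.fit(X_tr, y_tr)
print("stacked-model accuracy:", accuracy_score(y_te, stack.predict(X_te)))
```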


Subject(s)
Parkinson Disease , Progressive Supranuclear Palsy , Bayes Theorem , Biomarkers , Dopamine , Female , Humans , Male , Parkinson Disease/diagnostic imaging , Progressive Supranuclear Palsy/diagnosis
15.
Sensors (Basel) ; 22(13)2022 Jun 26.
Article in English | MEDLINE | ID: mdl-35808325

ABSTRACT

In the Smart Grid (SG), Transactive Energy Management (TEM) is one of the most promising approaches to boost consumer participation in energy generation and energy management and to establish decentralized energy market models using Peer-to-Peer (P2P) trading. In P2P, a prosumer produces electric energy at their premises using Renewable Energy Resources (RES) such as solar and wind energy. This generated energy is then traded with consumers (who need the energy) in a nearby locality. P2P facilitates energy exchange in the localized micro-energy markets of the TEM system. Such decentralized P2P energy management could cater to diverse prosumer and utility business models. However, existing P2P approaches suffer from several issues, such as single points of failure and network bandwidth, scalability, trust, and security concerns. To handle these issues, this paper proposes a Decentralized and Transparent P2P Energy Trading (DT-P2PET) scheme using blockchain. The proposed DT-P2PET scheme aims to reduce the grid's energy generation and management burden while also increasing profit for both consumers and prosumers through a dynamic pricing mechanism. The DT-P2PET scheme uses Ethereum-blockchain-based Smart Contracts (SCs) and the InterPlanetary File System (IPFS) for P2P energy trading. Furthermore, a recommender mechanism is also introduced in this study to increase the number of prosumers. The Ethereum SCs are designed and deployed to perform P2P trading in real time in the proposed DT-P2PET scheme. The DT-P2PET scheme is evaluated on various parameters, such as profit generation (for both prosumers and consumers), data storage cost, network bandwidth, and data transfer rate, in contrast to existing approaches.


Subject(s)
Blockchain , Commerce , Computer Systems , Information Storage and Retrieval
16.
Sci Rep ; 12(1): 12247, 2022 07 18.
Article in English | MEDLINE | ID: mdl-35851092

ABSTRACT

The next sweeping revolution after the Internet is its scion, the Internet of Things (IoT), which has given every entity the power to connect to the web. However, this growing depth of the digital pool also oils the wheels for attackers to penetrate systems, so these threats and attacks have become a prime concern among researchers. With its promising features, Machine Learning (ML) has long been the solution for detecting such threats. However, general ML-based solutions fall short in practical deployments for detecting unknown threats, owing to domain changes, differing distributions, long training times, and a lack of labelled data. To tackle the aforementioned issues, Transfer Learning (TL) has emerged as a viable solution. Motivated by these facts, this article aims to leverage TL-based strategies to improve learning classifiers for detecting known and unknown threats targeting IoT systems. TL transfers the knowledge attained while learning one task to expedite the learning of new, similar tasks or problems. This article proposes a learning-based threat model for attack detection in the Smart Home environment (SALT). It uses the knowledge of known threats in the source domain (labelled data) to detect unknown threats in the target domain (unlabelled data). The proposed scheme addresses possible differences in the feature-space distribution, in the ratio of attack instances to normal ones, or both. The proposed threat model demonstrates the competence of ML combined with TL in improving the robustness of learning classifiers against threat variants, for detecting both known and unknown threats. The performance analysis shows that traditional schemes underperform for unknown threat variants, with accuracy dropping to 39% and recall to 56%.
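The transfer-learning idea (reusing knowledge from a labelled source domain to speed up learning on a shifted target domain) can be illustrated with a freeze-and-fine-tune sketch; this is not the SALT implementation, and the network, domain shift, and label budget below are assumptions.

```python
# Hedged transfer-learning sketch: a network pretrained on labelled source-domain
# traffic is reused on a shifted target domain by freezing its feature layers and
# retraining only the head on a small labelled subset. All data are synthetic.
import tensorflow as tf
from sklearn.datasets import make_classification

def make_domain(shift, seed):
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=seed)
    return (X + shift).astype("float32"), y

X_src, y_src = make_domain(shift=0.0, seed=0)     # source: known threats, plenty of labels
X_tgt, y_tgt = make_domain(shift=1.5, seed=1)     # target: shifted distribution, few labels

base = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu", name="feat1"),
    tf.keras.layers.Dense(32, activation="relu", name="feat2"),
    tf.keras.layers.Dense(1, activation="sigmoid", name="head"),
])
base.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
base.fit(X_src, y_src, epochs=10, batch_size=64, verbose=0)   # pretrain on source domain

# Transfer: freeze the feature extractor, fine-tune only the head on 100 target labels.
for layer in base.layers[:-1]:
    layer.trainable = False
base.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
base.fit(X_tgt[:100], y_tgt[:100], epochs=20, batch_size=16, verbose=0)

loss, acc = base.evaluate(X_tgt[100:], y_tgt[100:], verbose=0)
print("target-domain accuracy after transfer:", round(float(acc), 3))
```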


Subject(s)
Machine Learning
17.
Sensors (Basel) ; 22(11)2022 May 26.
Article in English | MEDLINE | ID: mdl-35684668

ABSTRACT

Integrating information and communication technology (ICT) with energy grid infrastructure introduces the smart grid (SG), which simplifies energy generation, transmission, and distribution. ICT is embedded in selected parts of the grid network, resulting in a partially deployed SG and raising issues such as energy losses, whether technical or non-technical (i.e., energy theft). Energy theft detection therefore plays a crucial role in reducing the energy generation burden on the SG and meeting consumer demand for energy. Motivated by these facts, in this paper, we propose a deep learning (DL)-based energy theft detection scheme, referred to as GrAb, which uses a data-driven analytics approach. GrAb uses a DL-based long short-term memory (LSTM) model to predict energy consumption from smart meter data. A threshold calculator is then used to derive a threshold on energy consumption. Both the predicted energy consumption and the threshold value are passed to a support vector machine (SVM)-based classifier that categorizes the energy losses as technical, non-technical (energy theft), or normal consumption. The proposed data-driven theft detection scheme identifies various forms of energy theft (e.g., smart meter data manipulation or clandestine connections). Experimental results show that the proposed scheme (GrAb) identifies energy theft more accurately than state-of-the-art approaches.
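The final classification stage can be sketched as follows, with the LSTM forecaster replaced by a pre-computed prediction array and all data synthetic; the deviation features and threshold value are assumptions rather than GrAb's actual design.

```python
# Compact sketch of the classification stage: deviations between predicted and metered
# consumption, compared against a threshold, are fed to an SVM that labels each reading
# as normal, technical loss, or theft. Everything here is synthetic and assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1500
predicted = rng.normal(10, 2, n)                              # stand-in for LSTM-predicted kWh
labels = rng.choice([0, 1, 2], size=n, p=[0.7, 0.15, 0.15])   # 0=normal, 1=technical, 2=theft

actual = predicted + rng.normal(0, 0.3, n)                    # normal readings track the prediction
actual[labels == 1] += rng.normal(1.5, 0.3, (labels == 1).sum())   # technical losses: mild excess
actual[labels == 2] -= rng.normal(4.0, 0.5, (labels == 2).sum())   # theft: metered use far too low

threshold = 1.0                                               # assumed output of the "threshold calculator"
X = np.column_stack([actual - predicted, np.abs(actual - predicted) > threshold])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print(classification_report(y_te, svm.predict(X_te), target_names=["normal", "technical", "theft"]))
```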


Subject(s)
Deep Learning , Computer Communication Networks , Physical Phenomena , Support Vector Machine , Theft
18.
Sensors (Basel) ; 22(11)2022 May 31.
Article in English | MEDLINE | ID: mdl-35684802

ABSTRACT

The emerging need for high data rates, low latency, and high network capacity encourages wireless networks (WNs) to build intelligent and dynamic services, such as intelligent transportation systems, smart homes, smart cities, and industrial automation. However, the WN is impeded by several security threats, such as data manipulation, denial-of-service, injection, man-in-the-middle, and session hijacking attacks, which deteriorate the security performance of the aforementioned WN-based intelligent services. Various security solutions, such as cryptography, artificial intelligence (AI), access control, and authentication, have been proposed by the scientific community around the world; however, they fall short of fully tackling these security issues. This necessitates a technology, namely blockchain, that offers decentralization, immutability, transparency, and security to protect the WN from security threats. Motivated by these facts, this paper presents a survey of WNs in the context of security and privacy issues with blockchain-based solutions. First, we analyze the existing research, highlighting security requirements and security issues in different generations of WN (4G, 5G, and 6G), and provide a comparative analysis of existing security solutions. Then, we showcase the influence of blockchain technology and prepare an exhaustive taxonomy of blockchain-enabled security solutions in WN. Further, we propose a blockchain- and 6G-based WN architecture to highlight the importance of blockchain technology in WN. The proposed architecture is evaluated against different performance metrics, such as scalability, packet loss ratio, and latency. Finally, we discuss various open issues and research challenges for blockchain-based WN solutions.


Subject(s)
Blockchain , Artificial Intelligence , Humans , Motivation , Privacy , Technology
19.
BMC Gastroenterol ; 22(1): 118, 2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35272611

ABSTRACT

BACKGROUND: The natural history and incidence of hepatocellular carcinoma (HCC) arising from indeterminate liver lesions are not well described. We aimed to define the incidence of HCC in a cohort of patients undergoing surveillance by magnetic resonance imaging (MRI) and to estimate any associations with incident HCC. METHODS: We performed a retrospective follow-up study, identifying MRI scans in which indeterminate lesions had been reported between January 2006 and January 2017. Subsequent MRI scan reports were reviewed for incident HCC arising from indeterminate lesions, data were extracted from electronic patient records, and survival analysis was performed to estimate associations with baseline factors. RESULTS: One hundred and nine patients with indeterminate lesions on MRI were identified. HCC developed in 19 (17%) patients over a mean follow-up of 4.6 years. Univariate Cox proportional hazards analysis found incident HCC to be significantly associated with baseline low platelet count (hazard ratio (HR) = 7.3, 95% confidence interval (CI) 2.1-24.9), high serum alpha-fetoprotein level (HR = 2.7, 95% CI 1.0-7.1), and alcohol consumption above fourteen units weekly (HR = 3.1, 95% CI 1.1-8.7). Multivariate analysis, however, found that only low platelet count was independently associated with HCC (HR = 5.5, 95% CI 0.6-5.1). CONCLUSIONS: HCC arises in approximately one fifth of indeterminate liver lesions over 4.6 years and is associated with a low platelet count at the time of first diagnosis of an indeterminate lesion. Incidence of HCC was more common in people with viral hepatitis and in those consuming more than 14 units of alcohol per week. Our data may be used to support a strategy of enhanced surveillance in patients with indeterminate lesions.
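For readers unfamiliar with the method, a Cox proportional hazards fit of this kind can be reproduced on synthetic data with the lifelines package; the covariates below only mimic those named in the abstract, and none of the study's data are used.

```python
# Illustrative Cox proportional hazards fit with lifelines (not the study's data or
# code); a small synthetic cohort mimics the abstract's variables: follow-up time,
# HCC event, low platelets, high AFP, and alcohol above 14 units weekly.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 109
df = pd.DataFrame({
    "low_platelets": rng.integers(0, 2, n),
    "high_afp": rng.integers(0, 2, n),
    "alcohol_gt_14_units": rng.integers(0, 2, n),
})
# Synthetic survival times: shorter time-to-HCC when risk factors are present.
risk = 0.8 * df.low_platelets + 0.4 * df.high_afp + 0.5 * df.alcohol_gt_14_units
df["years_followed"] = rng.exponential(6.0 * np.exp(-risk))
df["hcc_event"] = (df["years_followed"] < 4.6).astype(int)   # crude event/censoring split

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="hcc_event")
cph.print_summary()   # hazard ratios (exp(coef)) and confidence intervals per covariate
```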


Subject(s)
Hepatocellular Carcinoma , Liver Neoplasms , Hepatocellular Carcinoma/complications , Follow-Up Studies , Humans , Liver Neoplasms/complications , Magnetic Resonance Imaging/methods , Retrospective Studies
20.
Medicina (Kaunas) ; 58(2)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35208634

ABSTRACT

A coronavirus outbreak caused by a novel virus known as SARS-CoV-2 originated in the latter half of 2019. COVID-19's abrupt emergence and unchecked global expansion highlight the inability of current healthcare services to respond to public health emergencies promptly. This paper comprehensively reviews the different aspects of human life affected by COVID-19. It then discusses various tools and technologies from the leading domains and their integration into people's lives to overcome issues resulting from pandemics. The paper further provides a detailed review of existing and probable Artificial Intelligence (AI), Internet of Things (IoT), Augmented Reality (AR), Virtual Reality (VR), and blockchain-based solutions. The COVID-19 pandemic brings several challenges from the viewpoint of a nation's healthcare, security, privacy, and economy. AI offers predictive services and intelligent strategies for detecting coronavirus signs, promoting drug development, remote healthcare, fake news detection, and detecting security attacks. The incorporation of AI in the COVID-19 response brings robust and reliable solutions that enhance healthcare systems, increase users' life expectancy, and boost a nation's economy. Furthermore, AR/VR helps in distance learning, factory automation, and setting up work-from-home environments. Blockchain helps in protecting consumers' privacy and securing medical supply chain operations. IoT is helpful in remote patient monitoring, remote sanitizing via drones, managing social distancing (using IoT cameras), and more in combating the pandemic. This study covers an up-to-date analysis of the use of blockchain technology, AI, AR/VR, and IoT for combating the COVID-19 pandemic, considering various applications. These technologies provide new emerging initiatives and use cases to deal with the pandemic. Finally, we discuss challenges and potential research paths that will promote further research into future pandemic outbreaks.


Subject(s)
COVID-19 , Pandemics , Artificial Intelligence , Humans , SARS-CoV-2 , Technology