Results 1 - 8 of 8
1.
Sensors (Basel); 23(10), 2023 May 12.
Article in English | MEDLINE | ID: mdl-37430623

ABSTRACT

Connected and automated vehicles (CAVs) must perform multiple tasks for seamless maneuvering. Essential tasks that require simultaneous management and action include motion planning, traffic prediction, and traffic intersection management, several of which are complex in nature. Multi-agent reinforcement learning (MARL) can solve complex problems involving simultaneous controls, and many researchers have recently applied MARL in such applications. However, extensive surveys of this ongoing research are lacking, making it difficult to identify the current problems, proposed methods, and future research directions in MARL for CAVs. This paper provides a comprehensive survey on MARL for CAVs. A classification-based paper analysis is performed to identify current developments and highlight the existing research directions. Finally, the challenges in current works are discussed, and some potential areas for exploration are given to overcome those challenges. Readers can apply the ideas and findings of this survey in their own research to solve complex problems.

2.
Sensors (Basel); 22(12), 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35746138

ABSTRACT

In this paper, we demonstrate a robust in-cabin monitoring system (IMS) for the safety, security, surveillance, and monitoring of personal and shared autonomous vehicles (AVs), including privacy concerns. It consists of a set of monitoring cameras and an onboard device (OBD) equipped with artificial intelligence (AI); hereafter, this combination of a camera and an OBD is referred to as the AI camera. We investigated the issues for mobility services at higher levels of autonomous driving: what needs to be monitored, how to monitor it, and so on. The proposed IMS is an on-device AI system that inherently improves user privacy. Furthermore, we enlisted the essential actions to be considered in an IMS and developed an appropriate database (DB). Our DB consists of multifaceted scenarios important for monitoring the cabins of higher-level AVs. Moreover, we compared popular AI models applied for object and occupant recognition. Our DB is available on request to support research on seamless in-cabin monitoring at higher levels of autonomous driving for the assurance of safety and security.


Subjects
Artificial Intelligence, Automobile Driving, Autonomous Vehicles
3.
Sensors (Basel); 21(23), 2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34884085

ABSTRACT

Driving in an adverse rain environment is a crucial challenge for vision-based advanced driver assistance systems (ADAS) in the automotive industry. The windshield wiper removes adherent raindrops that distort the images from in-vehicle frontal-view cameras, but it also causes an occlusion that hinders visibility. This wiper occlusion leads to erroneous judgments by vision-based applications and endangers safety. This study proposes behind-the-scenes (BTS), a method that detects and removes wiper occlusion in real-time image inputs under rainy weather conditions. Pixel-wise wiper masks are detected by high-pass filtering the optical flow predicted for a sequential image pair. We fine-tuned a deep learning-based optical flow model with a synthesized dataset, generated with pseudo-ground-truth wiper masks and flows using auto-labeling on acquired real rainy images. A typical optical flow dataset with static synthetic objects is augmented with real fast-moving objects to enhance data diversity. We annotated wiper masks and scenes from the collected real images as detection ground truths for evaluation. BTS outperforms existing methods, achieving a 0.962 SSIM and a 91.6% F1 score in wiper mask detection and an 88.3% F1 score in wiper image detection. Consequently, BTS enhanced the performance of vision-based image restoration and object detection applications by canceling occlusions, demonstrating its potential role in improving ADAS under rainy weather conditions.


Subjects
Automobile Driving, Rain, Vision, Ocular, Weather
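The mask-detection idea in the abstract above, separating the wiper's fast motion from the slow scene flow by removing the low-frequency component of the flow field, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' fine-tuned deep optical-flow pipeline; the synthetic flow map, the diagonal "wiper" band, and the threshold are all assumptions for demonstration.

```python
import numpy as np

def wiper_mask(flow_mag, thresh=3.0):
    """Separate fast wiper motion from slow scene flow.
    The scene's own motion is treated as the low-frequency
    component (here: the global median); subtracting it is a
    crude stand-in for the high-pass filtering step."""
    background = np.median(flow_mag)
    return (flow_mag - background) > thresh

# Toy flow-magnitude map: slow scene motion everywhere,
# with a fast-moving diagonal band playing the wiper blade.
H, W = 64, 64
mag = np.full((H, W), 0.5)              # slow ego-motion
rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
band = np.abs(rr - cc) < 4              # diagonal "wiper"
mag[band] = 12.0                        # wiper sweeps fast

mask = wiper_mask(mag)                  # recovers exactly the band
```

In the real system, `flow_mag` would come from a learned optical-flow model over consecutive camera frames, and the low-frequency component would be estimated locally rather than with a single global median.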
4.
Sensors (Basel); 21(23), 2021 Nov 27.
Article in English | MEDLINE | ID: mdl-34883917

ABSTRACT

An authorized traffic controller (ATC) has the highest priority in directing road traffic, and in some irregular situations the ATC supersedes other traffic controls. Human drivers intuitively understand such situations and tend to follow the ATC; an autonomous vehicle (AV), however, can become confused in such circumstances. Autonomous driving (AD) therefore crucially requires a human-level understanding of situation-aware traffic gesture recognition. In AVs, vision-based recognition is particularly desirable because of its suitability; however, such recognition systems have various bottlenecks, such as distinguishing ATCs from other humans on the road, identifying the wide variety of ATCs, and handling gloves on the hands of ATCs. We propose a situation-aware traffic control hand-gesture recognition system, which includes ATC detection and gesture recognition. Three-dimensional (3D) hand-model-based gesture recognition is used to mitigate the problem associated with gloves. Our database contains separate training and test videos, approximately 60 min in length, captured at a frame rate of 24 frames per second; it has 35,291 distinct frames belonging to traffic control hand gestures. Our approach correctly recognized traffic control hand gestures; therefore, the proposed system can be considered an extension of the operational domain of the AV.


Subjects
Gestures, Pattern Recognition, Automated, Algorithms, Autonomous Vehicles, Databases, Factual, Hand, Humans, Recognition, Psychology
5.
Sensors (Basel); 20(24), 2020 Dec 16.
Article in English | MEDLINE | ID: mdl-33339247

ABSTRACT

The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hands-free operation, such as surgical operations or assembly work in cyberspace. We propose a user interface for a VR headset based on the wearer's facial gestures for hands-free interaction, similar to a touch interface. By sensing and recognizing the expressions associated with intentional movements of the user's facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved by utilizing six pairs of infrared (IR) photocouplers positioned at the foam interface of the HMD. We demonstrate the usability, user experience, and performance of the proposed command set using an experimental VR game without any additional controllers, obtaining more than 99% recognition accuracy for each facial gesture throughout the three steps of experimental tests. The proposed input interface is a cost-effective and efficient solution for hands-free operation of a VR headset, giving the HMD a touch-screen-like experience similar to that of a smartphone.


Subjects
Face, Gestures, User-Computer Interface, Virtual Reality, Hand, Humans
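The command-recognition step described above, mapping a six-sensor IR reading to a facial gesture, can be pictured as a nearest-template match. The gesture names and template values below are hypothetical placeholders, not the paper's actual calibration data; a real system would learn these per wearer.

```python
import numpy as np

# Hypothetical calibration templates: mean readings of the six IR
# photocouplers (arbitrary units) for each facial gesture. Real
# values would come from a per-wearer calibration pass.
TEMPLATES = {
    "neutral":    np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]),
    "smile":      np.array([0.8, 0.7, 0.1, 0.1, 0.7, 0.8]),
    "jaw_clench": np.array([0.2, 0.2, 0.9, 0.9, 0.2, 0.2]),
}

def classify_gesture(reading):
    """Return the gesture whose template is nearest (Euclidean)
    to the current six-sensor reading vector."""
    return min(TEMPLATES, key=lambda g: np.linalg.norm(reading - TEMPLATES[g]))

sample = np.array([0.75, 0.72, 0.15, 0.12, 0.68, 0.79])
label = classify_gesture(sample)   # closest to the "smile" template
```

The reported >99% accuracy suggests the six sensor channels separate the chosen gestures very cleanly, which is what makes even a simple distance-based classifier plausible as a mental model.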
6.
Sensors (Basel); 19(20), 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31614988

ABSTRACT

Developing a user interface (UI) suitable for headset environments is one of the challenges in the field of augmented reality (AR) technologies. This study proposes a hands-free UI for an AR headset that exploits the wearer's facial gestures to recognize user intentions. The facial gestures of the headset wearer are detected by a custom-designed sensor that measures skin deformation based on the infrared diffusion characteristics of human skin. We designed a deep neural network classifier to determine the user's intended gestures from the skin-deformation data, which are exploited as user input commands for the proposed UI system. The classifier is composed of a spatiotemporal autoencoder and a deep embedded clustering algorithm, trained in an unsupervised manner. The UI device was embedded in a commercial AR headset, and several experiments were performed on online sensor data to verify the operation of the device. The resulting hands-free UI for an AR headset achieved an average user-command recognition accuracy of 95.4% in tests with participants.


Subjects
Deep Learning, Gestures, User-Computer Interface, Virtual Reality, Algorithms, Cluster Analysis, Face, Humans, Skin
7.
Comput Intell Neurosci; 2022: 5389359, 2022.
Article in English | MEDLINE | ID: mdl-35498178

ABSTRACT

Fully autonomous vehicles (FAVs) lack monitoring inside the cabin; therefore, an in-cabin monitoring system (IMS) is required to surveil people causing irregular or abnormal situations. However, monitoring in the public domain discloses individuals' faces, which conflicts with privacy preservation. An intelligent IMS must therefore simultaneously satisfy the contradictory requirements of personal privacy protection and person identification during abnormal situations. In this study, we propose a privacy-preserving IMS that can reidentify anonymized virtual individual faces in an abnormal situation. The IMS includes a facial-feature extraction step performed by the edge device (onboard unit) of the AV, which anonymizes an individual's facial identity before transmitting the video frames to a data server. We created different abnormal scenarios in the vehicle cabin and reidentified the involved person using the anonymized virtual face and the reserved feature vectors extracted from the suspected individual. Overall, the proposed approach preserves personal privacy while maintaining security in surveillance systems, such as the in-cabin monitoring of FAVs.


Subjects
Autonomous Vehicles, Privacy, Computers, Head, Humans, Intelligence
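The re-identification step described above, matching a suspect against the feature vectors reserved at anonymization time, commonly reduces to similarity search over face embeddings. The sketch below illustrates that matching with cosine similarity; the 128-dimensional random vectors, the occupant IDs, and the 0.8 threshold are stand-ins, not the paper's actual feature extractor or parameters.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def reidentify(suspect_vec, reserved, threshold=0.8):
    """Match a suspect's feature vector against the vectors reserved
    at anonymization time; return the best ID above the threshold,
    or None if nobody matches."""
    best_id, best_sim = None, threshold
    for occupant_id, vec in reserved.items():
        sim = cosine_sim(suspect_vec, vec)
        if sim > best_sim:
            best_id, best_sim = occupant_id, sim
    return best_id

rng = np.random.default_rng(0)
# Stand-in "reserved" embeddings for three cabin occupants.
reserved = {f"occupant_{i}": rng.normal(size=128) for i in range(3)}
# The suspect is occupant_1's embedding plus small sensor noise.
suspect = reserved["occupant_1"] + 0.05 * rng.normal(size=128)
match = reidentify(suspect, reserved)
```

The privacy property comes from the split: only the anonymized frames leave the vehicle, while the feature vectors needed for this matching stay reserved for authorized re-identification.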
8.
Comput Intell Neurosci; 2022: 9097868, 2022.
Article in English | MEDLINE | ID: mdl-35652062

ABSTRACT

XOR is a special nonlinear problem in artificial intelligence (AI) that resembles many real-world nonlinear data distributions. A multiplicative neuron model can solve such problems; however, the multiplicative model has an inherent backpropagation problem for densely distributed XOR problems and higher-dimensional parity problems. To overcome this issue, we propose an enhanced translated multiplicative single-neuron model that provides the desired tessellation surface. The proposed model associates an adaptable scaling factor with each input, which helps achieve an optimal scaling-factor value for higher-dimensional inputs. The efficacy of the model was tested by randomly increasing the input dimensions of XOR-type data distributions; it crisply classified even higher-dimensional inputs into their respective classes. The computational complexity is the same as that of the previous multiplicative neuron model, yet the proposed model showed more than an 80% reduction in absolute loss compared with the previous model under similar experimental conditions. Therefore, it can be considered a generalized artificial model (a single neuron) capable of solving XOR-like real-world problems.


Subjects
Artificial Intelligence, Neurons, Neurons/physiology, Research Design
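Why a single multiplicative neuron can solve XOR at all, where a single additive neuron cannot, is worth seeing concretely. The sketch below shows the classic translated multiplicative neuron with fixed weights, not the paper's enhanced model: its output is a product of per-input affine terms, and the sign of that product already separates the XOR classes. The paper's contribution, as described above, is to make the per-input scaling factors adaptable rather than fixed.

```python
import numpy as np

def multiplicative_neuron(x, weights=None, biases=None):
    """Classic translated multiplicative neuron: the output is the
    product of per-input affine terms, prod_i (w_i * x_i + b_i).

    With w_i = 2 and b_i = -1, each {0, 1} input is mapped to
    {-1, +1}, so the product is negative exactly when the two
    inputs differ -- the XOR condition."""
    x = np.asarray(x, dtype=float)
    if weights is None:
        weights = np.full(x.shape, 2.0)
    if biases is None:
        biases = np.full(x.shape, -1.0)
    return np.prod(weights * x + biases)

def xor_via_product(x):
    """Threshold the product at zero to get the XOR label."""
    return int(multiplicative_neuron(x) < 0)

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", xor_via_product([a, b]))
# (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, (1, 1) -> 0
```

In this fixed-weight form, the decision surface is the hyperbola (2a - 1)(2b - 1) = 0; learning the weights and biases (and, in the enhanced model, a scaling factor per input) is what lets the same product structure adapt to densely distributed XOR-type data and higher dimensions.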