Results 1 - 20 of 47
1.
Opt Express ; 29(4): 4802-4820, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33726028

ABSTRACT

Semantic segmentation (SS) is promising for outdoor scene perception in safety-critical applications such as autonomous vehicles and assisted navigation. However, traditional SS is primarily based on RGB images, which limits its reliability in complex outdoor scenes, where RGB images lack the information dimensions necessary to fully perceive unconstrained environments. As a preliminary investigation, we examine SS in an unexpected-obstacle detection scenario, which demonstrates the necessity of multimodal fusion. Therefore, in this work, we present EAFNet, an Efficient Attention-bridged Fusion Network, to exploit complementary information coming from different optical sensors. Specifically, we incorporate polarization sensing to obtain supplementary information, considering its optical characteristics for the robust representation of diverse materials. Using a single-shot polarization sensor, we build the first RGB-P dataset, which consists of 394 annotated, pixel-aligned RGB-polarization images. A comprehensive variety of experiments shows the effectiveness of EAFNet in fusing polarization and RGB information, as well as its flexibility to be adapted to other sensor-combination scenarios.
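The abstract does not disclose EAFNet's layer-level design, so the following is a minimal, hypothetical PyTorch sketch of an attention-bridged fusion step, in which channel attention computed from the polarization branch re-weights RGB features before the two streams are combined. All module names and shapes are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AttentionBridgedFusion(nn.Module):
    """Hypothetical fusion block: polarization features gate RGB features
    via channel attention, then the two modalities are summed."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # global context per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                     # per-channel weights in [0, 1]
        )

    def forward(self, rgb_feat: torch.Tensor, pol_feat: torch.Tensor) -> torch.Tensor:
        weights = self.attention(pol_feat)                    # (N, C, 1, 1)
        return rgb_feat * weights + pol_feat                  # re-weighted RGB plus polarization

# Example: fuse two 64-channel feature maps from hypothetical encoders.
rgb = torch.randn(1, 64, 120, 160)
pol = torch.randn(1, 64, 120, 160)
fused = AttentionBridgedFusion(64)(rgb, pol)
print(fused.shape)  # torch.Size([1, 64, 120, 160])
```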

2.
Appl Opt ; 60(26): 8188-8197, 2021 Sep 10.
Article in English | MEDLINE | ID: mdl-34613083

ABSTRACT

Depth estimation, a necessary clue for converting 2D images into 3D space, has been applied in many machine vision areas. However, for full 360° surround geometric sensing, traditional stereo matching algorithms for depth estimation are limited by large noise, low accuracy, and strict requirements for multi-camera calibration. In this work, for unified surround perception, we introduce panoramic images to obtain a larger field of view. We extend PADENet [IEEE 23rd International Conference on Intelligent Transportation Systems (2020), pp. 1-6, 10.1109/ITSC45102.2020.9294206], which first appeared in our previous conference work on outdoor scene understanding, to perform panoramic monocular depth estimation with a focus on indoor scenes. At the same time, we improve the training process of the neural network to suit the characteristics of panoramic images. In addition, we fuse the traditional stereo matching algorithm with deep learning methods to further improve the accuracy of depth predictions. With a comprehensive variety of experiments, this research demonstrates the effectiveness of our schemes aiming at indoor scene perception.
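The abstract does not describe how the stereo and learned depths are combined; a simple confidence-weighted blend, sketched below with assumed inputs, illustrates one common way such a fusion can work (it is not the paper's specific scheme).

```python
import numpy as np

def fuse_depth(stereo_depth, cnn_depth, stereo_conf):
    """Blend a noisy-but-metric stereo depth map with a dense-but-possibly-biased
    learned depth map using per-pixel confidence in [0, 1].
    Pixels where stereo matching failed should have confidence 0."""
    stereo_conf = np.clip(stereo_conf, 0.0, 1.0)
    return stereo_conf * stereo_depth + (1.0 - stereo_conf) * cnn_depth

# Toy example on a 2x2 panorama patch (meters); 0.0 marks an invalid stereo match.
stereo = np.array([[2.0, 0.0], [3.1, 2.9]])
cnn    = np.array([[2.2, 2.4], [3.0, 3.0]])
conf   = np.array([[0.9, 0.0], [0.8, 0.7]])
print(fuse_depth(stereo, cnn, conf))
```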

3.
Appl Opt ; 60(21): 6264-6274, 2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34613293

ABSTRACT

In this paper, we propose panoramic annular simultaneous localization and mapping (PA-SLAM), a visual SLAM system based on a panoramic annular lens. A hybrid point selection strategy is put forward in the tracking front end, which ensures the repeatability of key points and enables loop closure detection based on the bag-of-words approach. Every detected loop candidate is verified geometrically, and a Sim(3) relative pose constraint is estimated to perform pose graph optimization and global bundle adjustment in the back end. A comprehensive set of experiments on real-world datasets demonstrates that the hybrid point selection strategy enables reliable loop closure detection and that the accumulated error and scale drift are significantly reduced via global optimization, enabling PA-SLAM to reach state-of-the-art accuracy while maintaining high robustness and efficiency.
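For reference, a Sim(3) element bundles a scale s, a rotation R, and a translation t, which is what lets the pose graph absorb scale drift. A minimal numpy sketch (not the PA-SLAM code) of how two such relative constraints compose:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Sim3:
    """Similarity transform x -> s * R @ x + t."""
    s: float
    R: np.ndarray  # 3x3 rotation
    t: np.ndarray  # 3-vector

    def apply(self, x: np.ndarray) -> np.ndarray:
        return self.s * self.R @ x + self.t

    def compose(self, other: "Sim3") -> "Sim3":
        # (self o other)(x) = self(other(x))
        return Sim3(
            s=self.s * other.s,
            R=self.R @ other.R,
            t=self.s * self.R @ other.t + self.t,
        )

# Identity-rotation example: a 2% scale drift composed with a 1 m shift.
a = Sim3(1.02, np.eye(3), np.zeros(3))
b = Sim3(1.00, np.eye(3), np.array([1.0, 0.0, 0.0]))
print(a.compose(b).apply(np.zeros(3)))  # [1.02, 0., 0.]
```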

4.
Sensors (Basel) ; 21(10)2021 May 20.
Article in English | MEDLINE | ID: mdl-34065360

ABSTRACT

Scene sonification is a powerful technique to help Visually Impaired People (VIP) understand their surroundings. Existing methods usually sonify either the entire image of the surrounding scene acquired by a standard camera or the a priori static obstacles extracted from that image by image processing algorithms. However, if all the information in the scene is delivered to VIP simultaneously, it causes information redundancy. In fact, biological vision is more sensitive to moving objects than to static objects, which is also the original motivation of the event-based camera. In this paper, we propose a real-time sonification framework to help VIP understand the moving objects in the scene. First, we capture the events in the scene using an event-based camera and cluster them into multiple moving objects without relying on any prior knowledge. Then, MIDI-based sonification is applied to these objects synchronously. Finally, we conduct comprehensive experiments in which 20 VIP and 20 Sighted People (SP) listened to scene videos with the sonification audio. The results show that our method allows both groups of participants to clearly distinguish the number, size, motion speed, and motion trajectories of multiple objects, and that it is more pleasant to hear than existing methods in terms of aesthetics.
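The abstract does not name the clustering algorithm or the note mapping; the sketch below is an illustrative assumption that uses DBSCAN (scikit-learn) on event coordinates and maps each cluster's centroid to a stereo pan, a MIDI note number, and a velocity.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def sonify_events(events, width=346, height=260):
    """events: (N, 3) array of (x, y, t) from an event camera.
    Returns one (pan, midi_note, velocity) triple per detected moving object."""
    labels = DBSCAN(eps=8, min_samples=20).fit_predict(events[:, :2])
    cues = []
    for label in set(labels) - {-1}:                 # label -1 is DBSCAN noise
        cluster = events[labels == label]
        cx, cy = cluster[:, 0].mean(), cluster[:, 1].mean()
        pan = cx / width                             # left/right position -> stereo pan
        note = int(48 + 24 * (1.0 - cy / height))    # higher in the image -> higher pitch
        velocity = int(np.clip(40 + cluster.shape[0] // 20, 40, 127))  # bigger -> louder
        cues.append((round(pan, 2), note, velocity))
    return cues

# Toy frame: two synthetic event blobs standing in for two moving objects.
rng = np.random.default_rng(0)
blob1 = rng.normal([80, 60, 0.0], [3, 3, 1], size=(200, 3))
blob2 = rng.normal([250, 200, 0.0], [3, 3, 1], size=(200, 3))
print(sonify_events(np.vstack([blob1, blob2])))
```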


Subject(s)
Algorithms, Visually Impaired Persons, Humans, Image Processing, Computer-Assisted, Motion (Physics)
5.
Molecules ; 26(17)2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34500708

ABSTRACT

The purpose of this study was to investigate the potential effects of 5-hydroxytryptophan (5-HTP) duodenal perfusion on melatonin (MT) synthesis in the gastrointestinal (GI) tract of sheep. 5-Hydroxytryptophan is a precursor in the melatonin synthetic pathway. The results showed that this method significantly increased melatonin production in the mucosa of all segments of the GI tract, including the duodenum, jejunum, ileum, cecum, and colon. The highest melatonin level was identified in the colon, which indicates that the microbiota located in the colon may also participate in melatonin production. In addition, a portion of the melatonin generated by the GI tract can pass through liver metabolism and enter the circulation via the portal vein. The current study provides further evidence that the GI tract is the major site of melatonin synthesis and that GI melatonin also contributes to the circulatory melatonin level, since plasma melatonin concentrations in the 5-HTP-treated groups were significantly higher than those in the control group. In conclusion, the results show that 10-50 mg of 5-HTP flowing into the duodenum within 6 h effectively increases melatonin production in the GI tract and the melatonin concentration in the blood circulation of sheep during the day.


Subject(s)
5-Hydroxytryptophan/metabolism, Gastrointestinal Tract/metabolism, Melatonin/metabolism, Animal Feed, Animals, Colon/metabolism, Duodenum/metabolism, Ileum/metabolism, Jejunum/metabolism, Pineal Gland/metabolism, Sheep
6.
Sensors (Basel) ; 20(15)2020 Jul 27.
Article in English | MEDLINE | ID: mdl-32727159

ABSTRACT

Visual Place Recognition (VPR) addresses visual instance retrieval across discrepant scenes and gives precise localization. During a traverse, the captured (query) images are traced back to already existing positions in the database images, enabling vehicles or pedestrian navigation devices to distinguish ambient environments. Unfortunately, diverse appearance variations pose huge challenges for VPR, such as illumination changes, viewpoint variations, seasonal cycling, and disparate traverses (forward and backward). In addition, the majority of current VPR algorithms are designed for forward-facing images, which provide only a narrow Field of View (FoV) and suffer from severe viewpoint influences. In this paper, we propose a panoramic localizer based on coarse-to-fine descriptors, leveraging panoramas for omnidirectional perception and a sufficient FoV of up to 360°. We adopt NetVLAD descriptors for coarse matching in a panorama-to-panorama way, for their robust performance in distinguishing different appearances, and utilize Geodesc keypoint descriptors in the fine stage, for their capacity to detect detailed information, forming powerful coarse-to-fine descriptors. A comprehensive set of experiments is conducted on several datasets, including both public benchmarks and our real-world campus scenes. Our system is shown to achieve high recall and strong generalization capacity across various appearances. The proposed panoramic localizer can be integrated into mobile navigation devices and is available for a variety of localization application scenarios.
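A coarse-to-fine retrieval loop of this kind can be summarized in a few lines; the numpy sketch below assumes global and local descriptors have already been extracted (it stands in for NetVLAD and keypoint descriptors and is not the paper's exact pipeline): coarse candidates come from cosine similarity of global descriptors, and the fine stage re-ranks them by counting mutual nearest-neighbor matches of local descriptors.

```python
import numpy as np

def coarse_to_fine_match(query_global, db_globals, query_kps, db_kps, top_k=3):
    """Return the index of the best-matching database image."""
    q = query_global / np.linalg.norm(query_global)
    db = db_globals / np.linalg.norm(db_globals, axis=1, keepdims=True)
    candidates = np.argsort(db @ q)[::-1][:top_k]          # best coarse candidates

    def mutual_matches(a, b):
        sim = a @ b.T                                       # local descriptor similarities
        ab, ba = sim.argmax(axis=1), sim.argmax(axis=0)
        return sum(1 for i, j in enumerate(ab) if ba[j] == i)

    scores = [mutual_matches(query_kps, db_kps[c]) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy data: 5 database images with random descriptors; query is image 2.
rng = np.random.default_rng(1)
db_globals = rng.normal(size=(5, 128))
db_kps = [rng.normal(size=(50, 64)) for _ in range(5)]
print(coarse_to_fine_match(db_globals[2], db_globals, db_kps[2], db_kps))  # expected: 2
```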

7.
Sensors (Basel) ; 20(18)2020 Sep 12.
Article in English | MEDLINE | ID: mdl-32932585

ABSTRACT

The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance to other persons using a combination of RGB and depth cameras. We use a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and the depth camera to assess the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby and does not react to non-person objects such as walls, trees, or doors; thus, it is not intrusive and can be used in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons and found that the system is precise, easy to use, and imposes a low cognitive load.
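The decision logic described here (segment persons, read their depth, warn below 1.5 m) reduces to a few array operations; a minimal sketch under assumed inputs, where the person class id and the robust-minimum choice are illustrative assumptions rather than the paper's exact values:

```python
import numpy as np

PERSON_CLASS = 11          # assumed label id for "person" in the segmentation output
DISTANCE_THRESHOLD_M = 1.5 # warn if any person is closer than this

def should_warn(seg_mask, depth_m, threshold=DISTANCE_THRESHOLD_M):
    """seg_mask: (H, W) int class labels from a semantic segmentation network.
    depth_m:  (H, W) aligned depth in meters (0 = invalid measurement).
    Returns (warn, nearest_person_distance_or_None)."""
    person_depths = depth_m[(seg_mask == PERSON_CLASS) & (depth_m > 0)]
    if person_depths.size == 0:
        return False, None
    nearest = float(np.percentile(person_depths, 5))  # robust "closest" estimate
    return nearest < threshold, nearest

# Toy frame: a person region 1.2 m away in an otherwise empty 4 m scene.
seg = np.zeros((120, 160), dtype=int)
seg[40:100, 60:90] = PERSON_CLASS
depth = np.full((120, 160), 4.0)
depth[40:100, 60:90] = 1.2
print(should_warn(seg, depth))   # (True, 1.2)
```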


Subject(s)
Artificial Intelligence, Betacoronavirus, Blindness/rehabilitation, COVID-19/prevention & control, Coronavirus Infections/prevention & control, Pandemics/prevention & control, Pneumonia, Viral/prevention & control, Sensory Aids, Wearable Electronic Devices, Acoustics, Adult, Algorithms, Artificial Intelligence/statistics & numerical data, Blindness/psychology, Color Vision, Computer Systems/statistics & numerical data, Coronavirus Infections/epidemiology, Equipment Design, Female, Germany/epidemiology, Humans, Image Processing, Computer-Assisted/statistics & numerical data, Male, Physical Distancing, Pneumonia, Viral/epidemiology, Robotics, SARS-CoV-2, Semantics, Smart Glasses/statistics & numerical data, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices/statistics & numerical data
8.
Sensors (Basel) ; 20(11)2020 Jun 05.
Article in English | MEDLINE | ID: mdl-32517134

ABSTRACT

In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification, and path sonification, which convey raw depth images, obstacle information, and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of the pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as the sound feedback of virtual reality applications.


Subject(s)
Virtual Reality, Visually Impaired Persons, Algorithms, Female, Humans, Learning, Male
9.
J Am Chem Soc ; 141(40): 15891-15900, 2019 Oct 09.
Article in English | MEDLINE | ID: mdl-31523949

ABSTRACT

Over the past decade, electrochemical carbon dioxide reduction has become a thriving area of research with the aim of converting electricity to renewable chemicals and fuels. Recent advances through catalyst development have significantly improved selectivity and activity. However, drawing potential-dependent structure-activity relationships has been complicated, not only by the ill-defined and intricate morphological and mesoscopic structure of electrocatalysts, but also by the immense concentration gradients existing between the electrode surface and the bulk solution. In this work, using in situ surface-enhanced infrared absorption spectroscopy (SEIRAS) and computational modeling, we explicitly show that commonly used strong phosphate buffers cannot sustain the interfacial pH during CO2 electroreduction on copper electrodes at relatively low current densities, <10 mA/cm². The pH near the electrode surface was observed to be as much as 5 pH units higher than in the bulk solution in 0.2 M phosphate buffer at potentials relevant to the formation of hydrocarbons (-1 V vs RHE), even on smooth polycrystalline copper electrodes. Drastically increasing the buffer capacity did not stand out as a viable solution to the problem, as the concurrent production of hydrogen increased dramatically, which resulted in a breakdown of the buffer within a narrow potential range. These unforeseen results imply that most of the studies, if not all, on electrochemical CO2 reduction to hydrocarbons in CO2-saturated aqueous solutions were evaluated under mass transport limitations on copper electrodes. We underscore that the large concentration gradients on electrodes with high local current density (e.g., nanostructured) have important implications for the selectivity, activity, and kinetic analysis, and any attempt to draw structure-activity relationships must rule out mass transport effects.

10.
Opt Express ; 27(17): 24481-24497, 2019 Aug 19.
Article in English | MEDLINE | ID: mdl-31510336

ABSTRACT

Visual odometry has received a great deal of attention during the past decade. However, its fragility to rapid motion and dynamic scenarios prevents it from practical use. Here, we present PALVO, which applies a panoramic annular lens to visual odometry, greatly increasing the robustness to both cases. We modify the camera model for the PAL and specially design the initialization process based on the essential matrix. Our method estimates the camera poses through two-stage tracking, while building the local map using a probabilistic mapping method based on the Bayesian framework and feature correspondence search along the epipolar curve. Several experiments verify our algorithm, demonstrating that it provides extremely competitive robustness to rapid motion and dynamic scenarios while achieving the same level of accuracy as state-of-the-art visual odometry.

11.
Med Sci Monit ; 25: 3605-3616, 2019 May 15.
Article in English | MEDLINE | ID: mdl-31091223

ABSTRACT

BACKGROUND Based on the extensive biological effects of melatonin (MLT), it is beneficial to increase the MLT content in the bodies of animals at a specific physiological stage. This study was conducted to investigate the effect of a diet supplemented with rumen-protected (RP) 5-hydroxytryptophan (5-HTP) on the pineal gland and intestinal tract MLT synthesis of sheep.
MATERIAL AND METHODS Eighteen Kazakh sheep were assigned randomly to 3 diet groups: control group (CT, corn-soybean meal basal diet), CT+111 group (111 mg/kg BW RP 5-HTP), and CT+222 group (222 mg/kg BW RP 5-HTP). The gene expressions of aromatic amino acid decarboxylase (AADC), arylalkylamine N-acetyltransferase (AA-NAT), hydroxyindole-O-methyltransferase (HIOMT), monoamine oxidase A (MAOA), and the intermediates of MLT synthesis were observed from the pineal gland and intestinal tract by the reverse transcription (RT)-PCR method. The 5-HTP, 5-HT, N-acetylserotonin (NAS), MLT, and 5-hydroxyindole acetic acid (5-HIAA) contents in the pineal gland and intestinal tract were analyzed by ultra-high-performance liquid chromatography-tandem mass spectrometry.
RESULTS The study showed that the pineal gland HIOMT expression (P<0.05), MLT (P<0.05) and 5-HIAA (P<0.05) levels in the 222 mg/kg group significantly increased compared to those in the CT and CT+111 mg/kg groups. In addition, the AADC (P<0.01) and AA-NAT (P<0.05) gene expression levels in the duodenum and jejunum were increased by the supplementation of RP 5-HTP.
CONCLUSIONS Rumen-protected 5-hydroxytryptophan promoted melatonin synthesis in the pineal gland and intestinal tract during the natural light period.


Subject(s)
5-Hydroxytryptophan/pharmacology, Melatonin/metabolism, 5-Hydroxytryptophan/metabolism, Acetylserotonin O-Methyltransferase, Animals, Aromatic-L-Amino-Acid Decarboxylases, Arylalkylamine N-Acetyltransferase, Body Weight, Circadian Rhythm, Dietary Supplements, Hydroxyindoleacetic Acid, Intestinal Mucosa/drug effects, Intestines/drug effects, Melatonin/biosynthesis, Melatonin/pharmacology, Pineal Gland/drug effects, Rumen/metabolism, Sheep
12.
Appl Opt ; 58(12): 3141-3155, 2019 Apr 20.
Article in English | MEDLINE | ID: mdl-31044789

ABSTRACT

Semantic segmentation represents a promising means to unify different detection tasks, especially pixel-wise traversability perception, which is the fundamental enabler in robotic vision systems aiding upper-level navigational applications. However, major research efforts are being put into earning marginal accuracy increments on semantic segmentation benchmarks, without assuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we conduct a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype, and two portable red-green-blue-depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of "accuracy" and "robustness" on the critical traversability-related semantic scene understanding task. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations, and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantically traversable area parsing against variations of environmental conditions in diverse RGB-D observations and sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.
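The abstract names spatial factorizations and hierarchical dilations without giving the architecture; the PyTorch sketch below is an assumed illustration of a factorized (3x1/1x3), dilated residual block in the spirit of ERFNet-style non-bottleneck-1D blocks, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FactorizedDilatedBlock(nn.Module):
    """Residual block with 3x1/1x3 factorized convolutions and a dilation rate."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, (3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, 1)),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (3, 1),
                      padding=(dilation, 0), dilation=(dilation, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, (1, 3),
                      padding=(0, dilation), dilation=(1, dilation)),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # residual connection preserves resolution

block = FactorizedDilatedBlock(64, dilation=4)
print(block(torch.randn(1, 64, 64, 128)).shape)  # torch.Size([1, 64, 64, 128])
```

Stacking such blocks with increasing dilation rates grows the receptive field cheaply, which is the usual motivation for hierarchical dilations in real-time segmenters.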

13.
Appl Opt ; 58(23): 6377-6387, 2019 Aug 10.
Article in English | MEDLINE | ID: mdl-31503785

ABSTRACT

Stereo cameras are widely used in wearable visually impaired assistance devices (VIADs). However, inevitable vibration, shock, and mechanical stress may cause the camera pair to become misaligned, producing a sharp decline in the quality of the acquired depth map, which significantly influences the performance of VIADs. In this paper, we propose an epipolar-constraint-based unconstrained self-calibration method that requires neither user involvement nor a specific environment, while achieving a rotation accuracy of 0.83 mrad and a translation accuracy of 0.42 mm. Several approaches are proposed to address image matching issues, including blurred image removal and mismatched key point removal. Based on correctly matched key point pairs, a planar quadric-distribution approach is proposed to ensure the quality and consistency of the final key point group. These collection approaches ensure the reliability of the key point pairs, which is the most important factor in realizing high accuracy with minimum constraints. A comprehensive set of experiments demonstrates the high robustness of the proposed methods, which are suitable for VIADs. We also present a field test with blindfolded users to validate the flexibility and applicability of the approach.
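As a point of reference for the epipolar step (not the paper's full pipeline), the relative rotation and up-to-scale translation of a misaligned pair can be recovered from filtered keypoint matches with OpenCV; the function names used here are standard OpenCV calls, while the inputs are assumed to come from the kind of match-filtering stages the abstract describes.

```python
import cv2
import numpy as np

def estimate_extrinsics(pts_left: np.ndarray, pts_right: np.ndarray, K: np.ndarray):
    """Estimate rotation R and unit-scale translation t between the two cameras
    from matched keypoints via the epipolar constraint.
    pts_left, pts_right: (N, 2) float32 pixel coordinates; K: 3x3 intrinsics."""
    E, inlier_mask = cv2.findEssentialMat(pts_left, pts_right, K,
                                          cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K, mask=inlier_mask)
    return R, t  # t is only known up to scale from epipolar geometry alone

# Hypothetical usage with previously matched and filtered keypoints:
# R, t = estimate_extrinsics(kps_left, kps_right, K)
```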

14.
Appl Opt ; 57(11): 2809-2819, 2018 Apr 10.
Article in English | MEDLINE | ID: mdl-29714283

ABSTRACT

The introduction of RGB-depth (RGB-D) sensors harbors revolutionary power in the field of navigational assistance for the visually impaired. However, RGB-D sensors are limited by a minimum detectable distance of about 800 mm. This paper proposes an effective approach to decrease the minimum range for navigational assistance based on the RealSense R200 RGB-D sensor. Large-scale stereo matching between the two infrared (IR) images and cross-modal stereo matching between one IR image and the RGB image are incorporated for short-range depth acquisition. The minimum-range reduction is critical not only for avoiding obstacles up close, but also for enhancing traversability awareness. Overall, the minimum detectable distance of the RealSense is reduced from 650 mm to 60 mm with qualified accuracy. A traversable line is created to give feedback to visually impaired individuals through stereo sound. The approach is shown to be useful and reliable through a comprehensive set of experiments and field tests in real-world scenarios involving real visually impaired participants.
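The reason a larger disparity search shortens the minimum range follows from the stereo depth relation Z = f·B / d: the closest recoverable depth is f·B / d_max. The small calculation below uses assumed (not the R200's official) parameters purely to illustrate the trend.

```python
def min_detectable_depth(focal_px: float, baseline_m: float, max_disparity_px: int) -> float:
    """Closest depth a stereo matcher can recover: Z_min = f * B / d_max."""
    return focal_px * baseline_m / max_disparity_px

# Assumed example values, for illustration only.
f_px = 580.0       # focal length in pixels
baseline = 0.070   # 70 mm baseline between the matched image pair
for d_max in (64, 128, 256):
    z_min_mm = min_detectable_depth(f_px, baseline, d_max) * 1000
    print(f"d_max = {d_max:3d} px  ->  Z_min = {z_min_mm:.0f} mm")
```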


Subject(s)
Depth Perception, Image Interpretation, Computer-Assisted/instrumentation, Pattern Recognition, Automated/methods, Sensory Aids, Vision, Low/therapy, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices, Equipment Design, Humans, Reproducibility of Results, Walking/physiology
15.
Sensors (Basel) ; 18(8)2018 Jul 31.
Article in English | MEDLINE | ID: mdl-30065208

ABSTRACT

Localization systems play an important role in assisted navigation. Precise localization makes visually impaired people aware of ambient environments and prevents them from coming across potential hazards. The majority of visual localization algorithms, which are designed for autonomous vehicles, are not completely adaptable to the scenarios of assisted navigation. Those vehicle-based approaches are vulnerable to viewpoint, appearance, and route changes (between database and query images) caused by the wearable cameras of assistive devices. Facing these practical challenges, we propose Visual Localizer, composed of a ConvNet descriptor and global optimization, to achieve robust visual localization for assisted navigation. The performance of five prevailing ConvNets is comprehensively compared, and GoogLeNet is found to offer the best environmental invariance. By concatenating two compressed convolutional layers of GoogLeNet, we use only thousands of bytes to represent each image efficiently. To further improve the robustness of image matching, we utilize a network flow model as a global optimization of image matching. Extensive experiments using images captured by visually impaired volunteers illustrate that the system performs well in the context of assisted navigation.
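The descriptor idea (compress intermediate convolutional feature maps and concatenate them) can be sketched without the network itself; the numpy example below assumes two feature maps have already been extracted and uses an illustrative average-pooling compression, which is an assumption rather than the paper's exact encoding (the real system compresses further to reach only thousands of bytes).

```python
import numpy as np

def compress_feature_map(feat: np.ndarray, bins: int = 2) -> np.ndarray:
    """Average-pool a (C, H, W) feature map into a bins x bins spatial grid,
    flatten, and L2-normalize (H and W are assumed divisible by bins)."""
    C, H, W = feat.shape
    pooled = feat.reshape(C, bins, H // bins, bins, W // bins).mean(axis=(2, 4))
    vec = pooled.reshape(-1)
    return vec / (np.linalg.norm(vec) + 1e-12)

def image_descriptor(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Concatenate two compressed convolutional layers into one image descriptor."""
    return np.concatenate([compress_feature_map(feat_a), compress_feature_map(feat_b)])

# Toy example with two hypothetical intermediate feature maps.
rng = np.random.default_rng(0)
desc = image_descriptor(rng.normal(size=(480, 16, 16)), rng.normal(size=(832, 8, 8)))
print(desc.shape, desc.astype(np.float32).nbytes, "bytes")
```

Matching then amounts to comparing such vectors (e.g., by cosine similarity) across the database, with the network flow optimization enforcing sequence consistency on top.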

16.
Sensors (Basel) ; 18(5)2018 May 10.
Article in English | MEDLINE | ID: mdl-29748508

ABSTRACT

Navigational assistance aims to help visually impaired people ambulate their environment safely and independently. This topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs, and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians, and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.


Subject(s)
Sensory Aids, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices, Depth Perception, Humans, Image Interpretation, Computer-Assisted, Pattern Recognition, Automated, Walking
17.
Opt Express ; 25(2): 1173-1184, 2017 Jan 23.
Article in English | MEDLINE | ID: mdl-28158002

ABSTRACT

Structured light is a prevailing and reliable active approach to 3D object reconstruction. However, a complex ambient scene is undesirable in the measurement because it can cause severe noise and increase computing overhead. In this paper, we propose a structured light coded by the spatially distributed polarization state of the illuminating patterns. The proposed structured light has the advantage of enhancing the target in 3D reconstruction through polarization cues. Specifically, the method can estimate the degree of linear polarization (DOLP) in the scene, distinguish the target by DOLP, and selectively reconstruct it. The coding strategy and the corresponding polarimetric principle are presented and verified by experimental results. As our approach takes advantage of the intrinsic properties of a liquid crystal display (LCD) projector and requires no rotation of a polarizer, it is effective and efficient for practical applications.
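For reference, DOLP can be computed from intensity measurements at four analyzer angles (0°, 45°, 90°, 135°) via the linear Stokes parameters; the capture geometry in this sketch is a generic textbook assumption, not this paper's specific polarization coding scheme.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DOLP = sqrt(S1^2 + S2^2) / S0 with
    S0 = (i0 + i45 + i90 + i135) / 2, S1 = i0 - i90, S2 = i45 - i135."""
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)

# Fully linearly polarized light at 0 degrees: i0=1, i90=0, i45=i135=0.5 -> DOLP = 1.
print(degree_of_linear_polarization(np.array(1.0), np.array(0.5),
                                    np.array(0.0), np.array(0.5)))
```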

18.
Sensors (Basel) ; 17(8)2017 Aug 17.
Article in English | MEDLINE | ID: mdl-28817069

ABSTRACT

The use of RGB-Depth (RGB-D) sensors for assisting visually impaired people (VIP) has been widely reported, as they offer portability, functional diversity, and cost-effectiveness. However, existing approaches exploit only weak polarization cues for traversability awareness and take no precautions against stepping into water areas. In this paper, a polarized RGB-Depth (pRGB-D) framework is proposed to detect the traversable area and water hazards simultaneously, using polarization-color-depth-attitude information to enhance safety during navigation. The approach has been tested on a pRGB-D dataset built for tuning parameters and evaluating performance. Moreover, the approach has been integrated into a wearable prototype that generates stereo sound feedback to guide VIP to follow the prioritized direction and avoid obstacles and water hazards. Furthermore, a preliminary study with ten blindfolded participants suggests its effectiveness and reliability.

19.
Phys Chem Chem Phys ; 19(1): 251-257, 2016 Dec 21.
Article in English | MEDLINE | ID: mdl-27901134

ABSTRACT

In this study, we fabricated crystalline metallic Pt nanoparticle-loaded α-Bi2O3 microrods (Pt/Bi2O3) using a precipitation method followed by an impregnation-reduction deposition route. The Pt/Bi2O3 catalysts have much higher photocatalytic activities than pure Bi2O3 for the degradation of RhB and 2,4-DCP under visible-light irradiation. The photogenerated charge separation, transfer, and capture in the photocatalysis of Pt/Bi2O3 were investigated in detail through various characterizations and analyses of the photogenerated active species and H2O2. It was revealed that the loaded Pt plays an important mediating role in the efficient separation of photogenerated electron-hole pairs by rapidly transferring electrons from the excited Bi2O3 to surface oxygen. As a result, the reduction of O2 to form H2O2 by the photogenerated electrons was promoted, leaving more holes in the deep valence band to drive the degradation of organic compounds and the production of ˙OH radicals, which were responsible for the enhanced photocatalysis of Pt/Bi2O3.

20.
Sensors (Basel) ; 16(11)2016 Nov 21.
Article in English | MEDLINE | ID: mdl-27879634

ABSTRACT

The introduction of RGB-Depth (RGB-D) sensors into the area of assisting visually impaired people (VIP) has stirred great interest among many researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a sparse depth map at distance, which hampers broader and longer-range traversability awareness. This paper proposes an effective approach to expand the detection of the traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with large-scale IR image matching and RGB image-guided filtering. The traversable area is obtained preliminarily with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth image and the RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for allowing superior path planning in navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach has been proved useful and reliable by a field test with eight visually impaired volunteers.
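A minimal numpy sketch of the RANSAC ground-plane step on a point cloud derived from the depth image is shown below; the thresholds, iteration count, and toy data are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def ransac_ground_plane(points, iters=200, dist_thresh=0.03, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.
    Returns ((unit normal, d) for the plane n.x + d = 0, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Toy cloud: a floor at y = 0 plus scattered obstacle points above it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-2, 2, 800), np.zeros(800), rng.uniform(0.5, 4, 800)])
clutter = rng.uniform([-2, 0.2, 0.5], [2, 1.5, 4], size=(200, 3))
plane, inliers = ransac_ground_plane(np.vstack([floor, clutter]))
print(plane[0].round(2), int(inliers.sum()))   # normal close to [0, 1, 0], ~800 inliers
```

The inlier set corresponds to the preliminary traversable surface, which seeded region growing on the combined depth and RGB images then expands.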


Subject(s)
Biosensing Techniques/methods, Visually Impaired Persons, Algorithms, Humans, Pattern Recognition, Automated