1 - 20 of 24
1.
Article En | MEDLINE | ID: mdl-38502616

Numerous countries now face the challenge of an aging population, and the number of people with reduced mobility due to physical illness is growing. In response, robots for walking assistance and sit-to-stand (STS) transition have been introduced in nursing care to help these individuals walk. Given the shared characteristics of these robots, this paper collectively refers to them as Walking Support Robots (WSR); service robots with assistive functions are also included in the scope of this review. WSR are a crucial element of modern nursing assistance and have received significant research attention. Unlike passive walkers, which require considerable strength from the user to move, WSR can autonomously perceive the state of the user and the environment and select appropriate control strategies to help the user maintain balance and movement. This paper offers a comprehensive review of recent literature on WSR, encompassing structural design, perception methods, control strategies, and safety and comfort features. Finally, it summarizes the key findings and current challenges and discusses potential future research directions in this field.


Robotics , Humans , Aged , Walking/physiology , Movement , Aging , Exercise Therapy
2.
Nanomicro Lett ; 16(1): 11, 2023 Nov 09.
Article En | MEDLINE | ID: mdl-37943399

Humans perceive the complex world through multi-sensory fusion: under limited visual conditions, people can use a variety of tactile signals to identify objects accurately and rapidly. However, replicating this unique capability in robots remains a significant challenge. Here, we present a new form of ultralight, multifunctional tactile nano-layered carbon aerogel sensor that provides pressure, temperature, material-recognition, and 3D-location capabilities and is combined with multimodal supervised learning algorithms for object recognition. The sensor exhibits human-like pressure (0.04-100 kPa) and temperature (21.5-66.2 °C) detection, millisecond response times (11 ms), a pressure sensitivity of 92.22 kPa⁻¹, and triboelectric durability of over 6000 cycles. The devised algorithm is general and can accommodate a range of application scenarios. The tactile system can identify common foods in a kitchen scene with 94.63% accuracy and explore the topographic and geomorphic features of a Mars scene with 100% accuracy. This sensing approach empowers robots with versatile tactile perception, advancing future society toward heightened sensing, recognition, and intelligence.

3.
Sci Adv ; 9(9): eadf8831, 2023 Mar 03.
Article En | MEDLINE | ID: mdl-36867698

Iontronic pressure sensors are promising in robot haptics because they can achieve high sensing performance using nanoscale electric double layers (EDLs) for capacitive signal output. However, it is challenging to achieve both high sensitivity and high mechanical stability in these devices. Iontronic sensors need microstructures that offer subtly changeable EDL interfaces to boost sensitivity, while the microstructured interfaces are mechanically weak. Here, we embed isolated microstructured ionic gel (IMIG) in a hole array (28 × 28) of elastomeric matrix and cross-link the IMIGs laterally to achieve enhanced interfacial robustness without sacrificing sensitivity. The embedded configuration toughens and strengthens the skin by pinning cracks and by the elastic dissipation of the interhole structures. Furthermore, cross-talk between the sensing elements is suppressed by isolating the ionic materials and by designing a circuit with a compensation algorithm. We have demonstrated that the skin is potentially useful for robotic manipulation tasks and object recognition.

4.
Sci Adv ; 8(51): eade2450, 2022 Dec 23.
Article En | MEDLINE | ID: mdl-36563155

Tactile sensations are normally transmitted between people by physical touch; wireless touch perception could revolutionize how we interact with the world. Here, we report a wireless self-sensing and haptic-reproducing electronic skin (e-skin) that realizes noncontact touch communication. A flexible self-sensing actuator was developed to provide integrated tactile sensing and haptic feedback. When this e-skin is dynamically pressed, the actuator generates an induced voltage as tactile information; via wireless communication, another e-skin can receive this tactile data and run a synchronized haptic reproduction. Touch can thus be conveyed wirelessly and bidirectionally between two users, as in a touch intercom. Furthermore, this e-skin can be connected with various smart devices to form a touch internet of things in which one-to-one and one-to-multiple touch delivery can be realized. This wireless touch presents great potential for remote touch video, medical care and assistance, education, and many other applications.

5.
ACS Nano ; 16(10): 16784-16795, 2022 10 25.
Article En | MEDLINE | ID: mdl-36166598

In the long pursuit of smart robotics, researchers have envisioned empowering robots with human-like senses, especially vision and touch. While tremendous progress has been made in image sensors and computer vision over the past decades, the tactile sense lags behind due to the lack of large-scale flexible tactile sensor arrays with high sensitivity, high spatial resolution, and fast response. In this work, we demonstrate a 64 × 64 flexible tactile sensor array with a record-high spatial resolution of 0.9 mm (equivalently 28.2 pixels per inch) by integrating a high-performance piezoresistive film (PRF) with a large-area active matrix of carbon nanotube thin-film transistors. The PRF with self-formed microstructures exhibited a high pressure sensitivity of ∼385 kPa⁻¹ at a multi-walled carbon nanotube concentration of 6%, while the 14% formulation exhibited a fast response time of ∼3 ms, good linearity, a broad detection range beyond 1400 kPa, and excellent cyclability over 3000 cycles. Using this fully integrated tactile sensor array, the footprint maps of an artificial honeybee were clearly identified. Furthermore, we hardware-implemented a smart tactile system by integrating the PRF-based sensor array with a memristor-based computing-in-memory chip to record and recognize handwritten digits and Chinese calligraphy, achieving high classification accuracies of 98.8% and 97.3% in hardware, respectively. The integration of sensor networks with deep-learning hardware may enable edge or near-sensor computing with significantly reduced power consumption and latency. Our work could empower the building of large-scale intelligent sensor networks for next-generation smart robotics.
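As a quick unit check (ours, not the paper's), the 0.9 mm pitch and the 28.2 pixels-per-inch figure quoted above are the same resolution expressed in two units:

```python
# Sanity check of the resolution figures above: pixels per inch for a given
# element pitch. One inch is 25.4 mm, so PPI = 25.4 / pitch_mm.
MM_PER_INCH = 25.4

def pitch_to_ppi(pitch_mm: float) -> float:
    """Spatial resolution in pixels per inch for an element pitch in millimeters."""
    return MM_PER_INCH / pitch_mm

print(round(pitch_to_ppi(0.9), 1))  # 28.2, matching the reported figure
```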


Nanotubes, Carbon , Robotics , Humans , Animals , Touch , Nanotubes, Carbon/chemistry
6.
Sci Adv ; 8(36): eabp8738, 2022 Sep 09.
Article En | MEDLINE | ID: mdl-36083898

The human somatosensory system is capable of extracting features with millimeter-scale spatial resolution and submillisecond temporal precision. Current technologies that can render tactile stimuli with such high definition are neither portable nor easily accessible. Here, we present a wearable electrotactile rendering system that elicits tactile stimuli with both high spatial resolution (76 dots/cm²) and rapid refresh rates (4 kHz), enabled by a previously unexplored current-steering super-resolution stimulation technique. For user safety, we present a high-frequency modulation method that reduces the stimulation voltage to as low as 13 V. The utility of our high-spatiotemporal-resolution tactile rendering system is highlighted in applications such as braille display, virtual reality shopping, and digital virtual experiences. Furthermore, we integrate our setup with tactile sensors to transmit fine tactile features through the thick gloves used by firefighters, allowing tiny objects to be localized through tactile sensing alone.

7.
Environ Res ; 212(Pt C): 113480, 2022 09.
Article En | MEDLINE | ID: mdl-35588771

Soil respiration, particularly heterotrophic respiration (RH), is a potent source of atmospheric carbon dioxide (CO₂). This study evaluates RH for six land-use systems: sloping cropland (SC), shrub land (SD), grassland (GD), shrub and grassland (SGD), newly abandoned cropland (NC), and afforested forest (AF). Heterotrophic respiration showed diverse seasonal patterns over a year-long period, affected by various soil properties and climatic variables across the six land-use systems in a subtropical Karst landscape. The lowest RH values were found at the SD site (annual cumulative soil CO₂ flux: 2447 kg C ha⁻¹), whereas the maximum heterotrophic respiration occurred at the SF site (annual cumulative soil CO₂ flux: 13,597 kg C ha⁻¹). The ranges of RH were: SC site, 3.8-191.5 mg C m⁻² h⁻¹; NC site, 1.04-129 mg C m⁻² h⁻¹; GD site, 3.6-100.7 mg C m⁻² h⁻¹; SGD site, 0.3-393.5 mg C m⁻² h⁻¹; SD site, 3-116 mg C m⁻² h⁻¹; and SF site, 10.6-398.2 mg C m⁻² h⁻¹. Highly significant (p ≤ 0.01) positive correlations between RH rate and soil temperature were found for all studied land-use types (correlation coefficients: SC, 0.77; NC, 0.61; GD, 0.283; SGD, 0.535; SD, 0.230; SF, 0.85). However, water-filled pore space (WFPS), NH₄⁺, NO₃⁻, dissolved organic carbon (DOC), and total dissolved nitrogen (TDN) concentrations showed varied (positive and negative) correlations with RH. Overall, the results indicate that soil temperature was the dominant factor controlling RH among all the sites studied. In these environments, soil heterotrophic respiration correlated significantly with soil temperature, highlighting the importance of climate for heterotrophic respiration.
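To illustrate the correlation analysis behind the site coefficients above, the sketch below computes a Pearson correlation between RH and soil temperature. The values are hypothetical (the study's raw measurements are not given here); only the shape of the analysis is from the abstract:

```python
# Pearson correlation between heterotrophic respiration (RH) and soil
# temperature, as computed per land-use site in the study. Data are invented
# for illustration, not the study's measurements.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

soil_temp = [8.0, 12.5, 18.0, 24.0, 27.5, 21.0, 14.0, 9.5]       # degC, hypothetical
rh_flux = [12.0, 35.0, 90.0, 180.0, 240.0, 130.0, 50.0, 15.0]    # mg C m-2 h-1
print(round(pearson_r(soil_temp, rh_flux), 2))  # 0.98: strongly positive
```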


Carbon Dioxide , Soil , Carbon Dioxide/analysis , China , Forests , Respiration , Temperature
8.
Nat Commun ; 13(1): 1317, 2022 Mar 10.
Article En | MEDLINE | ID: mdl-35273183

Electronic skins (e-skins) are devices that respond to mechanical stimuli and enable robots to perceive their surroundings. A great challenge for existing e-skins is that they can easily fail under extreme mechanical conditions because of their multilayered architecture, with mechanical mismatch and weak adhesion between the interlayers. Here we report a flexible pressure sensor with tough interfaces enabled by two strategies: a quasi-homogeneous composition that ensures the mechanical match of interlayers, and an interlinked microconed interface that yields a high interfacial toughness of 390 J·m⁻². The embedded configuration endows the sensor with exceptional signal stability, demonstrated by performing 100,000 cycles of rubbing and by fixing the sensor on a car tire and driving 2.6 km on an asphalt road. The topological interlinks can be further extended to soft robot-sensor integration, enabling a seamless interface between the sensor and robot for highly stable sensing performance during manipulation tasks under complicated mechanical conditions.

9.
Microsyst Nanoeng ; 7: 85, 2021.
Article En | MEDLINE | ID: mdl-34745644

Skin-integrated electronics, also known as electronic skin (e-skin), are rapidly developing and are gradually being adopted in biomedical fields as well as in our daily lives. E-skin capable of providing sensitive and high-resolution tactile sensations and haptic feedback to the human body would open a new e-skin paradigm for closed-loop human-machine interfaces. Here, we report a class of materials and mechanical designs for the miniaturization of mechanical actuators and strategies for their integration into thin, soft e-skin for haptic interfaces. The mechanical actuators exhibit small dimensions of 5 mm diameter and 1.45 mm thickness and work in an electromagnetically driven vibrotactile mode with resonance frequency overlapping the most sensitive frequency of human skin. Nine mini actuators can be integrated simultaneously in a small area of 2 cm × 2 cm to form a 3 × 3 haptic feedback array, which is small and compact enough to mount on a thumb tip. Furthermore, the thin, soft haptic interface exhibits good mechanical properties that work properly during stretching, bending, and twisting and therefore can conformally fit onto various parts of the human body to afford programmable tactile enhancement and Braille recognition with an accuracy rate over 85%.

10.
Sci Adv ; 7(39): eabi6751, 2021 Sep 24.
Article En | MEDLINE | ID: mdl-34550743

The rapid development of the Internet of Things depends on wireless devices and their networks. Traditional wireless sensing and transmission technology still requires multiple modules for sensing, signal modulation, transmission, and power, making the whole system bulky, rigid, and costly. Here, we propose a paradigm-shifting wireless sensing solution based on the breakdown discharge-induced displacement current, which allows us to combine the abovementioned functional modules in a single unit: a self-powered wireless sensing e-sticker (SWISE) featuring a small size (down to 9 mm by 9 mm) and a long effective transmission distance (>30 m) compared with existing wireless sensing technologies. Furthermore, SWISEs provide multipoint motion sensing and gas detection in a fully self-powered manner. This work offers a solution for flexible self-powered wireless sensing platforms, with great potential for implantable and wearable electronics, robotics, health care, infrastructure monitoring, human-machine interfaces, virtual reality, and beyond.

11.
IEEE Trans Image Process ; 30: 4008-4021, 2021.
Article En | MEDLINE | ID: mdl-33784621

Accurate 3D reconstruction of hand and object shape from a hand-object image is important for understanding human-object interaction as well as human daily activities. Unlike bare-hand pose estimation, hand-object interaction imposes strong constraints on both the hand and its manipulated object, which suggests that the hand configuration may be crucial contextual information for the object, and vice versa. However, current approaches address this task by training a two-branch network to reconstruct the hand and object separately, with little communication between the two branches. In this work, we propose to consider the hand and object jointly in feature space and to explore the reciprocity of the two branches. We extensively investigate cross-branch feature-fusion architectures with MLP or LSTM units. Among the investigated architectures, a variant with LSTM units that enhances the object feature with the hand feature shows the best performance gain. Moreover, we employ an auxiliary depth estimation module to augment the input RGB image with an estimated depth map, which further improves reconstruction accuracy. Experiments on public datasets demonstrate that our approach significantly outperforms existing approaches in terms of object reconstruction accuracy.
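The best-performing variant described above, an LSTM unit that enhances the object feature with the hand feature, could be sketched roughly as follows. This is our reading of the idea in plain NumPy, not the authors' code; the feature dimension, initialization scale, and gating arrangement are all assumptions:

```python
# Sketch of cross-branch feature fusion with one LSTM cell: the concatenated
# hand+object features drive the gates, and the cell state is seeded with the
# object feature so the hand branch modulates (enhances) it.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # feature dimension (hypothetical)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMFusion:
    def __init__(self, d):
        scale = 0.1
        self.Wi, self.Wf, self.Wo, self.Wg = (
            rng.normal(0.0, scale, (d, 2 * d)) for _ in range(4)
        )
        self.b = np.zeros((4, d))

    def __call__(self, hand_feat, obj_feat):
        x = np.concatenate([hand_feat, obj_feat])  # both branch features as input
        i = sigmoid(self.Wi @ x + self.b[0])       # input gate
        f = sigmoid(self.Wf @ x + self.b[1])       # forget gate
        o = sigmoid(self.Wo @ x + self.b[2])       # output gate
        g = np.tanh(self.Wg @ x + self.b[3])       # candidate state
        c = f * obj_feat + i * g                   # cell state seeded with object feature
        return o * np.tanh(c)                      # enhanced object feature

fuse = LSTMFusion(D)
enhanced = fuse(rng.normal(size=D), rng.normal(size=D))
print(enhanced.shape)  # (8,)
```

In the paper this fusion sits between the encoder and the per-branch decoders; here it is shown in isolation on random feature vectors.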

12.
Article En | MEDLINE | ID: mdl-29993980

To enhance the resolution and accuracy of depth data, video-based depth super-resolution methods have been proposed that utilize neighboring depth images in the temporal domain. They typically consist of two main stages: motion compensation of temporally neighboring depth images, and fusion of the compensated depth images. However, large-displacement 3D motion often leads to compensation error, which is then carried into the fusion stage. This paper proposes a video-based depth super-resolution method with novel motion-compensation and fusion approaches. We argue that the 3D Nearest Neighbor Field (NNF) is a better choice for depth enhancement than positions given by true motion displacement. To handle large-displacement 3D motion, the compensation stage therefore uses the 3D NNF instead of the true motion used in previous methods. Next, the fusion approach is modeled as a regression problem that efficiently predicts the super-resolution result for each depth image from its compensated depth images. A new deep convolutional neural network architecture is designed for fusion, able to exploit a large amount of video data to learn the complicated regression function. We comprehensively evaluate our method on various RGB-D video sequences and show its superior performance.

13.
Front Neurosci ; 12: 21, 2018.
Article En | MEDLINE | ID: mdl-29456486

Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.

14.
Asian-Australas J Anim Sci ; 31(1): 63-70, 2018 Jan.
Article En | MEDLINE | ID: mdl-28728360

OBJECTIVE: The aim of this study was to isolate gossypol-degrading bacteria and to assess their potential for gossypol degradation. METHODS: Rumen liquid was collected from fistulated cows grazing the experimental pasture. Approximately 1 mL of rumen liquid was spread onto basal-medium plates containing 2 g/L gossypol as the only carbon source and cultured at 39°C to isolate gossypol-degrading bacteria. The isolated colonies were cultured for 6 h, and their size and shape were observed by light microscopy and scanning electron microscopy. The 16S rRNA gene of the isolated colonies was sequenced and aligned using the National Center for Biotechnology Information Basic Local Alignment Search Tool (BLAST). Fermentation conditions (initial pH, incubation temperature, inoculum level, and fermentation period) were analyzed in cottonseed meal (CSM). Crude protein (CP), total gossypol (TG), and free gossypol (FG) were determined in CSM after fermentation with the isolated strain at 39°C for 72 h. RESULTS: Screening showed that a single bacterial isolate, named Rumen Bacillus Subtilis (RBS), could use gossypol as a carbon source. By 16S rDNA sequencing, the bacterium was 98% homologous to Bacillus subtilis strain GH38. The optimum fermentation conditions were 72 h, 39°C, pH 6.5, 50% moisture, and an inoculum level of 10⁷ cells/g. Under these conditions, the FG and TG content of fermented CSM decreased by 78.86% and 49%, respectively, relative to the control, while the CP and essential amino acid contents of the fermented CSM increased. CONCLUSION: The isolation of a gossypol-degrading bacterium from the cow rumen is of great importance for gossypol biodegradation and may be a valuable resource for reducing the gossypol content of CSM.

15.
Sci Rep ; 7(1): 3817, 2017 06 19.
Article En | MEDLINE | ID: mdl-28630450

Humans are good at selectively listening to specific target conversations, even in the presence of multiple concurrent speakers. In this research, we study how auditory-visual cues modulate this selective listening, using immersive virtual reality technologies with spatialized audio. Exposing 32 participants to an information-masking task with concurrent speakers, we find significantly more errors in the decision-making processes triggered by asynchronous audiovisual speech cues. More precisely, the results show that lip movements of the Target speaker matched to a secondary (Mask) speaker's audio severely increase participants' comprehension error rates. In a control experiment (n = 20), we further explore the influence of the visual modality on auditory selective attention. The results show a dominance of visual speech cues, which effectively turn the Mask into the Target and vice versa. These results reveal a disruption of selective attention triggered by bottom-up multisensory integration. The findings are framed within sensory perception and cognitive neuroscience theories, and the VR setup is validated by replicating previous results from this literature in a supplementary experiment.


Attention/physiology , Decision Making/physiology , Speech Perception/physiology , Virtual Reality , Adult , Female , Humans , Male , Middle Aged
16.
IEEE Trans Image Process ; 25(8): 3906-18, 2016 08.
Article En | MEDLINE | ID: mdl-27214902

Data often contain noise or irrelevant information, which negatively affects the generalization capability of machine learning algorithms. The objective of dimension reduction algorithms such as principal component analysis (PCA), non-negative matrix factorization (NMF), random projection (RP), and the auto-encoder (AE) is to reduce this noise or irrelevant information. The features of PCA (eigenvectors) and of linear AEs cannot represent data as parts (e.g., the nose in a face image); on the other hand, NMF and non-linear AEs are hampered by slow learning, and RP represents only a subspace of the original data. This paper introduces a dimension reduction framework that to some extent represents data as parts, has fast learning speed, and learns the between-class scatter subspace. To this end, we investigate a linear and non-linear dimension reduction framework referred to as the extreme learning machine auto-encoder (ELM-AE) and the sparse ELM-AE (SELM-AE). In contrast to tied-weight AEs, the hidden neurons in ELM-AE and SELM-AE need not be tuned; their parameters (e.g., the input weights of additive neurons) are initialized using orthogonal and sparse random weights, respectively. Experimental results on the USPS handwritten digit recognition dataset and the CIFAR-10 and NORB object recognition datasets show the efficacy of linear and non-linear ELM-AE and SELM-AE in terms of discriminative capability, sparsity, training time, and normalized mean square error.

17.
IEEE Trans Pattern Anal Mach Intell ; 31(11): 1968-84, 2009 Nov.
Article En | MEDLINE | ID: mdl-19762925

In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition when the face geometry and albedo information are unknown. This problem is particularly difficult when only a single image of the subject is available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when an image is taken under extreme lighting conditions, the approximation error can be large, making it difficult to recover albedo information. To address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including improved face recognition rates under extreme lighting conditions.
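The low-dimensional lighting subspace underlying the SHBMM rests on the standard first nine real spherical harmonics. The sketch below evaluates those nine basis functions at a surface normal; the constants are textbook values for real SH, not code from the paper:

```python
# First nine real spherical harmonic basis functions, evaluated at a unit
# normal (x, y, z). These span the 9D subspace that approximates images of a
# convex Lambertian surface under arbitrary distant lighting.
def sh9(x, y, z):
    return [
        0.282095,                       # Y_00 (constant band)
        0.488603 * y,                   # Y_1,-1
        0.488603 * z,                   # Y_1,0
        0.488603 * x,                   # Y_1,1
        1.092548 * x * y,               # Y_2,-2
        1.092548 * y * z,               # Y_2,-1
        0.315392 * (3 * z * z - 1),     # Y_2,0
        1.092548 * x * z,               # Y_2,1
        0.546274 * (x * x - y * y),     # Y_2,2
    ]

basis = sh9(0.0, 0.0, 1.0)              # normal pointing toward the camera
print(len(basis), round(basis[2], 3))   # 9 0.489
```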


Artificial Intelligence , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Lighting/methods , Pattern Recognition, Automated/methods , Photography/methods , Subtraction Technique , Algorithms , Biometry/methods , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Image Process ; 17(12): 2393-402, 2008 Dec.
Article En | MEDLINE | ID: mdl-19004711

Plane-based (2-D) camera calibration has become an active research topic in recent years because of its flexibility. However, at least four image points are needed in every view to denote the coplanar feature in 2-D camera calibration. Can camera calibration be performed using a calibration object with only three points? Some 1-D camera calibration techniques use a setup of three collinear points with known distances, but this is a special configuration of the calibration object. What about the general setup of three noncollinear points? We propose a new camera calibration algorithm based on calibration objects with three noncollinear points. Experiments with simulated data and real images verify the theoretical correctness and numerical robustness of our results. Because objects with three noncollinear points have special properties in camera calibration, they lie midway between 1-D and 2-D calibration objects; our method thus constitutes a new kind of camera calibration algorithm.


Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Photography/instrumentation , Photography/methods , Calibration , Motion , Reproducibility of Results , Sensitivity and Specificity
19.
IEEE Trans Pattern Anal Mach Intell ; 30(10): 1831-40, 2008 Oct.
Article En | MEDLINE | ID: mdl-18703834

In this paper we study the problem of "visual echo" in a full-duplex projector-camera system for telecollaboration applications. Visual echo is defined as the appearance of projected contents observed by the camera. It can potentially saturate the projected contents, similar to audio echo in telephone conversation. Our approach to visual echo cancellation includes an offline calibration procedure that records the geometric and photometric transfer between the projector and the camera in a look-up table. During run-time, projected contents in the captured video are identified using the calibration information and suppressed, therefore achieving the goal of cancelling visual echo. Our approach can accurately handle full-color images under arbitrary reflectance of display surfaces and photometric response of the projector or camera. It is robust to geometric registration errors and quantization effects and is therefore particularly effective for high-frequency contents such as texts and hand drawings. We demonstrate the effectiveness of our approach with a variety of real images in a full-duplex projector-camera system.
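The run-time suppression step could be sketched roughly as follows. This is a deliberate simplification: a single hypothetical 256-entry photometric look-up table stands in for the recorded projector-to-camera transfer, and the geometric registration and per-pixel, per-channel calibration the paper performs offline are omitted:

```python
# Visual echo cancellation sketch: the calibrated LUT predicts how a projected
# value appears to the camera; subtracting that prediction from the captured
# frame suppresses the echo of projected content.
lut = [min(255, int(0.8 * v + 10)) for v in range(256)]  # hypothetical transfer

def suppress_echo(captured, projected):
    """Remove the predicted appearance of projected content from a captured row."""
    return [max(0, c - lut[p]) for c, p in zip(captured, projected)]

captured = [120, 200, 64, 10]    # what the camera sees (scene + echo)
projected = [100, 230, 0, 0]     # what the projector displayed
print(suppress_echo(captured, projected))  # [30, 6, 54, 0]
```

Where the last two pixels project black, only the predicted black level (10 here) is removed, leaving the genuine scene content intact.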


Algorithms , Artifacts , Data Display , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Photography/methods , User-Computer Interface , Feedback , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Pattern Anal Mach Intell ; 28(10): 1701-6, 2006 Oct.
Article En | MEDLINE | ID: mdl-16986550

We propose a novel global-local variational energy to automatically extract objects of interest from images. Previous formulations only incorporate local region potentials, which are sensitive to incorrectly classified pixels during iteration. We introduce a global likelihood potential to achieve better estimation of the foreground and background models and, thus, better extraction results. Extensive experiments demonstrate its efficacy.


Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
...