Results 1 - 4 of 4
1.
3D Print Addit Manuf ; 8(5): 281-292, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-36654933

ABSTRACT

Additive manufacturing (AM) opens up design potential unavailable to traditional manufacturing. Nevertheless, traditional manufacturing knowledge remains embedded in designers' minds and is a real cognitive barrier to designing for AM. Design for Additive Manufacturing (DfAM) provides tools, techniques, and guidelines to optimize designs for the specifics of AM. These methods are usable at different moments of the design process, yet only a few DfAM approaches focus on the early stages of design, the ideation phase, which allows for the most innovation. The literature highlights the effectiveness of methodologies based on tangible tools, such as cards or objects, for generating creativity. The difficulty with such tools is being inspirational as well as formative. Therefore, this article presents a method, named the Augmented Design with AM Methodology (ADAM2), to help designers capture the design potential of AM and produce creative solutions in the early stages of product design. The methodology relies on the potential of AM, defined as 14 opportunities and a set of 14 inspirational objects, each representing an opportunity. Dedicated to creativity sessions, the methodology forces associations between knowledge of a company's sector and the design potential of AM. To validate the effectiveness of ADAM2, we applied it in an industrial application at a jewelry and watchmaking company. The results showed that ADAM2 promotes the generation of creative solutions and the exploitation of AM's design potential during the early design stages.

2.
Front Robot AI ; 5: 93, 2018.
Article in English | MEDLINE | ID: mdl-33500972

ABSTRACT

In this paper we introduce MoSART, a novel approach for Mobile Spatial Augmented Reality on Tangible objects. MoSART is dedicated to mobile interaction with tangible objects in single-user or collaborative situations. It is based on a novel "all-in-one" Head-Mounted Display (HMD) including a projector (for the SAR display) and cameras (for scene registration). Equipped with the HMD, the user can move freely around tangible objects and manipulate them at will. The system tracks the position and orientation of the tangible 3D objects and projects virtual content over them. The tracking is feature-based stereo optical tracking providing high accuracy and low latency. A projection mapping technique is used for projecting onto the tangible objects, which can have complex 3D geometry. Several interaction tools have also been designed for interacting with the tangible and augmented content, such as a control panel and a pointer metaphor, which themselves benefit from the MoSART projection mapping and tracking features. The possibilities offered by our approach are illustrated in several use cases, in single-user or collaborative situations, such as virtual prototyping, training, and medical visualization.
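The projection-mapping step this abstract describes can be sketched as a standard pinhole projection: the tracked object pose maps model points into the projector's coordinate frame, and the projector intrinsics map them to pixels. This is a minimal illustration of the general technique, not the MoSART implementation; the function name and the intrinsic values are assumptions for the example.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3D object points into projector pixel coordinates.

    points_3d: (N, 3) points in the tracked object's coordinate frame.
    R, t: object pose in the projector frame (3x3 rotation, 3-vector translation).
    K: 3x3 projector intrinsic matrix (pinhole model).
    Returns an (N, 2) array of pixel coordinates.
    """
    cam_pts = points_3d @ R.T + t      # object frame -> projector frame
    proj = cam_pts @ K.T               # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

# Example with an assumed identity pose and illustrative intrinsics:
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])          # object 2 m in front of the projector
px = project_points(np.array([[0.0, 0.0, 0.0]]), R, t, K)
# The object origin lands at the principal point: [[640., 360.]]
```

In a real system the pose (R, t) would come from the stereo optical tracker every frame, so the projected texture stays registered to the moving object.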

3.
J Eye Mov Res ; 11(6)2018 Dec 10.
Article in English | MEDLINE | ID: mdl-33828716

ABSTRACT

For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze-assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications manipulating tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine-learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, considered as ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM reaches a TPR of approximately 80% and a TNR of 85% compared with the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or 1 hour when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples are available at: https://gitlab.ethz.ch/pdz/cgom.git).
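The core gaze-assignment idea behind cGOM, mapping each gaze point to whichever segmentation mask it falls inside, can be sketched as follows. This is a minimal illustration under assumed inputs, not the published cGOM code (see the linked repository for that); the function name and data layout are hypothetical.

```python
import numpy as np

def map_gaze_to_aoi(gaze_xy, masks, labels):
    """Assign a gaze point to an AOI via segmentation masks.

    gaze_xy: (x, y) gaze position in scene-camera pixel coordinates.
    masks: list of boolean (H, W) arrays, one per detected object
           (e.g. produced by a Mask R-CNN-style segmentation model).
    labels: AOI label for each mask, in the same order.
    Returns the label of the AOI hit by the gaze, or None for background.
    """
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    for mask, label in zip(masks, labels):
        h, w = mask.shape
        if 0 <= y < h and 0 <= x < w and mask[y, x]:
            return label
    return None

# Tiny example: one 4x4 mask with a single foreground pixel at (x=2, y=1).
cup_mask = np.zeros((4, 4), dtype=bool)
cup_mask[1, 2] = True
hit = map_gaze_to_aoi((2.0, 1.0), [cup_mask], ["cup"])   # "cup"
miss = map_gaze_to_aoi((0.0, 0.0), [cup_mask], ["cup"])  # None
```

Running this per fixation (or per gaze sample) yields the AOI hit sequence from which TPR and TNR against a manual mapping can then be computed.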

4.
Assist Technol ; 30(1): 39-50, 2018.
Article in English | MEDLINE | ID: mdl-28632018

ABSTRACT

Recently, a few software applications (apps) have been developed to enhance vocabulary and conceptual networks to address the needs of children with language impairments (LI), but there is no evidence about their impact and usability in therapy contexts. Here, we try to fill this gap by presenting a system aimed at improving the semantic competence and structural knowledge of children with LI. The goal of the study is to evaluate learnability, usability, user satisfaction and the quality of the interaction between the system and the children. The system consists of a tablet hosting an app with educational and training purposes, equipped with a Near Field Communication (NFC) reader used to interact with the user by means of objects. Fourteen preschool children with LI played with the device during one 45-minute speech therapy session. Reactions and feedback were recorded and rated. The system proved easy to understand and learn, as well as engaging and rewarding. The success of the device probably rests on the integration of smart technology and real, tangible objects. The device can be seen as a valuable aid to support and enhance communication abilities in children with LI, as well as in typically developing individuals.


Subject(s)
Language Disorders/therapy, Mobile Applications, Semantics, Speech Therapy/instrumentation, Child, Preschool, Humans, Learning, Patient Outcome Assessment, User-Computer Interface