ABSTRACT
MuscleX is an integrated, open-source computer software suite for data reduction of X-ray fiber diffraction patterns from striated muscle and other fibrous systems. It is written in Python and runs on Linux, Microsoft Windows or macOS. Most modules can be run either from a graphical user interface or in a "headless mode" from the command line, suitable for incorporation into beamline control systems. Here, we provide an overview of the general structure of the MuscleX software package and describe the specific features of the individual modules as well as examples of applications.
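Because the headless mode is intended for incorporation into beamline control systems, batch processing can be scripted around it. The sketch below is a minimal illustration of such a driver; the `musclex` command name, the `eq` module code, and the flags are assumptions made for illustration, not documented options.

```python
import subprocess
from pathlib import Path

# Hypothetical batch driver: run a MuscleX module headlessly on every
# diffraction image in a directory. The CLI name and flags below are
# assumptions for illustration; consult the MuscleX documentation.
DATA_DIR = Path("/beamline/run_042/images")
SETTINGS = Path("/beamline/run_042/eq_settings.json")

for image in sorted(DATA_DIR.glob("*.tif")):
    cmd = ["musclex", "eq", "-h", "-i", str(image), "-s", str(SETTINGS)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Processing failed for {image.name}: {result.stderr.strip()}")
```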
ABSTRACT
Modern quantum chemistry algorithms are increasingly able to accurately predict molecular properties that are useful for chemists in research and education. Despite this progress, performing such calculations remains out of reach for much of the wider chemistry community, as they often require domain expertise, computer programming skills, and powerful computer hardware. In this review, we outline methods to eliminate these barriers using cutting-edge technologies. We discuss the ingredients needed to create accessible platforms that can compute quantum chemistry properties in real time, including graphics processing unit (GPU)-accelerated quantum chemistry in the cloud, artificial intelligence-driven natural molecule input methods, and extended reality visualization. We end by highlighting a series of exciting applications that assemble these components to create uniquely interactive platforms for computing and visualizing spectra, 3D structures, molecular orbitals, and many other chemical properties.
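To make the idea of a real-time, cloud-backed calculation concrete, the sketch below submits a molecule given as a SMILES string to a hypothetical GPU-accelerated quantum chemistry service. The endpoint URL and the JSON fields are invented placeholders, not the API of any platform discussed in the review.

```python
import requests

# Hypothetical cloud endpoint for a GPU-accelerated quantum chemistry job.
# The URL and payload fields are placeholders for illustration only.
ENDPOINT = "https://example-qc-cloud.org/api/v1/jobs"

payload = {
    "smiles": "CCO",            # ethanol, provided in a simple text form
    "method": "DFT/B3LYP",
    "basis": "6-31G*",
    "properties": ["optimized_geometry", "orbital_energies", "ir_spectrum"],
}

response = requests.post(ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
job = response.json()
print("Submitted job:", job.get("id"))
```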
ABSTRACT
OBJECTIVE: Smartphone-based self-testing could facilitate large-scale data collection and remote diagnostics. For this purpose, the matrix sentence test (MST) is an ideal candidate due to its repeatability and accuracy. In clinical practice, the MST requires professional audiological equipment and supervision, which is infeasible for smartphone-based self-testing. Therefore, it is crucial to investigate the feasibility of self-administering the MST on smartphones, including the development of an appropriate user interface for the small screen size. DESIGN: We compared the traditional closed matrix user interface (10 × 5 matrix) to three alternative, newly developed interfaces (slide, type, wheel) regarding speech recognition threshold (SRT) consistency, user preference, and completion time. STUDY SAMPLE: We included 15 younger normal-hearing and 14 older hearing-impaired participants in our study. RESULTS: The slide interface is most suitable for mobile implementation, providing consistent and fast SRTs and enabling all participants to perform the tasks effectively. While the traditional matrix interface works well for most participants, some participants experienced difficulties due to its small size on the screen. CONCLUSIONS: We propose the newly introduced slide interface as a plausible alternative for smartphone screens. This might be especially attractive for elderly patients, who may face greater challenges with dexterity and vision than the participants tested here.
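For readers unfamiliar with how a matrix test arrives at an SRT, the sketch below shows a simplified adaptive track: the signal-to-noise ratio is made harder after mostly correct responses and easier otherwise, and the SRT is estimated from the converged part of the track. The step rule and averaging are simplifying assumptions, not the exact procedure used in the study.

```python
import statistics

def estimate_srt(score_sentence, n_sentences=20, start_snr=0.0, step_db=2.0):
    """Simplified adaptive SRT track (illustrative only).

    score_sentence(snr) should return the number of correctly recognized
    words (0-5) for a matrix sentence presented at the given SNR in dB.
    """
    snr = start_snr
    track = []
    for _ in range(n_sentences):
        correct = score_sentence(snr)
        track.append(snr)
        # Move toward ~50% intelligibility: harder if >=3 words correct, easier otherwise.
        snr -= step_db if correct >= 3 else -step_db
    # Estimate the SRT as the mean presentation SNR over the second half of the track.
    return statistics.mean(track[len(track) // 2:])
```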
ABSTRACT
BACKGROUND: In recent years, advances in technology have led to an influx of mental health apps, in particular the development of mental health and well-being chatbots, which have already shown promise in terms of their efficacy, availability, and accessibility. The ChatPal chatbot was developed to promote positive mental well-being among citizens living in rural areas. ChatPal is a multilingual chatbot, available in English, Scottish Gaelic, Swedish, and Finnish, containing psychoeducational content and exercises such as mindfulness and breathing, mood logging, gratitude, and thought diaries. OBJECTIVE: The primary objective of this study is to evaluate a multilingual mental health and well-being chatbot (ChatPal) to establish whether it has an effect on mental well-being. Secondary objectives include investigating the characteristics of individuals who showed improvements in well-being along with those with worsening well-being and applying thematic analysis to user feedback. METHODS: A pre-post intervention study was conducted in which participants were recruited to use the intervention (ChatPal) for a 12-week period. Recruitment took place across 5 regions: Northern Ireland, Scotland, the Republic of Ireland, Sweden, and Finland. Outcome measures included the Short Warwick-Edinburgh Mental Well-Being Scale, the World Health Organization-Five Well-Being Index, and the Satisfaction with Life Scale, which were evaluated at baseline, midpoint, and end point. Written feedback was collected from participants and subjected to qualitative analysis to identify themes. RESULTS: A total of 348 people were recruited to the study (n=254, 73% female; n=94, 27% male) aged between 18 and 73 (mean 30) years. The well-being scores of participants improved from baseline to midpoint and from baseline to end point; however, the improvement in scores was not statistically significant on the Short Warwick-Edinburgh Mental Well-Being Scale (P=.42), the World Health Organization-Five Well-Being Index (P=.52), or the Satisfaction With Life Scale (P=.81). Individuals who had improved well-being scores (n=16) interacted more with the chatbot and were significantly younger than those whose well-being declined over the study (P=.03). Three themes were identified from user feedback: "positive experiences," "mixed or neutral experiences," and "negative experiences." Positive experiences included enjoying the exercises provided by the chatbot, while most of the mixed, neutral, or negative experiences mentioned liking the chatbot overall but noted barriers, such as technical or performance errors, that needed to be overcome. CONCLUSIONS: Marginal, albeit nonsignificant, improvements in mental well-being were seen in those who used ChatPal. We propose that the chatbot could be used alongside other service offerings to complement different digital or face-to-face services, although further research should be carried out to confirm the effectiveness of this approach. Nonetheless, this paper highlights the need for blended service offerings in mental health care.
Subject(s)
Exercise , Mental Health , Humans , Male , Female , Adolescent , Young Adult , Adult , Middle Aged , Aged , Software , Exercise Therapy , Psychological Well-Being
ABSTRACT
Orientation and mobility apps for visually impaired people are well known to be effective in improving the quality of life for this target group. A mobile application that guides a visually impaired person step-by-step through a physical space is a valuable aid, but it does not provide an overview of a complex environment "at a glance," as a traditional hard-copy tactile map does. The aim of this study is to investigate whether a smartphone GPS map, enriched with haptic and audio hints, can facilitate cognitive mapping for visually impaired users. Encouraged by a preliminary study conducted in cooperation with two visually impaired volunteers, we designed and developed an Android prototype for exploration of an urban area. Our goal was to provide an affordable, portable and versatile solution to help users increase awareness of an environment through the positions of its landmarks and points of interest. Vibro-tactile and audio hints were linked to map coordinates via the GeoJSON data format and were delivered using the text-to-speech and vibration features of the mobile device, accessed through the operating system's APIs. Test sessions and interviews with visually impaired users produced encouraging results. The results, to be verified by more extensive testing, overall confirm the validity of our approach and are in line with findings in the literature.
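To illustrate the GeoJSON linkage described above, the sketch below shows one way a point of interest could carry both a spoken hint and a vibration pattern in its properties; the property names are assumptions for illustration, not the schema used by the prototype.

```python
import json

# Illustrative GeoJSON feature: a point of interest whose properties carry
# a spoken hint and a vibration pattern. Property names are assumptions.
poi = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [9.1900, 45.4642]},  # lon, lat
    "properties": {
        "name": "Main entrance, Central Station",
        "tts_hint": "Main entrance of the Central Station, 50 meters ahead.",
        "vibration_pattern_ms": [0, 200, 100, 200],  # alternating pause/vibrate durations
    },
}

feature_collection = {"type": "FeatureCollection", "features": [poi]}
print(json.dumps(feature_collection, indent=2))
```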
ABSTRACT
Cognitive accessibility aims to make content more accessible for people with cognitive impairments, such as the elderly and people with intellectual and learning disabilities. In this sense, it is possible to design a user interface that is accessible from a cognitive point of view. As a contribution, this article presents cognitive accessibility design patterns and their application in designing the Easier web system's user interface. The Easier web system provides a tool that supports the comprehension and readability of text content for people with intellectual disabilities. It detects complex words and offers easier replacements as well as other resources, such as a definition of the complex word. In addition to applying the design patterns, user tests with people with intellectual disabilities and older people were carried out to evaluate the cognitive accessibility of the Easier system's interface. The results indicate that people with cognitive impairments know how to use the interfaces and have a satisfactory experience. In addition, a design proposal for a glossary mechanism to be used in web interfaces with simplified texts is presented and validated.
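The kind of lexical support described above can be pictured with a minimal sketch that flags words outside an "easy" vocabulary and looks up simpler replacements in a substitution table; the word lists and replacements below are invented for illustration and are not the Easier system's actual resources.

```python
import re

# Toy vocabulary and substitution table, invented for illustration only.
EASY_WORDS = {"the", "a", "is", "to", "use", "help", "word", "text", "easy"}
REPLACEMENTS = {"utilize": "use", "facilitate": "help", "comprehend": "understand"}

def suggest_simplifications(text):
    """Return (word, suggestion) pairs for words flagged as complex."""
    suggestions = []
    for word in re.findall(r"[a-zA-Z']+", text.lower()):
        if word not in EASY_WORDS:                    # crude "complex word" test
            easier = REPLACEMENTS.get(word)
            if easier:
                suggestions.append((word, easier))
    return suggestions

print(suggest_simplifications("We utilize this tool to facilitate reading."))
# -> [('utilize', 'use'), ('facilitate', 'help')]
```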
ABSTRACT
Intelligent user interfaces (IUI) are driven by the goal of improving human-computer interaction (HCI), mainly by improving user interfaces' user experience (UX) or usability with the help of artificial intelligence. The main goal of this study is to find, assess, and synthesize existing state-of-the-art work in the field of IUI, with an additional focus on the evaluation of IUI. This study analyzed 211 studies published in the field between 2012 and 2022. Studies are most frequently tied to the HCI and software engineering (SE) domains. Definitions of IUI were examined, showing that adaptation, representation, and intelligence are the key characteristics associated with IUIs, whereas adaptation, reasoning, and representation are the actions most commonly used to describe them. Evaluation of IUI is mainly conducted with experiments and questionnaires, though usability and UX are not considered together in evaluations. Most evaluations (81% of studies) reported partial or complete improvement in usability or UX. A shortage of evaluation tools, methods, and metrics tailored for IUI is noticeable. The most frequently used empirical data collection methods and data sources in IUI evaluation studies are experiments, prototype development, and questionnaires.
Subject(s)
Artificial Intelligence , User-Computer Interface , Computers , Humans , Intelligence , Surveys and Questionnaires
ABSTRACT
In this work, we explore the role of augmented reality as a meta-user interface, with particular reference to its applications for interactive fitting room systems and the impact on the related shopping experience. Starting from the literature and existing systems, we synthesized a set of nine interaction design patterns for developing AR fitting rooms and supporting the shopping experience. The patterns were assessed through a focus group with prospective stakeholders, with the aim of evaluating and envisioning their effects on the shopping experience. The focus group analysis shows that the shopping experience related to an AR fitting room based on the proposed patterns is influenced by three main factors: the perception of utility, the ability to generate interest and curiosity, and the perceived comfort of the interaction and of the environment in which the system is installed. As a further result, the study shows that the patterns can successfully support these factors, but some elements that emerged from the focus group should be investigated further and taken into consideration by designers.
Subject(s)
Augmented Reality , User-Computer Interface
ABSTRACT
The distinct properties and affordances of paper have enabled it to maintain an important role in the digital age, so much so that pen-and-paper interaction has been imitated in the digital world with touchscreens and stylus pens. Because the digital medium also provides several advantages not available to physical paper, there is a clear benefit to merging the two mediums. Despite the plethora of concepts, prototypes and systems to digitise handwritten information on paper, these systems require specially prepared paper and complex setups and software, can be used solely in combination with paper, and, most importantly, do not support concurrent, precise interaction with both mediums (paper and touchscreen) using a single pen. In this paper, we present the design, fabrication and evaluation of the Hybrid Stylus. The Hybrid Stylus is assembled from an infinity pencil tip (nib) made of graphite and a specially designed shielded tip holder that is attached to an active stylus. The stylus can be used for writing on physical paper while still maintaining all the features needed for tablet interaction. Moreover, the stylus allows simultaneous digitisation of handwritten information on the paper when the paper is placed on the tablet screen. To evaluate the concept, we also add a user-friendly manual alignment of the paper position on the underlying tablet computer. The evaluation demonstrates that the system achieves almost perfect digitisation of strokes (98.6% of strokes were correctly registered, with only 1.2% ghost strokes) whilst maintaining an excellent user experience of writing with a pencil on paper.
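The manual paper alignment mentioned above amounts to mapping stroke coordinates from the paper's frame into the tablet-screen frame. The sketch below shows a minimal rigid transform (translation plus rotation); the parameters and calibration are assumptions for illustration, not the prototype's actual alignment procedure.

```python
import math

def paper_to_screen(x, y, offset_x, offset_y, angle_deg):
    """Map a point from paper coordinates to tablet-screen coordinates.

    offset_x/offset_y: assumed screen position of the paper's top-left corner.
    angle_deg: assumed rotation of the paper relative to the screen.
    (Illustrative rigid transform; the prototype's actual alignment may differ.)
    """
    a = math.radians(angle_deg)
    sx = offset_x + x * math.cos(a) - y * math.sin(a)
    sy = offset_y + x * math.sin(a) + y * math.cos(a)
    return sx, sy

# A stroke sampled on paper, re-registered onto the screen after manual alignment.
stroke = [(10, 10), (12, 11), (15, 13)]
aligned = [paper_to_screen(x, y, offset_x=40, offset_y=25, angle_deg=2.0) for x, y in stroke]
print(aligned)
```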
Subject(s)
Graphite , Handheld Computers , Handwriting , Software , Time
ABSTRACT
OBJECTIVE: The present study examines the cognitive effects of placing icons in unexpected spatial locations within websites. BACKGROUND: Prior research has revealed evidence for cognitive conflict when web icons occur in unexpected locations (e.g., cart, top left), generally consistent with a dynamical systems model. Here, we compare the relative strength of evidence for both dual-systems and dynamical systems models. METHODS: Participants clicked on icons located in either expected (e.g., cart, top right) or unexpected (e.g., cart, top left) locations while mouse trajectories were continuously recorded. Trajectories were classified according to prototypes associated with each cognitive model. The dynamical systems model predicts curved trajectories, while the dual-systems model predicts straight and change-of-mind trajectories. RESULTS: Trajectory classification revealed that curved trajectories increased (+11%), while straight and change-of-mind trajectories decreased (-12%) when target icons occurred in unexpected locations (p < .001). CONCLUSION: Rather than employing a single cognitive strategy, users shift from a primarily dual-systems to a dynamical systems strategy when icons occur in unexpected locations. APPLICATION: Potential applications of this work include the assessment of cognitive impacts such as mental workload and cognitive conflict during real-time interaction with websites and other screen-based interfaces, personalization and adaptive interfaces based on an individual's cognitive strategy, and data-driven A/B testing of alternative interface designs.
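A crude version of the prototype-based trajectory classification can be sketched as follows: a trajectory with a large perpendicular deviation from the straight start-to-target line counts as curved, an initial movement away from the target followed by a correction counts as a change of mind, and everything else as straight. The thresholds and decision rules are simplifying assumptions, not the classification used in the study.

```python
import numpy as np

def classify_trajectory(points, curve_frac=0.2, wrongway_frac=0.15):
    """Rough trajectory classification; thresholds are illustrative assumptions.

    points: sequence of (x, y) mouse positions from movement start to click.
    """
    p = np.asarray(points, dtype=float)
    start, end = p[0], p[-1]
    direct = end - start
    length = np.hypot(direct[0], direct[1]) or 1.0
    rel = p - start
    # Perpendicular distance of each sample from the straight start-to-end line.
    perp = np.abs(direct[0] * rel[:, 1] - direct[1] * rel[:, 0]) / length
    # Maximum horizontal motion in the direction opposite the target ("wrong way").
    wrong_way = np.max(-np.sign(direct[0]) * rel[:, 0], initial=0.0)
    if wrong_way > wrongway_frac * length:
        return "change_of_mind"
    if np.max(perp) > curve_frac * length:
        return "curved"
    return "straight"

# Example: a bowed movement toward a target up and to the right.
arc = [(0, 0), (5, 20), (15, 45), (35, 58), (60, 60)]
print(classify_trajectory(arc))   # -> 'curved'
```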
ABSTRACT
OBJECTIVE: Expand research on the Sustained Attention to Response Task (SART) to a more applied agricultural target detection/selection task and examine the utility of various performance metrics, including composite measures of speed and accuracy, in a High-Go/Low-No-Go stimuli task. BACKGROUND: Modified SARTs have been utilized to investigate mechanisms, such as failures of response inhibition, occurring in friendly fire and collateral damage incidents. Researchers have demonstrated that composite measures of speed and accuracy are useful for Low-Go/High-No-Go stimuli tasks, but this has not been demonstrated for High-Go/Low-No-Go tasks, such as the SART. METHOD: Participants performed a modified SART, where they selected ("sprayed") images of weeds (Go stimuli) that appeared on a computer screen, while withholding responses to rarer soybean plant images (No-Go stimuli). RESULTS: Response time was a function of distance from a central starting point. Participants committed commission errors (sprayed the soybeans) at a significantly higher rate when the stimuli appeared under the cursor centered on the screen for each trial. Participants' omission errors (failure to spray a weed) increased significantly as a function of distance. The composite measures examined were primarily influenced by response time and omission errors, limiting their utility when commission errors are of particular interest. CONCLUSION: Participants are far more accurate in their decision making when required to execute a longer-duration motor task in High-Go/Low-No-Go experiments. APPLICATION: This demonstrates a serious human factors liability of target detection and snap-to-target systems.
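For readers unfamiliar with composite speed-accuracy measures, the sketch below computes two commonly used ones, the inverse efficiency score and the rate-correct score, from response times and accuracy. These formulas are standard in the literature but are assumptions here, since the abstract does not name the specific composites examined.

```python
import statistics

def inverse_efficiency(correct_rts_s, accuracy):
    """Mean correct response time divided by proportion correct (lower is better)."""
    return statistics.mean(correct_rts_s) / accuracy

def rate_correct_score(n_correct, all_rts_s):
    """Number of correct responses per second of total response time (higher is better)."""
    return n_correct / sum(all_rts_s)

# Illustrative numbers: a sample of correct-response times (s) and 90% accuracy.
rts = [0.42, 0.51, 0.47, 0.39, 0.55, 0.48]
print(round(inverse_efficiency(rts, accuracy=0.9), 3))   # ~0.522
print(round(rate_correct_score(len(rts), rts), 2))       # ~2.13
```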
ABSTRACT
BACKGROUND: This research reports on a pilot study that examined the usability of a reminiscence app called 'InspireD' using eye tracking technology. The InspireD app is a bespoke digital intervention aimed at supporting personalized reminiscence for people living with dementia and their carers. The app was developed and refined in two co-creation workshops and subsequently tested in a third workshop using eye tracking technology. INTERVENTION: Eye tracking was used to gain insight into the user's cognition, since our previous work showed that the think-aloud protocol can add to the cognitive burden for people living with dementia while also making the test less natural. RESULTS: Results showed that there were no barriers to using a wearable eye tracker in this setting, and participants were able to use the reminiscence app freely. However, some tasks required prompts from the observer when difficulties arose. While prompts are not normally used in usability testing (as some argue that prompting defeats the purpose of testing), we used 'prompt frequency' as a proxy for measuring the intuitiveness of a task. There was a correlation between task completion rates and prompt frequency. Results also showed that people living with dementia had fewer gaze fixations than their carers, while carers had greater fixation and saccadic frequencies. This perhaps indicates that people living with dementia take more time to scan and consume information on an app. A number of identified usability issues are also discussed in the paper. PATIENT OR PUBLIC CONTRIBUTION: The study presents findings from three workshops covering user needs analysis, feedback, and an eye tracking usability test, together involving 14 participants, 9 of whom were people living with dementia and 5 of whom were carers.
Subject(s)
Dementia , Mobile Applications , Caregivers , Dementia/therapy , Ocular Fixation , Humans , Pilot Projects
ABSTRACT
To equip computers with human communication skills and to enable natural interaction between the computer and a human, intelligent solutions are required based on artificial intelligence (AI) methods, algorithms, and sensor technology. This study aimed to identify and analyze the state-of-the-art AI methods, algorithms, and sensor technology in existing human-computer intelligent interaction (HCII) research in order to explore trends in HCII research, categorize existing evidence, and identify potential directions for future research. We conducted a systematic mapping study of the HCII body of research. Four hundred fifty-four studies published in various journals and conferences between 2010 and 2021 were identified and analyzed. Studies in the HCII and IUI fields have primarily focused on intelligent recognition of emotions, gestures, and facial expressions using sensor technology such as cameras, EEG, Kinect, wearable sensors, eye trackers, gyroscopes, and others. Researchers most often apply deep-learning and instance-based AI methods and algorithms. The support vector machine (SVM) is the most widely used algorithm for various kinds of recognition, primarily of emotions, facial expressions, and gestures. The convolutional neural network (CNN) is the most often used deep-learning algorithm for emotion recognition, facial recognition, and gesture recognition solutions.
Subject(s)
Artificial Intelligence , Machine Learning , Algorithms , Computers , Humans , Neural Networks (Computer)
ABSTRACT
Internet of Things (IoT) technologies have been applied to various fields such as manufacturing, the automobile industry and healthcare. IoT-based healthcare has a significant impact on real-time remote monitoring of patients' health, consequently improving treatments and reducing healthcare costs. In fact, IoT has made healthcare more reliable, efficient, and accessible. Two major drawbacks from which IoT suffers can be expressed as follows: first, the limited battery capacity of the sensors is quickly depleted due to the continuous stream of data; second, the dependence of the system on the cloud for computation and processing causes latency in data transmission, which is not acceptable in real-time monitoring applications. This research was conducted to develop a real-time, secure, and energy-efficient platform that provides a solution for reducing the computation load on the cloud and diminishing data transmission delay. In the proposed platform, the sensors utilize a state-of-the-art power-saving technique known as Compressive Sensing (CS). CS allows sensors to retrieve the sensed data from fewer measurements by sending a compressed signal. In this framework, signal reconstruction and processing are computed locally on a Heterogeneous Multicore Platform (HMP) device to decrease the dependency on the cloud. In addition, a framework has been implemented to control the system, set different parameters, display the data, and send live notifications to medical experts through the cloud in order to alert them to any hazardous event or abnormality and allow quick intervention. Finally, a case study of the system is presented, demonstrating the acquisition and monitoring of data for a given subject in real time. The obtained results reveal that the proposed solution reduces sensor energy consumption by 15.4%, which makes this prototype a good candidate for IoT deployment in healthcare.
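The compressive sensing step can be illustrated with a small sketch: a sparse signal is measured through a random matrix using far fewer measurements than samples and is then reconstructed with a standard sparse solver, here scikit-learn's orthogonal matching pursuit. The sizes, sparsity level, and solver choice are illustrative assumptions, not the platform's actual configuration.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8             # signal length, measurements, sparsity (arbitrary here)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # sparse "sensor" signal

phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix known to both ends
y = phi @ x                                       # compressed measurements sent by the sensor

# Reconstruction on the local HMP device (or cloud) from m << n measurements.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(phi, y)
x_hat = omp.coef_

print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```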
Subject(s)
Internet of Things , Aged , Delivery of Health Care , Humans
ABSTRACT
The typical configuration of virtual reality (VR) devices consists of a head-mounted display (HMD) and handheld controllers. As such, these units have limited utility in tasks that require hands-free operation, such as surgical operations or assembly work in cyberspace. We propose a user interface for a VR headset based on the wearer's facial gestures for hands-free interaction, similar to a touch interface. By sensing and recognizing the expressions associated with the in situ intentional movements of a user's facial muscles, we define a set of commands that combine predefined facial gestures with head movements. This is achieved by utilizing six pairs of infrared (IR) photocouplers positioned at the foam interface of an HMD. We demonstrate the usability and report on the user experience as well as the performance of the proposed command set using an experimental VR game without any additional controllers. We obtained more than 99% recognition accuracy for each facial gesture across the three steps of the experimental tests. The proposed input interface is a cost-effective and efficient solution that facilitates hands-free operation of a VR headset using built-in infrared photocouplers positioned in the foam interface. The proposed system recognizes facial gestures and provides a hands-free user interface for the HMD, similar to the touch-screen experience of a smartphone.
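A minimal way to picture the gesture recognition is to threshold the six IR channel readings against a per-user baseline and match the resulting activation pattern against a small command table. The channel ordering, thresholds, and gesture-to-command mapping below are invented for illustration and are not the system's actual recognition method.

```python
# Illustrative facial-gesture decoder for six IR photocoupler channels.
# Baseline values, threshold, and gesture patterns are assumptions.
BASELINE = [0.52, 0.48, 0.50, 0.47, 0.51, 0.49]   # per-user calibration readings
THRESHOLD = 0.08                                   # activation margin per channel

# Map of activation patterns (1 = channel deflected) to commands.
GESTURES = {
    (1, 1, 0, 0, 0, 0): "select",
    (0, 0, 0, 0, 1, 1): "back",
    (0, 0, 1, 1, 0, 0): "menu",
}

def decode(sample):
    """Return the command for one 6-channel IR sample, or None if no match."""
    pattern = tuple(int(abs(v - b) > THRESHOLD) for v, b in zip(sample, BASELINE))
    return GESTURES.get(pattern)

print(decode([0.66, 0.61, 0.50, 0.47, 0.51, 0.49]))   # -> 'select'
```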
Subject(s)
Face , Gestures , User-Computer Interface , Virtual Reality , Hand , Humans
ABSTRACT
Gesture spotting is an essential task for recognizing finger gestures used to control in-car touchless interfaces. Automated methods for this task must detect the video segments in which gestures are observed, discard natural hand movements that may resemble target gestures, and be able to work online. In this paper, we address these challenges with a recurrent neural architecture for online finger gesture spotting. We propose a multi-stream network merging hand and hand-location features, which helps to discriminate target gestures from natural movements of the hand, since these may not occur in the same 3D spatial location. Our multi-stream recurrent neural network (RNN) recurrently learns semantic information, allowing gestures to be spotted online in long untrimmed video sequences. To validate our method, we collected a finger gesture dataset in an in-vehicle scenario of an autonomous car: 226 videos with more than 2100 continuous instances were captured with a depth sensor. On this dataset, our gesture spotting approach outperforms state-of-the-art methods with improvements of about 10% and 15% in recall and precision, respectively. Furthermore, we demonstrate that, when combined with an existing gesture classifier (a 3D Convolutional Neural Network), our proposal achieves better performance than previous hand gesture recognition methods.
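As a rough sketch of a multi-stream recurrent spotter (not the authors' exact architecture), the PyTorch module below projects a hand-appearance feature stream and a hand-location stream into a common space, concatenates them, and feeds the result to a GRU that emits per-frame gesture/no-gesture scores; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiStreamSpotter(nn.Module):
    """Illustrative two-stream recurrent gesture spotter (not the paper's exact model)."""

    def __init__(self, hand_dim=512, loc_dim=3, hidden=128, n_classes=2):
        super().__init__()
        self.hand_fc = nn.Linear(hand_dim, hidden)   # hand-appearance stream
        self.loc_fc = nn.Linear(loc_dim, hidden)     # 3D hand-location stream
        self.rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)     # per-frame gesture/no-gesture scores

    def forward(self, hand_feats, loc_feats):
        # hand_feats: (batch, time, hand_dim), loc_feats: (batch, time, loc_dim)
        fused = torch.cat([torch.relu(self.hand_fc(hand_feats)),
                           torch.relu(self.loc_fc(loc_feats))], dim=-1)
        out, _ = self.rnn(fused)
        return self.head(out)                        # (batch, time, n_classes)

model = MultiStreamSpotter()
scores = model(torch.randn(2, 30, 512), torch.randn(2, 30, 3))
print(scores.shape)   # torch.Size([2, 30, 2])
```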
Subject(s)
Fingers/physiology , Gestures , Computer-Assisted Image Processing/methods , Neural Networks (Computer) , Algorithms , Automobiles , Humans , Man-Machine Systems , Automated Pattern Recognition , User-Computer Interface
ABSTRACT
Computation is critical for enabling us to process data volumes and model data complexities that are unthinkable by manual means. However, we are far from automating the sense-making process. Human knowledge and reasoning are critical for discovery. Visualization offers a powerful interface between mind and machine that should be further exploited in future genome analysis tools.
Subject(s)
Genetics , Genomics/methods , Computer-Assisted Image Processing/methods , User-Computer Interface , Computer Graphics , Genetics/trends
ABSTRACT
Along with the proliferation of high-end, performant mobile devices, we find that visually animated user interfaces have become commonplace, yet research on their performance is scarce. For this study, eight mobile apps were therefore developed for scrutiny and assessment, to report on the device hardware impact and penalties caused by transitions and animations, with an emphasis on apps generated using cross-platform development frameworks. The tasks we employ for measuring animation performance are (i) a complex animation consisting of multiple elements, (ii) the opening sequence of a side-menu navigation pattern, and (iii) a transition animation during in-app page navigation. We employ multiple performance profiling tools and scrutinize metrics including frames per second (FPS), CPU usage, device memory usage and GPU memory usage, all to uncover the impact caused by executing transitions and animations. We uncover important differences in device hardware utilization during animations across the different cross-platform technologies employed. Additionally, Android and iOS are found to differ greatly in terms of memory consumption, CPU usage and rendered FPS, a discrepancy that holds for both the native and the cross-platform apps. The findings we report are indeed factors contributing to the complexity of app development.
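One of the metrics discussed above, frames per second, can be derived directly from per-frame timestamps, as in the small sketch below; the 60 Hz target used to count frames that miss their rendering budget is an assumption, not a value taken from the study.

```python
def fps_stats(frame_timestamps_ms, target_fps=60):
    """Compute average FPS and the share of frames missing the target frame budget."""
    deltas = [b - a for a, b in zip(frame_timestamps_ms, frame_timestamps_ms[1:])]
    avg_fps = 1000.0 / (sum(deltas) / len(deltas))
    budget_ms = 1000.0 / target_fps
    janky = sum(1 for d in deltas if d > budget_ms) / len(deltas)
    return avg_fps, janky

# Illustrative frame timestamps in milliseconds; one slow frame (~21.6 ms).
timestamps = [0, 16.6, 33.2, 54.8, 71.4, 88.0]
avg, janky = fps_stats(timestamps)
print(f"avg FPS: {avg:.1f}, frames over budget: {janky:.0%}")
```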
ABSTRACT
OBJECTIVE: This study compared the visual inspection performance of airport security officers (screeners) when screening hold baggage with state-of-the-art 3D versus older 2D imaging. BACKGROUND: 3D imaging based on computed tomography features better automated detection of explosives and higher baggage throughput than older 2D X-ray imaging technology. Nonetheless, some countries and airports hesitate to implement 3D systems due to their lower image quality and the concern that screeners will need extensive and specific training before they can be allowed to work with 3D imaging. METHOD: Screeners working with 2D imaging (2D screeners) and screeners working with 3D imaging (3D screeners) conducted a simulated hold baggage screening task with both types of imaging. Differences in image quality between the imaging systems were assessed with the standard procedure for 2D imaging. RESULTS: Despite lower image quality, screeners' detection performance with 3D imaging was similar to that with 2D imaging. 3D screeners showed higher detection performance with both types of imaging than 2D screeners. CONCLUSION: Features of 3D imaging systems (3D image rotation and slicing) seem to compensate for the lower image quality. Visual inspection competency acquired with one type of imaging seems to transfer to visual inspection with the other type. APPLICATION: Replacing older 2D with newer 3D imaging systems can be recommended. 2D screeners do not need extensive and specific training to achieve comparable detection performance with 3D imaging. Current image quality standards for 2D imaging need revision before they can be applied to 3D imaging.
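Detection performance in screening studies of this kind is commonly summarized with the sensitivity index d' from signal detection theory; the sketch below computes it from a 2x2 outcome table. Whether this exact measure was used in the study above is an assumption, and the example counts are invented.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a 2x2 detection outcome table."""
    # Log-linear correction keeps rates away from 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented example: 86 of 100 threat images detected, 12 false alarms on 100 clean bags.
print(round(d_prime(86, 14, 12, 88), 2))   # ≈ 2.2
```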
Subject(s)
Airports , Bombs (Explosive Devices) , Man-Machine Systems , Visual Pattern Recognition/physiology , Psychomotor Performance/physiology , Security Measures , X-Ray Computed Tomography , User-Computer Interface , Adult , Female , Humans , Three-Dimensional Imaging , Male , Middle Aged
ABSTRACT
OBJECTIVE: The objective is to provide a review of ecological interface design (EID), to illustrate its value to human factors/ergonomics, and to identify areas for future research and development. BACKGROUND: EID uses mature interface technologies to provide decision-making and problem-solving support. A variety of theoretical concepts and analytical tools have been developed to meet the associated challenges. EID provides support that is simultaneously grounded in the practical realities of a work domain and tailored to human capabilities and limitations. METHOD: EID's theoretical foundation is discussed briefly. Concrete examples of ecological and traditional interfaces are provided. Different categories of work domains are described, as well as the associated implications for interface design. A targeted literature review is conducted and the experimental outcomes are summarized. A representative evaluation is discussed, and interpretations of performance are provided. RESULTS: The evidence reveals that EID has been remarkably successful in significantly improving performance for work domains with constraints that are law-driven (e.g., process control). In contrast, work domains that are intent-driven (e.g., information retrieval) have, by and large, been ignored. Also, few studies have addressed nonvisual displays. CONCLUSION: EID has not yet realized its potential to improve safety and efficiency across the entire continuum of work domains. APPLICATION: EID provides a single integrated framework that is (a) sufficiently comprehensive to deal with complicated work domains and (b) capable of producing innovative support that will generalize to actual work settings.