Results 1 - 20 of 849
1.
Front Artif Intell ; 7: 1418869, 2024.
Article in English | MEDLINE | ID: mdl-38957452

ABSTRACT

The release of GPT-4 has garnered widespread attention across various fields, signaling the impending widespread adoption and application of Large Language Models (LLMs). However, previous research has predominantly focused on the technical principles of ChatGPT and its social impact, overlooking its effects on human-computer interaction and user psychology. This paper explores the multifaceted impacts of ChatGPT on human-computer interaction, psychology, and society through a literature review. The author investigates ChatGPT's technical foundation, including its Transformer architecture and RLHF (Reinforcement Learning from Human Feedback) process, enabling it to generate human-like responses. In terms of human-computer interaction, the author studies the significant improvements GPT models bring to conversational interfaces. The analysis extends to psychological impacts, weighing the potential of ChatGPT to mimic human empathy and support learning against the risks of reduced interpersonal connections. In the commercial and social domains, the paper discusses the applications of ChatGPT in customer service and social services, highlighting the improvements in efficiency and challenges such as privacy issues. Finally, the author offers predictions and recommendations for ChatGPT's future development directions and its impact on social relationships.

2.
Sci Rep ; 14(1): 15611, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971806

ABSTRACT

This study compares how English-speaking adults and children from the United States adapt their speech when talking to a real person versus a smart speaker (Amazon Alexa) in a psycholinguistic experiment. Overall, participants produced more effortful speech when talking to a device (longer duration and higher pitch). These differences also varied by age: children produced even higher pitch in device-directed speech, suggesting a stronger expectation of being misunderstood by the system. In support of this, after a staged recognition error by the device, children increased their pitch even more. Furthermore, adults and children displayed the same degree of variation in their responses to whether "Alexa seems like a real person or not", further indicating that children's register adjustments were shaped by their conceptualization of the system's competence rather than by an increased anthropomorphism response. This work speaks to models of the mechanisms underlying speech production and to human-computer interaction frameworks, providing support for routinized theories of spoken interaction with technology.
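A minimal sketch of the kind of acoustic comparison the study describes, contrasting mean fundamental frequency (F0) and utterance duration across addressee conditions. The WAV file names and the use of librosa's pyin pitch tracker are illustrative assumptions, not the authors' actual pipeline.

# Compare mean F0 and duration between human- and device-directed recordings.
# The file paths below are hypothetical placeholders.
import librosa
import numpy as np

def f0_and_duration(path):
    y, sr = librosa.load(path, sr=None)                  # load audio at its native rate
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    mean_f0 = np.nanmean(f0[voiced]) if voiced.any() else float("nan")
    return mean_f0, len(y) / sr                          # mean pitch (Hz), duration (s)

for condition, path in [("human-directed", "human.wav"),
                        ("device-directed", "alexa.wav")]:
    f0, dur = f0_and_duration(path)
    print(f"{condition}: mean F0 = {f0:.1f} Hz, duration = {dur:.2f} s")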


Subjects
Speech, Humans, Adult, Child, Male, Female, Speech/physiology, Young Adult, Adolescent, Psycholinguistics
3.
iScience ; 27(6): 110164, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38974471

ABSTRACT

This study introduces a novel virtual cursor control system designed to empower individuals with neuromuscular disabilities in the digital world. By combining eye-tracking with motor imagery (MI) in a hybrid brain-computer interface (BCI), the system enhances cursor control accuracy and simplicity. Real-time classification accuracy reaches 87.92% (peak of 93.33%), with cursor stability in the gazing state at 96.1%. Integrated into common operating systems, it enables tasks like text entry, online chatting, email, web surfing, and picture dragging, with an average text input rate of 53.2 characters per minute (CPM). This technology facilitates fundamental computing tasks for patients, fostering their integration into the online community and paving the way for future developments in BCI systems.
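As a companion to the abstract above, here is a minimal sketch of one conventional way to classify motor-imagery EEG trials (band-power features plus linear discriminant analysis), run on synthetic data. It illustrates the MI half of a hybrid BCI in generic terms only and is not the authors' reported pipeline.

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 500))        # 120 trials, 8 channels, 2 s at 250 Hz (synthetic)
labels = rng.integers(0, 2, size=120)              # imagined left vs right hand (synthetic)

def bandpower(trial, fs=250, band=(8, 30)):
    freqs, psd = welch(trial, fs=fs, axis=-1)      # power spectral density per channel
    mask = (freqs >= band[0]) & (freqs <= band[1]) # mu/beta band
    return psd[:, mask].mean(axis=-1)              # one feature per channel

X = np.array([bandpower(trial) for trial in epochs])
clf = LinearDiscriminantAnalysis()
print("5-fold CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())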

4.
Comput Biol Med ; 179: 108808, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38996556

ABSTRACT

In this paper, a novel skipping spatial-spectral-temporal network (S3T-Net) is developed to handle intra-individual differences in electroencephalogram (EEG) signals for accurate, robust, and generalized emotion recognition. In particular, for the 4D features extracted from the raw EEG signals, a multi-branch architecture is proposed to learn spatial-spectral cross-domain representations, which enhances the model's generalization ability. Time dependency among different spatial-spectral features is further captured via a bi-directional long short-term memory module, which employs an attention mechanism to integrate context information. Moreover, a skip-change unit is designed to add an auxiliary pathway for updating model parameters, which alleviates the vanishing gradient problem in complex spatial-temporal networks. Evaluation results show that the proposed S3T-Net outperforms other advanced models in terms of emotion recognition accuracy, yielding performance improvements of 0.23%, 0.13%, and 0.43% over the sub-optimal model in three test scenes, respectively. In addition, the effectiveness and superiority of the key components of S3T-Net are demonstrated through various experiments. As a reliable and competent emotion recognition model, the proposed S3T-Net contributes to the development of intelligent sentiment analysis in the human-computer interaction (HCI) realm.
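To make the temporal module concrete, below is a minimal PyTorch sketch of a bi-directional LSTM with additive attention over time, the general mechanism the abstract names. The feature dimension, class count, and attention form are placeholder assumptions, not the published S3T-Net architecture.

import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # one attention score per time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, feat_dim)
        h, _ = self.lstm(x)                      # (batch, time, 2 * hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over the time axis
        context = (w * h).sum(dim=1)             # weighted temporal summary
        return self.head(context)

logits = BiLSTMAttention()(torch.randn(4, 10, 64))
print(logits.shape)                              # torch.Size([4, 3])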

5.
Philos Technol ; 37(3): 92, 2024.
Article in English | MEDLINE | ID: mdl-39027378

ABSTRACT

Despite growing interest in automated (or algorithmic) decision-making (ADM), little work has been done to conceptually clarify the term. This article aims to tackle this issue by developing a conceptualization of ADM specifically tailored to organizational contexts. It has two main goals: (1) to meaningfully demarcate ADM from similar, yet distinct algorithm-supported practices; and (2) to draw internal distinctions such that different ADM types can be meaningfully distinguished. The proposed conceptualization builds on three arguments: First, ADM primarily refers to the automation of practical decisions (decisions to φ) as opposed to cognitive decisions (decisions that p). Second, rather than referring to algorithms as literally making decisions, ADM refers to the use of algorithms to solve decision problems at an organizational level. Third, since algorithmic tools by nature primarily settle cognitive decision problems, their classification as ADM depends on whether and to what extent an algorithmically generated output p has an action-triggering effect, i.e., translates into a consequential action φ. The examination of precisely this p-φ relationship allows us to pinpoint different ADM types (suggesting, offloading, superseding). Taking these three arguments into account, we arrive at the following definition: ADM refers to the practice of using algorithms to solve decision problems, where these algorithms can play a suggesting, offloading, or superseding role relative to humans, and decisions are defined as action-triggering choices.
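The p-to-φ distinction lends itself to a toy encoding; the sketch below expresses the article's three ADM types as a small Python rule. The rule is an illustrative reading of the typology, not the authors' formal definitions.

from dataclasses import dataclass

@dataclass
class AlgorithmicOutput:
    triggers_action: bool      # does the output p translate into an action phi?
    human_can_override: bool   # is a human still positioned to intervene?

def adm_type(output: AlgorithmicOutput) -> str:
    if not output.triggers_action:
        return "suggesting"    # p informs a human, who still decides whether to phi
    if output.human_can_override:
        return "offloading"    # p triggers phi by default, but a human may intervene
    return "superseding"       # p triggers phi with no human in the loop

print(adm_type(AlgorithmicOutput(triggers_action=True, human_can_override=True)))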

6.
PeerJ ; 12: e17754, 2024.
Article in English | MEDLINE | ID: mdl-39035154

ABSTRACT

Background: In recent years, the scientific community has been captivated by the autonomous sensory meridian response (ASMR), a unique phenomenon characterized by tingling sensations originating at the scalp and propagating down the spine. While anecdotal evidence suggests the therapeutic potential of ASMR, the field has witnessed a surge of scientific interest, particularly through the use of neuroimaging techniques, including functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), and physiological measures such as eye tracking (pupil diameter), heart rate (HR), heartbeat-evoked potential (HEP), blood pressure (BP), pulse rate (PR), finger photoplethysmography (PPG), and skin conductance (SC). This article is intended to provide a comprehensive overview of technology's contributions to the scientific elucidation of ASMR mechanisms. Methodology: A meticulous literature review was undertaken to identify studies that have examined ASMR using EEG and physiological measurements. The search was conducted across databases such as PubMed, Scopus, and IEEE, using relevant keywords such as 'ASMR', 'autonomous sensory meridian response', 'EEG', 'fMRI', 'electroencephalography', 'physiological measures', 'heart rate', 'skin conductance', and 'eye tracking'. This process yielded 63 PubMed and 166 Scopus articles, ensuring that a wide range of high-quality research was included in this review. Results: The review uncovered a body of research utilizing EEG and physiological measures to explore ASMR's effects. EEG studies have revealed distinct patterns of brain activity associated with ASMR experiences, particularly in regions implicated in emotional processing and sensory integration. Among the physiological measurements, a decrease in HR and increases in SC and pupil diameter indicate relaxation and heightened attention during ASMR-triggering stimuli. Conclusions: The findings of this review underscore the significance of EEG and physiological measures in unraveling the psychological and physiological effects of ASMR. ASMR experiences have been associated with unique neural signatures, while physiological measures provide valuable insights into the autonomic responses elicited by ASMR stimuli. This review highlights the interdisciplinary nature of ASMR research and emphasizes the need for further investigation to elucidate the mechanisms underlying ASMR and to explore its potential therapeutic applications, paving the way for the development of novel therapeutic interventions.


Subjects
Electroencephalography, Magnetic Resonance Imaging, Humans, Electroencephalography/methods, Heart Rate/physiology, Meridians, Photoplethysmography/methods, Blood Pressure/physiology, Eye-Tracking Technology
7.
Ergonomics ; : 1-13, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954600

ABSTRACT

This article focuses on in-car user interface icons and explores the impact of different icon structural features on visual search efficiency. Initially, we categorised the icons into four groups based on structural features: individual structure icons (ISI), enclosed structure icons (ESI), horizontal structure icons (HSI), and vertical structure icons (VSI). Subsequently, we conducted a visual search experiment with structure as the sole variable, recording participants' behavioural and eye-tracking data. Finally, the data were analysed using methods including analysis of variance and logistic regression. The results indicate that differences in icon structural features significantly affect visual search efficiency, with significant intergroup differences. HSI exhibit the highest visual search efficiency, while ESI show the lowest. ISI have shorter response times but the lowest matching accuracy. VSI perform better only than ESI. These findings hold significant implications for optimising icon design and enhancing visual search efficiency.


Visual search efficiency of icons is crucial for human-computer interaction. We investigated how the structural features of icons influence visual search efficiency. Horizontal icons are most effective, enclosed icons the least. Individual icons are quick but less accurate. Vertical icons outperform enclosed ones. Structural features should be considered in design.
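A minimal sketch of the kind of analysis reported (a one-way ANOVA of response times across the four icon groups), run here on simulated data in place of the study's behavioural and eye-tracking measurements.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
rt = {group: rng.normal(loc=mean_s, scale=0.1, size=30)   # response times in seconds (simulated)
      for group, mean_s in [("HSI", 0.9), ("ISI", 1.0), ("VSI", 1.1), ("ESI", 1.2)]}

f_stat, p_value = f_oneway(*rt.values())                  # one-way ANOVA across the four groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
for group, times in rt.items():
    print(f"{group}: mean RT = {times.mean():.3f} s")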

9.
Article in English | MEDLINE | ID: mdl-39031762

ABSTRACT

As essential components in wearable electronic devices and intelligent robots, flexible pressure sensors have enormous application value in fields such as healthcare, human-computer interaction, and intelligent perception. However, because sensors bear complex and ever-changing pressure loads in different application scenarios, great demands are placed on the flexibility and adjustability of a sensor's detection range. Developing a flexible pressure sensor with a wide and adjustable detection range, which can be applied flexibly under different pressure loads, therefore remains a major challenge in current research. In this paper, we propose a flexible pressure sensor with a wide and adjustable detection range based on an inflatable, adjustable safety airbag as the dielectric layer. The sensor uses inflatable airbags prepared with 3D printing and silicone reverse molding as the dielectric layer and achieves high sensitivity (0.6 kPa⁻¹ to 1.19 kPa⁻¹), a wide detection range (220-1500 kPa), and flexible applicability by adjusting the air pressure inside the dielectric layer. At the same time, its simple fabrication process, fast response time (100 ms), and good stability make it suitable for flexible use across different pressure-detection tasks. The experimental results indicate that the sensor has enormous potential for applications in wearable devices, healthcare, human-computer interaction, and intelligent perception recognition.
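For reference, a sensitivity quoted in kPa⁻¹ for a capacitive sensor is conventionally the relative capacitance change per unit pressure; the sketch below applies that definition to made-up calibration values chosen to land at the low end of the reported range, not to the device's published data.

def sensitivity_kpa(c0_pf, c_pf, delta_p_kpa):
    """Relative capacitance change per unit pressure, S = (dC / C0) / dP, in kPa^-1."""
    return ((c_pf - c0_pf) / c0_pf) / delta_p_kpa

# Hypothetical calibration point: baseline 10 pF rising to 40 pF under a 5 kPa load.
print(f"S = {sensitivity_kpa(10.0, 40.0, 5.0):.2f} kPa^-1")   # 0.60 kPa^-1, by construction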

10.
Neurosci Lett ; 837: 137901, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39019145

ABSTRACT

Neurological or neurodevelopmental disorders, such as Parkinson's disease and dyslexia, can impair rhythm perception and production. Deficits in rhythm are associated with poor performance in language, attention, and working memory tasks. Research indicates that retraining rhythmic skills may enhance these related cognitive functions. In this context, using tactile aids for rhythm training emerges as a promising approach for children who do not fully benefit from conventional audiovisual rhythm games. This is because tactile aids can compensate for sensory deficiencies and facilitate more extensive brain activation. In our study, we employed functional near-infrared spectroscopy (fNIRS) to assess the impact of tactile aids on brain cortical activation during rhythmic training in children aged 6-12 years (N = 25). We also measured the participants' spontaneous motor rhythms. The findings indicate that tactile stimulation significantly improves performance in synchronized rhythm tasks compared to audiovisual stimulation alone, particularly enhancing activation in brain regions associated with speech training such as the prefrontal cortex, motor cortex, and temporal areas. These results not only support the application of rhythm training in speech rehabilitation, but also highlight the potential of tactile aids as an effective multisensory learning strategy.

11.
Heliyon ; 10(12): e32979, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39021923

ABSTRACT

This paper presents the outcomes of a pioneering study that explores the potential of remote intergenerational communication to combat social isolation among children and older adults, especially under constraints posed by pandemics such as COVID-19. Acknowledging the limited mobility of many older adults, this research aims to provide insights into how digital platforms can facilitate meaningful exchanges between generations. Utilizing a mixed methodology approach, the study first conducted a user interaction analysis to outline guidelines for participant engagement with the Information and Communication Technology (ICT)-based tool called IRAGE (Intergenerational Remote Access to Gaming Experiences) designed specifically for this purpose. Following the development of the ICT tool, three sessions of the remote intergenerational experience were held, during which participants' interactions were recorded and subsequently analyzed quantitatively and qualitatively. Key findings from the study reveal that remote intergenerational communication can significantly mitigate feelings of isolation among older adults, contributing to their mental health and emotional well-being. Moreover, the study highlights the effectiveness of the web-based platform in facilitating these interactions, with older adults and children finding the user interface intuitive and the overall experience engaging. These outcomes underscore the importance of leveraging technology to maintain social connections during challenging times and offer valuable guidelines for developing ICT tools that cater to the needs of diverse user groups. By demonstrating the feasibility and benefits of remote intergenerational communication, this research contributes to the broader discourse on active aging and the role of digital technologies in promoting social inclusion and emotional health.

12.
Micromachines (Basel) ; 15(6)2024 May 24.
Article in English | MEDLINE | ID: mdl-38930663

ABSTRACT

Virtual reality technology brings a new experience to human-computer interaction, while wearable force feedback devices can enhance the immersion of users in interaction. This paper proposes a wearable fingertip force feedback device that uses a tendon drive mechanism, with the aim of simulating the stiffness characteristics of objects within virtual scenes. The device adjusts the rotation angle of the torsion spring through a DC motor, and then uses a wire to convert the torque into a feedback force at the user's index fingertips, with an output force of up to 4 N and a force change rate of up to 10 N/s. This paper introduces the mechanical structure and design process of the force feedback device, and conducts a mechanical analysis of the device to select the appropriate components. Physical and psychological experiments are conducted to comprehensively evaluate the device's performance in conveying object stiffness information. The results show that the device can simulate different stiffness characteristics of objects, and users can distinguish objects with different stiffness characteristics well when wearing the force feedback device and interacting with the three-dimensional virtual environments.
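The torque-to-force path the abstract describes (a motor sets a torsion-spring angle, and a tendon converts the resulting torque into fingertip force) can be sketched as below. The spring constant and pulley radius are invented values, not the device's specifications.

import math

def fingertip_force(k_nm_per_rad, theta_deg, pulley_radius_m):
    torque = k_nm_per_rad * math.radians(theta_deg)   # linear torsion spring: tau = k * theta
    return torque / pulley_radius_m                   # tendon tension delivered to the fingertip (N)

# e.g. k = 0.02 N*m/rad, 60 degrees of rotation, 10 mm pulley -> about 2.1 N
print(f"F = {fingertip_force(0.02, 60, 0.010):.2f} N")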

13.
JMIR AI ; 3: e52211, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38875574

ABSTRACT

BACKGROUND: Many promising artificial intelligence (AI) and computer-aided detection and diagnosis systems have been developed, but few have been successfully integrated into clinical practice. This is partially owing to a lack of user-centered design of AI-based computer-aided detection or diagnosis (AI-CAD) systems. OBJECTIVE: We aimed to assess the impact of different onboarding tutorials and levels of AI model explainability on radiologists' trust in AI and the use of AI recommendations in lung nodule assessment on computed tomography (CT) scans. METHODS: In total, 20 radiologists from 7 Dutch medical centers performed lung nodule assessment on CT scans under different conditions in a simulated use study as part of a 2×2 repeated-measures quasi-experimental design. Two types of AI onboarding tutorials (reflective vs informative) and 2 levels of AI output (black box vs explainable) were designed. The radiologists first received an onboarding tutorial that was either informative or reflective. Subsequently, each radiologist assessed 7 CT scans, first without AI recommendations. AI recommendations were shown to the radiologist, and they could adjust their initial assessment. Half of the participants received the recommendations via black box AI output and half received explainable AI output. Mental model and psychological trust were measured before onboarding, after onboarding, and after assessing the 7 CT scans. We recorded whether radiologists changed their assessment on found nodules, malignancy prediction, and follow-up advice for each CT assessment. In addition, we analyzed whether radiologists' trust in their assessments had changed based on the AI recommendations. RESULTS: Both variations of onboarding tutorials resulted in a significantly improved mental model of the AI-CAD system (informative P=.01 and reflective P=.01). After using AI-CAD, psychological trust significantly decreased for the group with explainable AI output (P=.02). On the basis of the AI recommendations, radiologists changed the number of reported nodules in 27 of 140 assessments, malignancy prediction in 32 of 140 assessments, and follow-up advice in 12 of 140 assessments. The changes were mostly an increased number of reported nodules, a higher estimated probability of malignancy, and earlier follow-up. The radiologists' confidence in their found nodules changed in 82 of 140 assessments, in their estimated probability of malignancy in 50 of 140 assessments, and in their follow-up advice in 28 of 140 assessments. These changes were predominantly increases in confidence. The number of changed assessments and radiologists' confidence did not significantly differ between the groups that received different onboarding tutorials and AI outputs. CONCLUSIONS: Onboarding tutorials help radiologists gain a better understanding of AI-CAD and facilitate the formation of a correct mental model. If AI explanations do not consistently substantiate the probability of malignancy across patient cases, radiologists' trust in the AI-CAD system can be impaired. Radiologists' confidence in their assessments was improved by using the AI recommendations.

14.
Sci Rep ; 14(1): 13126, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849422

ABSTRACT

In human-computer interaction systems, speech emotion recognition (SER) plays a crucial role because it enables computers to understand and react to users' emotions. In the past, SER research has emphasised acoustic properties extracted from speech signals. However, recent developments in deep learning and computer vision have made it possible to use visual signals to enhance SER performance. This work proposes a novel method for improving speech emotion recognition using a lightweight Vision Transformer (ViT) model. We leverage the ViT model's capability to capture spatial dependencies and high-level features in mel spectrogram inputs, which serve as adequate indicators of emotional states. To determine the efficiency of the proposed approach, we conduct a comprehensive experiment on two benchmark speech emotion datasets, the Toronto English Speech Set (TESS) and the Berlin Emotional Database (EMODB). The results demonstrate a considerable improvement in speech emotion recognition accuracy, attesting to the approach's generalizability, with accuracies of 98%, 91%, and 93% on TESS, EMODB, and the combined TESS-EMODB set, respectively. The comparative experiments show that the non-overlapping patch-based feature extraction method substantially improves speech emotion recognition compared with other state-of-the-art techniques. Our research indicates the potential of integrating vision transformer models into SER systems, opening up fresh opportunities for real-world applications requiring accurate emotion recognition from speech.
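A minimal sketch of the input side of such a pipeline: converting audio to a mel-spectrogram "image" and slicing it into non-overlapping patches as ViT tokens. The random audio buffer, sample rate, and 16x16 patch size are assumptions, and the paper's lightweight ViT itself is not reproduced here.

import numpy as np
import librosa
import torch

sr = 16000
y = np.random.randn(sr * 2).astype(np.float32)             # stand-in for a 2 s utterance
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)               # (128, time) log-mel "image"

x = torch.tensor(mel_db).unsqueeze(0).unsqueeze(0)          # shape (1, 1, 128, T)
patches = torch.nn.functional.unfold(x, kernel_size=16, stride=16)
patches = patches.transpose(1, 2)                           # (1, n_patches, 16 * 16) token sequence
print(patches.shape)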


Subjects
Emotions, Humans, Emotions/physiology, Speech/physiology, Deep Learning, Speech Recognition Software, Databases, Factual, Algorithms
15.
PeerJ Comput Sci ; 10: e2065, 2024.
Article in English | MEDLINE | ID: mdl-38855206

ABSTRACT

Emotion recognition utilizing EEG signals has emerged as a pivotal component of human-computer interaction. In recent years, with the relentless advancement of deep learning techniques, the use of deep learning to analyze EEG signals has assumed a prominent role in emotion recognition. Applying deep learning in the context of EEG-based emotion recognition carries profound practical implications. Although many modeling approaches have been proposed and some review articles have scrutinized this domain, the work has yet to undergo a comprehensive and precise classification and summarization. The existing classifications are somewhat coarse, with insufficient attention given to the potential applications within this domain. Therefore, this article systematically classifies recent developments in EEG-based emotion recognition, providing researchers with a lucid understanding of the field's various trajectories and methodologies. Additionally, it elucidates why distinct directions necessitate distinct modeling approaches. In conclusion, this article synthesizes and dissects the practical significance of EEG signals in emotion recognition, emphasizing promising avenues for future application.

16.
Sensors (Basel) ; 24(12)2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38931675

ABSTRACT

Human Activity Recognition (HAR) plays an important role in the automation of various tasks related to activity tracking in such areas as healthcare and eldercare (telerehabilitation, telemonitoring), security, ergonomics, entertainment (fitness, sports promotion, human-computer interaction, video games), and intelligent environments. This paper tackles the problem of real-time recognition and repetition counting of 12 types of exercises performed during athletic workouts. Our approach is based on the deep neural network model fed by the signal from a 9-axis motion sensor (IMU) placed on the chest. The model can be run on mobile platforms (iOS, Android). We discuss design requirements for the system and their impact on data collection protocols. We present architecture based on an encoder pretrained with contrastive learning. Compared to end-to-end training, the presented approach significantly improves the developed model's quality in terms of accuracy (F1 score, MAPE) and robustness (false-positive rate) during background activity. We make the AIDLAB-HAR dataset publicly available to encourage further research.
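As an aside on contrastive pretraining of the kind the abstract mentions, here is a minimal NT-Xent-style loss over two augmented views of the same IMU windows. The encoder, the augmentations, and the embedding size are omitted or assumed; this is not the authors' training code.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same windows."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)                # L2-normalise all 2B embeddings
    sim = z @ z.t() / temperature                              # cosine-similarity logits
    sim = sim.masked_fill(torch.eye(sim.size(0), dtype=torch.bool), float("-inf"))
    batch = z1.size(0)
    targets = (torch.arange(2 * batch) + batch) % (2 * batch)  # positive pair = the other view
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(32, 64), torch.randn(32, 64))
print(loss.item())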


Subjects
Human Activities, Neural Networks, Computer, Telemedicine, Humans, Exercise/physiology, Algorithms
17.
Sensors (Basel) ; 24(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38894117

ABSTRACT

The fast-paced evolution of technology has compelled the digitalization of education, requiring educators to interact with computers and develop digital competencies relevant to the teaching-learning process. This need has prompted various organizations to define frameworks for assessing digital competence, emphasizing teachers' interaction with computer technologies in education. Different authors have presented assessment methods for teachers' digital competence based on video analysis of recorded classes using sensors such as cameras, microphones, or electroencephalograms. The main limitation of these solutions is the large number of resources they require, making it difficult to assess large numbers of teachers in resource-constrained environments. This article proposes automating the evaluation of teachers' digital competence based on monitoring metrics obtained from teachers' interaction with a Learning Management System (LMS). Based on the Digital Competence Framework for Educators (DigCompEdu), indicators were defined and extracted that allow automatic measurement of a teacher's competency level. A tool was designed and implemented to conduct a successful proof of concept capable of automating the evaluation of all university faculty, including 987 lecturers from different fields of knowledge. The results allow conclusions to be drawn about technological adoption according to teacher profile and support the planning of educational actions to improve these competencies.
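A minimal sketch of how an LMS-derived indicator of this kind might be computed from event logs with pandas. The column names, event taxonomy, and the single "activity breadth" indicator are invented for illustration and are far simpler than the DigCompEdu indicators the article defines.

import pandas as pd

events = pd.DataFrame({
    "teacher_id": [1, 1, 1, 2, 2, 3],
    "event_type": ["quiz_created", "forum_post", "resource_uploaded",
                   "forum_post", "forum_post", "resource_uploaded"],
})

# Indicator: how many distinct kinds of digital activity each teacher performed in the LMS.
indicator = (events.groupby("teacher_id")["event_type"]
                   .nunique()
                   .rename("activity_breadth"))
print(indicator)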

18.
Sensors (Basel) ; 24(11)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38894383

ABSTRACT

Because of the absence of visual perception, visually impaired individuals encounter various difficulties in their daily lives. This paper proposes a visual aid system designed specifically for visually impaired individuals, aiming to assist and guide them in grasping target objects within a tabletop environment. The system employs a visual perception module that incorporates a semantic visual SLAM algorithm, achieved through the fusion of ORB-SLAM2 and YOLO V5s, enabling the construction of a semantic map of the environment. In the human-machine cooperation module, a depth camera is integrated into a wearable device worn on the hand, while a vibration array feedback device conveys directional information of the target to visually impaired individuals for tactile interaction. To enhance the system's versatility, a Dobot Magician manipulator is also employed to aid visually impaired individuals in grasping tasks. The performance of the semantic visual SLAM algorithm in terms of localization and semantic mapping was thoroughly tested. Additionally, several experiments were conducted to simulate visually impaired individuals' interactions in grasping target objects, effectively verifying the feasibility and effectiveness of the proposed system. Overall, this system demonstrates its capability to assist and guide visually impaired individuals in perceiving and acquiring target objects.
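One concrete step in fusing a 2D detector with a metric map is back-projecting a detection into 3D using the depth camera's pinhole model; the sketch below shows that step alone, with assumed intrinsics and a hypothetical detection, and does not reproduce the ORB-SLAM2 / YOLO V5s fusion itself.

import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0     # assumed pinhole intrinsics of the depth camera
u, v, depth_m = 350.0, 260.0, 0.75              # hypothetical detection centre (px) and its depth (m)

point_cam = np.array([(u - cx) * depth_m / fx,  # back-project pixel + depth into camera coordinates
                      (v - cy) * depth_m / fy,
                      depth_m])
print("object position in the camera frame (m):", point_cam.round(3))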


Subjects
Algorithms, Visually Impaired Persons, Wearable Electronic Devices, Humans, Visually Impaired Persons/rehabilitation, Hand Strength/physiology, Assistive Technology, Visual Perception/physiology, Semantics, Male
19.
Proc Natl Acad Sci U S A ; 121(24): e2318124121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830100

ABSTRACT

There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs; this is insufficient for making an informed decision about which LLMs are best to use in an interactive setting, and how that varies by setting. Static assessment therefore limits how we understand language model capabilities. We introduce CheckMate, an adaptable prototype platform for humans to interact with and evaluate LLMs. We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics, with a mixed cohort of participants ranging from undergraduate students to professors of mathematics. We release the resulting interaction and rating dataset, MathConverse. By analyzing MathConverse, we derive a taxonomy of human query behaviors and uncover that, despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness in LLM generations, among other findings. Further, we gain a more granular understanding of GPT-4's mathematical problem-solving through a series of case studies contributed by experienced mathematicians. We conclude with actionable takeaways for ML practitioners and mathematicians: models that communicate uncertainty, respond well to user corrections, and can provide a concise rationale for their recommendations may constitute better assistants. Humans should inspect LLM output carefully given these models' current shortcomings and potential for surprising fallibility.


Subjects
Language, Mathematics, Problem Solving, Humans, Problem Solving/physiology, Students/psychology
20.
Sci Rep ; 14(1): 14855, 2024 06 27.
Article in English | MEDLINE | ID: mdl-38937475

ABSTRACT

Exploring a novel approach to mental health technology, this study illuminates the intricate interplay between exteroception (the perception of the external world), and interoception (the perception of the internal world). Drawing on principles of sensory substitution, we investigated how interoceptive signals, particularly respiration, could be conveyed through exteroceptive modalities, namely vision and hearing. To this end, we developed a unique, immersive multisensory environment that translates respiratory signals in real-time into dynamic visual and auditory stimuli. The system was evaluated by employing a battery of various psychological assessments, with the findings indicating a significant increase in participants' interoceptive sensibility and an enhancement of the state of flow, signifying immersive and positive engagement with the experience. Furthermore, a correlation between these two variables emerged, revealing a bidirectional enhancement between the state of flow and interoceptive sensibility. Our research is the first to present a sensory substitution approach for substituting between interoceptive and exteroceptive senses, and specifically as a transformative method for mental health interventions, paving the way for future research.
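A toy version of the real-time interoceptive-to-exteroceptive mapping the study describes, converting a breathing trace into a tone frequency and a screen-brightness value. The synthetic signal and the mapping ranges are assumptions, not the study's actual parameters.

import numpy as np

t = np.linspace(0, 60, 600)                     # 60 s sampled at 10 Hz
breath = np.sin(2 * np.pi * 0.25 * t)           # ~15 breaths per minute, range [-1, 1] (synthetic)

norm = (breath - breath.min()) / (breath.max() - breath.min())
tone_hz = 220 + norm * (440 - 220)              # inhalation raises the tone's pitch
brightness = 0.2 + norm * 0.8                   # and brightens the visual stimulus
print(tone_hz[:5].round(1), brightness[:5].round(2))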


Subjects
Interoception, Humans, Interoception/physiology, Female, Male, Adult, Young Adult, Acoustic Stimulation, Respiration, Photic Stimulation