ABSTRACT
OBJECTIVES: The JAVELIN Bladder 100 phase 3 trial showed that avelumab first-line maintenance + best supportive care significantly prolonged overall survival and progression-free survival versus best supportive care alone in patients with advanced urothelial carcinoma who were progression-free following first-line platinum-based chemotherapy. We report findings from J-AVENUE (NCT05431777), a real-world study of avelumab first-line maintenance therapy in Japan. METHODS: Medical charts of patients with advanced urothelial carcinoma without disease progression following first-line platinum-based chemotherapy, who received avelumab maintenance between February and November 2021, were reviewed. Patients were followed until June 2022. The primary endpoint was patient characteristics; secondary endpoints included time to treatment failure and progression-free survival. RESULTS: In 79 patients analyzed, median age was 72 years (range, 44-86). Primary tumor site was upper tract in 45.6% and bladder in 54.4%. The most common first-line chemotherapy regimen was cisplatin + gemcitabine (63.3%). Median number of chemotherapy cycles received was four. Best response to chemotherapy was complete response in 10.1%, partial response in 58.2%, and stable disease in 31.6%. Median treatment-free interval before avelumab was 4.9 weeks. With avelumab first-line maintenance therapy, the disease control rate was 58.2%, median time to treatment failure was 4.6 months (95% CI, 3.3-6.4), and median progression-free survival was 6.1 months (95% CI, 3.6-9.7). CONCLUSIONS: Findings from J-AVENUE show the effectiveness of avelumab first-line maintenance in patients with advanced urothelial carcinoma in Japan in clinical practice, with similar progression-free survival to JAVELIN Bladder 100 and previous real-world studies, supporting its use as a standard of care.
Subject(s)
Antibodies, Monoclonal, Humanized; Carcinoma, Transitional Cell; Maintenance Chemotherapy; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged; Antibodies, Monoclonal, Humanized/therapeutic use; Antibodies, Monoclonal, Humanized/administration & dosage; Antineoplastic Agents, Immunological/therapeutic use; Antineoplastic Combined Chemotherapy Protocols/therapeutic use; Carcinoma, Transitional Cell/drug therapy; Carcinoma, Transitional Cell/mortality; Carcinoma, Transitional Cell/pathology; Japan; Maintenance Chemotherapy/methods; Progression-Free Survival; Retrospective Studies; Treatment Outcome; Urinary Bladder Neoplasms/drug therapy; Urinary Bladder Neoplasms/pathology; Urinary Bladder Neoplasms/mortality; Urologic Neoplasms/drug therapy; Urologic Neoplasms/mortality; Urologic Neoplasms/pathology
ABSTRACT
OBJECTIVE: This study evaluated whether preoperative radiographs accurately predict intra-articular cartilage damage in varus knees. METHODS: The study assessed 181 knees in 156 patients who underwent total knee arthroplasty. Cartilage damage was graded by two examiners using the International Cartilage Repair Society classification; one used knee radiographs and the other used intraoperative photographs. It was then determined whether the radiographic cartilage assessment over- or underestimated the actual damage severity. Knee morphological characteristics affecting radiographic misestimation of damage severity were also identified. RESULTS: The concordance rate between radiographic and intraoperative assessments of the medial femoral condyle was high, at around 0.7. Large discrepancies were found for the lateral femoral condyle and medial trochlear groove. Radiographic assessment underestimated cartilage damage on the medial side of the lateral femoral condyle in cases of a large lateral tibiofemoral joint opening and severe varus alignment (both r = -0.43). Medial trochlear damage was also underdiagnosed in cases of residual medial tibiofemoral cartilage and a shallow medial tibial slope (r = -0.25 and -0.21, respectively). CONCLUSIONS: Radiographic evaluation of knee osteoarthritis using International Cartilage Repair Society grades was moderately practical. Cartilage damage of the lateral femoral condyle and medial trochlea tended to be misestimated, but considering morphologic factors might improve diagnostic accuracy.
Subject(s)
Arthroplasty, Replacement, Knee; Cartilage, Articular; Femur; Knee Joint; Osteoarthritis, Knee; Radiography; Severity of Illness Index; Humans; Osteoarthritis, Knee/diagnostic imaging; Osteoarthritis, Knee/pathology; Osteoarthritis, Knee/complications; Osteoarthritis, Knee/surgery; Female; Male; Cartilage, Articular/diagnostic imaging; Cartilage, Articular/pathology; Aged; Middle Aged; Knee Joint/diagnostic imaging; Knee Joint/pathology; Radiography/methods; Femur/diagnostic imaging; Femur/pathology; Aged, 80 and over
ABSTRACT
Intelligent transportation systems encompass a series of technologies and applications that exchange information to improve road traffic and avoid accidents. Numerous studies indicate that human error causes most road accidents worldwide. For this reason, it is essential to model driver behavior to improve road safety. This paper presents a Fuzzy Rule-Based System for classifying drivers into different profiles according to their behavior. The system's knowledge base includes an ontology and a set of driving rules. The ontology models the main entities related to driver behavior and their relationships with the traffic environment. The driving rules help the inference system make decisions in different situations according to traffic regulations. The classification system has been integrated into an intelligent transportation architecture. Based on the user's driving style, the driving assistance system sends recommendations, such as adjusting speed or choosing alternative routes, that help prevent or mitigate negative transportation events such as road crashes or traffic jams. We carried out a set of experiments to test the expressiveness of the ontology and the effectiveness of the overall classification system in different simulated traffic situations. The results show that the ontology is expressive enough to model the knowledge of the proposed traffic scenarios, with an F1 score of 0.9. In addition, the system classifies drivers' behavior correctly, with an F1 score of 0.84, outperforming Random Forest and Naive Bayes classifiers. In the simulation experiments, most of the drivers who were recommended an alternative route experienced an average time gain of 66.4%, showing the utility of the proposal.
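The F1 scores reported above combine precision and recall into a single figure. A minimal sketch of the computation, using hypothetical confusion-matrix counts (not taken from the study):

```python
# Minimal illustration of the F1 score used to evaluate the driver classifier.
# The counts below are hypothetical and not taken from the study.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 84 drivers correctly profiled, 16 false positives, 16 false negatives
print(round(f1_score(84, 16, 16), 2))  # 0.84
```

In practice a library implementation (e.g. scikit-learn's `f1_score`) would be used; the point here is only how precision and recall trade off in the reported metric.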
Subject(s)
Accidents, Traffic; Automobile Driving; Humans; Accidents, Traffic/prevention & control; Bayes Theorem; Transportation; Computer Simulation
ABSTRACT
Motor executions alter sensory processes. Studies have shown that loudness perception changes when a sound is generated by active movement. However, it remains unknown whether and how motor-related changes in loudness perception depend on the task demands of motor execution. We examined whether different levels of precision demand in motor control affect loudness perception. We carried out a loudness discrimination test in which the sound stimulus was produced in conjunction with a force generation task. We tested three target force amplitude levels. The force target was presented on a monitor as a fixed visual target, and the generated force was presented on the same monitor as a moving visual cursor. Participants adjusted their force amplitude within a predetermined range, without overshooting, using the visual target and moving cursor. In the control condition, the sound and visual stimuli were generated externally (without a force generation task). We found that discrimination performance was significantly improved when the sound was produced by the force generation task compared to the control condition, in which the sound was produced externally, although this improvement did not change depending on the target force amplitude level. The results suggest that the demand for precise control to produce a fixed amount of force may be key to obtaining the facilitatory effect of motor execution on auditory processes.
Subject(s)
Loudness Perception; Sound; Acoustic Stimulation; Cognition; Humans; Movement
ABSTRACT
Speech learning requires precise motor control, but it likewise requires transient storage of information to enable the adjustment of upcoming movements based on the success or failure of previous attempts. The contribution of somatic sensory memory for limb position has been documented in work on arm movement; in speech, however, the sensory support for production comes from both somatosensory and auditory inputs, so sensory memory for sounds, somatic inputs, or both might contribute to learning. In the present study, adaptation to altered auditory feedback was used as an experimental model of speech motor learning. Participants also underwent tests of both auditory and somatic sensory memory. We found that although auditory memory for speech sounds is better than somatic memory for speechlike facial skin deformations, somatic sensory memory predicts adaptation, whereas auditory sensory memory does not. Thus, even though speech relies substantially on auditory inputs, and in the present manipulation adaptation requires the minimization of auditory error, it is somatic inputs that provide the memory support for learning.
NEW & NOTEWORTHY: In speech production, almost everyone achieves an exceptionally high level of proficiency. This is remarkable because speech involves some of the smallest and most carefully timed movements of which we are capable. The present paper demonstrates that sensory memory contributes to speech motor learning. Moreover, we report the surprising result that somatic sensory memory predicts speech motor learning, whereas auditory memory does not.
Subject(s)
Memory; Motor Skills; Speech; Adolescent; Adult; Female; Humans; Male; Speech Perception; Visual Perception
ABSTRACT
The human tongue is atypical as a motor system since its movement is determined by deforming its soft tissues via muscles that are in large part embedded in it (muscular hydrostats). However, the neurophysiological mechanisms enabling fine tongue motor control are not well understood. We investigated sensorimotor control mechanisms of the tongue through a perturbation experiment. A mechanical perturbation was applied to the tongue during the articulation of three vowels (/i/, /e/, /ε/) under conditions of voicing, whispering, and posturing. Tongue movements were measured at three surface locations in the sagittal plane using electromagnetic articulography. We found that the displacement induced by the external force was quickly compensated for. Individual sensors did not return to their original positions but went toward a position on the original tongue contour for that vowel. The amplitude of compensatory response at each tongue site varied systematically according to the articulatory condition. A mathematical simulation that included reflex mechanisms suggested that the observed compensatory response can be attributed to a reflex mechanism, rather than passive tissue properties. The results provide evidence for the existence of quick compensatory mechanisms in the tongue that may be dependent on tunable reflexes. The tongue posture for vowels could be regulated in relation to the shape of the tongue contour, rather than to specific positions for individual tissue points.
NEW & NOTEWORTHY: This study presents evidence of quick compensatory mechanisms in tongue motor control for speech production. The tongue posture is controlled not in relation to a specific tongue position, but to the shape of the tongue contour to achieve specific speech sounds. Modulation of compensatory responses due to task demands and mathematical simulations support the idea that the quick compensatory response is driven by a reflex mechanism.
Asunto(s)
Actividad Motora/fisiología , Postura/fisiología , Reflejo/fisiología , Habla/fisiología , Lengua/fisiología , Adolescente , Adulto , Femenino , Humanos , Masculino , Adulto JovenRESUMEN
Alzheimer's disease (AD) is characterized by the formation of extracellular amyloid plaques containing the amyloid β-protein (Aβ) within the brain parenchyma. Aβ is considered the key pathogenic factor in AD. We recently showed that the angiotensin II type 1 receptor (AT1R), which regulates blood pressure, is involved in Aβ production, and that telmisartan (Telm), an angiotensin II receptor blocker (ARB), increases Aβ production via AT1R. However, the precise mechanism by which AT1R is involved in Aβ production is unknown. Interestingly, AT1R, a G protein-coupled receptor, has been strongly suggested to participate in signal transduction by heterodimerizing with the β2-adrenergic receptor (β2-AR), which has also been implicated in Aβ generation. Therefore, in this study, we aimed to clarify whether the interaction between AT1R and β2-AR is involved in the regulation of Aβ production. To address this, we analyzed whether the increase in Aβ production caused by Telm treatment is affected by β-AR antagonists, using fibroblasts overexpressing amyloid precursor protein (APP). We found that the increase in Aβ production caused by Telm treatment was reduced more strongly by the β2-AR-selective antagonist ICI-118551 than by β1-AR-selective antagonists. Furthermore, deficiency of AT1R abolished the effect of the β2-AR antagonist on the stimulation of Aβ production caused by Telm. Taken together, the interaction between AT1R and β2-AR is likely involved in Aβ production.
Subject(s)
Amyloid beta-Peptides/metabolism; Receptor, Angiotensin, Type 1/metabolism; Receptors, Adrenergic, beta-2/metabolism; Adrenergic beta-Antagonists/pharmacology; Angiotensin II Type 1 Receptor Blockers/pharmacology; Animals; Atenolol/pharmacology; Bisoprolol/pharmacology; Cells, Cultured; Mice, Inbred C57BL; Propanolamines/pharmacology; Propranolol/pharmacology; Telmisartan/pharmacology
ABSTRACT
Somatosensory stimulation associated with facial skin deformation has been developed and efficiently applied in the study of speech production and speech perception. However, the technique has been limited to a simplified unidirectional pattern of stimulation and cannot reproduce realistic stimulation patterns related to multidimensional orofacial gestures. To overcome this limitation, a new multi-actuator system is developed that can synchronously deform the facial skin in multiple directions. The first prototype provides stimulation in two directions, and its efficiency is evaluated using a temporal order judgement test involving vertical and horizontal facial skin stretches at the sides of the mouth.
Subject(s)
Speech Perception; Speech; Face; Gestures; Mouth
ABSTRACT
Speech motor control and learning rely on both somatosensory and auditory inputs. Somatosensory inputs associated with speech production can also affect the auditory perception of speech, and this somatosensory-auditory interaction may play a fundamental role in speech perception. In this report, we show that the somatosensory system contributes to perceptual recalibration, separate from its role in motor function. Subjects participated in speech motor adaptation to altered auditory feedback. Auditory perception of speech was assessed in phonemic identification tests before and after speech adaptation. To investigate the role of the somatosensory system in motor adaptation and the subsequent perceptual change, we applied orofacial skin stretch in either a backward or forward direction during the auditory feedback alteration as a somatosensory modulation. We found that the somatosensory modulation did not affect the amount of adaptation at the end of training, although it changed the rate of adaptation. However, the perception following speech adaptation was altered depending on the direction of the somatosensory modulation. Somatosensory inflow rather than motor outflow thus drives changes to auditory perception of speech following speech adaptation, suggesting that somatosensory inputs play an important role in the tuning of the perceptual system.
NEW & NOTEWORTHY: This article reports that the calibration of auditory speech perception by speech production is driven predominantly by the somatosensory system rather than equally by the motor and somatosensory systems.
Subject(s)
Adaptation, Physiological/physiology; Face/physiology; Feedback, Sensory/physiology; Psycholinguistics; Speech Perception/physiology; Speech/physiology; Adolescent; Adult; Female; Humans; Lip/physiology; Male; Young Adult
ABSTRACT
Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles but also with other components of the road infrastructure through different applications. Sensors are one of the most important information sources in these systems. Sensors can be located within vehicles or be part of the infrastructure, such as bridges, roads, or traffic signs. They can provide information related to weather conditions and the traffic situation, which is useful for improving driving. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper, an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors.
ABSTRACT
Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach that presents an articulatory target in real time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for an L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners on the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.
Subject(s)
Audiovisual Aids; Feedback, Sensory; Formative Feedback; Multilingualism; Photic Stimulation; Teaching/methods; Acoustic Stimulation; Adult; Computer Systems; Cues; Electromagnetic Phenomena; Humans; Male; Palate/physiology; Phonetics; Tongue/physiology; Young Adult
ABSTRACT
PURPOSE: Orofacial somatosensory inputs play an important role in speech motor control and speech learning. Since receiving specific auditory-somatosensory inputs during speech perceptual training alters speech perception, similar perceptual training could also alter speech production. We examined whether production performance was changed by perceptual training with orofacial somatosensory inputs. METHOD: We focused on the French vowels /e/ and /ø/, contrasted in their articulation by horizontal gestures. Perceptual training consisted of a vowel identification task contrasting /e/ and /ø/. Along with training, for the first group of participants, somatosensory stimulation was applied as facial skin stretch in the backward direction. We recorded the target vowels uttered by the participants before and after the perceptual training and compared their F1, F2, and F3 formants. We also tested a control group with no somatosensory stimulation and another somatosensory group with a different vowel continuum (/e/-/i/) for perceptual training. RESULTS: Perceptual training with somatosensory stimulation induced changes in F2 and F3 of the produced vowel sounds. F2 decreased consistently in the two somatosensory groups. F3 increased following the /e/-/ø/ training and decreased following the /e/-/i/ training. The F2 change was significantly correlated with the perceptual shift between the first and second halves of the training phase in the somatosensory group with the /e/-/ø/ training, but not with the /e/-/i/ training. The control group displayed no effect on F2 and F3, and only a tendency toward an F1 increase. CONCLUSION: The results suggest that somatosensory inputs associated with speech sounds can play a role in speech training and learning in both production and perception.
Subject(s)
Phonetics; Speech Perception; Speech; Humans; Speech Perception/physiology; Female; Male; Young Adult; Speech/physiology; Adult; Face/physiology; Learning/physiology
ABSTRACT
A quick correction mechanism of the tongue was previously observed experimentally in speech posture stabilization in response to a sudden tongue stretch perturbation. Given its relatively short latency (< 150 ms), the response could be driven by somatosensory feedback alone. The current study assessed this hypothesis by examining whether the response is induced in the absence of auditory feedback. We compared the response under two auditory conditions: normal versus masked auditory feedback. Eleven participants were tested. They were asked to whisper the vowel /e/ for a few seconds. The tongue was stretched horizontally with step patterns of force (1 N for 1 s) using a robotic device. Articulatory positions were recorded using electromagnetic articulography simultaneously with the produced sound. The tongue perturbation was applied randomly and unpredictably in one-fifth of trials, and the two auditory conditions were tested in random order. A quick compensatory response was induced in a similar way to the previous study. We found that the amplitudes of the compensatory responses were not significantly different between the two auditory conditions, either for the tongue displacement or for the produced sounds. These results suggest that the observed quick correction mechanism is primarily based on somatosensory feedback. This correction mechanism could be learned in such a way as to maintain the auditory goal on the sole basis of somatosensory feedback.
ABSTRACT
Although there is no doubt from an empirical viewpoint that reflex mechanisms can contribute to tongue motor control in humans, there is limited neurophysiological evidence to support this idea. Previous results failing to observe any tonic stretch reflex in the tongue had reduced the likelihood of a reflex contribution to tongue motor control. The current study presents experimental evidence of a human tongue reflex in response to a sudden stretch while holding a posture for speech. The latency was relatively long (50 ms), possibly mediated through a cortical arc. The activation peak in a speech task was greater than in a non-speech task while background activation levels were similar in both tasks, and the peak amplitude in the speech task was not modulated by an additional task requiring a voluntary reaction to the perturbation. Computer simulations with a simplified linear mass-spring-damper model showed that the recorded muscle activation response is suited to generating the tongue movement responses observed in a previous study with the appropriate timing, when taking into account a possible physiological delay between reflex muscle activation and the corresponding force. Our results clearly demonstrate that reflex mechanisms contribute to tongue posture stabilization for speech production.
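The simplified linear mass-spring-damper simulation mentioned above can be sketched as follows; the parameter values, the step-force profile, and the 50 ms onset delay standing in for the physiological delay are illustrative assumptions, not the authors' settings:

```python
# Sketch of a simplified linear mass-spring-damper model, m*x'' + c*x' + k*x = F(t),
# driven by a step force whose onset is delayed to mimic a physiological delay.
# All parameter values are illustrative assumptions, not taken from the study.

def simulate(m=0.01, c=0.5, k=20.0, force=1.0, delay=0.05, dt=0.001, t_end=0.5):
    x, v = 0.0, 0.0          # displacement (m) and velocity (m/s)
    trajectory = []
    for i in range(int(t_end / dt)):
        t = i * dt
        f = force if t >= delay else 0.0  # force onset after the delay
        a = (f - c * v - k * x) / m       # Newton's second law
        v += a * dt                        # semi-implicit (symplectic) Euler step
        x += v * dt
        trajectory.append(x)
    return trajectory

traj = simulate()
# the displacement settles toward the static equilibrium F/k = 0.05 m
```

The qualitative point matches the abstract's argument: the timing and size of the movement response follow from the muscle force profile once the delay and the tissue's spring-damper properties are accounted for.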
Subject(s)
Reflex; Speech; Humans; Electromyography; Postural Balance; Tongue; Muscle, Skeletal/physiology
ABSTRACT
Intergroup contact occurring through indirect means such as the internet has the potential to improve intergroup relationships and may be especially beneficial in high-conflict situations. Here we conducted a three-timepoint online experiment to ascertain whether the use of a conversational agent in E-contact platforms could mitigate interethnic prejudice and hostility among Afghanistan's historically segregated and persistently conflictual ethnic groups. A total of 128 Afghans of Pashtun, Tajik, and Hazara backgrounds were assigned to one of four E-contact conditions (a control with no conversational agent and three experimental groups that varied in the conversational agent settings). Participants in the experimental conditions contributed more ideas and longer opinions and showed a greater reduction in outgroup prejudice and anxiety than those in the control group. These findings demonstrate that E-contact facilitated by a conversational agent can improve intergroup attitudes even in contexts characterized by a long history of intergroup segregation and conflict.
ABSTRACT
Introduction: Orofacial somatosensory inputs modify the perception of speech sounds. Such auditory-somatosensory integration likely develops alongside speech production acquisition. We examined whether the somatosensory effect in speech perception varies depending on individual characteristics of speech production. Methods: The somatosensory effect in speech perception was assessed by changes in the category boundary between /e/ and /ø/ in a vowel identification test resulting from somatosensory stimulation, in which facial skin deformation in the rearward direction, corresponding to the articulatory movement for /e/, was applied together with the auditory input. Speech production performance was quantified by the acoustic distances between the average first, second, and third formants of /e/ and /ø/ utterances recorded in a separate test. Results: The category boundary between /e/ and /ø/ was significantly shifted towards /ø/ due to the somatosensory stimulation, consistent with previous research. The amplitude of the category boundary shift was significantly correlated with the acoustic distance between the mean second (and, marginally, third) formants of /e/ and /ø/ productions, with no correlation with the first formant distance. Discussion: Greater acoustic distances can be related to larger contrasts between the articulatory targets of vowels in speech production. These results suggest that the somatosensory effect in speech perception can be linked to speech production performance.
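The per-formant acoustic distance used above to quantify production performance can be sketched as follows; the formant values (in Hz) are invented for illustration and are not taken from the study:

```python
# Sketch of the acoustic-distance measure described above: the absolute
# distance between the average formants of two vowel categories.
# All formant values (Hz) are invented, roughly plausible for French /e/ and /ø/.

def mean(values):
    return sum(values) / len(values)

def formant_distance(utterances_a, utterances_b, formant_index):
    """Distance between average formants; index 0 = F1, 1 = F2, 2 = F3."""
    mean_a = mean([u[formant_index] for u in utterances_a])
    mean_b = mean([u[formant_index] for u in utterances_b])
    return abs(mean_a - mean_b)

e_tokens = [(390, 2250, 2950), (410, 2350, 3050)]   # hypothetical /e/ tokens
oe_tokens = [(370, 1550, 2350), (390, 1650, 2450)]  # hypothetical /ø/ tokens

f2_dist = formant_distance(e_tokens, oe_tokens, 1)  # F2 separation in Hz
```

A larger F2 separation between the two vowel categories would, per the abstract's finding, predict a larger somatosensory shift of the perceptual boundary.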
ABSTRACT
The advent of Artificial Intelligence (AI) is fostering the development of innovative methods of communication and collaboration. Integrating AI into Information and Communication Technologies (ICTs) is ushering in an era of social progress that has the potential to empower marginalized groups. This transformation paves the way to a digital inclusion that could qualitatively empower the online presence of women, particularly in conservative and male-dominated regions. To explore this possibility, we investigated the effect of integrating conversational agents into online debates involving 240 Afghans discussing the fall of Kabul in August 2021. We found that the agent leads to quantitative differences in how both genders contribute to the debate by raising issues, presenting ideas, and articulating arguments. We also found increased ideation and reduced inhibition for both genders, particularly females, when interacting exclusively with other females or the agent. The enabling character of the conversational agent reveals an apparatus that could empower women and increase their agency on online platforms.
Subject(s)
Artificial Intelligence; Communication; Humans; Female; Male; Inhibition, Psychological; Mental Processes
ABSTRACT
Interactions between auditory and somatosensory information are relevant to the neural processing of speech, since speech processing, and certainly speech production, involves both auditory information and inputs that arise from the muscles and tissues of the vocal tract. We previously demonstrated that somatosensory inputs associated with facial skin deformation alter the perceptual processing of speech sounds. We show here that the reverse is also true: speech sounds alter the perception of facial somatosensory inputs. As a somatosensory task, we used a robotic device to create patterns of facial skin deformation that would normally accompany speech production. We found that the perception of the facial skin deformation was altered by speech sounds in a manner that reflects the way in which auditory and somatosensory effects are linked in speech production. The modulation of orofacial somatosensory processing by auditory inputs was specific to speech and likewise to facial skin deformation. Somatosensory judgments were not affected when the skin deformation was delivered to the forearm or palm or when the facial skin deformation accompanied nonspeech sounds. The perceptual modulation that we observed in conjunction with speech sounds shows that speech sounds specifically affect neural processing in the facial somatosensory system and suggests the involvement of the somatosensory system in both the production and perceptual processing of speech.
Subject(s)
Face/physiology; Skin Physiological Phenomena; Somatosensory Cortex/physiology; Speech Perception/physiology; Speech/physiology; Touch Perception/physiology; Touch/physiology; Female; Humans; Male; Young Adult
ABSTRACT
Somatosensory signals from the facial skin and muscles of the vocal tract provide a rich source of sensory input in speech production. We show here that the somatosensory system is also involved in the perception of speech. We use a robotic device to create patterns of facial skin deformation that would normally accompany speech production. We find that when we stretch the facial skin while people listen to words, it alters the sounds they hear. The systematic perceptual variation we observe in conjunction with speech-like patterns of skin stretch indicates that somatosensory inputs affect the neural processing of speech sounds and shows the involvement of the somatosensory system in the perceptual processing in speech.
Subject(s)
Sensation/physiology; Skin Physiological Phenomena; Speech Perception/physiology; Adult; Humans; Phonetics; Time Factors
ABSTRACT
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, such as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ was changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli to expose participants to stimulus variants that covered the range between /ε/ and /a/ evenly. In this phase, to mimic the sensory input that accompanies speech production and learning, somatosensory stimulation was applied in the upward direction in an experimental group when the stimulus sound was presented. A control group (CTL) followed the same training procedure in the absence of somatosensory stimulation. When we compared category boundaries prior to and following paired auditory-somatosensory training, the boundary for participants in the experimental group reliably changed in the direction of /ε/, indicating that the participants perceived /a/ more often than /ε/ as a consequence of training. In contrast, the CTL group did not show any change. Although only a limited number of participants were retested, the perceptual shift was reduced and almost eliminated 1 week later.
Our data suggest that repetitive exposure to somatosensory inputs, in a task that simulates the sensory pairing occurring during speech production, changes the perceptual system, supporting the idea that somatosensory inputs play a role in speech perceptual adaptation and probably contribute to the formation of sound representations for speech perception.
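The category-boundary measure underlying the identification tests can be illustrated with a minimal sketch: the boundary is the point on the vowel continuum at which listeners report /a/ on 50% of trials. The continuum steps and response proportions below are invented for the example, and the simple interpolation stands in for (and does not reproduce) the authors' adaptive maximum-likelihood procedure:

```python
# Illustrative sketch (not the authors' code) of locating a category boundary
# on an /ε/-/a/ continuum: the stimulus step at which listeners report /a/
# on 50% of trials. Response proportions below are invented for the example.

def category_boundary(steps, p_a):
    """Linearly interpolate the 50% crossing of the identification curve."""
    for (s0, p0), (s1, p1) in zip(zip(steps, p_a), zip(steps[1:], p_a[1:])):
        if p0 <= 0.5 <= p1:
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("no 50% crossing in the measured range")

steps = [1, 2, 3, 4, 5, 6, 7]  # stimulus continuum from /ε/ (1) to /a/ (7)
before = [0.02, 0.05, 0.20, 0.45, 0.75, 0.95, 0.99]  # proportion of /a/ responses
after  = [0.05, 0.15, 0.40, 0.70, 0.90, 0.98, 1.00]

# A positive shift means the boundary moved toward /ε/,
# i.e. more stimuli were heard as /a/ after training.
shift = category_boundary(steps, before) - category_boundary(steps, after)
```

A full psychometric treatment would fit a sigmoid (e.g. a logistic function) by maximum likelihood rather than interpolate, but the boundary-shift logic is the same.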