Results 1-9 of 9
1.
Leukemia; 33(10): 2403-2415, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30940908

ABSTRACT

Acute myeloid leukemia (AML) is a devastating disease, with the majority of patients dying within a year of diagnosis. For patients with relapsed/refractory AML, the prognosis with currently available treatments is particularly poor. Although genetically heterogeneous, AML subtypes share a common differentiation arrest at hematopoietic progenitor stages. Overcoming this differentiation arrest has the potential to improve long-term survival, as is the case in acute promyelocytic leukemia (APL), which is characterized by a chromosomal translocation involving the retinoic acid receptor alpha gene. Treatment of APL with all-trans retinoic acid (ATRA) induces terminal differentiation and apoptosis of leukemic promyelocytes, resulting in cure rates of over 80%. Unfortunately, similarly efficacious differentiation therapies have, to date, been lacking outside of APL. Inhibition of dihydroorotate dehydrogenase (DHODH), a key enzyme in the de novo pyrimidine synthesis pathway, was recently reported to induce differentiation of diverse AML subtypes. In this report, we describe the discovery and characterization of BAY 2402234, a novel, potent, selective, and orally bioavailable DHODH inhibitor that shows monotherapy efficacy and differentiation induction across multiple AML subtypes. Herein, we present the preclinical data that led to the initiation of a phase I evaluation of this inhibitor in myeloid malignancies.


Subject(s)
Antineoplastic Agents/pharmacology; Cell Differentiation/drug effects; Enzyme Inhibitors/pharmacology; Leukemia, Myeloid, Acute/drug therapy; Oxidoreductases Acting on CH-CH Group Donors/antagonists & inhibitors; Animals; Apoptosis/drug effects; Cell Line, Tumor; Dihydroorotate Dehydrogenase; Female; HL-60 Cells; Humans; Leukemia, Myeloid, Acute/metabolism; Leukemia, Promyelocytic, Acute/drug therapy; Leukemia, Promyelocytic, Acute/metabolism; Mice; Mice, Inbred NOD; Mice, SCID; Pyrimidines/metabolism; THP-1 Cells; Translocation, Genetic/drug effects
2.
Hum Brain Mapp; 40(7): 2174-2187, 2019 May.
Article in English | MEDLINE | ID: mdl-30666737

ABSTRACT

While the significance of auditory cortical regions for the development and maintenance of speech motor coordination is well established, the contribution of somatosensory brain areas to learned vocalizations such as singing is less well understood. To probe these mechanisms, we applied intermittent theta burst stimulation (iTBS), a facilitatory repetitive transcranial magnetic stimulation (rTMS) protocol, over the right somatosensory larynx cortex (S1) and a nonvocal dorsal S1 control area in participants without singing experience. A pitch-matching singing task was performed before and after iTBS to assess corresponding effects on vocal pitch regulation. When participants could monitor auditory feedback from their own voice during singing (Experiment I), no difference in pitch-matching performance was found between iTBS sessions. However, when auditory feedback was masked with noise (Experiment II), only larynx-S1 iTBS enhanced pitch accuracy (50-250 ms after sound onset) and pitch stability (from 250 ms after sound onset until the end of the tone). These results indicate that somatosensory feedback plays a dominant role in vocal pitch regulation when acoustic feedback is masked. The acoustic changes moreover suggest that right larynx-S1 stimulation affected the preparation and involuntary regulation of vocal pitch accuracy, and that kinesthetic-proprioceptive processes play a role in the voluntary control of pitch stability in nonsingers. Together, these data provide evidence for a causal involvement of right larynx-S1 in vocal pitch regulation during singing.
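Pitch accuracy and stability of this kind are conventionally quantified as deviations from the target pitch in cents. The abstract does not specify the exact measures used; as a rough illustration with made-up F0 values, a minimal Python sketch:

```python
# Minimal sketch: deviation of produced F0 from a target pitch in cents,
# the standard unit for pitch-matching accuracy. Placeholder data only.
import numpy as np

def cents_deviation(f_produced, f_target):
    """Signed deviation of produced F0 from the target pitch, in cents."""
    return 1200.0 * np.log2(np.asarray(f_produced) / f_target)

# Example: a sung F0 track fluctuating around a 220 Hz target.
f0_track = [216.0, 218.5, 221.0, 222.3, 219.8]
dev = cents_deviation(f0_track, 220.0)
print(f"accuracy (mean |deviation|): {np.abs(dev).mean():.1f} cents")
print(f"stability (std of deviation): {dev.std():.1f} cents")
```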


Subject(s)
Functional Laterality/physiology; Larynx/physiology; Pitch Perception/physiology; Singing/physiology; Somatosensory Cortex/physiology; Theta Rhythm/physiology; Acoustic Stimulation/methods; Adult; Female; Humans; Magnetic Resonance Imaging/methods; Male; Somatosensory Cortex/diagnostic imaging; Transcranial Magnetic Stimulation/methods; Young Adult
3.
J Acoust Soc Am; 141(3): 2224, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28372147

ABSTRACT

By varying the dynamics in a musical performance, the musician can convey structure and different expressions. Spectral properties of most musical instruments change in a complex way with the performed dynamics, but dedicated audio features for modeling this parameter have been lacking. In this study, feature extraction methods were developed to capture relevant attributes related to spectral characteristics and spectral fluctuations, the latter through a sectional spectral flux. Previously, ground-truth ratings of performed dynamics had been collected by asking listeners to rate how softly/loudly the musicians played in a set of audio files. The ratings, averaged over subjects, were used to train three different machine learning models, using the audio features developed for the study as input. The best result was produced by an ensemble of multilayer perceptrons, with an R² of 0.84. This result appears to be close to the upper bound, given the estimated uncertainty of the ground-truth data. It is well above the performance of individual human listeners in the previous listening experiment, and on par with the performance achieved by averaging the ratings of six listeners. Features were analyzed with a factorial design, which highlighted the importance of source separation in the feature extraction.
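The paper's sectional spectral flux is not specified in this abstract; as a rough illustration of the underlying idea, the sketch below computes a standard half-wave-rectified spectral flux over short frames. All names and parameter values are illustrative assumptions, not the study's implementation.

```python
# Frame-wise spectral flux: the half-wave-rectified change in the magnitude
# spectrum between adjacent frames. Frame length, hop, and window are
# illustrative choices, not the paper's settings.
import numpy as np

def spectral_flux(signal, frame_len=2048, hop=512):
    """Per-frame sum of positive magnitude-spectrum changes."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    flux, prev_mag = [], None
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        if prev_mag is not None:
            # Only spectral increases contribute, emphasizing onsets/attacks.
            flux.append(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
    return np.asarray(flux)

# Example: flux of a 1 s, 440 Hz tone with a rising loudness ramp (synthetic).
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) * np.linspace(0.1, 1.0, sr)
print(spectral_flux(tone).mean())
```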


Subject(s)
Auditory Perception; Machine Learning; Models, Psychological; Music; Periodicity; Time Perception; Acoustic Stimulation; Acoustics; Computer Simulation; Humans; Judgment; Loudness Perception; Pitch Perception; Reproducibility of Results; Signal Processing, Computer-Assisted; Sound Spectrography; Time Factors
4.
Neuroimage; 147: 97-110, 2017 Feb 15.
Article in English | MEDLINE | ID: mdl-27916664

ABSTRACT

Previous studies on vocal motor production in singing suggest that the right anterior insula (AI) plays a role in experience-dependent modulation of feedback integration. Specifically, when somatosensory input was reduced via anesthesia of the vocal fold mucosa, right AI activity was downregulated in trained singers. In the current fMRI study, we examined how masking of auditory feedback affects pitch-matching accuracy and corresponding brain activity in the same participants. We found that pitch-matching accuracy was unaffected by masking in trained singers yet declined in nonsingers. The brain region with the most differential and interesting activation pattern was the right AI, which was upregulated during masking in singers but downregulated in nonsingers. Likewise, its functional connectivity with inferior parietal, frontal, and voice-relevant sensorimotor areas was increased in singers yet decreased in nonsingers. These results indicate that singers relied more on somatosensory feedback, whereas nonsingers depended more critically on auditory feedback. When comparing auditory versus somatosensory feedback involvement, the right anterior insula emerged as the only region for correcting intended vocal output by modulating what is heard or felt as a function of singing experience. We propose the right anterior insula as a key node in the brain's singing network for the integration of salience signals across multiple sensory and cognitive domains to guide vocal behavior.


Subject(s)
Acoustic Stimulation; Cerebral Cortex/physiology; Feedback, Psychological; Functional Laterality/physiology; Sensorimotor Cortex/physiology; Singing/physiology; Adolescent; Adult; Auditory Perception/physiology; Child; Female; Humans; Magnetic Resonance Imaging; Male; Music/psychology; Nerve Net/physiology; Neural Pathways/physiology; Perceptual Masking; Pitch Perception/physiology; Psychomotor Performance/physiology; Young Adult
5.
J Acoust Soc Am; 136(4): 1951-63, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25324094

ABSTRACT

The notion of perceptual features is introduced for describing general music properties based on human perception. This is an attempt at rethinking the concept of features, aiming to approach the underlying mechanisms of human perception. Instead of using concepts from music theory such as tones, pitches, and chords, a set of nine features describing overall properties of the music was selected. They were chosen from qualitative measures used in psychology studies and motivated by an ecological approach. The perceptual features were rated in two listening experiments using two different data sets. They were modeled from both symbolic and audio data using different sets of computational features. Ratings of emotional expression were predicted using the perceptual features. The results indicate that (1) at least some of the perceptual features are reliable estimates; (2) emotion ratings could be predicted by a small combination of perceptual features, with an explained variance of 75% to 93% for the emotional dimensions activity and valence; (3) the perceptual features could be modeled only to a limited extent using existing audio features. The results clearly indicated that a small number of dedicated features were superior to a "brute force" model using a large number of general audio features.
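As a hedged illustration of the prediction step (a small set of perceptual features explaining most of the variance in an emotion dimension), the following sketch regresses a synthetic "activity" rating on nine placeholder features. It is not the study's model or data; the feature loadings are invented for the example.

```python
# Hedged sketch: an emotion rating regressed on a small set of perceptual
# features, with cross-validated explained variance (R^2). Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_excerpts, n_features = 100, 9          # nine perceptual features, as in the study
X = rng.normal(size=(n_excerpts, n_features))
# Assumption for illustration: activity loads mainly on two features
# (e.g., speed and dynamics), plus rating noise.
activity = 0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.3, size=n_excerpts)

r2 = cross_val_score(LinearRegression(), X, activity, cv=5, scoring="r2")
print(f"cross-validated R^2 for activity: {r2.mean():.2f}")
```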


Subject(s)
Auditory Perception; Emotions; Music; Acoustic Stimulation; Acoustics; Adolescent; Adult; Artificial Intelligence; Female; Humans; Male; Middle Aged; Models, Theoretical; Observer Variation; Pitch Perception; Psychoacoustics; Reproducibility of Results; Signal Processing, Computer-Assisted; Sound Spectrography; Young Adult
6.
J Neurosci; 33(14): 6070-80, 2013 Apr 03.
Article in English | MEDLINE | ID: mdl-23554488

ABSTRACT

Somatosensation plays an important role in the motor control of vocal functions, yet its neural correlates and relation to vocal learning are not well understood. We used fMRI in 17 trained singers and 12 nonsingers to study the effects of vocal-fold anesthesia on the vocal-motor singing network as a function of singing expertise. Tasks required participants to sing musical target intervals under normal conditions and after anesthesia. At the behavioral level, anesthesia altered pitch accuracy in both groups, but singers were less affected than nonsingers, indicating an experience-dependent effect of the intervention. At the neural level, this difference was accompanied by distinct patterns of decreased activation in singers (cortical and subcortical sensory and motor areas) and nonsingers (subcortical motor areas only), suggesting that anesthesia affected the higher-level voluntary (explicit) motor and sensorimotor integration network more in experienced singers, and the lower-level (implicit) subcortical motor loops in nonsingers. The right anterior insular cortex (AIC) was identified as the principal area dissociating the effect of expertise as a function of anesthesia by three separate sources of evidence. First, it responded differently to anesthesia in singers (decreased activation) and nonsingers (increased activation). Second, functional connectivity between the AIC and bilateral A1, M1, and S1 was reduced in singers but augmented in nonsingers. Third, increased BOLD activity in the right AIC in singers was correlated with larger pitch deviation under anesthesia. We conclude that the right AIC and sensory-motor areas play a role in the experience-dependent modulation of feedback integration for vocal motor control during singing.


Subject(s)
Biofeedback, Psychology/physiology; Brain Mapping; Cerebral Cortex/physiology; Functional Laterality/physiology; Music; Singing/physiology; Adult; Anesthetics, Local/pharmacology; Cerebral Cortex/blood supply; Feedback; Female; Humans; Image Processing, Computer-Assisted; Lidocaine/pharmacology; Magnetic Resonance Imaging; Male; Oxygen; Pitch Perception/physiology; Regression Analysis; Time Factors; Vocal Cords/drug effects; Vocal Cords/physiology
7.
PLoS One; 8(1): e55150, 2013.
Article in English | MEDLINE | ID: mdl-23383088

ABSTRACT

The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations that highlight musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential (ERP) technique, we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries, as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased, especially for laymen, as reflected by reduced P3a amplitude. In line with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention toward significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention toward their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.
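For readers unfamiliar with ERP measures such as the N2b and P3a, the sketch below illustrates the generic quantification step: averaging epochs time-locked to an event and taking the mean amplitude in a component window. The data, sampling rate, and window are placeholder assumptions, not this study's pipeline.

```python
# Generic ERP quantification sketch: average event-locked epochs, then
# measure mean amplitude in a component time window. Synthetic data only.
import numpy as np

sr = 500                                      # samples per second (assumed)
rng = np.random.default_rng(3)
epochs = rng.normal(size=(40, sr))            # 40 one-second epochs (fake EEG)

erp = epochs.mean(axis=0)                     # trial-averaged waveform

# Mean amplitude in an assumed P3a window, 250-350 ms after onset.
win = slice(int(0.250 * sr), int(0.350 * sr))
print(f"P3a mean amplitude: {erp[win].mean():.3f} (arbitrary units)")
```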


Subject(s)
Brain/physiology; Evoked Potentials/physiology; Music; Acoustic Stimulation; Adult; Behavior/physiology; Female; Humans; Male; Time Factors
8.
J Acoust Soc Am; 130(4): EL193-9, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21974491

ABSTRACT

The effect of variations in pitch, loudness, and timbre on the perception of the dynamics of isolated instrumental tones is investigated. A full factorial design was used in a listening experiment. The subjects were asked to indicate the perceived dynamics of each stimulus on a scale from pianissimo to fortissimo. Statistical analysis showed that for the instruments included (i.e., clarinet, flute, piano, trumpet, and violin), timbre and loudness had equally large effects, while pitch was relevant mostly for the first three (clarinet, flute, and piano). The results confirmed our hypothesis that loudness alone is not a reliable estimate of the dynamics of musical tones.
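A full factorial analysis of this kind can be sketched as follows. The factor levels and ratings below are synthetic stand-ins chosen for illustration, not the experiment's stimuli or data.

```python
# Illustrative full factorial analysis: perceived-dynamics ratings modeled
# with pitch, loudness, and timbre as crossed factors. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
rows = [
    {"pitch": p, "loudness": l, "timbre": t, "rating": rng.normal()}
    for p in ("low", "mid", "high")
    for l in ("pp", "mf", "ff")
    for t in ("clarinet", "flute", "piano", "trumpet", "violin")
    for _ in range(10)                     # ten ratings per cell
]
df = pd.DataFrame(rows)

model = smf.ols("rating ~ C(pitch) * C(loudness) * C(timbre)", data=df).fit()
print(anova_lm(model, typ=2))              # main effects and interactions
```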


Subject(s)
Acoustics/instrumentation; Auditory Pathways/physiology; Loudness Perception; Music; Pitch Perception; Acoustic Stimulation; Adult; Analysis of Variance; Audiometry; Auditory Threshold; Equipment Design; Humans; Models, Statistical; Psychoacoustics; Time Factors; Young Adult
9.
Cortex; 47(9): 1068-81, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21696717

ABSTRACT

Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by varying those variables systematically. However, most studies have used a small, predetermined number of levels for each variable, and these levels have often been selected arbitrarily. The main aim of this work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to simultaneously manipulate the values of seven musical variables (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate five different emotional expressions (neutral, happy, scary, peaceful, sad) for each of four scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order to each performer, for a total of 5 × 4 = 20 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way repeated-measures analysis of variance (ANOVA) with factors emotion and score was conducted on the participants' values, separately for each of the seven musical variables. There are two main results. First, the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. Second, for each of the five emotions, mean values and ranges were identified for the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.
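The analysis described can be sketched as follows, assuming one value per performer and cell for a single musical variable; the tempo numbers below are synthetic placeholders, not the study's measurements.

```python
# Sketch of the two-way repeated-measures ANOVA described above: within-
# subject factors emotion (5 levels) and score (4 levels), run separately
# per musical variable (here a stand-in "tempo"). Synthetic data only.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
emotions = ["neutral", "happy", "scary", "peaceful", "sad"]
scores = ["happiness", "sadness", "fear", "calmness"]
rows = [
    {"performer": p, "emotion": e, "score": s,
     "tempo": rng.normal(loc=100, scale=15)}
    for p in range(20)                     # 20 performers, one value per cell
    for e in emotions
    for s in scores
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="tempo", subject="performer",
              within=["emotion", "score"]).fit()
print(res)                                 # F tests for emotion, score, interaction
```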


Subject(s)
Auditory Perception; Emotions; Music/psychology; Acoustic Stimulation; Adult; Female; Humans; Male; Middle Aged