Results 1 - 2 of 2
1.
Nature; 612(7938): 170-176, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36265513

ABSTRACT

Cyclic dinucleotides (CDNs) are ubiquitous signalling molecules in all domains of life [1,2]. Mammalian cells produce one CDN, 2'3'-cGAMP, through cyclic GMP-AMP synthase after detecting cytosolic DNA signals [3-7]. 2'3'-cGAMP, as well as bacterial and synthetic CDN analogues, can act as second messengers to activate stimulator of interferon genes (STING) and elicit broad downstream responses [8-21]. Extracellular CDNs must traverse the cell membrane to activate STING, a process that is dependent on the solute carrier SLC19A1 [22,23]. Moreover, SLC19A1 represents the major transporter for folate nutrients and antifolate therapeutics [24,25], thereby placing SLC19A1 as a key factor in multiple physiological and pathological processes. How SLC19A1 recognizes and transports CDNs, folate and antifolates is unclear. Here we report cryo-electron microscopy structures of human SLC19A1 (hSLC19A1) in a substrate-free state and in complexes with multiple CDNs from different sources, a predominant natural folate and a new-generation antifolate drug. The structural and mutagenesis results demonstrate that hSLC19A1 uses unique yet divergent mechanisms to recognize CDN- and folate-type substrates. Two CDN molecules bind within the hSLC19A1 cavity as a compact dual-molecule unit, whereas folate and antifolate bind as monomers and occupy a distinct pocket of the cavity. Moreover, the structures enable accurate mapping and potential mechanistic interpretation of hSLC19A1 loss-of-activity and disease-related mutations. Our research provides a framework for understanding the mechanism of SLC19-family transporters and a foundation for the development of potential therapeutics.


Subject(s)
Cryoelectron Microscopy; Dinucleoside Phosphates; Folic Acid Antagonists; Folic Acid; Nucleotides, Cyclic; Animals; Humans; Dinucleoside Phosphates/metabolism; Folic Acid/metabolism; Folic Acid Antagonists/pharmacology; Mammals/metabolism; Nucleotides, Cyclic/metabolism; Reduced Folate Carrier Protein/chemistry; Reduced Folate Carrier Protein/genetics; Reduced Folate Carrier Protein/metabolism; Reduced Folate Carrier Protein/ultrastructure
2.
IEEE Trans Image Process; 33: 1600-1613, 2024.
Article in English | MEDLINE | ID: mdl-38373124

ABSTRACT

Action quality assessment (AQA) evaluates how well an action is performed. Previous works model AQA using only visual information, ignoring audio. We argue that although AQA depends heavily on visual information, audio is useful complementary information for improving score regression accuracy, especially for sports with background music, such as figure skating and rhythmic gymnastics. To leverage multimodal information for AQA, i.e., RGB, optical flow and audio information, we propose a Progressive Adaptive Multimodal Fusion Network (PAMFN) that separately models modality-specific information and mixed-modality information. Our model consists of three modality-specific branches that independently explore modality-specific information and a mixed-modality branch that progressively aggregates the modality-specific information from those branches. To bridge the modality-specific branches and the mixed-modality branch, we propose three novel modules. First, a Modality-specific Feature Decoder module selectively transfers modality-specific information to the mixed-modality branch. Second, when exploring the interaction between modality-specific information, we argue that an invariant multimodal fusion policy may lead to suboptimal results, because it ignores the potential diversity across different parts of an action. Therefore, an Adaptive Fusion Module learns adaptive multimodal fusion policies for different parts of an action. This module consists of several FusionNets that explore different multimodal fusion strategies and a PolicyNet that decides which FusionNets are enabled. Third, a Cross-modal Feature Decoder module transfers the cross-modal features generated by the Adaptive Fusion Module to the mixed-modality branch.
Our extensive experiments validate the efficacy of the proposed method, and our method achieves state-of-the-art performance on two public datasets. Code is available at https://github.com/qinghuannn/PAMFN.
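The Adaptive Fusion Module described above, a PolicyNet selecting among several FusionNets for each part of an action, can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's implementation: the feature dimensions, the linear fusion, the hard argmax selection, and all names except FusionNet/PolicyNet are hypothetical (the actual PAMFN code is at the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 8          # hypothetical per-modality feature size
NUM_FUSION_NETS = 3   # number of candidate fusion strategies


def fusion_net(weights, rgb, flow, audio):
    # One candidate fusion strategy: a linear mix of the concatenated
    # RGB, optical-flow and audio features (simplified stand-in for a
    # learned FusionNet).
    stacked = np.concatenate([rgb, flow, audio])      # (3 * FEAT_DIM,)
    return weights @ stacked                          # (FEAT_DIM,)


def policy_net(policy_weights, rgb, flow, audio):
    # Score each FusionNet for this segment of the action and enable
    # the best one (hard selection; the real policy could be softer).
    stacked = np.concatenate([rgb, flow, audio])
    scores = policy_weights @ stacked                 # (NUM_FUSION_NETS,)
    return int(np.argmax(scores))


# Hypothetical "learned" parameters, random here for illustration.
fusion_weights = [rng.standard_normal((FEAT_DIM, 3 * FEAT_DIM))
                  for _ in range(NUM_FUSION_NETS)]
policy_weights = rng.standard_normal((NUM_FUSION_NETS, 3 * FEAT_DIM))

# Per-segment features for one part of an action.
rgb = rng.standard_normal(FEAT_DIM)
flow = rng.standard_normal(FEAT_DIM)
audio = rng.standard_normal(FEAT_DIM)

chosen = policy_net(policy_weights, rgb, flow, audio)
fused = fusion_net(fusion_weights[chosen], rgb, flow, audio)
```

Running the policy per segment rather than once per video is what makes the fusion "adaptive": different parts of the action can use different fusion strategies.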


Subject(s)
Image Interpretation, Computer-Assisted; Machine Learning