Results 1 - 2 of 2
1.
Small; 19(26): e2301001, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36949523

ABSTRACT

The molecular sieve effect (MSE) enables direct separation of a target species, thereby circumventing two major scientific and industrial problems of traditional separation: coadsorption and desorption. Inspired by this, the concept of a coordination sieve effect (CSE) for the direct separation of UO2^2+ is reported here, distinct from the previously established two-step method of adsorption followed by desorption. The adsorbent, a polyhedron-based hydrogen-bonded framework (P-HOF-1) prepared from a metal-organic framework (MOF) precursor through a two-step post-modification approach, afforded high uptake capacity (close to the theoretical value) toward monovalent Cs+, divalent Sr2+, trivalent Eu3+, and tetravalent Th4+ ions, but completely excluded the UO2^2+ ion, indicating an excellent CSE. Direct separation of UO2^2+ is achieved from a mixed solution containing Cs+, Sr2+, Eu3+, Th4+, and UO2^2+ ions, giving >99.9% removal efficiency for Cs+, Sr2+, Eu3+, and Th4+ but <1.2% removal efficiency for UO2^2+, affording a benchmark reverse selectivity (S_M/U) of >83 and direct generation of high-purity UO2^2+ (>99.9%). The mechanism of this direct separation via CSE, as revealed by single-crystal X-ray diffraction and density-functional theory (DFT) calculations, is that the spherical coordination trap in P-HOF-1 exactly accommodates the spherically coordinated Cs+, Sr2+, Eu3+, and Th4+ ions but excludes the planar-coordinated UO2^2+ ion.
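
One plausible reading of the reported figure (an assumption on our part; the abstract does not define S_M/U explicitly) is that the reverse selectivity is the ratio of the removal efficiencies of the coexisting ions to that of UO2^2+, which reproduces the quoted value:

```latex
% Hedged back-of-the-envelope check, assuming S_{M/U} is the ratio of removal efficiencies.
S_{M/U} \;=\; \frac{R_{\mathrm{M}}}{R_{\mathrm{U}}} \;>\; \frac{99.9\%}{1.2\%} \;\approx\; 83
```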

2.
Sci Rep; 14(1): 90, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38167638

ABSTRACT

Auto-encoder (AE)-based image fusion models have achieved encouraging performance on infrared and visible image fusion. However, the loss of meaningful information in the encoding stage and the simple, non-learnable fusion strategy are two significant challenges for such models. To address these issues, this paper proposes an infrared and visible image fusion model based on an interactive residual attention fusion strategy and contrastive learning in the frequency domain. First, the source image is decomposed into high-, mid-, and low-frequency sub-bands for a powerful multiscale representation from the perspective of frequency-spectrum analysis. To further cope with the limitations of a straightforward fusion strategy, a learnable coordinate attention module is incorporated into the fusion layer to adaptively fuse representative information according to the characteristics of the corresponding feature maps. Moreover, contrastive learning is leveraged to train the multiscale decomposition network, enhancing the complementarity of information across the different frequency bands. Finally, the detail-preserving loss, feature-enhancing loss, and contrastive loss are combined to jointly train the entire fusion model for good detail preservation. Qualitative and quantitative comparisons demonstrate the feasibility and validity of our model, which consistently generates fused images containing both salient targets and legible details, outperforming state-of-the-art fusion methods.
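
The abstract does not include code; the sketch below is only an illustration of what a learnable coordinate-attention fusion layer for two encoder feature maps might look like, written in PyTorch. The module name CoordAttentionFusion, the reduction ratio, and the simple additive pre-merge of the infrared and visible features are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of a coordinate-attention
# fusion layer: pool along height and width separately, build direction-aware
# attention maps, and reweight an additive merge of the two feature maps.
import torch
import torch.nn as nn

class CoordAttentionFusion(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, f_ir, f_vis):
        # Simple additive merge of infrared and visible features (assumption),
        # then reweighting by height- and width-wise attention.
        x = f_ir + f_vis
        b, c, h, w = x.shape
        xh = self.pool_h(x)                      # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = self.reduce(torch.cat([xh, xw], dim=2))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw

# Usage: fuse two 64-channel feature maps from the encoder.
fuse = CoordAttentionFusion(channels=64)
fused = fuse(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```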
