ABSTRACT
The energetics of adsorption of H2O layers, and of H2O layers partially replaced with OH or Cl, on an Al(111) surface and on selected surfaces of the intermetallic phases Mg2Si and Al2Cu was studied by first-principles calculations using density functional theory (DFT). The results show that H2O molecules tend to bind to all investigated surfaces with adsorption energies in a relatively narrow range, between -0.8 eV and -0.5 eV, at increased water coverage. This can be explained by the dominant role of hydrogen-bond networks at higher H2O coverage. On the basis of the work function, the calculated Volta potential data suggest that both intermetallic phases become less noble than Al(111); moreover, the Volta potential difference exceeded 1 V when the coverage of the Cl-containing ad-layer reached one monolayer. The energetics of H2O dissociation and substitution by Cl, as well as the corresponding work function of each surface, were also calculated. The increase in the work function of the Al(111) surface was attributed to the oxidation effect during H2O adsorption, whereas the decrease of the work function of the Mg2Si(111)-Si surface upon H2O adsorption was explained by atomic and electronic rearrangements in the presence of H2O and Cl.
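The per-molecule adsorption energy referred to above is conventionally computed from the total energies of the adsorbed system, the clean slab, and the isolated molecule. A minimal sketch follows; the function name and the numerical energies are illustrative assumptions, not values from the study.

```python
# Conventional DFT adsorption energy per molecule:
#   E_ads = (E_slab+nH2O - E_slab - n * E_H2O) / n
# Negative values indicate favorable (exothermic) adsorption.

def adsorption_energy_per_molecule(e_total, e_slab, e_molecule, n):
    """e_total: slab + n adsorbates; e_slab: clean slab; e_molecule: isolated adsorbate (all in eV)."""
    if n <= 0:
        raise ValueError("n must be a positive number of adsorbates")
    return (e_total - e_slab - n * e_molecule) / n

# Made-up total energies in eV, purely for illustration:
e_ads = adsorption_energy_per_molecule(e_total=-512.3, e_slab=-497.0,
                                       e_molecule=-14.2, n=1)
print(round(e_ads, 2))  # -1.1
```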
Subjects
Density Functional Theory , Hydroxides/chemistry , Water/chemistry , Adsorption , Aluminum/chemistry , Chlorine/chemistry , Copper/chemistry , Magnesium Silicates/chemistry , Oxidation-Reduction , Surface Properties
ABSTRACT
Mussel adhesive proteins are of great interest for many applications because of their outstanding adhesive properties and film-forming ability. Understanding and controlling film formation and performance is crucial for the effective use of such proteins. In this study, we focus on the potential-controlled film formation and compaction of one mussel adhesive protein, Mefp-1. The adsorption and film-forming behavior of Mefp-1 on a platinum (Pt) substrate under applied potentials were investigated by cyclic voltammetry, potential-controlled electrochemical impedance spectroscopy (EIS), and quartz crystal microbalance with dissipation monitoring (QCM-D). Moreover, microfriction measurements were performed to evaluate the mechanical properties of the Mefp-1 films formed at selected potentials. The results led to the conclusion that Mefp-1 adsorbs on the Pt substrate through both electrostatic and nonelectrostatic interactions and effectively blocks the electroactive sites on the substrate. The properties of the adsorbed Mefp-1 film vary with the applied potential, and the compactness of the film can be reversibly tuned by the applied potential.
Subjects
Proteins/chemistry , Adsorption , Surface Properties
ABSTRACT
The interactions between polyaniline particles and polyaniline surfaces in polyester acrylate resin mixed with 1,6-hexanediol diacrylate monomer have been investigated using contact angle measurements and the atomic force microscopy colloidal probe technique. Polyanilines with different characteristics (hydrophilic and hydrophobic) were synthesized directly on spherical polystyrene particles 10 µm in diameter. Surface forces were measured between core/shell-structured polystyrene/polyaniline particles (and a pure polystyrene particle as a reference) mounted on an atomic force microscope cantilever and a pressed pellet of either hydrophilic or hydrophobic polyaniline powder, in resins of various polymer:monomer ratios. A short-range, purely repulsive interaction was observed between hydrophilic polyaniline (doped with phosphoric acid) surfaces in polyester acrylate resin. In contrast, interactions between hydrophobic polyaniline (doped with n-decyl phosphonic acid) surfaces were dominated by attractive forces, suggesting lower compatibility and a higher tendency toward aggregation of these particles in liquid polyester acrylate compared with hydrophilic polyaniline. Both observations agree with the conclusions from the interfacial energy studies performed by contact angle measurements.
ABSTRACT
Existing deep learning-based video super-resolution (SR) methods usually depend on a supervised learning approach, where the training data are generated by a blurring operation with known or predefined kernels (e.g., a bicubic kernel) followed by a decimation operation. However, this does not hold for real applications, as the degradation process is complex and cannot be well approximated by these ideal cases. Moreover, obtaining high-resolution (HR) videos and the corresponding low-resolution (LR) ones in real-world scenarios is difficult. To overcome these problems, we propose a self-supervised learning method to solve the blind video SR problem, which simultaneously estimates blur kernels and HR videos from LR videos. As directly using LR videos as supervision usually leads to trivial solutions, we develop a simple and effective method to generate auxiliary paired data from the original LR videos according to the image formation model of video SR, so that the networks can be better constrained by the generated paired data for both blur kernel estimation and latent HR video restoration. In addition, we introduce an optical flow estimation module to exploit information from adjacent frames for HR video restoration. Experiments show that our method performs favorably against state-of-the-art ones on benchmarks and real-world videos.
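The auxiliary-pair idea described above can be illustrated with a toy version of the formation model: blur the LR frame and decimate it, yielding a pair one scale further down. Everything here (the box kernel, the naive convolution, the scale factor) is an assumed simplification, not the paper's implementation.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 2-D convolution with zero padding (single-channel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def make_auxiliary_pair(lr_frame, kernel, scale=2):
    """Blur then decimate: the same formation model, one level further down."""
    blurred = conv2d_same(lr_frame, kernel)
    aux = blurred[::scale, ::scale]      # decimation
    return aux, lr_frame                 # (input, supervision target)

lr = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3)) / 9.0                # assumed box blur kernel
aux, target = make_auxiliary_pair(lr, k, scale=2)
print(aux.shape)  # (2, 2)
```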
ABSTRACT
We present a simple and effective approach that explores both local spatial-temporal contexts and non-local temporal information for video deblurring. First, we develop an effective spatial-temporal contextual transformer to explore local spatial-temporal contexts from videos. As the features extracted by the spatial-temporal contextual transformer do not model the non-local temporal information of the video well, we then develop a feature propagation method to aggregate useful features from long-range frames, so that both local spatial-temporal contexts and non-local temporal information can be better utilized for video deblurring. Finally, we formulate the spatial-temporal contextual transformer with the feature propagation into a unified deep convolutional neural network (CNN) and train it in an end-to-end manner. We show that using the spatial-temporal contextual transformer with the feature propagation generates useful features and makes the deep CNN model more compact and effective for video deblurring. Extensive experimental results show that the proposed method performs favorably against state-of-the-art ones on benchmark datasets in terms of accuracy and model parameters.
ABSTRACT
Most existing model-based and learning-based image deblurring methods use synthetic blur-sharp training pairs to remove blur. However, these approaches do not perform well in real-world applications, as blur-sharp training pairs are difficult to obtain and the blur in real-world scenarios is spatially variant. In this paper, we propose a self-supervised learning-based image deblurring method that can deal with both uniform and spatially variant blur distributions. Moreover, our method does not require blur-sharp pairs for training. In the proposed method, we design a Deblurring Network (D-Net) and a Spatial Degradation Network (SD-Net). Specifically, the D-Net is designed for image deblurring, while the SD-Net is used to simulate the spatially variant degradation. Furthermore, an off-the-shelf pre-trained model is employed as a prior of our model, which facilitates image deblurring. Meanwhile, we design a recursive optimization strategy to accelerate the convergence of the model. Extensive experiments demonstrate that our proposed model achieves favorable performance against existing image deblurring methods.
Subjects
Computer-Assisted Image Processing , Neural Networks (Computer) , Supervised Machine Learning , Computer-Assisted Image Processing/methods , Deep Learning , Algorithms , Humans
ABSTRACT
How to effectively explore the colors of exemplars and propagate them to colorize each frame is vital for exemplar-based video colorization. In this article, we present BiSTNet, which explores the colors of exemplars and utilizes them to help video colorization through bidirectional temporal feature fusion with the guidance of a semantic image prior. We first establish a semantic correspondence between each frame and the exemplars in deep feature space to explore color information from the exemplars. Then, we develop a simple yet effective bidirectional temporal feature fusion module to propagate the colors of the exemplars into each frame and avoid inaccurate alignment. We note that color-bleeding artifacts usually exist around the boundaries of important objects in videos. To overcome this problem, we develop a mixed expert block to extract semantic information for modeling the object boundaries of frames, so that the semantic image prior can better guide the colorization process. In addition, we develop a multi-scale refinement block to progressively colorize frames in a coarse-to-fine manner. Extensive experimental results demonstrate that the proposed BiSTNet performs favorably against state-of-the-art methods on benchmark datasets and real-world scenes. Moreover, BiSTNet won a championship in the NTIRE 2023 video colorization challenge (Kang et al. 2023).
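As a rough, self-contained sketch of what bidirectional temporal propagation means (the fixed blending weight and the simple averaging are assumptions; in BiSTNet the fusion is learned), per-frame features can be accumulated in a forward pass and a backward pass and then fused, so each frame sees information from both past and future frames:

```python
import numpy as np

def bidirectional_propagate(features, alpha=0.5):
    """Forward and backward recurrent accumulation of per-frame features, then fusion."""
    n = len(features)
    fwd, bwd = [None] * n, [None] * n
    fwd[0] = features[0]
    for t in range(1, n):
        fwd[t] = alpha * fwd[t - 1] + (1 - alpha) * features[t]   # past -> future
    bwd[n - 1] = features[n - 1]
    for t in range(n - 2, -1, -1):
        bwd[t] = alpha * bwd[t + 1] + (1 - alpha) * features[t]   # future -> past
    return [0.5 * (f + b) for f, b in zip(fwd, bwd)]              # assumed fusion: mean

feats = [np.full((2, 2), float(t)) for t in range(4)]  # toy per-frame features
fused = bidirectional_propagate(feats)
print(len(fused), fused[0].shape)  # 4 (2, 2)
```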
ABSTRACT
A fundamental understanding of the oxidation mechanisms of aluminum (Al) alloys is of great importance for their applications in corrosion, catalysis, sensors, etc. In this work, we systematically investigated the first-stage oxidation behavior of three low-index Al facets with O coverage up to two monolayers (ML) using density functional theory (DFT). The large negative adsorption energies indicated favorable oxidation on all three facets. However, distinctive structural and electronic changes induced by oxygen adsorption led to different oxidation modes. More specifically, on the (110) facet the oxidation process proceeded by "intercalation" into the subsurface region along the (111) plane, with spontaneous O ingress into (110) far below one ML, as revealed by the electron density distribution, whereas the oxide ad-layer grew in a "layer-by-layer" mode on the Al(111) and (001) facets. Moreover, various Al-O complexes with different atomic coordination numbers (CN), configurations, and sizes may be indicators of the tendency of an Al surface to be oxidized. In addition, the oxide phases formed on (111)/(001) and (110) resembled the Al-O bond distributions within α-Al2O3 and γ-Al2O3, respectively.
ABSTRACT
We present compact and effective deep convolutional neural networks (CNNs) that explore properties of videos for video deblurring. Motivated by the non-uniform blur property that not all pixels of a frame are blurry, we develop a CNN that integrates a temporal sharpness prior (TSP) for removing blur in videos. The TSP exploits sharp pixels from adjacent frames to help the CNN restore frames better. Observing that the motion field is related to latent frames rather than blurry ones in the image formation model, we develop an effective cascaded training approach to solve the proposed CNN in an end-to-end manner. As videos usually contain similar content within and across frames, we propose a non-local similarity mining approach based on a self-attention method with the propagation of global features to constrain the CNN for frame restoration. We show that exploring the domain knowledge of videos can make CNNs more compact and efficient: the CNN with the non-local spatial-temporal similarity is 3× smaller than state-of-the-art methods in terms of model parameters, while its PSNR is at least 1 dB higher. Extensive experimental results show that our method performs favorably against state-of-the-art approaches on benchmarks and real-world videos.
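The intuition behind a temporal sharpness prior can be sketched with a gradient-magnitude sharpness proxy: a per-pixel mask marks where at least one adjacent frame offers sharp content. The threshold and the gradient measure are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def sharpness_map(frame):
    """Gradient-magnitude proxy for local sharpness."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def temporal_sharpness_mask(frames, threshold=1.0):
    """True where at least one frame in the window offers a sharp pixel."""
    maps = np.stack([sharpness_map(f) for f in frames])
    return maps.max(axis=0) > threshold

flat = np.zeros((5, 5))                        # blurry/flat region: no sharp pixels
edge = np.zeros((5, 5)); edge[:, 2:] = 10.0    # strong edge: sharp pixels near column 2
mask = temporal_sharpness_mask([flat, edge])
print(mask.any(), mask.all())  # True False
```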
Subjects
Algorithms , Neural Networks (Computer)
ABSTRACT
Corrosion is the main factor limiting the lifetime of metallic materials, and a fundamental understanding of the governing mechanisms and surface processes is difficult to achieve because the thin oxide films at the metal-liquid interface that govern passivity are notoriously challenging to study. In this work, a combination of synchrotron-based techniques and electrochemical methods is used to investigate the passive-film breakdown of a Ni-Cr-Mo alloy used in many industrial applications. This alloy is found to be active toward the oxygen evolution reaction (OER), and the OER onset coincides with the loss of passivity and severe metal dissolution. The OER mechanism involves the oxidation of Mo4+ sites in the oxide film to Mo6+, which can be dissolved, resulting in passivity breakdown. This is fundamentally different from the typical transpassive breakdown of Cr-containing alloys, where Cr6+ is postulated to dissolve at high anodic potentials, which is not observed here. At high current densities, the OER also leads to acidification of the solution near the surface, further triggering metal dissolution. The OER plays an important role in the mechanism of passivity breakdown of Ni-Cr-Mo alloys due to their catalytic activity, and this effect needs to be considered when studying the corrosion of catalytically active alloys.
ABSTRACT
Dynamic scene deblurring is a challenging problem, as it is difficult to model mathematically. Benefiting from deep convolutional neural networks, this problem has been significantly advanced by end-to-end network architectures. However, the success of these methods is mainly due to simply stacking network layers. In addition, methods based on end-to-end network architectures usually estimate latent images in a regression manner that does not preserve structural details. In this paper, we propose an exemplar-based method to solve the dynamic scene deblurring problem. To explore the properties of the exemplars, we propose a siamese encoder network and a shallow encoder network to extract input features and exemplar features, respectively, and then develop a rank module to select useful features for better blur removal, where the rank modules are applied to the last three layers of the encoder. The proposed method can be further extended to a multi-scale scheme, which enables the recovery of more texture from the exemplar. Extensive experiments show that our method achieves significant improvements in both quantitative and qualitative evaluations.
ABSTRACT
Recent video frame interpolation methods have employed curvilinear motion models to accommodate nonlinear motion among frames. The effectiveness of such a model often hinges on motion estimation and occlusion detection, and it is therefore greatly challenged when these methods are used to handle dynamic scenes that contain complex motions and occlusions. We address these challenges by proposing a bi-directional pseudo-three-dimensional network that exploits the correlation between motion estimation and depth-related occlusion estimation by considering the third dimension: depth. Specifically, the network exploits the correlation by learning shared multi-scale spatiotemporal representations and by coupling the estimations, in both the past and future directions, to synthesize intermediate frames through a bi-directional pseudo-three-dimensional warping layer, where adaptive convolution kernels are estimated progressively from the coalescence of motion and depth-related occlusion estimations across multiple scales to acquire nonlocal and adaptive neighborhoods. The proposed network utilizes a novel multi-task collaborative learning strategy, which facilitates the supervised learning of video frame interpolation using complementary self-supervisory signals from motion and depth-related occlusion estimations. Across various benchmark datasets, the proposed method outperforms state-of-the-art methods in terms of accuracy, model size, and runtime performance.
ABSTRACT
How to explore useful information from depth is key to the success of RGB-D saliency detection methods. Because the RGB and depth images come from different domains, the modality gap leads to unsatisfactory results for simple feature concatenation. Toward better performance, most methods focus on bridging this gap by designing different cross-modal feature fusion modules, while neglecting to explicitly extract consistent information from the two modalities. To overcome this problem, we develop a simple yet effective RGB-D saliency detection method that learns discriminative cross-modality features based on a deep neural network. The proposed method first learns modality-specific features for the RGB and depth inputs. Then we separately calculate the correlations of every pixel pair in a cross-modality-consistent way, i.e., the distribution ranges are consistent for the correlations calculated from features extracted from the RGB input (RGB correlation) or the depth input (depth correlation). From different perspectives, color or spatial, the RGB and depth correlations thus describe the same quantity: how tightly each pixel pair is related. Second, to gather RGB and depth information complementarily, we propose a novel correlation fusion that merges the RGB and depth correlations into a cross-modality correlation. Finally, the features are refined with both long-range cross-modality correlations and local depth correlations to predict saliency maps; the long-range cross-modality correlation provides context information for accurate localization, and the local depth correlation preserves subtle structures for fine segmentation. In addition, a lightweight DepthNet is designed for efficient depth feature extraction. The proposed network is trained in an end-to-end manner. Both quantitative and qualitative experimental results demonstrate that the proposed algorithm achieves favorable performance against state-of-the-art methods.
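The cross-modality-consistent correlation idea can be sketched as follows: L2-normalizing the per-pixel features of each modality keeps both the "RGB correlation" and the "depth correlation" in the same [-1, 1] range before fusion. The feature shapes and the simple mean fusion are assumptions for illustration.

```python
import numpy as np

def pairwise_correlation(features):
    """features: (N, C) per-pixel features -> (N, N) cosine similarities in [-1, 1]."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-8, None)
    return unit @ unit.T

rng = np.random.default_rng(0)
rgb_feat = rng.normal(size=(6, 8))     # 6 pixels, 8-dim RGB features
depth_feat = rng.normal(size=(6, 4))   # 6 pixels, 4-dim depth features

rgb_corr = pairwise_correlation(rgb_feat)
depth_corr = pairwise_correlation(depth_feat)
fused = 0.5 * (rgb_corr + depth_corr)  # assumed correlation fusion: mean
print(fused.shape)  # (6, 6)
```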
ABSTRACT
We propose an effective image dehazing algorithm that explores useful information from the input hazy image itself as guidance for haze removal. The proposed algorithm first uses a deep pre-dehazer to generate an intermediate result and takes it as the reference image because of the clear structures it contains. To better explore the guidance information in the generated reference image, we then develop a progressive feature fusion module to fuse the features of the hazy image and the reference image. Finally, the image restoration module takes the fused features as input and uses the guidance information to restore a clear image. All the proposed modules are trained in an end-to-end fashion, and we show that the proposed deep pre-dehazer with the progressive feature fusion module helps haze removal. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods on widely used dehazing benchmark datasets as well as real-world hazy images.
ABSTRACT
Triboelectric nanogenerators (TENGs) have the potential to achieve energy harvesting and condition monitoring of oils, the "lifeblood" of industry. However, oil absorption on solid surfaces is a great challenge for oil-solid TENGs (O-TENGs). Here, oleophobic/superamphiphobic O-TENGs are achieved via engineering of the solid surface wetting properties. The designed O-TENG can generate excellent electrical output (with a charge density of 9.1 µC m-2 and a power density of 1.23 mW m-2), which is an order of magnitude higher than that of other O-TENGs made from polytetrafluoroethylene and polyimide. It also has significant durability (30,000 cycles) and can power a digital thermometer for self-powered sensor applications. Further, a super-high-sensitivity O-TENG monitoring system is successfully developed for real-time detection of particle/water contaminants in oils. The O-TENG can detect particle contaminants down to at least 0.01 wt% and water contaminants down to 100 ppm, which is much better than previous online monitoring methods (particle > 0.1 wt%; water > 1000 ppm). More interestingly, the developed O-TENG can also distinguish water from other contaminants, meaning it has highly water-selective performance. This work provides an ideal strategy for enhancing the output and durability of TENGs for oil-solid contact and opens new intelligent pathways for oil-solid energy harvesting and oil condition monitoring.
ABSTRACT
Joint filtering mainly uses an additional guidance image as a prior and transfers its structures to the target image in the filtering process. Different from existing approaches that rely on local linear models or hand-designed objective functions to extract the structural information from the guidance image, we propose a new joint filtering method based on a spatially variant linear representation model (SVLRM), where the target image is linearly represented by the guidance image. However, learning SVLRMs for vision tasks is a highly ill-posed problem. To estimate the spatially variant linear representation coefficients, we develop an effective approach based on a deep convolutional neural network (CNN). As such, the proposed deep CNN (constrained by the SVLRM) is able to model the structural information of both the guidance and input images. We show that the proposed approach can be effectively applied to a variety of applications, including depth/RGB image upsampling and restoration, flash deblurring, natural image denoising, and scale-aware filtering. In addition, we show that the linear representation model can be extended to high-order representation models (e.g., quadratic and cubic polynomial representations). Extensive experimental results demonstrate that the proposed method performs favorably against the state-of-the-art methods that have been specifically designed for each task.
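The SVLRM itself is easy to state in code: the target image is represented per pixel as T = A ⊙ G + B, where A and B are spatially variant coefficient maps (estimated by a deep CNN in the paper). The toy coefficient maps below are assumptions for illustration.

```python
import numpy as np

def svlrm_apply(guidance, coeff_a, coeff_b):
    """Per-pixel linear representation: target = A * G + B (element-wise)."""
    return coeff_a * guidance + coeff_b

g = np.array([[1.0, 2.0], [3.0, 4.0]])   # toy guidance image
a = np.full((2, 2), 0.5)                 # assumed coefficient map A
b = np.full((2, 2), 1.0)                 # assumed coefficient map B
t = svlrm_apply(g, a, b)
print(t.tolist())  # [[1.5, 2.0], [2.5, 3.0]]
```

Because A and B vary per pixel, the model can transfer the guidance image's structure where it helps and fall back to a constant offset where it does not; the high-order extension mentioned above simply adds quadratic or cubic terms in G with their own coefficient maps.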
ABSTRACT
Deblurring images captured in dynamic scenes is challenging, as the motion blur is spatially varying, caused by camera shakes and object movements. In this paper, we propose a spatially varying neural network to deblur dynamic scenes. The proposed model is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). The RNN is used as a deconvolution operator on feature maps extracted from the input image by one of the CNNs. Another CNN learns the spatially varying weights for the RNN. As a result, the RNN is spatially aware and can implicitly model the deblurring process with spatially varying kernels. To better exploit the properties of the spatially varying RNN, we develop both one-dimensional and two-dimensional RNNs for deblurring. The third component, based on a CNN, reconstructs the final deblurred feature maps into a restored image. In addition, the whole network is end-to-end trainable. Quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art deblurring algorithms.
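A one-dimensional toy version shows why a spatially varying RNN can act as a filter with position-dependent kernels: the recurrence weight differs per position (in the paper these weights are predicted by a CNN; the simple recurrence form below is an assumption).

```python
import numpy as np

def spatially_varying_rnn(x, w):
    """h[i] = w[i] * h[i-1] + (1 - w[i]) * x[i], with a per-position weight w[i]."""
    h = np.zeros_like(x, dtype=float)
    prev = 0.0
    for i in range(len(x)):
        prev = w[i] * prev + (1.0 - w[i]) * x[i]
        h[i] = prev
    return h

x = np.array([1.0, 0.0, 0.0, 0.0])          # an impulse input
w_sharp = np.array([0.0, 0.0, 0.0, 0.0])    # no recurrence: identity filter
w_smooth = np.array([0.0, 0.9, 0.9, 0.9])   # strong recurrence: long-tailed filter
print(spatially_varying_rnn(x, w_sharp).tolist())   # [1.0, 0.0, 0.0, 0.0]
```

Varying w across positions therefore changes the effective impulse response at each pixel, which is exactly the property that lets the RNN emulate spatially varying deconvolution.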
Subjects
Algorithms , Neural Networks (Computer) , Learning
ABSTRACT
Outlier handling has attracted considerable attention recently but remains challenging for image deblurring. Existing approaches mainly depend on iterative outlier detection steps to explicitly or implicitly reduce the influence of outliers on image deblurring. However, these outlier detection steps usually involve heuristic operations and iterative optimization processes, which are complex and time-consuming. In contrast, we propose to learn a deep convolutional neural network that directly estimates a confidence map, which identifies reliable inliers and outliers in the blurred image and thus facilitates the subsequent deblurring process. Our analysis shows that the proposed algorithm, incorporating the learned confidence map, is effective in handling outliers and does not require the ad-hoc outlier detection steps that are critical to existing outlier handling methods. Compared with existing approaches, the proposed algorithm is more efficient and can be applied to both non-blind and blind image deblurring. Extensive experimental results demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of accuracy and efficiency.
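The role of the confidence map can be sketched with a confidence-weighted data term: residuals at pixels flagged as outliers (confidence near zero) barely affect the loss. The specific loss form and the values below are illustrative assumptions, not the paper's objective.

```python
import numpy as np

def weighted_data_loss(reblurred, observed, confidence):
    """Sum of confidence-weighted squared residuals; confidence in [0, 1]."""
    residual = reblurred - observed
    return float(np.sum(confidence * residual ** 2))

obs = np.array([1.0, 1.0, 255.0])              # last pixel saturated (an outlier)
est = np.array([1.0, 1.0, 2.0])                # re-blurred current estimate
conf_all = np.ones(3)                          # no outlier handling
conf_outlier_aware = np.array([1.0, 1.0, 0.0]) # outlier fully down-weighted
print(weighted_data_loss(est, obs, conf_all) >
      weighted_data_loss(est, obs, conf_outlier_aware))  # True
```

With the outlier down-weighted, the huge residual at the saturated pixel no longer dominates the optimization, which is the effect the learned confidence map provides.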
ABSTRACT
Face Super-Resolution (FSR) aims to infer High-Resolution (HR) face images from captured Low-Resolution (LR) face images with the assistance of external information. Existing FSR methods are less effective for LR face images captured with severely low quality, since the huge imaging/degradation gap caused by different imaging scenarios (i.e., the complex practical imaging conditions that generate test LR images versus the simple manual degradation that generates training LR images) is not considered in these algorithms. In this paper, we propose an image homogenization strategy via re-expression to solve this problem. In contrast to existing methods, we propose homogenization projections in LR space and HR space as compensation for the classical LR/HR projection, formulating FSR as a multi-stage framework. We then develop a re-expression process to bridge the gap between the complex degradation and the simple degradation, which can remove heterogeneous factors such as serious noise and blur. To further improve the accuracy of the homogenization, we extract an image patch set that is invariant to degradation changes as Robust Neighbor Resources (RNR), with which the two homogenization projections re-express the input LR images and the initially inferred HR images successively. Both quantitative and qualitative results on public datasets demonstrate the effectiveness of the proposed algorithm against state-of-the-art methods.
ABSTRACT
An intelligent monitoring lubricant is essential for the development of smart machines, because unexpected and fatal failures of critical dynamic components in machines happen every day, threatening the life and health of humans. Inspired by triboelectric nanogenerators (TENGs) that work on water, we present a feasible way to prepare a self-powered triboelectric sensor for real-time monitoring of lubricating oils via the contact electrification process of oil-solid contact (O-S TENG). Typical intruding contaminants in pure base oils can be successfully monitored. The O-S TENG has very good sensitivity and can detect debris down to at least 1 mg mL-1 and water contaminants down to 0.01 wt %. Furthermore, real-time monitoring of formulated engine lubricating oil in a real engine oil tank is achieved. Our results show that electron transfer is possible from an oil to a solid surface during contact electrification. The electrical output characteristic depends on the screening effect of species such as wear debris, deposited carbon, and aging-induced organic molecules in oils. Previous work only qualitatively identified that the output ability of a liquid can be improved by leaving less liquid adsorbed on the TENG surface, but the adsorption mass and adsorption speed of the liquid and their consequences for the output performance were not studied. We quantitatively study the relationship between output ability and the adsorption behavior of lubricating oils by quartz crystal microbalance with dissipation monitoring (QCM-D) for liquid-solid contact interfaces. This study provides a real-time, online, self-powered strategy for intelligent diagnosis of lubricating oils.