Results 1 - 7 of 7
1.
Org Lett; 26(2): 519-524, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38190623

ABSTRACT

Herein, we describe a copper(I)-catalyzed dearomatization of benzofurans with 2-(chloromethyl)anilines that affords various tetrahydrobenzofuro[3,2-b]quinolines and 2-(quinolin-2-yl)phenols in good to excellent yields through radical addition and an intramolecular cyclization process. Mechanistic studies revealed that the 2-(chloromethyl)anilines serve as radical precursors. The method features a broad substrate scope, good functional-group tolerance, quinoline scaffold diversity, and radical-addition dearomatization of benzofurans.

2.
IEEE Trans Haptics; PP, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145540

ABSTRACT

Haptic temporal signal recognition plays an important supporting role in robot perception. This paper investigates how to improve classification performance on multiple types of haptic temporal signal datasets using a Transformer model structure. By analyzing the feature representation of haptic temporal signals, a Transformer-based two-tower model, called Touchformer, is proposed to extract temporal and spatial features separately and integrate them with a self-attention mechanism for classification. To cope with the small size of the available datasets, data augmentation is employed to stabilize training. The overall architecture of the model and the training and optimization procedures are adapted to improve recognition performance and robustness. Experimental comparisons on three publicly available datasets demonstrate that Touchformer significantly outperforms the benchmark model, indicating the effectiveness of our approach and providing a new solution for robot perception.
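
The abstract describes Touchformer only at a high level; a minimal PyTorch sketch of a two-tower Transformer classifier in that spirit (one encoder attending over time steps, one over sensor channels, fused by self-attention) is shown below. All dimensions, layer counts, and the fusion scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-tower Transformer for haptic temporal signal classification.
# Sizes and the fusion strategy are assumptions, not the Touchformer code.
import torch
import torch.nn as nn

class TwoTowerHapticClassifier(nn.Module):
    def __init__(self, n_sensors=16, seq_len=128, d_model=64, n_heads=4,
                 n_layers=2, n_classes=10):
        super().__init__()
        # Temporal tower: tokens are time steps.
        self.temporal_proj = nn.Linear(n_sensors, d_model)
        self.temporal_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Spatial tower: tokens are sensor channels.
        self.spatial_proj = nn.Linear(seq_len, d_model)
        self.spatial_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Fusion: self-attention over the concatenated token sequences.
        self.fusion = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                       # x: (batch, seq_len, n_sensors)
        t_tokens = self.temporal_enc(self.temporal_proj(x))                 # (B, T, D)
        s_tokens = self.spatial_enc(self.spatial_proj(x.transpose(1, 2)))   # (B, S, D)
        tokens = torch.cat([t_tokens, s_tokens], dim=1)                     # (B, T+S, D)
        fused, _ = self.fusion(tokens, tokens, tokens)
        return self.head(fused.mean(dim=1))     # logits: (B, n_classes)

# Example usage with a random batch of haptic sequences.
logits = TwoTowerHapticClassifier()(torch.randn(8, 128, 16))   # -> (8, 10)
```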

3.
Natl Sci Rev; 10(6): nwad115, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37292085

ABSTRACT

This paper presents a novel and efficient algorithm for Chinese historical document understanding, incorporating three key components: a multi-oriented text detector, a dual-path learning-based text recognizer, and a heuristic-based reading order predictor.
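
As an illustration of what the heuristic reading-order step could look like, the sketch below orders detected text boxes column by column from right to left, the conventional reading direction of classical Chinese documents, and top to bottom within each column. The grouping rule and threshold are assumptions for illustration, not the predictor proposed in the paper.

```python
# Hypothetical heuristic reading-order predictor for detected text boxes.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def reading_order(boxes: List[Box], column_gap: float = 20.0) -> List[int]:
    """Order boxes right-to-left by column, top-to-bottom within a column."""
    # Visit boxes by horizontal centre, rightmost first.
    idx = sorted(range(len(boxes)),
                 key=lambda i: -(boxes[i][0] + boxes[i][2]) / 2)
    columns: List[List[int]] = []
    for i in idx:
        cx = (boxes[i][0] + boxes[i][2]) / 2
        if columns:
            last = columns[-1][-1]
            last_cx = (boxes[last][0] + boxes[last][2]) / 2
            if abs(cx - last_cx) <= column_gap:   # same column as previous box
                columns[-1].append(i)
                continue
        columns.append([i])                       # start a new column
    ordered: List[int] = []
    for col in columns:
        ordered.extend(sorted(col, key=lambda i: boxes[i][1]))
    return ordered

# Example: one box on the right, two stacked boxes on the left.
print(reading_order([(10, 5, 30, 40), (60, 0, 80, 30), (12, 50, 32, 90)]))
# -> [1, 0, 2]: right column first, then the left column top-to-bottom
```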

4.
Int J Biol Macromol; 232: 123366, 2023 Mar 31.
Article in English | MEDLINE | ID: mdl-36693609

ABSTRACT

Polyhydroxyalkanoates (PHAs) are biodegradable plastics that have attracted increasing attention because they are biodegradable, biocompatible, and renewable. Exploiting unique microbes for PHA production is one of the most competitive approaches to meeting complex industrial demands and advancing next-generation industrial biotechnology. In this study, a rare actinomycete strain, A7-Y, was isolated from soil and identified as the first PHA producer of the Aquabacterium genus. The PHA produced by strain A7-Y was identified as poly(3-hydroxybutyrate) (PHB) on the basis of its structural characteristics, which are similar to those of commercial PHB. After optimization of the fermentation conditions, strain A7-Y produced 10.2 g/L of PHB in a 5 L fed-batch fermenter, corresponding to a PHB content of 54% of dry cell weight, which is superior to the reported actinomycete species. Furthermore, the phaCAB operon in strain A7-Y was found to be responsible for the efficient PHB production, which was verified in recombinant Escherichia coli. Our results indicate that strain A7-Y and its biosynthetic gene cluster are promising candidates for developing a microbial platform for PHB production.
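
For scale, a quick back-of-envelope calculation from the reported figures (10.2 g/L PHB at 54% of dry cell weight, assuming the full 5 L working volume) gives the implied biomass concentration and product per batch:

```python
# Back-of-envelope check of the reported fermentation figures (assumed 5 L working volume).
phb_titer_g_per_l = 10.2        # reported PHB concentration
phb_fraction_of_cdw = 0.54      # reported PHB content of dry cell weight
working_volume_l = 5.0          # fed-batch fermenter volume

dry_cell_weight = phb_titer_g_per_l / phb_fraction_of_cdw   # ~18.9 g/L biomass
phb_per_batch = phb_titer_g_per_l * working_volume_l        # ~51 g PHB per batch
print(f"{dry_cell_weight:.1f} g/L dry cell weight, {phb_per_batch:.0f} g PHB per batch")
```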


Subjects
Actinobacteria, Polyhydroxyalkanoates, Polyesters/chemistry, Actinomyces, Actinobacteria/genetics, Hydroxybutyrates
5.
IEEE Trans Pattern Anal Mach Intell; 44(11): 8048-8064, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34460364

ABSTRACT

End-to-end text spotting, which aims to integrate detection and recognition in a unified framework, has attracted increasing attention because it handles the two complementary tasks in a single, simple pipeline. It remains an open problem, especially when processing arbitrarily-shaped text instances. Previous methods can be roughly categorized into two groups, character-based and segmentation-based, which often require character-level annotations and/or complex post-processing due to their unstructured output. Here, we tackle end-to-end text spotting by presenting Adaptive Bezier Curve Network v2 (ABCNet v2). Our main contributions are four-fold: 1) For the first time, we adaptively fit arbitrarily-shaped text with a parameterized Bezier curve, which, compared with segmentation-based methods, provides not only structured output but also a controllable representation. 2) We design a novel BezierAlign layer for extracting accurate convolutional features of text instances of arbitrary shapes, significantly improving recognition precision over previous methods. 3) Unlike previous methods, which often suffer from complex post-processing and sensitive hyper-parameters, our ABCNet v2 maintains a simple pipeline with non-maximum suppression (NMS) as the only post-processing step. 4) As the performance of text recognition depends closely on feature alignment, ABCNet v2 further adopts a simple yet effective coordinate convolution to encode the position of the convolutional filters, which leads to a considerable improvement with negligible computational overhead. Comprehensive experiments on various bilingual (English and Chinese) benchmark datasets demonstrate that ABCNet v2 achieves state-of-the-art performance while maintaining very high efficiency. Moreover, as there is little work on quantization of text-spotting models, we quantize our models to improve the inference time of the proposed ABCNet v2, which can be valuable for real-time applications. Code and model are available at: https://git.io/AdelaiDet.
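
The core representational idea, modelling each long side of a text instance with a cubic Bezier curve so that arbitrarily-shaped text becomes a small fixed-size parameter vector, can be sketched as follows. The sampling code is illustrative only; it is not the ABCNet v2 implementation, and the BezierAlign layer itself is not reproduced here.

```python
# Illustrative cubic Bezier sampling for a curved text boundary (not ABCNet v2 code).
import numpy as np

def cubic_bezier(ctrl: np.ndarray, n: int = 16) -> np.ndarray:
    """Sample n points from a cubic Bezier curve given 4 control points, shape (4, 2)."""
    t = np.linspace(0.0, 1.0, n)[:, None]               # (n, 1)
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])   # (n, 2)

# A curved text instance described by two cubic Bezier curves (8 control points).
top = cubic_bezier(np.array([[0, 10], [30, 0], [60, 0], [90, 10]], float))
bottom = cubic_bezier(np.array([[0, 30], [30, 20], [60, 20], [90, 30]], float))
# Pairing top/bottom samples yields a regular grid over the instance from which
# aligned convolutional features can be interpolated (the role BezierAlign plays).
```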


Subjects
Algorithms, Benchmarking
6.
Article in English | MEDLINE | ID: mdl-32857697

ABSTRACT

Scene text removal has attracted increasing research interest owing to its valuable applications in privacy protection, camera-based virtual reality translation, and image editing. However, existing approaches fall short in real applications, mainly because they were evaluated on synthetic or unrepresentative datasets. To fill this gap and facilitate this research direction, this paper proposes a real-world dataset called SCUT-EnsText that consists of 3,562 diverse images selected from public scene text reading benchmarks, each scrupulously annotated to provide visually plausible erasure targets. With SCUT-EnsText, we design a novel GAN-based model termed EraseNet that can automatically remove text from natural images. The model is a two-stage network consisting of a coarse-erasure sub-network and a refinement sub-network. The refinement sub-network improves the feature representation and refines the coarse outputs to enhance removal performance. Additionally, EraseNet contains a segmentation head for text perception and a local-global SN-Patch-GAN with spectral normalization (SN) on both the generator and the discriminator to maintain training stability and the congruity of the erased regions. Extensive experiments are conducted on both the previous public dataset and the brand-new SCUT-EnsText. Our EraseNet significantly outperforms existing state-of-the-art methods on all metrics, producing remarkably higher-quality results. The dataset and code will be made available at https://github.com/HCIILAB/SCUT-EnsText.
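
A schematic sketch of the coarse-to-refine generator structure described above (a coarse-erasure sub-network, a refinement sub-network fed with the coarse result, and an auxiliary text-segmentation head) is given below. Channel counts and layer depths are illustrative assumptions, not the released EraseNet code, and the local-global SN-Patch-GAN discriminator is omitted.

```python
# Hypothetical coarse-to-refine text-removal generator (not the EraseNet release).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True))

class CoarseToRefineEraser(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: first-pass erasure of the input image.
        self.coarse = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 3, 3, padding=1))
        # Stage 2: refinement conditioned on the input and the coarse output.
        self.refine = nn.Sequential(conv_block(6, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 3, 3, padding=1))
        # Auxiliary head predicting a text mask to guide the erasure.
        self.seg_head = nn.Sequential(conv_block(3, 16),
                                      nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, img):
        coarse = self.coarse(img)
        refined = self.refine(torch.cat([img, coarse], dim=1))
        text_mask = torch.sigmoid(self.seg_head(img))
        return coarse, refined, text_mask

coarse, refined, mask = CoarseToRefineEraser()(torch.randn(1, 3, 256, 256))
```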

7.
Article in English | MEDLINE | ID: mdl-31331887

ABSTRACT

Model-free tracking is a widely accepted approach for tracking an arbitrary object in a video from a single frame annotation, with no further prior knowledge about the object of interest. Extending this problem to track multiple objects is challenging because: a) the tracker is not aware of the objects' type while trying to distinguish them from the background (detection task), and b) the tracker needs to distinguish one object from other, potentially similar objects (data association task) to generate stable trajectories. To track multiple arbitrary objects, most existing model-free tracking approaches track each target individually and update its appearance model independently. In this scenario they often fail to perform well, owing to confusion between the appearance of similar objects, sudden appearance changes, and occlusion. To tackle this problem, we propose to use both appearance and motion models, and to learn them jointly using graphical models and deep neural network features. We introduce an indicator variable to predict sudden appearance change and/or occlusion. When these occur, our model does not update the appearance model, which avoids mistakenly updating the appearance of the object of interest with background or an incorrect object, and instead relies on our motion model for tracking. Moreover, we consider the correlation among all targets and seek the joint optimal locations for all targets simultaneously as a graphical model inference problem. We learn the joint parameters of both the appearance model and the motion model in an online fashion under the LaRank framework. Experimental results show that our method achieves superior performance compared to competing methods.
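
The occlusion-aware update rule described above can be illustrated with a small sketch: an indicator derived from appearance similarity decides whether to update a target's appearance model or to freeze it and fall back on the motion model. The cosine-similarity threshold and the constant-velocity motion model are assumptions for illustration, not the paper's learned joint model.

```python
# Illustrative occlusion-aware update for one tracked target (not the paper's model).
import numpy as np

class Target:
    def __init__(self, box, appearance):
        self.box = np.asarray(box, float)                 # (x, y, w, h)
        self.velocity = np.zeros(2)                       # constant-velocity motion model
        self.appearance = np.asarray(appearance, float)   # appearance feature vector

    def appearance_score(self, feat):
        a, b = self.appearance, np.asarray(feat, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def update(self, box, feat, threshold=0.6, lr=0.2):
        box = np.asarray(box, float)
        occluded = self.appearance_score(feat) < threshold   # indicator variable
        if occluded:
            # Freeze the appearance model and rely on the motion model.
            self.box[:2] += self.velocity
        else:
            self.velocity = box[:2] - self.box[:2]
            self.box = box
            self.appearance = (1 - lr) * self.appearance + lr * np.asarray(feat, float)
        return occluded

t = Target([10, 10, 20, 40], np.ones(8))
print(t.update([12, 11, 20, 40], np.ones(8)))    # similar appearance -> False
print(t.update([60, 60, 20, 40], -np.ones(8)))   # mismatch -> True (treated as occluded)
```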
