Results 1 - 4 of 4
1.
Micromachines (Basel); 15(4), 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38675359

ABSTRACT

In drilling operations, measuring vibration parameters is crucial for enhancing drilling efficiency and ensuring safety. However, conventional vibration measurement sensors significantly extend the drilling cycle because they depend on an external power source. We therefore propose a vibration-accumulation-type self-powered sensor to address these needs. By leveraging vibration accumulation and electromagnetic power generation to accelerate charging, the sensor's output performance is enhanced through a complementary charging mode. Sensing-performance experiments demonstrate that the sensor has a measurement range of 0 to 11 Hz, a linearity of 3.2%, and a sensitivity of 1.032, with a maximum average measurement error below 4%. Output-performance measurements indicate that the sensor unit and generator set deliver maximum output powers of 0.258 µW and 25.5 mW, respectively, enough to light eight LEDs simultaneously. When the sensor unit and power generation unit output together, the maximum output power of the sensor is likewise 25.5 mW. We also tested the sensor's output signal under high temperature and humidity, confirming that it remains functional in such environments. This sensor not only achieves self-powered sensing, addressing the power supply challenges faced by traditional downhole sensors, but also integrates energy accumulation with electromagnetic power generation to enhance its output performance. This enables the sensor to harness downhole vibration energy to power other micro-power devices, showing promising application prospects.

2.
IEEE Trans Cybern; 52(2): 1247-1257, 2022 Feb.
Article in English | MEDLINE | ID: mdl-32568717

ABSTRACT

Automatic image captioning performs the cross-modal conversion from image visual content to natural language text. Spanning computer vision (CV) and natural language processing (NLP), it has become one of the most challenging research issues in artificial intelligence. Based on deep neural networks, the neural image caption (NIC) model has achieved remarkable performance in image captioning, yet essential challenges remain, such as the deviation between the sentences generated by the model and the intrinsic content of the image, the low accuracy of scene description, and the monotony of generated sentences. In addition, most current datasets and methods for image captioning are in English. Given the syntactic and semantic differences between Chinese and English, specialized Chinese image caption generation methods are needed to accommodate them. To solve these problems, we design the NICVATP2L model via visual attention and topic modeling, in which the visual attention mechanism reduces the deviation and the topic model improves the accuracy and diversity of generated sentences. Specifically, in the encoding phase, a convolutional neural network (CNN) and a topic model extract visual and topic features of the input images, respectively. In the decoding phase, an attention mechanism is applied to the image visual features to obtain visual region features. Finally, the topic features and the visual region features are combined to guide a two-layer long short-term memory (LSTM) network that generates Chinese image captions. To validate our model, we conducted experiments on the Chinese AIC-ICC image dataset. The results show that our model automatically generates more informative and descriptive Chinese captions in a more natural way, and that it outperforms the existing NIC model.
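The decoding-phase attention step described above can be sketched as soft attention over region features: score each region against the decoder state, normalize with a softmax, and take the weighted sum as the context vector. This is a generic sketch assuming dot-product scoring; the paper's exact scoring function and feature dimensions are not specified here:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(region_features, query):
    """Soft attention: weight each region feature by its (assumed dot-product)
    similarity to the decoder query, and return the weighted context vector."""
    scores = [sum(r * q for r, q in zip(region, query)) for region in region_features]
    weights = softmax(scores)
    dim = len(region_features[0])
    return [sum(w * region[d] for w, region in zip(weights, region_features))
            for d in range(dim)]
```

The resulting context vector would then be concatenated with the topic features to condition each step of the two-layer LSTM decoder.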


Subjects
Language, Natural Language Processing, China, Neural Networks (Computer), Semantics
3.
Article in English | MEDLINE | ID: mdl-33006929

ABSTRACT

Video person re-identification (video Re-ID) plays an important role in surveillance video analysis and has gained increasing attention recently. However, existing supervised methods require vast numbers of identities labeled across cameras, resulting in poor scalability in practical applications. Although some unsupervised approaches have been explored for video Re-ID, they are still in their infancy due to the complexity of learning discriminative features on unlabeled data. In this paper, we focus on one-shot video Re-ID and present an iterative local-global collaboration learning approach for learning robust and discriminative person representations. Specifically, it jointly considers global video information and local frame-sequence information to better capture the diverse appearance of a person for feature learning and pseudo-label estimation. Moreover, because the cross-entropy loss may induce the model to focus on identity-irrelevant factors, we introduce the variational information bottleneck as a regularization term during training. It helps filter undesirable information and characterize subtle differences among persons. Since pseudo-label accuracy cannot always be guaranteed, we adopt a dynamic selection strategy that selects the pseudo-labeled data with higher confidence to update the training set and re-train the model. During training, our method iteratively executes feature learning, pseudo-label estimation, and dynamic sample selection until all the unlabeled data have been seen. Extensive experiments on two public datasets, DukeMTMC-VideoReID and MARS, verify the superiority of our model over several cutting-edge competitors.
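The dynamic selection step above amounts to keeping only the most confidently pseudo-labeled samples each iteration. A minimal sketch, assuming confidence scores are already available and the keep ratio is a tunable schedule parameter (both the function name and the ratio-based rule are illustrative, not the paper's exact criterion):

```python
def select_confident(samples, confidences, keep_ratio):
    """Keep the top `keep_ratio` fraction of pseudo-labeled samples,
    ranked by confidence, to grow the training set for the next round."""
    k = max(1, int(len(samples) * keep_ratio))
    order = sorted(range(len(samples)),
                   key=lambda i: confidences[i], reverse=True)
    return [samples[i] for i in order[:k]]
```

Raising the keep ratio over iterations would gradually admit all unlabeled data, matching the loop of feature learning, pseudo-label estimation, and selection described above.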

4.
IEEE Trans Cybern; 47(11): 3680-3691, 2017 Nov.
Article in English | MEDLINE | ID: mdl-27337733

ABSTRACT

Community-based health services have risen as important online resources for resolving users' health concerns. Despite their value, the gap between what health seekers with specific needs ask for and what busy physicians with specific attitudes and expertise can offer is widening. To bridge this gap, we present a question routing scheme that connects health seekers to the right physicians. In this scheme, we first bridge the expertise matching gap via a probabilistic fusion of the physician-expertise distribution and the expertise-question distribution; the distributions are calculated by hypergraph-based learning and kernel density estimation. We then measure physicians' attitudes toward answering general questions from the perspectives of activity, responsibility, reputation, and willingness. Finally, we adaptively fuse the expertise modeling and attitude modeling according to the personal needs of the health seekers. Extensive experiments on a real-world dataset validate the proposed scheme.
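The probabilistic fusion of the two distributions can be read as marginalizing over expertise topics: a physician's score for a question is the sum over topics of P(physician | topic) x P(topic | question). A minimal sketch under that reading; the dictionary layout and names are illustrative assumptions, and the paper estimates the actual distributions with hypergraph-based learning and kernel density estimation:

```python
def route_scores(phys_given_expertise, expertise_given_question):
    """Fuse P(physician | expertise) with P(expertise | question):
    score(d) = sum over topics e of P(d | e) * P(e | q)."""
    scores = {}
    for topic, p_topic in expertise_given_question.items():
        for doc, p_doc in phys_given_expertise.get(topic, {}).items():
            scores[doc] = scores.get(doc, 0.0) + p_doc * p_topic
    return scores
```

Ranking physicians by these scores, then reweighting by the attitude measures (activity, responsibility, reputation, willingness), would yield the final routing decision.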


Subjects
Community Health Services, Health Services Needs and Demand/statistics & numerical data, Medical Informatics/methods, Physicians/statistics & numerical data, Algorithms, Attitude of Health Personnel, Humans, Machine Learning, Patient Acceptance of Health Care