Results 1 - 4 of 4
1.
IEEE Trans Image Process ; 33: 2226-2237, 2024.
Article in English | MEDLINE | ID: mdl-38470583

ABSTRACT

Cross-modal retrieval (e.g., querying with an image to obtain a semantically similar sentence, and vice versa) is an important but challenging task, as a heterogeneous gap and inconsistent distributions exist between modalities. The dominant approaches try to bridge this heterogeneity by capturing common representations of the heterogeneous data in a learned subspace that reflects semantic closeness. However, they give insufficient consideration to the fact that the learned latent representations are heavily entangled with semantic-unrelated features, which further compounds the challenges of cross-modal retrieval. To alleviate this difficulty, this work assumes that the data are jointly characterized by two independent features: semantic-shared and semantic-unrelated representations. The former captures the consistent semantics shared across modalities, while the latter reflects modality-specific characteristics unrelated to semantics, such as background, illumination, and other low-level information. This paper therefore aims to disentangle the shared semantics from the entangled features, so that the purer semantic representation can promote the closeness of paired data. Specifically, it designs a novel Semantics Disentangling approach for Cross-Modal Retrieval (termed SDCMR) that explicitly decouples the two features based on a variational auto-encoder. Reconstruction is then performed by exchanging the shared semantics between paired samples to enforce semantic consistency. Moreover, a dual adversarial mechanism disentangles the two independent features via a pushing-and-pulling strategy. Comprehensive experiments on four widely used datasets demonstrate the effectiveness and superiority of the proposed SDCMR, which sets a new performance bar against 15 state-of-the-art methods.
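The core idea of the exchanged-semantics reconstruction can be illustrated with a toy NumPy sketch. Everything below is hypothetical (random linear "encoders" and "decoders", made-up dimensions); it only shows the data flow: each modality is split into a shared-semantic part and a modality-specific part, the semantic parts are swapped between a paired image and text, and each modality is reconstructed from the other's semantics.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_txt, d_sem, d_unrel = 512, 300, 64, 32  # illustrative dimensions

# Toy linear "encoders": project each modality into a shared-semantic
# part and a modality-specific (semantic-unrelated) part.
W_img_sem = rng.normal(size=(d_img, d_sem))
W_img_unrel = rng.normal(size=(d_img, d_unrel))
W_txt_sem = rng.normal(size=(d_txt, d_sem))
W_txt_unrel = rng.normal(size=(d_txt, d_unrel))

def encode(x, W_sem, W_unrel):
    return x @ W_sem, x @ W_unrel

# Toy linear "decoders" rebuild a modality from (semantic, unrelated) parts.
D_img = rng.normal(size=(d_sem + d_unrel, d_img))
D_txt = rng.normal(size=(d_sem + d_unrel, d_txt))

def decode(sem, unrel, D):
    return np.concatenate([sem, unrel], axis=-1) @ D

img = rng.normal(size=(4, d_img))  # a batch of paired image features
txt = rng.normal(size=(4, d_txt))  # ...and the matching text features

s_i, u_i = encode(img, W_img_sem, W_img_unrel)
s_t, u_t = encode(txt, W_txt_sem, W_txt_unrel)

# Cross-reconstruction: swap the shared semantics within each pair but keep
# each modality's own unrelated part. Training would minimize these errors,
# forcing s_i and s_t to carry the same shared semantic content.
img_hat = decode(s_t, u_i, D_img)  # image rebuilt from text semantics
txt_hat = decode(s_i, u_t, D_txt)  # text rebuilt from image semantics

loss_rec = np.mean((img_hat - img) ** 2) + np.mean((txt_hat - txt) ** 2)
print(img_hat.shape, txt_hat.shape)
```

In the actual method the encoders/decoders are a trained VAE and the swap is combined with the dual adversarial objective; this sketch only captures the exchange step.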

2.
J Autism Dev Disord ; 53(3): 934-946, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35913654

ABSTRACT

This study segmented the time series of gaze behavior from 19 children with autism spectrum disorder (ASD) and 20 children with typical development during face-to-face conversation. A machine learning approach showed that behavior segments produced by these two groups could be classified with a peak accuracy of 74.15%. These segment-level results were then used to classify children with a threshold classifier. A maximum classification accuracy of 87.18% was achieved when a participant was labeled ASD if over 46% of the child's 7-s behavior segments were classified as ASD-like. Combining the behavior segmentation technique with the threshold classifier maximally preserves participants' data and promotes automatic screening for ASD.
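The threshold rule from the abstract is simple enough to state in code. The function below is a hypothetical helper (the name, labels, and example inputs are not from the paper); it only encodes the decision rule: a child is flagged as ASD if more than 46% of their 7-second behavior segments were classified as ASD-like by the segment-level model.

```python
def classify_child(segment_labels, threshold=0.46):
    """segment_labels: list of 0/1 flags, 1 = segment classified ASD-like."""
    if not segment_labels:
        raise ValueError("no segments for this child")
    asd_fraction = sum(segment_labels) / len(segment_labels)
    return "ASD" if asd_fraction > threshold else "TD"

# Example: 5 of 10 segments (50%) ASD-like -> above the 46% cutoff.
print(classify_child([1, 0, 1, 1, 0, 0, 1, 0, 1, 0]))  # → ASD
# Example: 2 of 10 segments (20%) ASD-like -> below the cutoff.
print(classify_child([0, 0, 1, 0, 0, 0, 0, 0, 1, 0]))  # → TD
```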


Subjects
Autism Spectrum Disorder , Autistic Disorder , Humans , Child , Autism Spectrum Disorder/diagnosis , Eye Movements , Machine Learning , Communication
3.
IEEE Trans Pattern Anal Mach Intell ; 44(10): 6534-6545, 2022 10.
Article in English | MEDLINE | ID: mdl-34125668

ABSTRACT

Cross-modal retrieval, which aims to match instances captured from different modalities, has recently attracted growing attention. The performance of cross-modal retrieval methods relies heavily on the capability of metric learning to mine and weight informative pairs. While various metric learning methods have been developed for unimodal retrieval tasks, cross-modal retrieval tasks have not been explored to the same extent. In this paper, we develop a universal weighting metric learning framework for cross-modal retrieval, which effectively samples informative pairs and assigns them weight values based on their similarity scores, so that different pairs receive different penalty strengths. Based on this framework, we introduce two types of polynomial loss for cross-modal retrieval: self-similarity polynomial loss and relative-similarity polynomial loss. The former provides a polynomial function that associates weight values with self-similarity scores; the latter defines a polynomial function that associates weight values with relative-similarity scores. Both losses can be freely applied to off-the-shelf methods to further improve their retrieval performance. Extensive experiments on two image-text retrieval datasets, three video-text retrieval datasets, and one fine-grained image retrieval dataset demonstrate that the proposed method achieves a noticeable boost in retrieval performance.
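The weighting idea can be sketched in a few lines: the weight assigned to a pair is a polynomial of its similarity score, so harder pairs (e.g. high-similarity negatives) receive larger penalties. The coefficients below are illustrative, not the paper's; this is a minimal sketch of polynomial weighting, not the full self-/relative-similarity loss.

```python
import numpy as np

def poly_weight(sim, coeffs):
    """Evaluate w(s) = a0 + a1*s + a2*s^2 + ... at similarity score s."""
    return sum(a * sim ** k for k, a in enumerate(coeffs))

sims = np.array([0.1, 0.5, 0.9])  # similarity scores of three negative pairs
coeffs = [0.0, 1.0, 2.0]          # w(s) = s + 2*s^2 (made-up coefficients)
weights = np.array([poly_weight(s, coeffs) for s in sims])

# Harder (more similar) negatives get larger weights, hence larger penalties.
print(weights)  # → [0.12 1.   2.52]
```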


Subjects
Algorithms , Image Processing, Computer-Assisted , Humans , Learning
4.
Zhongguo Wei Zhong Bing Ji Jiu Yi Xue ; 19(7): 425-7, 2007 Jul.
Article in Chinese | MEDLINE | ID: mdl-17631713

ABSTRACT

OBJECTIVE: To explore the influence of semi-hepatectomy on serum levels of interleukin-2 (IL-2), tumor necrosis factor (TNF), insulin-like growth factor (IGF), thyroxine (TT(3), TT(4)), and insulin (INS). METHODS: Sixty healthy male rabbits were randomized into two groups: a semi-hepatectomy group and a control group, the latter receiving sham operation. Serum levels of IL-2, TNF, IGF, TT(3), TT(4), and INS were determined 1 day before operation and 24 hours, 1 week, and 4 weeks after operation. RESULTS: In the semi-hepatectomy group, the IL-2 level was elevated significantly at 24 hours after operation (P<0.01), dropped below the preoperative level at 1 week, and then increased gradually back to the preoperative level by 4 weeks. TNF levels fell at 24 hours after operation (P<0.05), then rose, becoming significantly higher than in the control group at 4 weeks (P<0.01). TT(3) and TT(4) levels fell significantly at 24 hours after operation and recovered gradually from 1 week onward (both P<0.01), reaching preoperative levels by 4 weeks; the recovery of TT(3) was especially marked. INS and IGF levels rose significantly at 24 hours after operation, peaked at 1 week (both P<0.01), and returned to preoperative levels by 4 weeks. None of these mediators changed in the control group. CONCLUSION: Sick hypothyroid syndrome and hyperinsulinism appeared after semi-hepatectomy. The decrease in IL-2 level contributes to lowered immunity after operative trauma, and the elevation of TNF level contributes to hepatocyte apoptosis, necrosis, and inflammation. The elevation of IGF level contributes to regeneration of liver cells.


Subjects
Cytokines/blood , Hepatectomy , Insulin/blood , Thyroxine/blood , Animals , Insulin-Like Growth Factor I/metabolism , Interleukin-2/blood , Male , Rabbits , Tumor Necrosis Factor-alpha/blood