Results 1 - 5 of 5
1.
J Cancer Res Clin Oncol ; 150(2): 101, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393390

ABSTRACT

PURPOSE: CSMed® wound dressing, a dressing containing various herb extracts, was tested for its therapeutic effect on radiation dermatitis in breast and head-and-neck cancer patients. METHODS: This study included 20 breast cancer patients and 10 head-and-neck cancer patients. Half of the irradiated area was covered with CSMed® and the other half received routine treatment. The severity of radiation dermatitis was evaluated with the Radiation Therapy Oncology Group (RTOG) grade throughout treatment and the follow-up period. The RTOG grades of the dressed and undressed areas were compared to assess the therapeutic effect of the CSMed® dressing. RESULTS: The CSMed®-dressed area had significantly lower RTOG scores than the undressed area at weeks 3-7 and at the final record during treatment, and at weeks 1-3 during follow-up. CONCLUSIONS: These findings indicate that CSMed® can delay the onset, reduce the severity, and enhance the healing of radiation dermatitis. CSMed® can be used for the prophylaxis and management of radiation dermatitis.


Subject(s)
Breast Neoplasms , Head and Neck Neoplasms , Radiodermatitis , Female , Humans , Bandages , Breast Neoplasms/radiotherapy , Head and Neck Neoplasms/radiotherapy , Hospitals , Prospective Studies , Radiodermatitis/etiology , Radiodermatitis/prevention & control
2.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6961-6975, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34288878

ABSTRACT

The goal of supervised hashing is to construct hash mappings from collections of images and semantic annotations such that semantically relevant images are embedded nearby in the learned binary hash representations. Existing deep supervised hashing approaches that employ classification frameworks with a classification training objective for learning hash codes often encode class labels as one-hot or multi-hot vectors. We argue that such label encodings do not well reflect the semantic relations among classes; instead, effective class label representations ought to be learned from data, which could provide more discriminative signals for hashing. In this article, we introduce Adaptive Labeling Deep Hashing (AdaLabelHash), which learns binary hash codes based on learnable class label representations. We treat the class labels as vertices of a K-dimensional hypercube; these vertices are trainable variables adapted together with the network weights during backward training. The label representations, referred to as codewords, are the target outputs of hash mapping learning. In the label space, semantically relevant images are then expressed by codewords that are nearby in Hamming distance, yielding compact and discriminative binary hash representations. Furthermore, we find that the learned label representations reflect semantic relations well. Our approach is easy to realize and can simultaneously construct both the label representations and the compact binary embeddings. Quantitative and qualitative evaluations on several popular benchmarks validate the superiority of AdaLabelHash in learning effective binary codes for image search.
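The codeword idea in the abstract above can be sketched in plain NumPy. This is a hypothetical illustration, not the authors' implementation: class codewords live in real K-dimensional space, are binarized to hypercube vertices in {-1, +1}^K, and retrieval compares codes by Hamming distance. All names and sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8          # hash code length (hypercube dimension); illustrative
n_classes = 4  # hypothetical number of classes

# Trainable codewords: one real-valued vector per class. In the paper's
# setting these are adapted with the network weights during training;
# here a random initialization stands in for the learned values.
codewords = rng.standard_normal((n_classes, K))

def binarize(v):
    """Map a real vector to a hypercube vertex in {-1, +1}^K."""
    return np.where(v >= 0, 1, -1)

def hamming(a, b):
    """Hamming distance between two {-1, +1} codes."""
    return int(np.sum(a != b))

codes = binarize(codewords)  # binary class label representations

# An image embedding is hashed the same way and assigned to the class
# whose codeword is nearest in Hamming distance.
embedding = rng.standard_normal(K)
img_code = binarize(embedding)
nearest = min(range(n_classes), key=lambda c: hamming(img_code, codes[c]))
```

In the actual method the codewords are also the regression targets of the hash network, so the binarized network output for an image of class c converges toward `codes[c]`.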

3.
IEEE Trans Neural Netw Learn Syst ; 31(9): 3145-3158, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31545744

ABSTRACT

Learning effective representations that exhibit semantic content is crucial to image retrieval applications. Recent advances in deep learning have yielded significant performance improvements on a number of visual recognition tasks. Studies have also revealed that visual features extracted from a deep network trained for classification on a large-scale image data set (e.g., ImageNet) are generic and perform well on new recognition tasks in different domains. Nevertheless, when applied to image retrieval, such deep representations do not attain performance as impressive as that achieved for classification. This is mainly because the deep features are optimized for classification rather than for the desired retrieval task. We introduce the cross-batch reference (CBR), a novel training mechanism that enables the optimization of deep networks with a retrieval criterion. With the CBR, the networks leverage both the samples in a single minibatch and the samples in other minibatches for weight updates, enhancing stochastic gradient descent (SGD) training by enabling interbatch information passing. This interbatch communication is implemented as a cross-batch retrieval process in which the networks are trained to maximize the mean average precision (mAP), a popular performance measure in retrieval. Maximizing the cross-batch mAP is equivalent to pulling together the samples relevant to each other in the feature space and separating the samples irrelevant to each other. The learned features can discriminate between relevant and irrelevant samples and thus are suitable for retrieval. To circumvent the discrete, nondifferentiable mAP maximization, we derive an approximate, differentiable lower bound that can be easily optimized in deep networks. Furthermore, the mAP loss can be used alone or together with a classification loss. Experiments on several data sets demonstrate that our CBR learning provides favorable performance, validating its effectiveness.
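The core difficulty named above (mAP depends on a hard, nondifferentiable ranking) can be illustrated with a minimal NumPy sketch. This is an assumption-laden stand-in, not the paper's derived lower bound: exact AP is computed from a hard sort, and a common smoothing trick replaces the step comparison between scores with a sigmoid so that ranks, and hence an AP surrogate, become differentiable.

```python
import numpy as np

def average_precision(scores, relevant):
    """Exact AP: rank by descending score, then average the precision
    at each relevant item's rank (non-differentiable in the scores)."""
    order = np.argsort(-scores)
    rel = np.asarray(relevant)[order]
    hits = np.cumsum(rel)
    ranks = np.arange(1, len(scores) + 1)
    return float((hits[rel == 1] / ranks[rel == 1]).mean())

def soft_ranks(scores, tau=0.05):
    """Differentiable rank surrogate: the hard count of items scoring
    higher than item i is replaced by a sum of sigmoids of score
    differences, so gradients can flow through the ranking."""
    diff = scores[None, :] - scores[:, None]   # diff[i, j] = s_j - s_i
    sig = 1.0 / (1.0 + np.exp(-diff / tau))
    return 1.0 + sig.sum(axis=1) - 0.5         # remove the j == i term

scores = np.array([0.9, 0.7, 0.2, 0.1])
relevant = np.array([1, 0, 1, 0])
ap = average_precision(scores, relevant)       # 0.5 * (1/1 + 2/3)
```

As `tau` shrinks, `soft_ranks` approaches the hard ranks; plugging the soft ranks into the precision terms gives one differentiable AP surrogate that SGD can optimize.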

4.
IEEE Trans Pattern Anal Mach Intell ; 40(2): 437-451, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28207384

ABSTRACT

This paper presents a simple yet effective supervised deep hashing approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes, each of which is either on or off, and that classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash code properties. With this design, SSDH has the nice characteristic that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a pointwise manner, and thus is scalable to large-scale datasets. SSDH is simple and can be realized by slightly enhancing an existing deep architecture for classification; yet it is effective and outperforms other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy, while the classification performance is not sacrificed.
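The latent-layer design described above can be sketched as a toy NumPy forward pass. Names, sizes, and the penalty forms are illustrative assumptions rather than the paper's exact objective: a sigmoid layer between the features and the classifier produces activations in (0, 1), thresholding those activations yields the hash code, and auxiliary penalties push activations toward binary values and keep bits balanced.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
d_feat, K, n_classes = 16, 12, 5   # hypothetical sizes

W_h = rng.standard_normal((d_feat, K)) * 0.1     # latent hash layer
W_c = rng.standard_normal((K, n_classes)) * 0.1  # classifier on top of it

def forward(x):
    h = sigmoid(x @ W_h)             # latent activations in (0, 1)
    logits = h @ W_c                 # classification driven by the codes
    code = (h >= 0.5).astype(int)    # binary hash code at retrieval time
    return h, logits, code

def hash_penalties(h):
    """Sketched auxiliary terms: minimize binary_push to drive each
    activation toward 0 or 1; minimize balance to keep each bit's
    batch mean near 0.5 (roughly half on, half off)."""
    binary_push = -np.mean((h - 0.5) ** 2)
    balance = np.mean((h.mean(axis=0) - 0.5) ** 2)
    return binary_push, balance
```

Because the classifier reads only the latent activations, minimizing classification error forces the codes to preserve label semantics, which is the sense in which classification and retrieval share one model.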

5.
Cogn Sci ; 34(8): 1574-93, 2010 Nov.
Article in English | MEDLINE | ID: mdl-21564262

ABSTRACT

Judging similarities among objects, events, and experiences is one of the most basic cognitive abilities, allowing us to make predictions and generalizations. The main assumption in similarity judgment is that people selectively attend to salient features of stimuli and judge their similarities on the basis of the common and distinct features of the stimuli. However, it is unclear how people select features from stimuli and how they weigh features. Here, we present a computational method that helps address these questions. Our procedure combines image-processing techniques with a machine-learning algorithm and assesses feature weights that can account for both similarity and categorization judgment data. Our analysis suggests that a small number of local features are particularly important to explain our behavioral data.
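The weight-assessment step can be illustrated with a minimal least-squares sketch on entirely hypothetical data (the study's actual pipeline combines image-processing features with a machine-learning algorithm): per-feature match scores for stimulus pairs are regressed onto similarity ratings, and the largest fitted weights flag the features that matter most.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_features = 40, 6   # hypothetical sizes

# Each row: per-feature match scores for one stimulus pair (standing in
# for image-processing descriptors); target: human similarity ratings.
X = rng.random((n_pairs, n_features))
true_w = np.array([2.0, 0.1, 0.1, 1.5, 0.1, 0.1])  # a few features dominate
y = X @ true_w + rng.normal(0, 0.01, n_pairs)      # simulated judgments

# Least-squares fit recovers feature weights from the judgment data.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
important = np.argsort(-w)[:2]   # features with the largest weights
```

On this simulated data the fit recovers the two dominant features, mirroring the finding that a small number of local features carry most of the explanatory weight.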
