Results 1 - 8 of 8
1.
Nature ; 606(7912): 137-145, 2022 06.
Article in English | MEDLINE | ID: mdl-35614217

ABSTRACT

Nerve injury leads to chronic pain and exaggerated sensitivity to gentle touch (allodynia) as well as a loss of sensation in the areas in which injured and non-injured nerves come together [1-3]. The mechanisms that disambiguate these mixed and paradoxical symptoms are unknown. Here we longitudinally and non-invasively imaged genetically labelled populations of fibres that sense noxious stimuli (nociceptors) and gentle touch (low-threshold afferents) peripherally in the skin for longer than 10 months after nerve injury, while simultaneously tracking pain-related behaviour in the same mice. Fully denervated areas of skin initially lost sensation, gradually recovered normal sensitivity and developed marked allodynia and aversion to gentle touch several months after injury. This reinnervation-induced neuropathic pain involved nociceptors that sprouted into denervated territories precisely reproducing the initial pattern of innervation, were guided by blood vessels and showed irregular terminal connectivity in the skin and lowered activation thresholds mimicking low-threshold afferents. By contrast, low-threshold afferents, which normally mediate touch sensation as well as allodynia in intact nerve territories after injury [4-7], did not reinnervate, leading to an aberrant innervation of tactile end organs such as Meissner corpuscles with nociceptors alone. Genetic ablation of nociceptors fully abrogated reinnervation allodynia. Our results thus reveal the emergence of a form of chronic neuropathic pain that is driven by structural plasticity, abnormal terminal connectivity and malfunction of nociceptors during reinnervation, and provide a mechanistic framework for the paradoxical sensory manifestations that are observed clinically and can impose a heavy burden on patients.


Subject(s)
Hyperalgesia; Neuralgia; Nociceptors; Skin; Animals; Chronic Pain/physiopathology; Hyperalgesia/physiopathology; Mechanoreceptors/pathology; Mice; Neuralgia/physiopathology; Nociceptors/pathology; Skin/innervation; Skin/physiopathology
2.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 416-427, 2022 01.
Article in English | MEDLINE | ID: mdl-32750817

ABSTRACT

Learning the similarity between images constitutes the foundation for numerous vision tasks. The common paradigm is discriminative metric learning, which seeks an embedding that separates different training classes. The main challenge, however, is to learn a metric that not only generalizes from training samples to novel, but related, test samples, but also transfers to different object classes. So what complementary information does the discriminative paradigm miss? Besides finding characteristics that separate classes, we also need characteristics that are likely to occur in novel categories, which is indicated when they are shared across training classes. This work investigates how to learn such characteristics without the need for extra annotations or training data. Because we formulate our approach as a novel triplet sampling strategy, it can easily be applied on top of recent ranking loss frameworks. Experiments show that, independent of the underlying network architecture and the specific ranking loss, our approach significantly improves performance in deep metric learning, leading to new state-of-the-art results on various standard benchmark datasets.


Subject(s)
Algorithms; Deep Learning; Benchmarking
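As a rough illustration of how a triplet sampling strategy plugs into a ranking loss, the sketch below draws anchor-positive pairs from one class and negatives from another, and scores them with a triplet margin loss. The uniform sampling and the `margin` value are placeholder assumptions; the paper's actual strategy biases the choice toward characteristics shared across training classes.

```python
import random

def triplet_margin_loss(a, p, n, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative distance."""
    d_ap = sum((x - y) ** 2 for x, y in zip(a, p))
    d_an = sum((x - y) ** 2 for x, y in zip(a, n))
    return max(0.0, d_ap - d_an + margin)

def sample_triplets(labels, num_triplets, rng=None):
    """Uniform (anchor, positive, negative) index sampling; the paper's
    strategy would bias this choice instead of sampling uniformly."""
    rng = rng or random.Random(0)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    triplets = []
    while len(triplets) < num_triplets:
        y = rng.choice(sorted(by_class))
        if len(by_class[y]) < 2:
            continue  # a class needs two samples to form an anchor-positive pair
        a, p = rng.sample(by_class[y], 2)
        n = rng.choice([i for i, l in enumerate(labels) if l != y])
        triplets.append((a, p, n))
    return triplets
```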
3.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 8306-8320, 2022 11.
Article in English | MEDLINE | ID: mdl-34529564

ABSTRACT

Deep metric learning (DML) is a cornerstone of many computer vision applications. It aims at learning a mapping from the input domain to an embedding space, where semantically similar objects are located nearby and dissimilar objects far from one another. The target similarity on the training data is defined by the user in the form of ground-truth class labels. However, while the embedding space learns to mimic the user-provided similarity on the training data, it should also generalize to novel categories not seen during training. Besides the user-provided ground-truth training labels, many additional visual factors (such as viewpoint changes or shape peculiarities) exist and imply different notions of similarity between objects, affecting generalization to images unseen during training. However, existing approaches usually learn a single embedding space directly on all available training data; they struggle to encode all the different types of relationships and do not generalize well. We propose to build a more expressive representation by jointly splitting the embedding space and the data hierarchically into smaller sub-parts. We successively focus on smaller subsets of the training data, reducing its variance and learning a different embedding subspace for each data subset. Moreover, the subspaces are learned jointly to cover not only the intricacies but also the breadth of the data. Only then, in the conquering stage, do we build the final embedding from the subspaces. The proposed algorithm acts as a transparent wrapper that can be placed around arbitrary existing DML methods. Our approach significantly improves upon the state of the art on image retrieval, clustering, and re-identification tasks evaluated on the CUB200-2011, CARS196, Stanford Online Products, In-shop Clothes, and PKU VehicleID datasets.


Subject(s)
Algorithms; Cluster Analysis
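A minimal sketch of the divide-and-conquer idea, assuming the data has already been partitioned into subsets: each sub-learner owns a slice of the embedding dimensions, and the conquering stage concatenates the per-subspace embeddings into the final one. The even dimension split and the list-of-lists representation are illustrative simplifications, not the paper's implementation.

```python
def split_embedding_dims(total_dim, num_learners):
    """Partition the embedding dimensions (almost) evenly among sub-learners."""
    base, extra = divmod(total_dim, num_learners)
    slices, start = [], 0
    for k in range(num_learners):
        size = base + (1 if k < extra else 0)
        slices.append((start, start + size))
        start += size
    return slices

def conquer(per_learner):
    """Concatenate per-subspace embeddings into the final embedding.
    per_learner[k][i] is sub-learner k's embedding of sample i."""
    num_samples = len(per_learner[0])
    return [
        [v for learner in per_learner for v in learner[i]]
        for i in range(num_samples)
    ]
```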
4.
PLoS One ; 16(11): e0259718, 2021.
Article in English | MEDLINE | ID: mdl-34818376

ABSTRACT

Finding objects and motifs across artworks is of great importance for art history, as it helps to understand individual works and to analyze relations between them. The advent of digitization has produced extensive digital art collections with many research opportunities. However, manual approaches are inadequate for this amount of data, and appropriate computer-based methods are required to analyze it. This article presents a visual search algorithm and a user interface to help art historians find objects and motifs in extensive datasets. Artistic image collections are subject to significant domain shifts induced by large variations in styles, artistic media, and materials. This poses new challenges to most computer vision models, which are trained on photographs. To alleviate this problem, we introduce a multi-style feature aggregation that projects images into the same distribution, leading to more accurate and style-invariant search results. Our retrieval system is based on a voting procedure combined with fast nearest-neighbor search and enables finding and localizing motifs within an extensive image collection in seconds. The presented approach significantly improves upon the state of the art in terms of accuracy and search time on various datasets and applies to large and inhomogeneous collections. In addition to the search algorithm, we introduce a user interface that allows art historians to apply our algorithm in practice. The interface enables users to search for single regions or for multiple regions with different connection types, and includes an interactive feedback system to improve retrieval results further. With our methodological contribution and easy-to-use interface, this work marks further progress towards a computer-based analysis of visual art.


Subject(s)
Algorithms; Art; Cluster Analysis
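The voting-based retrieval step can be sketched as follows: each local descriptor of the query region votes for the images owning its nearest neighbours, and images are ranked by accumulated votes. The exhaustive sort below stands in for the fast (approximate) nearest-neighbor index the paper uses, and the descriptor format is a hypothetical placeholder.

```python
import math
from collections import Counter

def l2(a, b):
    """Euclidean distance between two descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_descriptors, index, k=3):
    """Rank images by votes from the query's local descriptors.
    index is a list of (image_id, descriptor) pairs; each query
    descriptor votes for the images owning its k nearest neighbours."""
    votes = Counter()
    for q in query_descriptors:
        for image_id, _ in sorted(index, key=lambda e: l2(q, e[1]))[:k]:
            votes[image_id] += 1
    return [image_id for image_id, _ in votes.most_common()]
```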
5.
PLoS One ; 15(12): e0243039, 2020.
Article in English | MEDLINE | ID: mdl-33326435

ABSTRACT

The cuneiform script provides a glimpse into our ancient history. However, reading age-old clay tablets is time-consuming and requires years of training. To simplify this process, we propose a deep-learning-based sign detector that locates and classifies cuneiform signs in images of clay tablets. Deep learning requires large amounts of training data in the form of bounding boxes around cuneiform signs, which are not readily available and are costly to obtain for cuneiform script. To tackle this problem, we make use of existing transliterations, a sign-by-sign representation of the tablet content in Latin script. Since these do not provide sign localization, we propose a weakly supervised approach: we align tablet images with their corresponding transliterations to localize the transliterated signs in the tablet image, before using these localized signs in place of annotations to re-train the sign detector. A better sign detector in turn boosts the quality of the alignments. We combine these steps in an iterative process that enables training a cuneiform sign detector from transliterations only. While our method works in a weakly supervised manner, a small number of annotations further boosts the performance of the cuneiform sign detector, which we evaluate on a large collection of clay tablets from the Neo-Assyrian period. To enable experts to apply the sign detector directly in their study of cuneiform texts, we additionally provide a web application for the analysis of clay tablets with a trained cuneiform sign detector.


Subject(s)
Language/history; Reading; Deep Learning; History, Ancient; Humans; Image Processing, Computer-Assisted/methods; Middle East; Neural Networks, Computer
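One way to sketch the alignment step, assuming the detector's outputs can be read in the same order as the transliteration: a longest-common-subsequence alignment between the transliterated sign sequence and the detected (label, box) pairs yields localized pseudo-annotations for re-training. This is a simplified stand-in for the paper's alignment procedure, not its actual algorithm.

```python
def align_pseudo_labels(signs, detections):
    """LCS alignment of the transliterated sign sequence with the
    detector's (label, box) outputs; matched pairs become localized
    pseudo-annotations for re-training the detector."""
    n, m = len(signs), len(detections)
    # dp[i][j] = length of the best alignment of signs[:i] and detections[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if signs[i] == detections[j][0]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack, collecting matched (sign, box) pairs.
    matched, i, j = [], n, m
    while i > 0 and j > 0:
        if signs[i - 1] == detections[j - 1][0]:
            matched.append((signs[i - 1], detections[j - 1][1]))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return matched[::-1]
```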
6.
IEEE Trans Pattern Anal Mach Intell ; 37(6): 1134-47, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26357338

ABSTRACT

The high complexity of multi-scale, category-level object detection in cluttered scenes is efficiently handled by Hough voting methods. However, the main shortcoming of the approach is that mutually dependent local observations are independently casting their votes for intrinsically global object properties such as object scale. Object hypotheses are then assumed to be a mere sum of their part votes. Popular representation schemes are, however, based on a dense sampling of semi-local image features, which are consequently mutually dependent. We take advantage of part dependencies and incorporate them into probabilistic Hough voting by deriving an objective function that connects three intimately related problems: i) grouping mutually dependent parts, ii) solving the correspondence problem conjointly for dependent parts, and iii) finding concerted object hypotheses using extended groups rather than based on local observations alone. Early commitments are avoided by not restricting parts to only a single vote for a locally best correspondence and we learn a weighting of parts during training to reflect their differing relevance for an object. Experiments successfully demonstrate the benefit of incorporating part dependencies through grouping into Hough voting. The joint optimization of groupings, correspondences, and votes not only improves the detection accuracy over standard Hough voting and a sliding window baseline, but it also reduces the computational complexity by significantly decreasing the number of candidate hypotheses.
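A bare-bones version of independent Hough voting, the baseline the paper improves upon by letting grouped, mutually dependent parts vote jointly: each part casts votes for object-centre hypotheses through the offsets associated with its appearance, and accumulator peaks become detections. The integer grid and the offset table are illustrative assumptions.

```python
from collections import Counter

def hough_detect(parts, offsets):
    """Independent Hough voting: each part (x, y, appearance) votes for
    object-centre hypotheses via the offsets learned for its appearance;
    accumulator peaks are returned as ranked detections."""
    accumulator = Counter()
    for x, y, appearance in parts:
        for dx, dy in offsets.get(appearance, []):
            accumulator[(x + dx, y + dy)] += 1
    return accumulator.most_common()
```

Note that each part votes independently here, which is exactly the shortcoming the abstract describes: grouped parts agreeing on a hypothesis carry no more weight than coincidental agreement of unrelated ones.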

7.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 154-61, 2014.
Article in English | MEDLINE | ID: mdl-25485374

ABSTRACT

In this work we propose a novel framework for generic event monitoring in live cell culture videos, built on the assumption that unpredictable observations should correspond to biological events. We use a small set of event-free data to train a multi-output multi-kernel Gaussian process model that operates as an event predictor by performing autoregression on a bank of heterogeneous features extracted from consecutive frames of a video sequence. We show that the prediction error of this model can be used as a probability measure of the presence of relevant events, enabling users to perform further analysis or monitoring of large-scale non-annotated data. We validate our approach on two phase-contrast sequence datasets containing mitosis and apoptosis events: a new private dataset of human bone cancer (osteosarcoma) cells and a benchmark dataset of stem cells.


Subject(s)
Cell Cycle; Cell Tracking/methods; Microscopy, Phase-Contrast/methods; Osteosarcoma/pathology; Pattern Recognition, Automated/methods; Stem Cells/cytology; Subtraction Technique; Algorithms; Cells, Cultured; Computer Simulation; Data Interpretation, Statistical; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
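The prediction-error idea can be sketched with a trivial autoregressive stand-in for the multi-output multi-kernel Gaussian process: score each frame by how badly the previous frame predicts it, and flag frames whose score exceeds a threshold calibrated on event-free data. The last-value predictor and the mean-plus-z-sigma threshold are assumptions for illustration only.

```python
def event_scores(features):
    """Per-frame prediction error of a last-value AR(1) predictor
    (a stand-in for the Gaussian process regressor of the paper)."""
    scores = []
    for t in range(1, len(features)):
        err = sum((a - b) ** 2 for a, b in zip(features[t - 1], features[t]))
        scores.append(err ** 0.5)
    return scores

def detect_events(scores, calibration_scores, z=3.0):
    """Flag frames whose score exceeds mean + z * std of the scores
    measured on event-free calibration data."""
    mu = sum(calibration_scores) / len(calibration_scores)
    var = sum((s - mu) ** 2 for s in calibration_scores) / len(calibration_scores)
    threshold = mu + z * var ** 0.5
    return [t for t, s in enumerate(scores, start=1) if s > threshold]
```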
8.
IEEE Trans Pattern Anal Mach Intell ; 32(3): 501-16, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20075474

ABSTRACT

Real-world scene understanding requires recognizing object categories in novel visual scenes. This paper describes a composition system that automatically learns structured, hierarchical object representations in an unsupervised manner without requiring manual segmentation or manual object localization. A central concept for learning object models in the challenging, general case of unconstrained scenes, large intraclass variations, large numbers of categories, and lacking supervision information is to exploit the compositional nature of our (visual) world. The compositional nature of visual objects significantly limits their representation complexity and renders learning of structured object models statistically and computationally tractable. We propose a robust descriptor for local image parts and show how characteristic compositions of parts can be learned that are based on an unspecific part vocabulary shared between all categories. Moreover, a Bayesian network is presented that comprises all the compositional constituents together with scene context and object shape. Object recognition is then formulated as a statistical inference problem in this probabilistic model.


Subject(s)
Artificial Intelligence; Pattern Recognition, Automated/methods; Visual Perception/physiology; Algorithms; Bayes Theorem; Cluster Analysis; Humans; Models, Statistical