Results 1 - 4 of 4
1.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 6659-6673, 2023 06.
Article in English | MEDLINE | ID: mdl-33566759

ABSTRACT

The lack of large-scale annotated real datasets makes transfer learning a necessity for video activity understanding. We aim to develop an effective method for few-shot transfer learning for first-person action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain, which provides primitive action labels, to a different target domain using only a handful of examples. The visual cues we employ include object-object interactions, hand grasps, and motion within regions that are a function of hand locations. We use a meta-learning framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables the transfer of action classification models across public datasets captured with diverse scene and action configurations. We present comparative results of our transfer learning methodology and report superior results over state-of-the-art action classification approaches for both inter-class and inter-dataset transfer.
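The few-shot setting above means classifying novel actions from only a handful of labeled examples. As a hypothetical illustration of that mechanism (a nearest-prototype classifier, not the paper's actual meta-learning model; function and variable names are assumptions), each class is summarized by the mean of its few support embeddings and queries are assigned to the closest prototype:

```python
import numpy as np

def classify_few_shot(support, support_labels, queries):
    """Nearest-prototype few-shot classification: each class prototype is
    the mean of its support embeddings; each query is assigned the label
    of the nearest prototype (Euclidean distance)."""
    classes = sorted(set(support_labels))
    labels = np.array(support_labels)
    # One prototype per class: mean of that class's support embeddings.
    prototypes = np.stack([support[labels == c].mean(axis=0) for c in classes])
    # Pairwise distances, shape (n_queries, n_classes).
    dists = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Two support examples per class, two queries:
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.0, 0.9]])
print(classify_few_shot(support, [0, 0, 1, 1],
                        np.array([[0.05, 0.05], [0.95, 1.0]])))
```

In a meta-learning setup such as the paper's, the embedding that feeds a classifier like this would itself be trained across many small tasks so it transfers across domains.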


Subject(s)
Algorithms, Learning, Humans, Cues
2.
Int J Comput Assist Radiol Surg ; 16(5): 799-808, 2021 May.
Article in English | MEDLINE | ID: mdl-33881732

ABSTRACT

PURPOSE: Tracking of tools and surgical activity is becoming increasingly important in the context of computer-assisted surgery. In this work, we present a data generation framework, dataset, and baseline methods to facilitate further research in the direction of markerless hand and instrument pose estimation in realistic surgical scenarios. METHODS: We developed a rendering pipeline to create inexpensive and realistic synthetic data for model pretraining. Subsequently, we propose a pipeline to capture and label real data with hand and object pose ground truth in an experimental setup. We furthermore present three state-of-the-art RGB-based pose estimation baselines. RESULTS: We evaluate three baseline models on the proposed datasets. The best-performing baseline achieves an average tool 3D vertex error of 16.7 mm on synthetic data and 13.8 mm on real data, which is comparable to the state of the art in RGB-based hand/object pose estimation. CONCLUSION: To the best of our knowledge, we propose the first synthetic and real data generation pipelines to generate hand and object pose labels for open surgery. We present three baseline models for RGB-based object and object/hand pose estimation. Our realistic synthetic data generation pipeline may help overcome the data bottleneck in the surgical domain and can easily be transferred to other medical applications.
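The evaluation metric reported above, average 3D vertex error, is conventionally the mean Euclidean distance between corresponding predicted and ground-truth mesh vertices. A minimal sketch of that computation (function name and array layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mean_vertex_error(pred_vertices, gt_vertices):
    """Mean Euclidean distance between corresponding 3D vertices.
    Both inputs have shape (n_vertices, 3); the result is in the same
    units as the coordinates (e.g. millimetres)."""
    return float(np.linalg.norm(pred_vertices - gt_vertices, axis=1).mean())

# Two vertices: one off by a 3-4-5 triangle (error 5.0), one exact (error 0.0).
gt = np.zeros((2, 3))
pred = np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 0.0]])
print(mean_vertex_error(pred, gt))  # 2.5
```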


Subject(s)
Deep Learning, Hand/diagnostic imaging, Imaging, Three-Dimensional/methods, Surgery, Computer-Assisted/methods, Algorithms, Calibration, Humans, Operating Rooms, Orthopedics/methods, Reproducibility of Results
3.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 593-600, 2014.
Article in English | MEDLINE | ID: mdl-25333167

ABSTRACT

Detection of new or rapidly evolving melanocytic lesions is crucial for early diagnosis and treatment of melanoma. We propose a fully automated pre-screening system for detecting new lesions or changes in existing ones, on the order of 2-3 mm, over almost the entire body surface. Our solution is based on a multi-camera 3D stereo system. The system captures 3D textured scans of a subject at different times and then brings these scans into correspondence by aligning them with a learned, parametric, non-rigid 3D body model. This means that captured skin textures are accurately aligned across scans, facilitating the detection of new or changing lesions. The integration of lesion segmentation with a deformable 3D body model is a key contribution that makes our approach robust to changes in illumination and subject pose.


Subject(s)
Dermoscopy/methods, Imaging, Three-Dimensional/methods, Melanoma/pathology, Pattern Recognition, Automated/methods, Photography/methods, Skin Neoplasms/pathology, Whole Body Imaging/methods, Adult, Algorithms, Artificial Intelligence, Computer Simulation, Female, Humans, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Models, Biological, Reproducibility of Results, Sensitivity and Specificity, Young Adult
4.
IEEE Trans Biomed Eng ; 61(2): 557-65, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24081839

ABSTRACT

We present a new technique for melanocytic lesion segmentation, Mimicking Expert Dermatologists' Segmentations (MEDS), together with extensive tests of its accuracy, speed, and robustness. MEDS combines a thresholding scheme that reproduces the cognitive process of dermatologists with a number of optimizations that may be of independent interest. MEDS is simple, with a single parameter tuning its "tightness". It is extremely fast, segmenting medium-resolution images in a fraction of a second even with the modest computational resources of a cell phone, an improvement of an order of magnitude or more over state-of-the-art techniques. And it is extremely accurate: very experienced dermatologists disagree with its segmentations less than they disagree with the segmentations of state-of-the-art techniques, and in fact less than they disagree with the segmentations of dermatologists of moderate experience.
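As a toy illustration of what a thresholding scheme with a single "tightness" parameter can look like (this is not the actual MEDS algorithm, which is considerably more elaborate; the function and parameter names are assumptions), a darker-than-threshold rule on a grayscale image:

```python
import numpy as np

def threshold_segment(gray, tightness=1.0):
    """Toy tunable-threshold segmentation: mark as lesion every pixel
    darker than mean - tightness * std of the image. Larger tightness
    shrinks the segmented region. Illustrative only, not MEDS itself."""
    threshold = gray.mean() - tightness * gray.std()
    return gray < threshold

# Bright background (200) with a small dark blob (50):
gray = np.full((10, 10), 200.0)
gray[4:6, 4:6] = 50.0
mask = threshold_segment(gray, tightness=1.0)
print(int(mask.sum()))  # 4 pixels segmented
```

A single scalar like `tightness` gives the kind of one-knob control the abstract describes, trading off over- versus under-segmentation.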


Subject(s)
Dermoscopy/methods, Image Processing, Computer-Assisted/methods, Melanoma/pathology, Skin Neoplasms/pathology, Humans, Principal Component Analysis