Results 1 - 9 of 9
1.
Comput Biol Med ; 174: 108464, 2024 May.
Article in English | MEDLINE | ID: mdl-38613894

ABSTRACT

Pulmonary embolism (PE) is a leading cause of cardiovascular death. Although medical imaging through computed tomography pulmonary angiography (CTPA) is the gold standard for PE diagnosis, it remains susceptible to misdiagnosis and to significant diagnostic delays, which can be fatal in critical cases. Despite the substantial performance gains that deep learning has recently brought to a wide range of medical imaging tasks, published work on automatic pulmonary embolism detection is still scarce. Here we introduce a deep-learning-based approach that efficiently combines computer vision and deep neural networks for pulmonary embolism detection in CTPA. Our method contributes along three orthogonal axes: (1) automatic detection of anatomical structures; (2) anatomy-aware pretraining; and (3) a dual-hop deep neural network for PE detection. We obtain state-of-the-art results on the publicly available, large-scale, multicenter RSNA dataset.
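The abstract does not detail the architecture; the sketch below is one plausible reading of a "dual-hop" design, in which a first hop encodes individual CTPA slices and a second hop aggregates them into an exam-level prediction. All module names, layer sizes and the GRU aggregator are assumptions, not the authors' implementation.

# Illustrative sketch only: two-stage ("dual-hop") pipeline, slice-level encoding
# followed by exam-level aggregation. Shapes and modules are assumptions.
import torch
import torch.nn as nn

class SliceEncoder(nn.Module):
    """First hop: encode each CTPA slice into a feature vector."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, slices):            # (num_slices, 1, H, W)
        return self.backbone(slices)      # (num_slices, feat_dim)

class ExamClassifier(nn.Module):
    """Second hop: aggregate slice features into one exam-level PE probability."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.encoder = SliceEncoder(feat_dim)
        self.rnn = nn.GRU(feat_dim, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, 1)

    def forward(self, slices):
        feats = self.encoder(slices).unsqueeze(0)          # (1, num_slices, feat_dim)
        seq, _ = self.rnn(feats)
        return torch.sigmoid(self.head(seq.mean(dim=1)))   # exam-level probability

probs = ExamClassifier()(torch.randn(40, 1, 128, 128))     # 40 slices of a toy exam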


Subject(s)
Computed Tomography Angiography; Deep Learning; Pulmonary Embolism; Pulmonary Embolism/diagnostic imaging; Humans; Computed Tomography Angiography/methods; Neural Networks, Computer
2.
Sci Rep ; 12(1): 15378, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36100646

ABSTRACT

In this paper we propose a three-stage analysis of the evolution of COVID-19 in Romania. Pandemic prediction faces two main difficulties. The first is that the reported numbers of infected and recovered cases are unreliable, whereas the number of deaths is more accurate. The second is that many external factors affected the evolution of the pandemic. The first stage of our analysis is based on the classical SIR model, which we fit with a neural network; this provides a first set of daily parameters. In the second stage we refine the SIR model by separating the deceased into a distinct category (SIRD); using the first estimate and a grid search, we give a daily estimate of the parameters. The third stage defines a notion of turning points (local extremes) of the parameters; we call the interval between consecutive turning points a regime. Finally, we outline a general way of making predictions based on the time-varying parameters of the SIRD model.
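As an illustration of the kind of model the prediction stage builds on, here is a minimal discrete-time SIRD simulation with time-varying parameters. The parameter values, the single regime change and the population figures are assumptions for demonstration, not the fitted Romanian estimates.

# Minimal sketch of a discrete-time SIRD model with time-varying daily rates.
import numpy as np

def simulate_sird(S0, I0, R0, D0, beta, gamma, mu, days):
    """beta, gamma, mu are arrays of daily infection, recovery and death rates."""
    N = S0 + I0 + R0 + D0
    S, I, R, D = [S0], [I0], [R0], [D0]
    for t in range(days):
        new_inf = beta[t] * S[-1] * I[-1] / N
        new_rec = gamma[t] * I[-1]
        new_dead = mu[t] * I[-1]
        S.append(S[-1] - new_inf)
        I.append(I[-1] + new_inf - new_rec - new_dead)
        R.append(R[-1] + new_rec)
        D.append(D[-1] + new_dead)
    return np.array(S), np.array(I), np.array(R), np.array(D)

days = 120
beta = np.where(np.arange(days) < 60, 0.25, 0.12)   # an assumed regime change at day 60
gamma = np.full(days, 0.08)
mu = np.full(days, 0.004)
S, I, R, D = simulate_sird(19e6, 1e3, 0.0, 0.0, beta, gamma, mu, days)
print(f"deaths after {days} days: {D[-1]:.0f}")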


Subject(s)
COVID-19; COVID-19/epidemiology; Computer Systems; Humans; Neural Networks, Computer; Pandemics; Romania/epidemiology
3.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7638-7656, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34648435

ABSTRACT

We propose a dual system for unsupervised object segmentation in video that brings together two modules with complementary properties: a space-time graph that discovers objects in videos and a deep network that learns powerful object features. The system uses an iterative knowledge-exchange policy. A novel spectral space-time clustering process on the graph produces unsupervised segmentation masks that are passed to the network as pseudo-labels. The network learns to segment in single frames what the graph discovers in video, and passes strong image-level features back to the graph, improving its node-level features in the next iteration. Knowledge is exchanged over several cycles until convergence. Although the graph has one node per video pixel, object discovery is fast: a novel power iteration algorithm computes the main space-time cluster as the principal eigenvector of a special Feature-Motion matrix without ever forming the matrix explicitly. A thorough experimental analysis validates our theoretical claims and proves the effectiveness of the cyclical knowledge exchange. We also perform experiments in the supervised scenario, incorporating features pretrained with human supervision. We achieve state-of-the-art results in both the unsupervised and the supervised scenario on four challenging datasets: DAVIS, SegTrack, YouTube-Objects, and DAVSOD. We will make our code publicly available.
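The key computational trick, power iteration without materializing the matrix, can be illustrated in a few lines: when a huge matrix can be applied through factor products (here assumed, for illustration, to be M = F·Fᵀ for a per-pixel feature matrix F), each iteration needs only two thin matrix-vector products. The actual Feature-Motion matrix in the paper is more elaborate; this is a sketch of the principle only.

# Matrix-free power iteration: compute the principal eigenvector of M = F @ F.T
# without forming the (n_pixels x n_pixels) matrix M.
import numpy as np

def principal_eigenvector(apply_M, n, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(iters):
        x = apply_M(x)               # one implicit multiplication by M
        x /= np.linalg.norm(x)
    return x

n_pixels, n_feats = 100_000, 16
F = np.random.default_rng(1).standard_normal((n_pixels, n_feats))
x = principal_eigenvector(lambda v: F @ (F.T @ v), n_pixels)
mask = x > x.mean()                  # illustrative thresholding into a segmentation mask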

4.
Sensors (Basel) ; 21(3)2021 Jan 27.
Article in English | MEDLINE | ID: mdl-33514019

ABSTRACT

When driving, people make decisions based on the current traffic as well as their desired route. They have a mental map of known routes and are often able to navigate without needing directions. Current published self-driving models improve their performance when given additional GPS information. Here we aim to push self-driving research forward and perform route planning even in the complete absence of GPS at inference time. Our system learns to predict, in real time, the vehicle's current location and future trajectory on a known map, given only the raw video stream and the final destination. Trajectories consist of instant steering commands that depend on the present traffic, as well as longer-term navigation decisions towards a specific destination. Along with our novel approach to localization and navigation from visual data, we introduce a large dataset collected in an urban environment, consisting of video and GPS streams recorded with a smartphone while driving. The GPS is automatically processed to obtain supervision labels and to create an analytical representation of the traversed map. In tests, our solution outperforms published state-of-the-art methods on visual localization and steering, and provides reliable navigation assistance between any two known locations. We also show that our system can adapt to short- and long-term changes in weather conditions or in the structure of the urban environment. We make the entire dataset and the code publicly available.
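A minimal sketch of the kind of mapping described, from a short video clip plus a destination encoding to a map location and a steering command, is given below; the layer sizes, the two-dimensional destination encoding and the output heads are assumptions, not the authors' model.

# Illustrative sketch only: clip + destination -> (map location, steering command).
import torch
import torch.nn as nn

class NavNet(nn.Module):
    def __init__(self, n_frames=4, dest_dim=2):
        super().__init__()
        self.visual = nn.Sequential(
            nn.Conv2d(3 * n_frames, 32, 5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 + dest_dim, 128), nn.ReLU())
        self.loc = nn.Linear(128, 2)       # predicted (x, y) on the known map
        self.steer = nn.Linear(128, 1)     # instant steering command

    def forward(self, clip, dest):
        h = self.head(torch.cat([self.visual(clip), dest], dim=1))
        return self.loc(h), self.steer(h)

loc, steer = NavNet()(torch.randn(1, 12, 180, 320), torch.tensor([[0.3, 0.7]]))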

5.
Sensors (Basel) ; 20(24)2020 Dec 10.
Article in English | MEDLINE | ID: mdl-33322014

ABSTRACT

This paper proposes a protocol for the acquisition and processing of biophysical signals in virtual reality applications, particularly in phobia-therapy experiments. The protocol aims to ensure that the measurement and processing phases are performed effectively, yielding clean data that can be used to estimate the users' anxiety levels. The protocol was designed after analyzing experimental data from seven subjects who were exposed to heights in a virtual reality environment. The subjects' anxiety level was estimated by evaluating, in real time, a nonlinear function whose parameters are various features extracted from the biophysical signals. The highest classification accuracy was obtained using a combination of seven heart-rate and electrodermal-activity features from the time and frequency domains.
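To make the processing pipeline concrete, here is a minimal sketch that extracts a few time- and frequency-domain features from heart-rate and electrodermal-activity signals and combines them in a nonlinear (logistic) score. The specific features, weights and the seven-feature combination reported in the paper are not reproduced here; everything below is an illustrative assumption.

# Minimal sketch: HR/EDA feature extraction plus a nonlinear anxiety score.
import numpy as np

def extract_features(hr, eda, fs=32):
    """hr: beats-per-minute samples; eda: skin-conductance samples at fs Hz."""
    psd = np.abs(np.fft.rfft(eda - eda.mean())) ** 2
    freqs = np.fft.rfftfreq(len(eda), d=1 / fs)
    return np.array([
        hr.mean(), hr.std(),                        # time-domain HR features
        eda.mean(), np.diff(eda).std(),             # time-domain EDA features
        psd[(freqs > 0.05) & (freqs < 0.5)].sum(),  # low-frequency EDA power
    ])

def anxiety_score(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))       # nonlinear mapping to [0, 1]

rng = np.random.default_rng(0)
hr = 75 + 5 * rng.standard_normal(60 * 32)          # one minute of synthetic HR
eda = 2.0 + 0.1 * rng.standard_normal(60 * 32)      # one minute of synthetic EDA
print(anxiety_score(extract_features(hr, eda), w=rng.standard_normal(5) * 0.1, b=-0.5))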


Asunto(s)
Trastornos Fóbicos , Realidad Virtual , Ansiedad/diagnóstico , Trastornos de Ansiedad/diagnóstico , Frecuencia Cardíaca , Humanos , Interfaz Usuario-Computador
6.
Sensors (Basel) ; 20(2)2020 Jan 15.
Article in English | MEDLINE | ID: mdl-31952289

ABSTRACT

In this paper, we investigate various machine learning classifiers used in our virtual reality (VR) system for treating acrophobia. The system automatically estimates fear level based on multimodal sensory data and a self-reported emotion assessment. Fear ratings are expressed on two scales: a 2-choice scale, where 0 represents relaxation and 1 stands for fear, and a 4-choice scale, with the correspondence 0 = relaxation, 1 = low fear, 2 = medium fear and 3 = high fear. A set of features was extracted from the sensory signals using various metrics that quantify brain dynamics (electroencephalogram, EEG) as well as linear and nonlinear physiological dynamics (heart rate, HR, and galvanic skin response, GSR). The novelty lies in the automatic adaptation of the exposure scenario to the subject's affective state. We acquired data from acrophobic subjects who underwent an in vivo pre-therapy exposure session, followed by virtual reality therapy and an in vivo evaluation procedure. Various machine and deep learning classifiers were implemented and tested, with and without feature selection, in both user-dependent and user-independent fashion. The results showed very high cross-validation accuracy on the training set and good test accuracies, ranging from 42.5% to 89.5%. The most important features for fear-level classification were GSR, HR and the EEG values in the beta frequency range. In determining the next exposure scenario, a dominant role was played by the target fear level, a parameter computed by taking into account the patient's estimated fear level.
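A minimal sketch of this classification setup is given below, using synthetic stand-ins for the physiological features, a standard random-forest classifier with cross-validation, and an assumed rule for choosing the next exposure scenario from estimated versus target fear. None of it reproduces the paper's actual features, models or adaptation logic.

# Minimal sketch: 4-level fear classification with cross-validation, plus an
# assumed exposure-adaptation rule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 12))          # stand-in for EEG beta power, HR, GSR features
y = rng.integers(0, 4, size=400)            # 0 = relaxation ... 3 = high fear
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

def next_scenario(estimated_fear: int, target_fear: int, current: int) -> int:
    """Assumed rule: raise exposure when estimated fear is below the target."""
    return current + 1 if estimated_fear < target_fear else max(current - 1, 0)

print(next_scenario(estimated_fear=1, target_fear=3, current=2))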


Asunto(s)
Aprendizaje Profundo , Miedo/clasificación , Trastornos Fóbicos , Procesamiento de Señales Asistido por Computador , Adulto , Ansiedad , Diagnóstico por Computador , Electroencefalografía , Respuesta Galvánica de la Piel/fisiología , Frecuencia Cardíaca/fisiología , Humanos , Trastornos Fóbicos/diagnóstico , Trastornos Fóbicos/fisiopatología , Trastornos Fóbicos/terapia , Adulto Joven
7.
Sensors (Basel) ; 19(7)2019 Apr 11.
Article in English | MEDLINE | ID: mdl-30978980

ABSTRACT

There has been steady progress in affective computing over the last two decades, integrating artificial intelligence techniques into computational models of emotion. With the goal of developing a phobia-treatment system that automatically determines fear levels and adapts exposure intensity to the user's current affective state, we present a comparative study of various machine and deep learning techniques (four deep neural network models, a stochastic configuration network, Support Vector Machine, Linear Discriminant Analysis, Random Forest and k-Nearest Neighbors), with and without feature selection, for recognizing and classifying fear levels based on electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database. Fear was treated as an emotion characterized by low valence, high arousal and low dominance. By dividing the ratings on the valence/arousal/dominance dimensions, we propose two paradigms for fear-level estimation: a two-level paradigm (0 = no fear, 1 = fear) and a four-level paradigm (0 = no fear, 1 = low fear, 2 = medium fear, 3 = high fear). Although all the methods provide good classification accuracies, the highest F-scores were obtained with the Random Forest classifier: 89.96% and 85.33% for the two-level and four-level fear evaluation modalities, respectively.
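A minimal sketch of the two labelling paradigms is given below: DEAP-style valence/arousal/dominance ratings (each on a 1-9 scale) are thresholded into fear labels. The exact thresholds and the intensity grading used to divide the rating scales are assumptions, not the paper's rules.

# Minimal sketch: mapping valence/arousal/dominance ratings to fear labels.
def fear_two_level(valence, arousal, dominance, thr=5.0):
    """1 = fear (low valence, high arousal, low dominance), 0 = no fear."""
    return int(valence < thr and arousal >= thr and dominance < thr)

def fear_four_level(valence, arousal, dominance):
    """0 = no fear, 1 = low, 2 = medium, 3 = high, graded by rating intensity."""
    if not fear_two_level(valence, arousal, dominance):
        return 0
    intensity = (5.0 - valence) + (arousal - 5.0) + (5.0 - dominance)  # in (0, 12]
    return 1 + min(int(intensity // 4), 2)

print(fear_two_level(2.0, 8.0, 3.0), fear_four_level(2.0, 8.0, 3.0))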


Asunto(s)
Fenómenos Biofísicos/fisiología , Electroencefalografía , Emociones/fisiología , Miedo/fisiología , Adulto , Miedo/clasificación , Femenino , Humanos , Aprendizaje Automático , Masculino , Adulto Joven
8.
IEEE Trans Pattern Anal Mach Intell ; 36(7): 1312-24, 2014 Jul.
Article in English | MEDLINE | ID: mdl-26353305

ABSTRACT

Boundary detection is a fundamental computer vision problem, essential for a variety of tasks such as contour and region segmentation, symmetry detection, and object recognition and categorization. We propose a generalized formulation for boundary detection, with a closed-form solution, applicable to the localization of different types of boundaries, such as object edges in natural images and occlusion boundaries in video. Our generalized boundary detection method (Gb) combines low-level and mid-level image representations in a single eigenvalue problem and solves for the optimal continuous boundary orientation and strength. The closed-form solution enables our algorithm to achieve state-of-the-art results at a significantly lower computational cost than current methods. We also propose two complementary novel components that can be seamlessly combined with Gb: first, a soft-segmentation procedure that provides region input layers to our boundary detection algorithm, yielding a significant improvement in accuracy at negligible computational cost; second, an efficient method for contour grouping and reasoning which, when applied as a final post-processing stage, further improves boundary detection performance.
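As a loose analogue of closed-form boundary estimation (not the Gb formulation itself), the sketch below recovers a local boundary orientation and strength from the eigen-decomposition of a 2x2 gradient covariance matrix over an image window; Gb combines multiple low- and mid-level layers in its eigenvalue problem, which this toy example does not.

# Toy analogue: orientation and strength from a per-window structure tensor.
import numpy as np

def boundary_from_window(patch):
    gy, gx = np.gradient(patch.astype(float))
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(J)           # closed form for a 2x2 problem
    strength = eigvals[-1] - eigvals[0]            # anisotropy as boundary strength
    orientation = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])
    return strength, orientation

patch = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))  # vertical edge
print(boundary_from_window(patch))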

9.
IEEE Trans Pattern Anal Mach Intell ; 27(10): 1631-43, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16237997

ABSTRACT

This paper presents an online feature selection mechanism that evaluates multiple features while tracking and adjusts the set of features used in order to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also the best features for tracking the object. Given a set of seed features, we compute log-likelihood ratios of class-conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate the sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples demonstrate how this method adapts to changing appearances of both the tracked object and the scene background. We note the susceptibility of the variance-ratio feature selection method to distraction by spatially correlated background clutter, and we develop an additional approach that seeks to minimize the likelihood of distraction.
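The two quantities at the heart of the ranking step, the log-likelihood ratio of object versus background histograms and its two-class variance ratio, can be sketched directly; the number of histogram bins and the synthetic pixel values below are assumptions for illustration.

# Minimal sketch: log-likelihood-ratio feature and two-class variance ratio.
import numpy as np

def log_likelihood_ratio(obj_hist, bg_hist, eps=1e-3):
    p = obj_hist / obj_hist.sum()
    q = bg_hist / bg_hist.sum()
    return np.log(np.maximum(p, eps) / np.maximum(q, eps))

def variance_ratio(L, p, q):
    def var(weights):                       # variance of L under a distribution
        return np.sum(weights * L**2) - np.sum(weights * L)**2
    return var((p + q) / 2) / (var(p) + var(q) + 1e-12)

bins = 32
rng = np.random.default_rng(0)
obj = np.histogram(rng.normal(0.7, 0.1, 500), bins=bins, range=(0, 1))[0].astype(float)
bg = np.histogram(rng.normal(0.3, 0.2, 5000), bins=bins, range=(0, 1))[0].astype(float)
L = log_likelihood_ratio(obj, bg)
print("variance ratio:", variance_ratio(L, obj / obj.sum(), bg / bg.sum()))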


Asunto(s)
Algoritmos , Inteligencia Artificial , Interpretación de Imagen Asistida por Computador/métodos , Movimiento , Sistemas en Línea , Reconocimiento de Normas Patrones Automatizadas/métodos , Técnica de Sustracción , Sistemas de Computación , Análisis Discriminante , Aumento de la Imagen/métodos , Almacenamiento y Recuperación de la Información/métodos , Movimiento (Física) , Grabación en Video/métodos