Results 1 - 19 of 19
1.
Int J Neural Syst ; 34(5): 2450024, 2024 May.
Article in English | MEDLINE | ID: mdl-38533631

ABSTRACT

Emotion recognition plays an essential role in human-human interaction, since it is key to understanding the emotional states and reactions of human beings as they experience events and engagements in everyday life. Moving toward human-computer interaction, the study of emotions becomes fundamental because it underlies the design of advanced systems supporting a broad spectrum of application areas, including forensics, rehabilitation, education, and many others. An effective method for discriminating emotions is based on ElectroEncephaloGraphy (EEG) data analysis, which is used as input for classification systems. Collecting brain signals on several channels and for a wide range of emotions produces cumbersome datasets that are hard to manage, transmit, and use in varied applications. In this context, the paper introduces the Empátheia system, which explores a different EEG representation by encoding EEG signals into images prior to their classification. In particular, the proposed system extracts spatio-temporal image encodings, or atlases, from EEG data through the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, thus obtaining a compact representation of the input signals. The atlases are then classified through the Empátheia architecture, which comprises branches based on convolutional, recurrent, and transformer models designed and tuned to capture the spatial and temporal aspects of emotions. Extensive experiments were conducted on the public Shanghai Jiao Tong University (SJTU) Emotion EEG Dataset (SEED), where the proposed system significantly reduced the size of the input data while retaining high performance. The results highlight the effectiveness of the proposed approach and suggest new avenues for data representation in emotion recognition from EEG signals.


Subjects
Brain , Emotions , Humans , China , Electroencephalography/methods , Compulsive Behavior
2.
Comput Methods Programs Biomed ; 245: 108037, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271793

ABSTRACT

BACKGROUND: aortic stenosis is a common heart valve disease that mainly affects older people in developed countries. Its early detection is crucial to prevent irreversible disease progression and, eventually, death. A typical screening technique to detect stenosis uses echocardiograms; however, variations introduced by other tissues, camera movements, and uneven lighting can hamper the visual inspection, leading to misdiagnosis. To address these issues, effective solutions involve employing deep learning algorithms to assist clinicians in detecting and classifying stenosis, by developing models that can predict this pathology from single heart views. Although promising, the visual information conveyed by a single image may not be sufficient for an accurate diagnosis, especially when using an automatic system; this indicates that different solutions should be explored. METHODOLOGY: following this rationale, this paper proposes a novel deep learning architecture, composed of a multi-view, multi-scale feature extractor and a transformer encoder (MV-MS-FETE), to predict stenosis from parasternal long- and short-axis views. In particular, starting from these views, the designed model extracts relevant features at multiple scales along its feature extractor component and takes advantage of a transformer encoder to perform the final classification. RESULTS: experiments were performed on the recently released Tufts medical echocardiogram public dataset, which comprises 27,788 images split into training, validation, and test sets. Due to the recent release of this collection, tests were also conducted on several state-of-the-art models to create multi-view and single-view benchmarks. For all models, standard classification metrics were computed (e.g., precision, F1-score). The obtained results show that the proposed approach outperforms other multi-view methods in terms of accuracy and F1-score, and has more stable performance throughout the training procedure.
Furthermore, the experiments highlight that multi-view methods generally perform better than their single-view counterparts. CONCLUSION: this paper introduces a novel multi-view and multi-scale model for aortic stenosis recognition, as well as three benchmarks to evaluate it, providing multi-view and single-view comparisons that highlight the model's effectiveness in aiding clinicians in performing diagnoses, while also producing several baselines for the aortic stenosis recognition task.


Subjects
Aortic Valve Stenosis , Humans , Aged , Pathologic Constriction , Aortic Valve Stenosis/diagnostic imaging , Echocardiography , Heart , Algorithms
3.
Sensors (Basel) ; 23(5)2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36904859

ABSTRACT

During flight, unmanned aerial vehicles (UAVs) need several sensors to follow a predefined path and reach a specific destination. To this aim, they generally exploit an inertial measurement unit (IMU) for pose estimation. Usually, in the UAV context, an IMU comprises a three-axis accelerometer and a three-axis gyroscope. However, as happens with many physical devices, IMUs can present some misalignment between the real value and the registered one. These systematic or occasional errors can derive from different sources and may be related to the sensor itself or to external noise in the environment where it operates. Hardware calibration requires special equipment, which is not always available; even when possible, it addresses only the physical problem and sometimes requires removing the sensor from its location, which is not always feasible. Dealing with external noise, instead, usually requires software procedures. Moreover, as reported in the literature, even two IMUs of the same brand from the same production chain can produce different measurements under identical conditions. This paper proposes a soft calibration procedure that reduces the misalignment created by systematic errors and noise, based on the grayscale or RGB camera built into the drone. Since the strategy is based on a transformer neural network architecture, trained in a supervised learning fashion on pairs of short videos shot by the UAV's camera and the corresponding UAV measurements, it does not require any special equipment. It is easily reproducible and can be used to increase the trajectory accuracy of the UAV during flight.
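The soft-calibration idea above relies on a learned transformer model, which is not reproduced here. As a much simpler illustration of software-side sensor correction, one can estimate a per-axis scale and bias by least squares against reference measurements; all names and numbers below are hypothetical toy values, not taken from the paper:

```python
def fit_scale_bias(measured, reference):
    """Least-squares fit of reference ~= scale * measured + bias:
    a minimal software correction for a single sensor axis."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    var = sum((x - mx) ** 2 for x in measured)
    scale = cov / var
    bias = my - scale * mx
    return scale, bias

# Toy gyroscope axis with a 2% scale error and a constant 0.1 rad/s bias
measured  = [0.1, 0.61, 1.12, 1.63]   # raw readings
reference = [0.0, 0.50, 1.00, 1.50]   # ground-truth angular rates
scale, bias = fit_scale_bias(measured, reference)
corrected = [scale * m + bias for m in measured]  # close to reference
```

A learned model such as the one in the paper can capture nonlinear and time-varying errors that this linear fit cannot.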

4.
Curr Psychol ; 42(10): 8595-8614, 2023.
Article in English | MEDLINE | ID: mdl-34703195

ABSTRACT

Inspired by the Conservation of Resources theory (Hobfoll, 1989), this study investigated the role of a broad set of personal vulnerabilities and of social and work-related stressors and resources as predictors of workers' well-being during the COVID-19 outbreak. Participants were 594 workers in Italy. Results showed that personality predispositions, such as positivity, neuroticism, and conscientiousness, as well as key aspects of the individuals' relationship with their work (such as job insecurity, type of employment contract, or trust in the organization), emerged as factors promoting (or hampering) workers' adjustment during the COVID-19 outbreak. Interactions between stressors and resources were also found and discussed. Supplementary Information: The online version contains supplementary material available at 10.1007/s12144-021-02408-w.

5.
Neural Netw ; 153: 386-398, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35785610

ABSTRACT

Improving existing neural network architectures can involve several design choices, such as manipulating the loss functions, employing a diverse learning strategy, exploiting gradient evolution at training time, optimizing the network hyper-parameters, or increasing the architecture depth. The latter approach is a straightforward solution, since it directly enhances the representation capabilities of a network; however, the increased depth generally incurs the well-known vanishing gradient problem. In this paper, borrowing from different methods addressing this issue, we introduce an interlaced multi-task learning strategy, called SIRe, to reduce the vanishing gradient in relation to the object classification task. The presented methodology directly improves a convolutional neural network (CNN) by preserving information from the input image through interlaced auto-encoders (AEs), and further refines the base network architecture by means of skip and residual connections. To validate the presented methodology, a simple CNN and various implementations of well-known networks are extended via the SIRe strategy and extensively tested on five collections, i.e., MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and Caltech-256; the SIRe-extended architectures achieve significantly improved performance across all models and datasets, thus confirming the effectiveness of the presented approach.


Subjects
Learning , Neural Networks (Computer)
6.
Int J Neural Syst ; 32(10): 2250040, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35881015

ABSTRACT

Human feelings expressed through verbal (e.g. voice) and non-verbal (e.g. face or body) communication channels can influence both human actions and interactions. In the literature, most attention has been given to facial expressions for the analysis of emotions conveyed through non-verbal behaviors. Despite this, psychology highlights that the body is an important indicator of the human affective state when performing daily life activities. Therefore, this paper presents a novel method for affective action and interaction recognition from videos, exploiting multi-view representation learning and only full-body handcrafted characteristics selected following psychological and proxemic studies. Specifically, 2D skeletal data are extracted from RGB video sequences to derive diverse low-level skeleton features, i.e. multi-views, modeled through the bag-of-visual-words clustering approach, which generates a condition-related codebook. In this way, each affective action and interaction within a video can be represented as a frequency histogram of codewords. During the learning phase, for each affective class, training samples are used to compute its global histogram of codewords, which is stored in a database and later used for the recognition task. In the recognition phase, the video frequency histogram representation is matched against the database of class histograms and classified as the closest affective class in terms of Euclidean distance. The effectiveness of the proposed system is evaluated on a specifically collected dataset containing 6 emotions for both actions and interactions, on which the proposed system obtains 93.64% and 90.83% accuracy, respectively. In addition, when tested on a public collection containing 6 emotions plus a neutral state, the devised strategy achieves performance in line with other works in the literature based on deep learning, demonstrating the effectiveness of the presented approach and confirming the findings of psychological and proxemic studies.
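The recognition phase described above — nearest-class matching of codeword frequency histograms under Euclidean distance — can be sketched in a few lines. The class names and 4-bin histograms below are toy values for illustration, not taken from the paper:

```python
import math

def nearest_class(video_hist, class_hists):
    """Return the affective class whose stored codeword histogram
    is closest to the video's histogram in Euclidean distance."""
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_hists, key=lambda c: euclidean(video_hist, class_hists[c]))

# Toy 4-codeword class histograms (in the paper, one per affective class)
class_hists = {
    "happiness": [0.6, 0.2, 0.1, 0.1],
    "anger":     [0.1, 0.1, 0.3, 0.5],
}
print(nearest_class([0.5, 0.3, 0.1, 0.1], class_hists))  # → happiness
```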


Subjects
Algorithms , Facial Expression , Cluster Analysis , Human Activities , Humans , Skeleton
7.
Comput Methods Programs Biomed ; 221: 106833, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35537296

ABSTRACT

BACKGROUND: over the last year, the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) and its variants have highlighted the importance of screening tools with high diagnostic accuracy for new illnesses such as COVID-19. In that regard, deep learning approaches have proven to be effective solutions for pneumonia classification, especially when considering chest X-ray images. However, this lung infection can also be caused by other viral, bacterial, or fungal pathogens. Consequently, efforts are being devoted to distinguishing the infection source to help clinicians diagnose the correct disease origin. Following this tendency, this study further explores the effectiveness of established neural network architectures on the pneumonia classification task through the transfer learning paradigm. METHODOLOGY: to present a comprehensive comparison, 12 well-known ImageNet pre-trained models were fine-tuned and used to discriminate among chest X-rays of healthy people and those showing pneumonia symptoms derived from either a viral (i.e., generic or SARS-CoV-2) or bacterial source. Furthermore, since a common public collection distinguishing between such categories is currently not available, two distinct datasets of chest X-ray images, describing the aforementioned sources, were combined and employed to evaluate the various architectures. RESULTS: the experiments were performed using a total of 6330 images split between training, validation, and test sets. For all models, standard classification metrics were computed (e.g., precision, F1-score), and most architectures obtained significant performances, reaching up to an 84.46% average F1-score when discriminating the four identified classes.
Moreover, execution times, areas under the receiver operating characteristic curve (AUROC), confusion matrices, activation maps computed via the Grad-CAM algorithm, and additional experiments assessing the robustness of each model using only 50%, 20%, and 10% of the training set are also reported, to present an informed discussion of the networks' classifications. CONCLUSION: this paper examines the effectiveness of well-known architectures on a joint collection of chest X-rays presenting pneumonia cases derived from either viral or bacterial sources, with particular attention to SARS-CoV-2 contagions among viral pathogens; it demonstrates that existing architectures can effectively diagnose pneumonia sources and suggests that the transfer learning paradigm could be a crucial asset in diagnosing future unknown illnesses.


Subjects
COVID-19 , Deep Learning , Pneumonia , COVID-19/diagnostic imaging , Humans , Pneumonia/diagnostic imaging , SARS-CoV-2 , X-Rays
8.
Int J Neural Syst ; 32(5): 2250015, 2022 May.
Article in English | MEDLINE | ID: mdl-35209810

ABSTRACT

The increasing availability of wireless access points (APs) is leading toward human sensing applications based on Wi-Fi signals as support for, or alternatives to, the widespread visual sensors, since such signals can address well-known vision-related problems such as illumination changes or occlusions. Indeed, using image synthesis techniques to translate radio frequencies into the visible spectrum can become essential to obtain otherwise unavailable visual data. This domain-to-domain translation is feasible because both objects and people affect electromagnetic waves, causing variations in radio and optical frequencies. In the literature, models capable of inferring radio-to-visual feature mappings have gained momentum in the last few years, since frequency changes can be observed in the radio domain through the channel state information (CSI) of Wi-Fi APs, enabling signal-based feature extraction, e.g. of amplitude. On this account, this paper presents a novel two-branch generative neural network that effectively maps radio data into visual features, following a teacher-student design that exploits a cross-modality supervision strategy. The latter conditions signal-based features in the visual domain so as to completely replace visual data. Once trained, the proposed method synthesizes human silhouette and skeleton videos using exclusively Wi-Fi signals. The approach is evaluated on publicly available data, where it obtains remarkable results for both silhouette and skeleton video generation, demonstrating the effectiveness of the proposed cross-modality supervision strategy.


Subjects
Radio Waves , Wireless Technology , Humans , Skeleton
9.
Comput Biol Med ; 132: 104347, 2021 05.
Article in English | MEDLINE | ID: mdl-33799218

ABSTRACT

BACKGROUND AND OBJECTIVES: Electroencephalography (EEG) measures electrical brain activity in real time using sensors placed on the scalp. Artifacts due to eye movements and blinking, muscular/cardiac activity, and generic electrical disturbances have to be recognized and eliminated to allow a correct interpretation of the Useful Brain Signals (UBS). Independent Component Analysis (ICA) is effective for splitting the signal into Independent Components (ICs), whose re-projection onto 2D topographies of the scalp (images also called Topoplots) makes it possible to recognize and separate artifacts and UBS. Topoplot analysis, a gold standard for EEG, is usually carried out offline, either visually by human experts or through automated strategies; neither is feasible when a fast response is required, as in online Brain-Computer Interfaces (BCI). We present a fully automatic, effective, fast, and scalable framework for artifact recognition from EEG signals represented as IC Topoplots, to be used in online BCI. METHODS: The proposed architecture, optimized to contain three 2D Convolutional Neural Networks (CNNs), divides Topoplots into 4 classes: 3 types of artifacts and UBS. The framework architecture is described, and the results are presented, discussed, and indirectly compared with those obtained by state-of-the-art competitive strategies. RESULTS: Experiments on public EEG datasets showed overall accuracy, sensitivity, and specificity greater than 98%, taking 1.4 s on a standard PC for 32 Topoplots, i.e. for an EEG system with at least 32 sensors. CONCLUSIONS: The proposed framework is faster than other automatic methods based on IC analysis and fast enough to be used in EEG-based online BCI. In addition, its scalable architecture and ease of training are necessary conditions for applying it in BCI, where difficult operating conditions caused by uncontrolled muscle spasms, eye rotations, or head movements produce specific artifacts that need to be recognized and dealt with.


Subjects
Artifacts , Scalp , Algorithms , Blinking , Brain , Electroencephalography , Humans , Computer-Assisted Signal Processing
10.
Int J Neural Syst ; 31(2): 2050068, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33200620

ABSTRACT

Deception detection is a relevant ability in high-stakes situations such as police interrogations or court trials, where the outcome is highly influenced by the behavior of the interviewed person. With the use of specific devices, e.g. a polygraph or magnetic resonance, the subject is aware of being monitored and can change their behavior, thus compromising the interrogation result. For this reason, video analysis-based methods for automatic deception detection are receiving ever-increasing interest. In this paper, a deception detection approach based on RGB videos, leveraging both facial features and a stacked generalization ensemble, is proposed. First, the face, which is well known to present several meaningful cues for deception detection, is identified, aligned, and masked to build video signatures. These signatures are constructed from five different descriptors, which allow the system to capture both static and dynamic facial characteristics. Then, the video signatures are given as input to four base-level algorithms, which are subsequently fused by applying the stacked generalization technique, resulting in a more robust meta-level classifier used to predict deception. By exploiting relevant cues via specific features, the proposed system achieves improved performance on a public dataset of famous court trials with respect to other state-of-the-art methods based on facial features, highlighting the effectiveness of the proposed method.
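The stacked generalization step — base-level predictions fused by a meta-level classifier — can be sketched as follows. The descriptors, thresholds, and weights are invented for illustration and do not come from the paper:

```python
def stack_predict(x, base_models, meta_model):
    """Stacked generalization: base-level predictions become the
    meta-level feature vector, which a fused meta-classifier scores."""
    meta_features = [m(x) for m in base_models]
    return meta_model(meta_features)

# Toy base learners, each scoring deception from one (hypothetical) facial cue
base_models = [
    lambda x: 1.0 if x["blink_rate"] > 0.5 else 0.0,
    lambda x: 1.0 if x["gaze_aversion"] > 0.5 else 0.0,
    lambda x: 1.0 if x["micro_expr"] > 0.5 else 0.0,
]

# Toy meta-classifier: a weighted vote whose weights would be learned
# at the meta level from base-model outputs on held-out data
meta_model = lambda preds: int(sum(p * w for p, w in zip(preds, [0.5, 0.3, 0.2])) > 0.5)

sample = {"blink_rate": 0.8, "gaze_aversion": 0.7, "micro_expr": 0.1}
print(stack_predict(sample, base_models, meta_model))  # → 1 (deceptive)
```

In practice the base and meta levels are trained classifiers rather than fixed thresholds; the sketch only shows how their outputs compose.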


Subjects
Cues (Psychology) , Deception , Algorithms , Humans
11.
Pattern Recognit Lett ; 140: 95-100, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33041409

ABSTRACT

Computed Tomography (CT) imaging of the chest is a valid diagnostic tool to detect COVID-19 promptly and to control the spread of the disease. In this work we propose a light Convolutional Neural Network (CNN) design, based on the SqueezeNet model, for the efficient discrimination of COVID-19 CT images from other community-acquired pneumonia and/or healthy CT images. The architecture achieves an accuracy of 85.03%, with an improvement of about 3.2% in the first dataset arrangement and of about 2.1% in the second dataset arrangement. The obtained gain, though small, can be really important in medical diagnosis and, in particular, in the COVID-19 scenario. The average classification time on a high-end workstation, 1.25 s, is also very competitive with respect to that of more complex CNN designs, 13.41 s, which require pre-processing. The proposed CNN can be executed on a medium-end laptop without GPU acceleration in 7.81 s: this is impossible for methods requiring GPU acceleration. The performance of the method can be further improved with efficient pre-processing strategies for which GPU acceleration is not necessary.

12.
Sensors (Basel) ; 20(18)2020 Sep 18.
Article in English | MEDLINE | ID: mdl-32962168

ABSTRACT

Person re-identification is concerned with matching people across disjoint camera views at different places and different time instants. This task is of great interest in computer vision, especially in video surveillance applications, where the re-identification and tracking of persons are required in uncontrolled crowded spaces and after long time periods. The latter aspects are responsible for most of the currently unsolved problems of person re-identification; in fact, the presence of many people in a location, as well as the passing of hours or days, gives rise to important changes in the visual appearance of people, for example, in clothes, lighting, and occlusions, thus making person re-identification a very hard task. In this paper, for the first time in the state-of-the-art, a meta-feature-based Long Short-Term Memory (LSTM) hashing model for person re-identification is presented. Starting from 2D skeletons extracted from RGB video streams, the proposed method computes a set of novel meta-features based on movement, gait, and bone proportions. These features are analyzed by a network composed of a single LSTM layer and two dense layers. The first layer is used to create a pattern of the person's identity; then, the latter are used to generate a bodyprint hash through binary coding. The effectiveness of the proposed method is tested on three challenging datasets, that is, iLIDS-VID, PRID 2011, and MARS. In particular, the reported results show that the proposed method, which is not based on the visual appearance of people, is fully competitive with other methods based on visual features. In addition, thanks to its skeleton model abstraction, the method is a concrete contribution to addressing open problems, such as long-term re-identification and severe illumination changes, which tend to heavily influence the visual appearance of persons.
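The bodyprint hashing idea — binarizing a dense-layer output and matching identities by Hamming distance — can be illustrated with a minimal sketch; the embeddings and gallery identities below are invented toy values:

```python
def bodyprint_hash(embedding):
    """Binary-code a real-valued embedding by sign thresholding."""
    return tuple(1 if v > 0 else 0 for v in embedding)

def hamming(h1, h2):
    """Number of differing bits between two bodyprint hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy gallery of known identities, each stored as a compact binary hash
gallery = {
    "person_A": bodyprint_hash([0.9, -0.2, 0.4, -0.7]),
    "person_B": bodyprint_hash([-0.5, 0.3, -0.1, 0.8]),
}

# Re-identify a query by its closest hash in the gallery
query = bodyprint_hash([0.7, -0.1, 0.2, -0.9])
match = min(gallery, key=lambda p: hamming(gallery[p], query))
print(match)  # → person_A
```

Binary hashes make gallery lookups cheap to store and compare, which is one motivation for hashing-based re-identification.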


Subjects
Algorithms , Long-Term Memory , Gait , Humans
13.
Int J Neural Syst ; 30(4): 2050016, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32114840

ABSTRACT

Moving object detection in video streams plays a key role in many computer vision applications. In particular, separation between background and foreground items represents a main prerequisite to carry out more complex tasks, such as object classification, vehicle tracking, and person re-identification. Despite the progress made in recent years, a main challenge of moving object detection still regards the management of dynamic aspects, including bootstrapping and illumination changes. In addition, the recent widespread adoption of Pan-Tilt-Zoom (PTZ) cameras has made the management of these aspects even more complex in terms of performance, due to their mixed movements (i.e. pan, tilt, and zoom). In this paper, a combined keypoint clustering and neural background subtraction method, based on a Self-Organized Neural Network (SONN), for real-time moving object detection in video sequences acquired by PTZ cameras is proposed. Initially, the method performs a spatio-temporal tracking of the sets of moving keypoints to recognize the foreground areas and to establish the background. Then, it adopts a neural background subtraction, localized in these areas, to accomplish a foreground detection able to manage bootstrapping and gradual illumination changes. Experimental results on three well-known public datasets, and comparisons with key works of the current literature, show the efficiency of the proposed method in terms of modeling and background subtraction.
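As a simplified, non-neural stand-in for the background subtraction stage (the paper's self-organized neural model is not reproduced here), a running-average background with per-pixel thresholding captures the same two ideas: gradual adaptation to illumination changes and deviation-based foreground detection. The pixel values and thresholds are illustrative only:

```python
def update_background(background, frame, alpha=0.05):
    """Gradually adapt the background model toward the current frame,
    absorbing slow illumination changes."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Mark pixels whose deviation from the background model exceeds
    the threshold as foreground (moving object)."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

# Toy 4-pixel "frame": the third pixel is covered by a moving object
bg = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 99.0, 180.0, 101.0]
print(foreground_mask(bg, frame))  # → [False, False, True, False]
bg = update_background(bg, frame)  # background slowly follows the scene
```

In the paper, this per-pixel modeling is applied only inside the areas selected by the keypoint-clustering stage, which compensates for the camera's own pan, tilt, and zoom.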


Subjects
Computer-Assisted Image Interpretation/methods , Neural Networks (Computer) , Automated Pattern Recognition/methods , Subtraction Technique , Cluster Analysis , Datasets as Topic , Humans , Theoretical Models
14.
J Biomed Inform ; 89: 81-100, 2019 01.
Article in English | MEDLINE | ID: mdl-30521854

ABSTRACT

Strokes, surgeries, or degenerative diseases can impair motor abilities and balance. Long-term rehabilitation is often the only way to recover, as completely as possible, these lost skills. To be effective, this type of rehabilitation should follow three main rules. First, rehabilitation exercises should be able to keep the patient's motivation high. Second, each exercise should be customizable depending on the patient's needs. Third, the patient's performance should be evaluated objectively, i.e., by measuring the patient's movements with respect to an optimal reference model. To meet these requirements, this paper proposes an interactive and low-cost full-body rehabilitation framework for the generation of 3D immersive serious games. The framework combines two Natural User Interfaces (NUIs), for hand and body modeling, respectively, and a Head-Mounted Display (HMD) to provide the patient with an interactive and highly defined Virtual Environment (VE) for playing stimulating rehabilitation exercises. The paper presents the overall architecture of the framework, including the environment for the generation of the pilot serious games and the main features of the adopted hand and body models. The effectiveness of the proposed system is shown on a group of ninety-two patients. In a first stage, a pool of seven rehabilitation therapists evaluated the results of the patients on the basis of three reference rehabilitation exercises, confirming a significant gradual recovery of the patients' skills. Moreover, the feedback received from the therapists and patients who used the system pointed to remarkable results in terms of motivation, usability, and customization. In a second stage, by comparing the current state-of-the-art in the rehabilitation area with the proposed system, we observed that the latter can be considered a concrete contribution in terms of versatility, immersivity, and novelty.
In a final stage, by training a Gated Recurrent Unit Recurrent Neural Network (GRU-RNN) on healthy subjects (i.e., a baseline), we also provide a reference model to objectively evaluate the degree of the patients' performance. To estimate the effectiveness of this last aspect of the proposed approach, we used the NTU RGB+D Action Recognition dataset, obtaining results comparable with the current literature in action recognition.


Subjects
Exercise Therapy/methods , Rehabilitation/methods , Video Games , Virtual Reality Exposure Therapy/methods , Humans
15.
Sensors (Basel) ; 18(3)2018 Mar 10.
Article in English | MEDLINE | ID: mdl-29534448

ABSTRACT

Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy's effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be very effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient-specific, and hand-specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, can suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated, and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analyzed, and reported. Hand tracking measurements show that the VG operated in real time (60 fps), reduced occlusions, and managed the two LEAP sensors correctly, without any temporal or spatial discontinuity when switching from one sensor to the other. A video demonstrating the good performance of the VG is also presented in the Supplementary Materials. Results are promising, but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and to reduce occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for the tele-operation of tools and robots, and for other VR applications.


Subjects
Protective Gloves , Hands , Hand Strength , Humans , Stroke Rehabilitation , User-Computer Interface , Virtual Reality
16.
PLoS One ; 13(3): e0193508, 2018.
Article in English | MEDLINE | ID: mdl-29494621

ABSTRACT

Among the relevant consequences of organizational socialization, a key factor is the promotion of organizational citizenship behaviors toward individuals (OCBI). However, the relation between organizational socialization and OCBI has received little attention. This study tests the validity of a moderated mediation model in which we examine the mediating effect of decreased interpersonal strain on the relationship between organizational socialization and OCBI, and the moderating role of a positive personal resource in reducing interpersonal strain when socialization is unsuccessful. A cross-sectional study was conducted on 765 new recruits of the Guardia di Finanza, a military police force reporting to the Italian Minister of Economy. Findings confirm our hypothesis that interpersonal strain mediates the relationship between organizational socialization and OCBI. The moderated mediation index is significant, showing that this effect exists at different levels of positivity. Theoretical and practical implications for promoting pro-organizational behaviors are discussed.


Subjects
Organizational Culture , Socialization , Adult , Cross-Sectional Studies , Female , Humans , Interpersonal Relations , Male , Social Behavior , Young Adult
17.
Comput Biol Med ; 43(11): 1927-40, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24209938

ABSTRACT

Post-stroke patients and people suffering from hand diseases often need rehabilitation therapy. The recovery of original skills, when possible, is closely related to the frequency, quality, and duration of the rehabilitative therapy. Rehabilitation gloves are tools used both to facilitate rehabilitation and to monitor improvements through an evaluation system. Mechanical gloves have a high cost, are often cumbersome, are not re-usable, and hence cannot be used with the healthy hand to collect the patient-specific hand mobility information toward which rehabilitation should tend. The approach we propose is the virtual glove, a system that, unlike tools based on mechanical haptic interfaces, uses a set of video cameras surrounding the patient's hand to collect a set of synchronized videos used to track hand movements. The hand tracking is associated with a numerical hand model that is used to calculate physical, geometrical, and mechanical parameters, and to implement boundary constraints such as joint dimensions, shape, joint angles, and so on. Besides being accurate, the proposed system aims to be low-cost, not bulky (touch-less), easy to use, and re-usable. Previous works described the virtual glove's general concepts, the hand model, and its characterization, including the system calibration strategy. The present paper provides the virtual glove's overall design, in both real-time and off-line modalities. In particular, the real-time modality is described and implemented, and a marker-based hand tracking algorithm, including a marker positioning, coloring, labeling, detection, and classification strategy, is presented for the off-line modality. Moreover, model-based hand tracking experimental measurements are reported, discussed, and compared with the corresponding poses of the real hand. An error estimation strategy is also presented and applied to the collected measurements. System limitations and future work for system improvement are also discussed.


Subjects
Clothing, Hand/anatomy & histology, Hand/physiology, Models, Biological, Rehabilitation/instrumentation, Software, Adult, Humans, Male, Movement, Stroke Rehabilitation
18.
Comput Math Methods Med ; 2013: 213901, 2013.
Article in English | MEDLINE | ID: mdl-23840276

ABSTRACT

Texture analysis is the process of highlighting the key characteristics of an object represented in a digital image, thus providing an exhaustive and unambiguous mathematical description of it. Each characteristic is connected to a specific property of the object. In some cases these properties are visually perceptible and can be detected by operators based on Computer Vision techniques; in other cases they are not visually perceptible, and their computation requires operators based on Image Understanding approaches. The pixels composing high-quality medical images can be considered the result of a stochastic process, since they represent morphological or physiological processes. Empirical observations have shown that these images carry both visually perceptible and hidden significant aspects. For these reasons, the operators can be developed by means of a statistical approach. In this paper we present a set of customized first- and second-order statistical operators to perform advanced texture analysis of Magnetic Resonance Imaging (MRI) images. In particular, we specify the main rules defining the role of an operator and its relationship with other operators. Extensive experiments carried out on a wide dataset of MRI images of different body regions are also reported, demonstrating the usefulness and accuracy of the proposed approach.
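A minimal sketch of the two families of operators the abstract distinguishes: first-order statistics computed from the grey-level histogram, and second-order statistics computed from a grey-level co-occurrence matrix (GLCM). The quantisation level, pixel offset, and feature choices below are illustrative assumptions, not the paper's actual operators.

```python
import numpy as np

def first_order_stats(img):
    """First-order texture statistics from the grey-level distribution."""
    x = img.astype(float).ravel()
    mu = x.mean()
    sigma = x.std()
    skew = ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
    kurt = ((x - mu) ** 4).mean() / (sigma ** 4 + 1e-12)
    return {"mean": mu, "std": sigma, "skewness": skew, "kurtosis": kurt}

def glcm_features(img, levels=8, dx=1, dy=0):
    """Second-order statistics from a grey-level co-occurrence matrix."""
    # Quantise 8-bit grey levels down to a small number of bins.
    q = np.clip(np.floor(img.astype(float) / 256.0 * levels).astype(int),
                0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurrences of grey levels at the chosen pixel offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalise to a joint probability distribution
    i, j = np.indices((levels, levels))
    return {
        "contrast": float(((i - j) ** 2 * glcm).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
    }
```

For a perfectly uniform region the GLCM collapses to a single cell, so energy is 1 and contrast is 0; textured regions spread probability mass off the diagonal, raising contrast and lowering energy.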


Subjects
Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/statistics & numerical data, Bone and Bones/anatomy & histology, Brain/anatomy & histology, Computational Biology, Computer Simulation, Databases, Factual/statistics & numerical data, Heart/anatomy & histology, Humans, Liver/anatomy & histology, Models, Anatomic, Stochastic Processes
19.
Cogn Process ; 7(2): 121-8, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16683175

ABSTRACT

This paper reports on the research activities performed by the Pictorial Computing Laboratory at the University of Rome, La Sapienza, during the last 5 years. This work, essentially based on the study of human-computer interaction, spans from metamodels of interaction down to prototypes of interactive systems for synchronous multimedia communication and groupwork, and annotation systems for web pages, also encompassing theoretical and practical issues of visual languages and environments, including pattern recognition algorithms. Applications such as e-learning and collaborative work are also considered.


Subjects
Academies and Institutes, Visual Perception/physiology, Humans, Models, Neurological, Neurology/education, Photic Stimulation, Rome