Results 1 - 17 of 17
1.
Sensors (Basel) ; 22(18)2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36146070

ABSTRACT

Computer-aided diagnosis (CAD) systems can be used to process breast ultrasound (BUS) images with the goal of enhancing the capability of diagnosing breast cancer. Many CAD systems operate by analyzing the region-of-interest (ROI) that contains the tumor in the BUS image using conventional texture-based classification models and deep learning-based classification models. Hence, the development of these systems requires automatic methods to localize the ROI that contains the tumor in the BUS image. Deep learning object-detection models can be used to localize the ROI that contains the tumor, but the ROI generated by one model might be better than the ROIs generated by other models. In this study, a new method, called the edge-based selection method, is proposed to analyze the ROIs generated by different deep learning object-detection models with the goal of selecting the ROI that improves the localization of the tumor region. The proposed method employs edge maps computed for BUS images using the recently introduced Dense Extreme Inception Network (DexiNed) deep learning edge-detection model. To the best of our knowledge, our study is the first study that has employed a deep learning edge-detection model to detect the tumor edges in BUS images. The proposed edge-based selection method is applied to analyze the ROIs generated by four deep learning object-detection models. The performance of the proposed edge-based selection method and the four deep learning object-detection models is evaluated using two BUS image datasets. The first dataset, which is used to perform cross-validation evaluation analysis, is a private dataset that includes 380 BUS images. The second dataset, which is used to perform generalization evaluation analysis, is a public dataset that includes 630 BUS images. For both the cross-validation evaluation analysis and the generalization evaluation analysis, the proposed method obtained the overall ROI detection rate, mean precision, mean recall, and mean F1-score values of 98%, 0.91, 0.90, and 0.90, respectively. Moreover, the results show that the proposed edge-based selection method outperformed the four deep learning object-detection models as well as three baseline-combining methods that can be used to combine the ROIs generated by the four deep learning object-detection models. These findings suggest the potential of employing our proposed method to analyze the ROIs generated using different deep learning object-detection models to select the ROI that improves the localization of the tumor region.
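
The following is a minimal, illustrative Python sketch of the edge-based ROI selection idea described in the abstract above. The DexiNed edge map and detector-generated ROIs are replaced by synthetic stand-ins, and the scoring heuristic (mean edge strength in a thin band along each ROI boundary) is an assumption for illustration, not the paper's exact selection criterion.

```python
import numpy as np

def boundary_edge_score(edge_map, roi, band=3):
    """Score an ROI by the mean edge strength in a thin band along its boundary.

    edge_map : 2-D array of edge probabilities in [0, 1] (e.g., a DexiNed output).
    roi      : (x, y, w, h) bounding box.
    band     : half-width of the boundary band in pixels.
    """
    x, y, w, h = roi
    h_img, w_img = edge_map.shape
    x0, y0 = max(x - band, 0), max(y - band, 0)
    x1, y1 = min(x + w + band, w_img), min(y + h + band, h_img)
    outer = edge_map[y0:y1, x0:x1]
    # Exclude the region strictly inside the boundary band.
    ix0, iy0 = min(x + band, w_img), min(y + band, h_img)
    ix1, iy1 = max(x + w - band, 0), max(y + h - band, 0)
    mask = np.ones_like(outer, dtype=bool)
    if iy1 > iy0 and ix1 > ix0:
        mask[iy0 - y0:iy1 - y0, ix0 - x0:ix1 - x0] = False
    return float(outer[mask].mean()) if mask.any() else 0.0

def select_roi(edge_map, candidate_rois):
    """Return the candidate ROI whose boundary best matches the edge map."""
    scores = [boundary_edge_score(edge_map, roi) for roi in candidate_rois]
    return candidate_rois[int(np.argmax(scores))], scores

# Toy usage: a synthetic edge map with a bright rectangular tumor contour.
edge_map = np.zeros((256, 256))
edge_map[80:81, 60:180] = 1.0    # top edge of the "tumor"
edge_map[160:161, 60:180] = 1.0  # bottom edge
edge_map[80:161, 60:61] = 1.0    # left edge
edge_map[80:161, 179:180] = 1.0  # right edge
candidates = [(58, 78, 124, 85), (20, 20, 80, 80), (100, 30, 60, 60)]
best, scores = select_roi(edge_map, candidates)
print("selected ROI:", best, "scores:", np.round(scores, 3))
```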


Subjects
Breast Neoplasms , Deep Learning , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Computer-Assisted Diagnosis , Female , Humans , Mammary Ultrasonography/methods
2.
Sensors (Basel) ; 20(8)2020 Apr 24.
Article in English | MEDLINE | ID: mdl-32344557

ABSTRACT

Game-based rehabilitation systems provide an effective tool to engage cerebral palsy patients in physical exercises within an exciting and entertaining environment. A crucial factor to ensure the effectiveness of game-based rehabilitation systems is to assess the correctness of the movements performed by the patient during the game-playing sessions. In this study, we propose a game-based rehabilitation system for upper-limb cerebral palsy that includes three game-based exercises and a computerized assessment method. The game-based exercises aim to engage the participant in shoulder flexion, shoulder horizontal abduction/adduction, and shoulder adduction physical exercises that target the right arm. Human interaction with the game-based rehabilitation system is achieved using a Kinect sensor that tracks the skeleton joints of the participant. The computerized assessment method aims to assess the correctness of the right arm movements during each game-playing session by analyzing the tracking data acquired by the Kinect sensor. To evaluate the performance of the computerized assessment method, two groups of participants volunteered to participate in the game-based exercises. The first group included six cerebral palsy children and the second group included twenty typically developing subjects. For every participant, the computerized assessment method was employed to assess the correctness of the right arm movements in each game-playing session and these computer-based assessments were compared with matching gold standard evaluations provided by an experienced physiotherapist. The results reported in this study suggest the feasibility of employing the computerized assessment method to evaluate the correctness of the right arm movements during the game-playing sessions.
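
As a rough illustration of how such a computerized assessment could work, the sketch below computes a shoulder angle from three tracked Kinect joints and labels a repetition as correct if the peak angle reaches a target range. The joint names, the dictionary layout of the skeleton stream, and the thresholds are hypothetical and are not taken from the paper.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def assess_shoulder_flexion(frames, target_deg=90.0, tolerance_deg=10.0):
    """Label a repetition as correct if the peak shoulder angle
    (elbow-shoulder-hip) reaches the target range.

    frames: list of dicts with 3-D positions for 'shoulder_r', 'elbow_r',
            'hip_r' (a hypothetical layout of the Kinect skeleton stream).
    """
    peak = max(joint_angle(f['elbow_r'], f['shoulder_r'], f['hip_r'])
               for f in frames)
    return peak >= target_deg - tolerance_deg, peak

# Toy usage: the right arm is raised from resting (~18 deg) to ~86 deg.
frames = [
    {'shoulder_r': (0, 1.4, 0), 'hip_r': (0, 0.9, 0), 'elbow_r': (0.1, 1.1, 0)},
    {'shoulder_r': (0, 1.4, 0), 'hip_r': (0, 0.9, 0), 'elbow_r': (0.3, 1.38, 0)},
]
ok, peak = assess_shoulder_flexion(frames)
print(f"correct movement: {ok}, peak angle: {peak:.1f} deg")
```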


Subjects
Cerebral Palsy/therapy , Stroke Rehabilitation/methods , Child , Preschool Child , Exercise Therapy/methods , Female , Humans , Joints/physiology , Male , Shoulder/physiology , Skeleton/physiology , Upper Extremity/physiology
3.
Sensors (Basel) ; 20(23)2020 Nov 30.
Article in English | MEDLINE | ID: mdl-33265900

ABSTRACT

This study aims to enable effective breast ultrasound image classification by combining deep features with conventional handcrafted features to classify the tumors. In particular, the deep features are extracted from a pre-trained convolutional neural network model, namely the VGG19 model, at six different extraction levels. The deep features extracted at each level are analyzed using a feature selection algorithm to identify the deep feature combination that achieves the highest classification performance. Furthermore, the extracted deep features are combined with handcrafted texture and morphological features and processed using feature selection to investigate the possibility of improving the classification performance. The cross-validation analysis, which is performed using 380 breast ultrasound images, shows that the best combination of deep features is obtained using a feature set, denoted CONV features, that includes the convolution features extracted from all convolution blocks of the VGG19 model. In particular, the CONV features achieved mean accuracy, sensitivity, and specificity values of 94.2%, 93.3%, and 94.9%, respectively. The analysis also shows that the performance of the CONV features degrades substantially when the feature selection algorithm is not applied. The classification performance of the CONV features is improved by combining these features with handcrafted morphological features to achieve mean accuracy, sensitivity, and specificity values of 96.1%, 95.7%, and 96.3%, respectively. Furthermore, the cross-validation analysis demonstrates that the CONV features and the combined CONV and morphological features outperform the handcrafted texture and morphological features as well as the fine-tuned VGG19 model. The generalization performance of the CONV features and the combined CONV and morphological features is demonstrated by performing the training using the 380 breast ultrasound images and the testing using another dataset that includes 163 images. The results suggest that the combined CONV and morphological features can achieve effective breast ultrasound image classification that increases the capability of detecting malignant tumors and reduces the potential of misclassifying benign tumors.
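
A minimal sketch of the feature selection and classification stage is shown below, using synthetic arrays as stand-ins for the VGG19 CONV features and the handcrafted morphological features; the selector (mutual information with k = 64) and the SVM settings are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images = 380

# Stand-ins for the real inputs: deep CONV features (e.g., pooled VGG19
# activations) and handcrafted morphological features per image.
deep_features = rng.normal(size=(n_images, 512))
morph_features = rng.normal(size=(n_images, 12))
labels = rng.integers(0, 2, size=n_images)   # 0 = benign, 1 = malignant

combined = np.hstack([deep_features, morph_features])

# Feature selection followed by an SVM classifier, evaluated by cross-validation.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=64),
    SVC(kernel='rbf', C=1.0),
)
scores = cross_val_score(model, combined, labels, cv=5)
print(f"mean cross-validation accuracy: {scores.mean():.3f}")
```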


Subjects
Breast Neoplasms , Deep Learning , Ultrasonography , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Neural Networks (Computer)
4.
Sensors (Basel) ; 18(8)2018 Aug 20.
Article in English | MEDLINE | ID: mdl-30127311

ABSTRACT

Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analysis of the nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we have utilized the 2D arousal-valence plane to develop four emotion labeling schemes of the EEG signals, such that each emotion labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the different emotion classes that are defined using each of the four emotion labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, we design three performance evaluation analyses, namely the channel-based analysis, the feature-based analysis, and the neutral-class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels that cover various regions of the brain, reducing the dimensionality of the extracted time-frequency features, and excluding the EEG signals that correspond to the neutral class on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the various emotion classes defined using each of the four emotion labeling schemes are within the range of 73.8%-86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
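
The sketch below illustrates the overall pipeline with a spectrogram used as a simple stand-in for the quadratic TFD, time-frequency summary features, and a subject-specific SVM; the synthetic single-channel trials and binary arousal labels are placeholders for the DEAP data.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # DEAP EEG sampling rate after preprocessing (Hz)

def tf_features(trial, fs=FS):
    """Summarize a single-channel EEG trial in the joint time-frequency domain.

    A spectrogram is used here as a simple stand-in for the quadratic TFD;
    each frequency bin is described by the mean and standard deviation of its
    log-power over time.
    """
    f, t, sxx = spectrogram(trial, fs=fs, nperseg=fs, noverlap=fs // 2)
    logp = np.log(sxx + 1e-12)
    return np.concatenate([logp.mean(axis=1), logp.std(axis=1)])

# Synthetic stand-in for per-subject EEG trials (one channel, 10 s each).
rng = np.random.default_rng(1)
trials = rng.normal(size=(120, 10 * FS))
arousal_labels = rng.integers(0, 2, size=120)   # e.g., low vs. high arousal

X = np.array([tf_features(tr) for tr in trials])
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
print("CV accuracy:", cross_val_score(clf, X, arousal_labels, cv=5).mean())
```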


Subjects
Brain/physiology , Electroencephalography , Emotions , Support Vector Machine , Female , Humans , Male
5.
Sensors (Basel) ; 18(10)2018 Oct 16.
Article in English | MEDLINE | ID: mdl-30332743

ABSTRACT

Curvilinear ultrasound transducers are commonly used in various needle insertion interventions, but localizing the needle in curvilinear ultrasound images is usually challenging. In this paper, a new method is proposed to localize the needle in curvilinear ultrasound images by exciting the needle using a piezoelectric buzzer and imaging the excited needle using a curvilinear ultrasound transducer to acquire a power Doppler image and a B-mode image. The needle-induced Doppler responses that appear in the power Doppler image are analyzed to estimate the needle axis initially and identify the candidate regions that are expected to include the needle. The candidate needle regions in the B-mode image are analyzed to improve the localization of the needle axis. The needle tip is determined by analyzing the intensity variations of the power Doppler and B-mode images around the needle axis. The proposed method is employed to localize different needles that are inserted in three ex vivo animal tissue types at various insertion angles, and the results demonstrate the capability of the method to achieve automatic, reliable and accurate needle localization. Furthermore, the proposed method outperformed two existing needle localization methods.
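
A simplified sketch of the Doppler-based axis estimation step is shown below: the Doppler responses of the excited needle are thresholded and the Radon transform of the binary map is searched for its peak. The fixed threshold and the interpretation of the peak location are illustrative simplifications of the analysis described above.

```python
import numpy as np
from skimage.transform import radon

def estimate_needle_axis(doppler_img, thresh=0.5):
    """Estimate the needle axis from a power Doppler image.

    The needle-induced Doppler responses are segmented by a simple threshold,
    and the Radon transform of the binary map is searched for its peak. The
    peak column indexes the projection angle at which the responses align
    (which determines the needle orientation up to the Radon convention) and
    the peak row gives the offset of the axis along the projection coordinate.
    """
    binary = (doppler_img >= thresh * doppler_img.max()).astype(float)
    theta = np.arange(0.0, 180.0, 0.5)
    sinogram = radon(binary, theta=theta, circle=False)
    row, col = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    angle_deg = theta[col]
    offset_px = row - sinogram.shape[0] // 2
    return angle_deg, offset_px

# Toy Doppler image containing a bright oblique streak (the excited needle).
img = np.zeros((200, 200))
rr = np.arange(40, 160)
cc = (0.7 * (rr - 40) + 30).astype(int)
img[rr, cc] = 1.0
print("estimated projection angle (deg), offset (px):", estimate_needle_axis(img))
```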


Subjects
Computer-Assisted Image Interpretation/methods , Doppler Ultrasonography/methods , Animals , Cattle , Equipment Design , Feasibility Studies , Liver/diagnostic imaging , Skeletal Muscle/diagnostic imaging , Needles , Doppler Ultrasonography/instrumentation
6.
Sensors (Basel) ; 17(9)2017 Aug 23.
Article in English | MEDLINE | ID: mdl-28832513

ABSTRACT

This paper presents an EEG-based brain-computer interface system for classifying eleven motor imagery (MI) tasks within the same hand. The proposed system utilizes the Choi-Williams time-frequency distribution (CWD) to construct a time-frequency representation (TFR) of the EEG signals. The constructed TFR is used to extract five categories of time-frequency features (TFFs). The TFFs are processed using a hierarchical classification model to identify the MI task encapsulated within the EEG signals. To evaluate the performance of the proposed approach, EEG data were recorded for eighteen intact subjects and four amputated subjects while imagining performing each of the eleven hand MI tasks. Two performance evaluation analyses, namely channel- and TFF-based analyses, are conducted to identify the best subset of EEG channels and the TFFs category, respectively, that enable the highest classification accuracy between the MI tasks. In each evaluation analysis, the hierarchical classification model is trained using two training procedures, namely subject-dependent and subject-independent procedures. These two training procedures quantify the capability of the proposed approach to capture both intra- and inter-personal variations in the EEG signals for different MI tasks within the same hand. The results demonstrate the efficacy of the approach for classifying the MI tasks within the same hand. In particular, the classification accuracies obtained for the intact and amputated subjects are as high as 88.8% and 90.2%, respectively, for the subject-dependent training procedure, and 80.8% and 87.8%, respectively, for the subject-independent training procedure. These results suggest the feasibility of applying the proposed approach to control dexterous prosthetic hands, which can be of great benefit for individuals suffering from hand amputations.
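
The hierarchical classification stage can be illustrated with the two-level scheme below, where a top-level SVM predicts a task group and a per-group SVM predicts the exact MI task; the grouping of tasks by finger and the use of SVMs at both levels are assumptions made for this sketch, not the paper's exact hierarchy.

```python
import numpy as np
from sklearn.svm import SVC

class TwoLevelClassifier:
    """Minimal two-level (hierarchical) classifier: a top-level SVM first
    predicts a task group, then a per-group SVM predicts the exact task."""

    def __init__(self, task_to_group):
        self.task_to_group = task_to_group
        self.top = SVC(kernel='rbf')
        self.leaf = {}

    def fit(self, X, y):
        groups = np.array([self.task_to_group[t] for t in y])
        self.top.fit(X, groups)
        for g in np.unique(groups):
            idx = groups == g
            clf = SVC(kernel='rbf')
            clf.fit(X[idx], y[idx])
            self.leaf[g] = clf
        return self

    def predict(self, X):
        out = np.empty(len(X), dtype=object)
        g_pred = self.top.predict(X)
        for g in np.unique(g_pred):
            idx = g_pred == g
            out[idx] = self.leaf[g].predict(X[idx])
        return out

# Toy usage with hypothetical task names and random stand-in features.
rng = np.random.default_rng(2)
tasks = ['thumb_flex', 'thumb_ext', 'index_flex', 'index_ext']
task_to_group = {'thumb_flex': 'thumb', 'thumb_ext': 'thumb',
                 'index_flex': 'index', 'index_ext': 'index'}
X = rng.normal(size=(200, 20))
y = np.array([tasks[i % 4] for i in range(200)])
model = TwoLevelClassifier(task_to_group).fit(X[:160], y[:160])
print("predictions:", model.predict(X[160:165]))
```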


Subjects
Brain-Computer Interfaces , Electroencephalography , Hand , Humans , Imagination , User-Computer Interface
7.
Med Phys ; 49(8): 4999-5013, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35608237

ABSTRACT

BACKGROUND: Ultrasound is employed in needle interventions to visualize the anatomical structures and track the needle. Nevertheless, needle detection in ultrasound images is a difficult task, specifically at steep insertion angles. PURPOSE: A new method is presented to enable effective needle detection using ultrasound B-mode and power Doppler analyses. METHODS: A small buzzer is used to excite the needle and an ultrasound system is utilized to acquire B-mode and power Doppler images for the needle. The B-mode and power Doppler images are processed using Radon transform and local-phase analysis to initially detect the axis of the needle. The detection of the needle axis is improved by processing the power Doppler image using alpha shape analysis to define a region of interest (ROI) that contains the needle. Also, a set of feature maps is extracted from the ROI in the B-mode image. The feature maps are processed using a machine learning classifier to construct a likelihood image that visualizes the posterior needle likelihoods of the pixels. Radon transform is applied to the likelihood image to achieve an improved needle axis detection. Additionally, the region in the B-mode image surrounding the needle axis is analyzed to identify the needle tip using a custom-made probabilistic approach. Our method was utilized to detect needles inserted in ex vivo animal tissues at shallow [20°-40°), moderate [40°-60°), and steep [60°-85°] angles. RESULTS: Our method detected the needles with failure rates equal to 0% and mean angle, axis, and tip errors less than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Additionally, our method achieved favorable results compared to two recently introduced needle detection methods. CONCLUSIONS: The results indicate the potential of applying our method to achieve effective needle detection in ultrasound images.
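
A reduced sketch of the feature-map and likelihood-image step is given below: simple per-pixel feature maps are classified (here with logistic regression, an illustrative stand-in for the paper's classifier) to produce a needle-likelihood image, and the Radon transform of that image gives the dominant projection angle. The feature set, the supervised labeling, and the synthetic frame are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.transform import radon
from sklearn.linear_model import LogisticRegression

def pixel_feature_maps(bmode):
    """Simple per-pixel feature maps: intensity, local mean, gradient magnitude."""
    local_mean = ndi.uniform_filter(bmode, size=7)
    grad_mag = ndi.gaussian_gradient_magnitude(bmode, sigma=1.5)
    return np.stack([bmode, local_mean, grad_mag], axis=-1)

def needle_likelihood_image(bmode, needle_mask):
    """Train a per-pixel classifier on a labeled frame and return the posterior
    needle-likelihood image (a hypothetical supervised setup for illustration)."""
    feats = pixel_feature_maps(bmode).reshape(-1, 3)
    clf = LogisticRegression(max_iter=1000).fit(feats, needle_mask.ravel())
    return clf.predict_proba(feats)[:, 1].reshape(bmode.shape)

def axis_angle_from_likelihood(likelihood):
    theta = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(likelihood, theta=theta, circle=False)
    _, col = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return theta[col]

# Toy B-mode frame with a bright oblique needle-like streak.
rng = np.random.default_rng(3)
bmode = rng.normal(0.2, 0.05, size=(160, 160))
mask = np.zeros((160, 160), dtype=int)
rr = np.arange(30, 130)
cc = (0.8 * (rr - 30) + 20).astype(int)
mask[rr, cc] = 1
bmode[rr, cc] += 0.6
likelihood = needle_likelihood_image(bmode, mask)
print("estimated axis projection angle (deg):", axis_angle_from_likelihood(likelihood))
```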


Subjects
Needles , Radon , Animals , Ultrasonography/methods , Doppler Ultrasonography , Interventional Ultrasonography
8.
PeerJ Comput Sci ; 7: e498, 2021.
Article in English | MEDLINE | ID: mdl-33977136

ABSTRACT

Several higher education institutions have harnessed e-learning tools to empower the application of different learning models that enrich the educational process. Nevertheless, relying on commercial or open-source platforms, in some cases, to deliver e-learning could impact system acceptability, usability, and capability. Therefore, this study suggests design methods to develop effective learning management capabilities, such as attendance, coordination, course folder, course section homepage, learning materials, syllabus, emails, and student tracking, within a university portal named MyGJU. In particular, mechanisms to facilitate system setup, data integrity, information security, e-learning data reuse, version control automation, and multi-user collaboration have been applied to enable the e-learning modules in MyGJU to overcome some of the drawbacks of their counterparts in Moodle. Such system improvements are required to motivate both educators and students to engage in online learning. In addition, feature comparisons between MyGJU, Moodle, and other in-house systems have been conducted for reference. Also, the system deployment outcomes and user survey results confirm wide acceptance among instructors and students of using MyGJU, as opposed to Moodle, as a first point of contact for basic e-learning tasks. Further, the results illustrate that the in-house e-learning modules in MyGJU are engaging, easy to use, useful, and interactive.

9.
Med Phys ; 47(6): 2356-2379, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32160309

ABSTRACT

PURPOSE: Ultrasound imaging is used in many minimally invasive needle insertion procedures to track the advancing needle, but localizing the needle in ultrasound images can be challenging, particularly at steep insertion angles. Previous methods have been introduced to localize the needle in ultrasound images, but the majority of these methods are based on ultrasound B-mode image analysis that is affected by the needle visibility. To address this limitation, we propose a two-phase, signature-based method to achieve reliable and accurate needle localization in curvilinear ultrasound images based on the beamformed radio frequency (RF) signals that are acquired using conventional ultrasound imaging systems. METHODS: In the first phase of our proposed method, the beamformed RF signals are divided into overlapping segments and these segments are processed to extract needle-specific features to identify the needle echoes. The features are analyzed using a support vector machine classifier to synthesize a quantitative image that highlights the needle. The quantitative image is processed using the Radon transform to achieve a reliable and accurate signature-based estimation of the needle axis. In the second phase, the accuracy of the needle axis estimation is improved by processing the RF samples located around the signature-based estimation of the needle axis using local phase analysis combined with the Radon transform. Moreover, a probabilistic approach is employed to identify the needle tip. The proposed method is used to localize needles with two different sizes inserted in ex vivo animal tissue specimens at various insertion angles. RESULTS: Our proposed method achieved reliable and accurate needle localization for an extended range of needle insertion angles with failure rates of 0% and mean angle, axis, and tip errors smaller than or equal to 0.7°, 0.6 mm, and 0.7 mm, respectively. Moreover, our proposed method outperformed a recently introduced needle localization method that is based on B-mode image analysis. CONCLUSIONS: These results suggest the potential of employing our signature-based method to achieve reliable and accurate needle localization during ultrasound-guided needle insertion procedures.
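
The first-phase idea of segmenting the beamformed RF signals and classifying the segments can be sketched as follows; the window length, the simple envelope/energy features, and the toy labeling are assumptions, not the paper's needle-specific feature set.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def segment_features(rf_line, win=64, hop=32):
    """Split one beamformed RF line into overlapping segments and describe each
    segment with simple features (envelope mean/peak, energy, zero crossings).
    This feature set is an illustrative stand-in for the paper's features."""
    env = np.abs(hilbert(rf_line))
    feats, starts = [], []
    for s in range(0, len(rf_line) - win + 1, hop):
        seg, seg_env = rf_line[s:s + win], env[s:s + win]
        zc = np.count_nonzero(np.diff(np.signbit(seg).astype(np.int8)))
        feats.append([seg_env.mean(), seg_env.max(), np.sum(seg ** 2), zc])
        starts.append(s)
    return np.array(feats), np.array(starts)

# Synthetic RF line: speckle-like noise plus a strong echo where the needle lies.
rng = np.random.default_rng(4)
rf = rng.normal(0, 0.1, size=2048)
rf[900:1000] += np.sin(2 * np.pi * 0.25 * np.arange(100)) * 1.5

X, starts = segment_features(rf)
y = ((starts >= 900 - 32) & (starts < 1000)).astype(int)  # toy segment labels
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X, y)
needle_prob = clf.predict_proba(X)[:, 1]
print("most needle-like segment starts at sample:",
      starts[int(np.argmax(needle_prob))])
```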


Subjects
Computer-Assisted Image Processing , Needles , Animals , Imaging Phantoms , Ultrasonography , Interventional Ultrasonography
10.
Data Brief ; 33: 106534, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33299909

ABSTRACT

The aim of this paper is to present a dataset for Wi-Fi-based human activity recognition. The dataset is comprised of five experiments performed by 30 different subjects in three different indoor environments. The experiments performed in the first two environments are of a line-of-sight (LOS) nature, while the experiments performed in the third environment are of a non-line-of-sight (NLOS) nature. Each subject performed 20 trials for each of the experiments, which makes the overall number of recorded trials in the dataset equal to 3000 (30 subjects × 5 experiments × 20 trials). To record the data, we used the channel state information (CSI) tool [1] to capture the exchanged Wi-Fi packets between a Wi-Fi transmitter and receiver. The utilized transmitter and receiver are retrofitted with the Intel 5300 network interface card, which enabled us to capture the CSI values that are contained in the recorded transmissions. Unlike other publicly available human activity datasets, this dataset provides researchers with the ability to test their developed methodologies on both LOS and NLOS environments, in addition to many different variations of human movements, such as walking, falling, turning, and picking up a pen from the ground.
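
For readers who want to organize the trials programmatically, the sketch below builds a tidy index over a hypothetical on-disk layout; the directory structure, file extension, and activity names are assumptions and do not describe the dataset's documented format.

```python
from pathlib import Path
import pandas as pd

# Hypothetical on-disk layout (not the dataset's documented structure):
#   <root>/<environment>/<subject_id>/<activity>/trial_<k>.dat
ROOT = Path("wifi_har_dataset")

def build_index(root=ROOT):
    """Collect all trial files into a tidy index for later loading/splitting."""
    records = []
    for path in sorted(root.glob("*/*/*/trial_*.dat")):
        env, subject, activity = path.parts[-4:-1]
        records.append({"environment": env, "subject": subject,
                        "activity": activity, "path": str(path)})
    return pd.DataFrame(records)

index = build_index()
if not index.empty:
    # e.g., inspect per-subject trial counts for the LOS environments only
    # (environment folder names here are assumed).
    los = index[index["environment"].isin(["env1_los", "env2_los"])]
    print(los.groupby(["subject", "activity"]).size().unstack(fill_value=0))
else:
    print("dataset not found at", ROOT)
```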

11.
Data Brief ; 31: 105668, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32462061

ABSTRACT

This paper presents a dataset for Wi-Fi-based human-to-human interaction recognition that comprises twelve different interactions performed by 40 different pairs of subjects in an indoor environment. Each pair of subjects performed ten trials of each of the twelve interactions and the total number of trials recorded in our dataset for all the 40 pairs of subjects is 4800 trials (i.e., 40 pairs of subjects × 12 interactions × 10 trials). The publicly available CSI tool [1] is used to record the Wi-Fi signals transmitted from a commercial off-the-shelf access point, namely the Sagemcom 2704 access point, to a desktop computer that is equipped with an Intel 5300 network interface card. The recorded Wi-Fi signals consist of the Received Signal Strength Indicator (RSSI) values and the Channel State Information (CSI) values. Unlike the publicly available Wi-Fi-based human activity datasets, which mainly have focused on activities performed by a single human, our dataset provides a collection of Wi-Fi signals that are recorded for 40 different pairs of subjects while performing twelve two-person interactions. The presented dataset can be exploited to advance Wi-Fi-based human activity recognition in different aspects, such as the use of various machine learning algorithms to recognize different human-to-human interactions.

12.
Neurosci Lett ; 698: 113-120, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30630057

ABSTRACT

Decoding the movements of different fingers within the same hand can increase the control dimensions of electroencephalography (EEG)-based brain-computer interface (BCI) systems. This, in turn, enables subjects who use assistive devices to better perform various dexterous tasks. However, decoding the movements performed by different fingers within the same hand by analyzing the EEG signals is considered a challenging task. In this paper, we present a new EEG-based BCI system for decoding the movements of each finger within the same hand based on analyzing the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi-Williams distribution (CWD). In particular, the CWD is employed to characterize the time-varying spectral components of the EEG signals and extract features that can capture movement-related information encapsulated within the EEG signals. The extracted CWD-based features are used to build a two-layer classification framework that decodes finger movements within the same hand. The performance of the proposed system is evaluated by recording the EEG signals of eighteen healthy subjects while they performed twelve finger movements using their right hands. The results demonstrate the efficacy of the proposed system in decoding finger movements within the same hand of each subject.


Subjects
Electroencephalography , Fingers/physiology , Hand/physiology , Movement/physiology , Adult , Algorithms , Brain-Computer Interfaces , Electroencephalography/methods , Female , Humans , Imagination/physiology , Male , Young Adult
13.
Med Image Anal ; 50: 145-166, 2018 12.
Article in English | MEDLINE | ID: mdl-30336383

ABSTRACT

Three-dimensional (3D) motorized curvilinear ultrasound probes provide an effective, low-cost tool to guide needle interventions, but localizing and tracking the needle in 3D ultrasound volumes is often challenging. In this study, a new method is introduced to localize and track the needle using 3D motorized curvilinear ultrasound probes. In particular, a low-cost camera mounted on the probe is employed to estimate the needle axis. The camera-estimated axis is used to identify a volume of interest (VOI) in the ultrasound volume that enables high needle visibility. This VOI is analyzed using local phase analysis and the random sample consensus algorithm to refine the camera-estimated needle axis. The needle tip is determined by searching the localized needle axis using a probabilistic approach. Dynamic needle tracking in a sequence of 3D ultrasound volumes is enabled by iteratively applying a Kalman filter to estimate the VOI that includes the needle in the successive ultrasound volume and limiting the localization analysis to this VOI. A series of ex vivo animal experiments are conducted to evaluate the accuracy of needle localization and tracking. The results show that the proposed method can localize the needle in individual ultrasound volumes with maximum error rates of 0.7 mm for the needle axis, 1.7° for the needle angle, and 1.2 mm for the needle tip. Moreover, the proposed method can track the needle in a sequence of ultrasound volumes with maximum error rates of 1.0 mm for the needle axis, 2.0° for the needle angle, and 1.7 mm for the needle tip. These results suggest the feasibility of applying the proposed method to localize and track the needle using 3D motorized curvilinear ultrasound probes.
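
The tracking step can be illustrated with a standard constant-velocity Kalman filter over the needle-tip position, whose prediction provides the center of the VOI to analyze in the next volume; the noise parameters below are arbitrary illustrative values, not those used in the paper.

```python
import numpy as np

class ConstantVelocityKalman3D:
    """Constant-velocity Kalman filter used to predict where the needle tip
    (and hence the volume of interest) will be in the next ultrasound volume.
    State: [x, y, z, vx, vy, vz]; measurement: tip position [x, y, z]."""

    def __init__(self, dt=1.0, process_var=0.05, meas_var=0.5):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = process_var * np.eye(6)
        self.R = meas_var * np.eye(3)
        self.x = np.zeros(6)
        self.P = np.eye(6)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]          # predicted tip -> center of the next VOI

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

# Toy usage: noisy tip detections along a straight insertion path (mm).
rng = np.random.default_rng(5)
kf = ConstantVelocityKalman3D()
for k in range(10):
    true_tip = np.array([5.0 + 1.2 * k, 20.0, 30.0 - 0.8 * k])
    detected_tip = true_tip + rng.normal(0, 0.3, size=3)
    voi_center = kf.predict()          # where to look in volume k
    filtered_tip = kf.update(detected_tip)
print("last filtered tip (mm):", np.round(filtered_tip, 2))
```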


Subjects
Three-Dimensional Imaging , Ultrasonography/methods , Three-Dimensional Imaging/instrumentation , Three-Dimensional Imaging/methods , Needles
14.
Comput Math Methods Med ; 2016: 6740956, 2016.
Article in English | MEDLINE | ID: mdl-28127383

ABSTRACT

Ultrasound imaging is commonly used for breast cancer diagnosis, but accurate interpretation of breast ultrasound (BUS) images is often challenging and operator-dependent. Computer-aided diagnosis (CAD) systems can be employed to provide the radiologists with a second opinion to improve the diagnosis accuracy. In this study, a new CAD system is developed to enable accurate BUS image classification. In particular, an improved texture analysis is introduced, in which the tumor is divided into a set of nonoverlapping regions of interest (ROIs). Each ROI is analyzed using gray-level cooccurrence matrix features and a support vector machine classifier to estimate its tumor class indicator. The tumor class indicators of all ROIs are combined using a voting mechanism to estimate the tumor class. In addition, morphological analysis is employed to classify the tumor. A probabilistic approach is used to fuse the classification results of the multiple-ROI texture analysis and morphological analysis. The proposed approach is applied to classify 110 BUS images that include 64 benign and 46 malignant tumors. The accuracy, specificity, and sensitivity obtained using the proposed approach are 98.2%, 98.4%, and 97.8%, respectively. These results demonstrate that the proposed approach can effectively be used to differentiate benign and malignant tumors.
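
A condensed sketch of the multiple-ROI texture analysis is given below: gray-level co-occurrence features are computed per ROI, a per-ROI SVM provides a class indicator, and the indicators are combined by majority voting. The quantization level, GLCM settings, and toy training patches are illustrative assumptions (requires scikit-image >= 0.19 for the graycomatrix/graycoprops names).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

PROPS = ("contrast", "homogeneity", "energy", "correlation")

def glcm_features(roi, levels=32):
    """Gray-level co-occurrence features for one tumor sub-region (ROI)."""
    q = (roi.astype(float) / roi.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

def classify_tumor(rois, clf):
    """Per-ROI SVM decisions combined by majority voting (0 = benign, 1 = malignant)."""
    votes = clf.predict(np.array([glcm_features(r) for r in rois]))
    return int(votes.sum() > len(votes) / 2), votes

# Toy training data: smooth (benign-like) vs. noisy (malignant-like) patches.
rng = np.random.default_rng(6)
benign_rois = [rng.normal(120, 5, (32, 32)).clip(1, 255) for _ in range(20)]
malignant_rois = [rng.normal(120, 45, (32, 32)).clip(1, 255) for _ in range(20)]
X = np.array([glcm_features(r) for r in benign_rois + malignant_rois])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel='rbf').fit(X, y)

test_rois = [rng.normal(120, 45, (32, 32)).clip(1, 255) for _ in range(5)]
label, votes = classify_tumor(test_rois, clf)
print("tumor class:", "malignant" if label else "benign", "| ROI votes:", votes)
```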


Subjects
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Computer-Assisted Image Interpretation/methods , Computer-Assisted Image Processing/methods , Mammary Ultrasonography , Area Under Curve , Computer-Assisted Diagnosis , False Positive Reactions , Female , Humans , Statistical Models , Normal Distribution , Automated Pattern Recognition/methods , Probability , Reproducibility of Results , Sensitivity and Specificity , Software
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 319-322, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268341

ABSTRACT

Recent development and evolution of high degree-of-freedom (DOF) robotic hands have brought great technological strides toward enhancing the quality of life of people with amputations. Robust hand kinematics estimation mechanisms have shown promising results for controlling robotic hands that can mimic human hand functions and perform dexterous tasks of daily life. In this paper, we propose an ensemble-based regression approach for continuous estimation of wrist and finger movements from surface electromyography (sEMG) signals. The proposed approach extracts time-domain features from the sEMG signals and uses Gradient Boosted Regression Tree (GBRT) ensembles to estimate the kinematics of the wrist and fingers. Furthermore, we propose two different performance evaluation procedures to demonstrate the efficacy of the proposed approach and its feasibility for accurately estimating hand kinematics.
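
The approach can be sketched as follows with classic time-domain sEMG features and a GBRT ensemble wrapped for multi-output regression; the window length, channel count, and synthetic kinematic targets are placeholders rather than the paper's experimental setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

def td_features(window):
    """Classic time-domain sEMG features for one channel window:
    mean absolute value, RMS, waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.count_nonzero(np.diff(np.signbit(window).astype(np.int8)))
    return np.array([mav, rms, wl, zc])

def windowed_features(emg, win=200, hop=50):
    """emg: (n_samples, n_channels) sEMG; returns one feature row per window."""
    rows = []
    for s in range(0, emg.shape[0] - win + 1, hop):
        rows.append(np.concatenate([td_features(emg[s:s + win, c])
                                    for c in range(emg.shape[1])]))
    return np.array(rows)

# Synthetic stand-ins for multi-channel sEMG and wrist/finger joint angles.
rng = np.random.default_rng(7)
emg = rng.normal(size=(10000, 8))                     # 8 sEMG channels
X = windowed_features(emg)
kin = rng.normal(size=(X.shape[0], 5))                # 5 joint-angle targets

model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100))
model.fit(X[:150], kin[:150])
pred = model.predict(X[150:155])
print("predicted joint angles for 5 windows:\n", np.round(pred, 2))
```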


Subjects
Algorithms , Electromyography/methods , Fingers/physiology , Movement/physiology , Wrist/physiology , Biomechanical Phenomena , Physical Exercise/physiology , Humans
16.
Article in English | MEDLINE | ID: mdl-26737412

ABSTRACT

With the aging of the population, efficient tracking of elderly activities of daily living (ADLs) has gained interest. Advances in assistive computing and sensor technologies have made it possible to support elderly people through real-time acquisition and monitoring for emergency and medical care. In an earlier study, we proposed an anatomical-plane-based human activity representation for elderly fall detection, namely the motion-pose geometric descriptor (MPGD). In this paper, we present a prediction framework that utilizes the MPGD to construct an accumulated-histogram-based representation of an ongoing human activity. The accumulated histograms of MPGDs are then used to train a set of support vector machine classifiers with probabilistic outputs to predict a fall in an ongoing human activity. Evaluation results of the proposed framework, using real case scenarios, demonstrate its efficacy in providing a feasible approach towards accurately predicting elderly falls.
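
A minimal sketch of the accumulated-histogram idea is shown below, using a scalar per-frame descriptor as a simplified stand-in for the MPGD and a probabilistic SVM to report the fall probability as frames arrive; the descriptor, bin count, and synthetic streams are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

N_BINS = 16

def accumulated_histogram(descriptor_stream, n_bins=N_BINS, value_range=(0.0, 1.0)):
    """Accumulate a normalized histogram of per-frame descriptor values over
    the frames observed so far in an ongoing activity.
    (The scalar descriptor here is a simplified stand-in for the MPGD.)"""
    hist, _ = np.histogram(descriptor_stream, bins=n_bins, range=value_range)
    return hist / max(hist.sum(), 1)

# Synthetic training sequences: fall-like streams drift toward high descriptor
# values, non-fall streams stay low.
rng = np.random.default_rng(8)
def make_stream(is_fall, length=60):
    base = rng.uniform(0.0, 0.4, size=length)
    if is_fall:
        base[length // 2:] += rng.uniform(0.4, 0.6, size=length - length // 2)
    return np.clip(base, 0, 1)

streams = [make_stream(i % 2 == 1) for i in range(80)]
labels = np.array([i % 2 for i in range(80)])
X = np.array([accumulated_histogram(s) for s in streams])
clf = SVC(probability=True).fit(X, labels)

# Online use: update the histogram as frames arrive and report fall probability.
ongoing = make_stream(True)
for t in (20, 40, 60):
    h = accumulated_histogram(ongoing[:t])
    print(f"after {t} frames, P(fall) = {clf.predict_proba([h])[0, 1]:.2f}")
```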


Subjects
Accidental Falls , Ambulatory Monitoring/methods , Activities of Daily Living , Aged , Algorithms , Humans , Computer-Assisted Image Processing , Ambulatory Monitoring/instrumentation , Support Vector Machine , Video Recording/instrumentation , Video Recording/methods
17.
Article in English | MEDLINE | ID: mdl-25571343

ABSTRACT

Falls are a common cause of injuries and trauma for the elderly and can be life-threatening. Delivering prompt medical support after a fall is essential to prevent lasting injuries. Therefore, effective fall detection could provide urgent support and dramatically reduce the risk of such mishaps. In this paper, we propose a hierarchical classification framework based on a novel anatomical-plane-based representation for elderly fall detection. The framework obtains human skeletal joints using Microsoft Kinect sensors and transforms them into a human representation. The representation is then utilized to classify the sensor input sequences and provide a semantic meaning for different human activities. Evaluation results of the proposed framework, using real case scenarios, demonstrate its efficacy in providing a feasible approach towards accurately detecting elderly falls.


Subjects
Accidental Falls/prevention & control , Aged , Humans , Computer-Assisted Image Interpretation , Physiologic Monitoring , Movement , Posture , Risk