Results 1 - 20 of 22
1.
Epilepsy Behav ; 154: 109735, 2024 May.
Article in English | MEDLINE | ID: mdl-38522192

ABSTRACT

Seizure events can manifest as transient disruptions in the control of movements, which may be organized in distinct behavioral sequences and may or may not be accompanied by other observable features such as altered facial expressions. The analysis of these clinical signs, referred to as semiology, is subject to observer variation when specialists evaluate video-recorded events in the clinical setting. To enhance the accuracy and consistency of evaluations, computer-aided video analysis of seizures has emerged as a natural avenue. In the field of medical applications, deep learning and computer vision approaches have driven substantial advancements. Historically, these approaches have been used for disease detection, classification, and prediction using diagnostic data; however, there has been limited exploration of their application to video-based motion analysis in the clinical epileptology setting. While vision-based technologies do not aim to replace clinical expertise, they can significantly contribute to medical decision-making and patient care by providing quantitative evidence and decision support. Behavior monitoring tools offer several advantages, such as providing objective information, detecting events that are challenging to observe, reducing documentation effort, and extending assessment capabilities to areas with limited expertise. The main applications of these tools are (1) improved seizure detection methods, and (2) refined semiology analysis for predicting seizure type and cerebral localization. In this paper, we detail the foundation technologies used in vision-based systems for the analysis of seizure videos, highlighting their success in semiology detection and analysis, with a focus on work published in the last 7 years. We systematically present these methods and indicate how the adoption of deep learning for the analysis of video recordings of seizures could be approached.
Additionally, we illustrate how existing technologies can be interconnected through an integrated system for video-based semiology analysis. Each module can be customized and improved by adapting more accurate and robust deep learning approaches as these evolve. Finally, we discuss challenges and research directions for future studies.


Subjects
Deep Learning , Seizures , Video Recording , Humans , Seizures/diagnosis , Seizures/physiopathology , Video Recording/methods , Electroencephalography/methods
2.
Heliyon ; 9(6): e16763, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37303525

ABSTRACT

Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in a healthcare setting. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neuro-developmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children from their early developmental stages onwards, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnosis process is time-consuming, as it requires long-term behavior observation and is constrained by the scarce availability of specialists. We demonstrate the effectiveness of a region-based computer vision system to help clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g. videos collected with consumer-grade cameras, in varied environments). The data is pre-processed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose both light-weight and conventional models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. By extensively evaluating feature extraction and learning strategies, we demonstrate that the highest performance is attained through the use of an Inflated 3D Convnet and a Multi-Stage Temporal Convolutional Network. Our model achieved a Weighted F1-score of 0.83 for the classification of the three autism-related actions. We also propose a light-weight solution by employing the ESNet backbone with the same action recognition model, achieving a competitive 0.71 Weighted F1-score and enabling potential deployment on embedded systems.
Experimental results demonstrate the ability of our proposed models to recognize autism-related actions from videos captured in an uncontrolled environment, and thus can assist clinicians in analyzing ASD.
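The core operation behind the temporal models described above can be sketched as a dilated 1-D convolution applied over a sequence of per-frame feature vectors. The following is a minimal NumPy illustration of that idea, not the authors' implementation; shapes, kernel size, and the ReLU stage are illustrative assumptions.

```python
import numpy as np

def dilated_temporal_conv(x, w, dilation=1):
    """Centred dilated 1-D convolution over time (illustrative sketch).

    x: (T, C_in) per-frame features (e.g. from a video backbone).
    w: (k, C_in, C_out) kernel with temporal width k.
    Returns (T, C_out); zero-padding preserves the sequence length T.
    """
    k, c_in, c_out = w.shape
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], c_out))
    for t in range(x.shape[0]):
        for i in range(k):
            # Sum contributions from frames spaced `dilation` steps apart.
            out[t] += xp[t + i * dilation] @ w[i]
    return out

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64))            # 32 frames, 64-dim features
kernel = rng.normal(size=(3, 64, 16)) * 0.1  # k=3 temporal kernel
stage = np.maximum(dilated_temporal_conv(feats, kernel, dilation=2), 0)  # ReLU
print(stage.shape)  # (32, 16)
```

Stacking such layers with increasing dilation widens the temporal receptive field without pooling, which is what lets frame-level predictions account for long-range relationships between frames.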

3.
Comput Methods Programs Biomed ; 232: 107451, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36893580

ABSTRACT

BACKGROUND AND OBJECTIVES: Advanced artificial intelligence and machine learning have great potential to redefine how skin lesions are detected, mapped, tracked and documented. Here, we propose a 3D whole-body imaging system known as 3DSkin-mapper to enable automated detection, evaluation and mapping of skin lesions. METHODS: A modular camera rig arranged in a cylindrical configuration was designed to automatically capture images of the entire skin surface of a subject synchronously from multiple angles. Based on the images, we developed algorithms for 3D model reconstruction, data processing, and skin lesion detection and tracking based on deep convolutional neural networks. We also introduced a customised, user-friendly, and adaptable interface that enables individuals to interactively visualise, manipulate, and annotate the images. The interface includes built-in features such as mapping 2D skin lesions onto the corresponding 3D model. RESULTS: The proposed system is developed for skin lesion screening; the focus of this paper is to introduce the system itself rather than a clinical study. Using synthetic and real images, we demonstrate the effectiveness of the proposed system by providing multiple views of a target skin lesion, enabling further 3D geometry analysis and longitudinal tracking. Skin lesions are identified as outliers that deserve more attention from a skin cancer physician. Our detector leverages expert-annotated labels to learn representations of skin lesions while capturing the effects of anatomical variability. It takes only a few seconds to capture the entire skin surface, and about half an hour to process and analyse the images. CONCLUSIONS: Our experiments show that the proposed system allows fast and easy whole-body 3D imaging. It can be used by dermatological clinics to conduct skin screening, detect and track skin lesions over time, identify suspicious lesions, and document pigmented lesions.
The system can potentially save clinicians significant time and effort. The 3D imaging and analysis have the potential to change the paradigm of whole-body photography, with many applications in skin diseases, including inflammatory and pigmentary disorders. With reduced time requirements for recording and documenting high-quality skin information, doctors could spend more time providing better-quality treatment based on more detailed and accurate information.


Subjects
Skin Neoplasms , Whole Body Imaging , Humans , Artificial Intelligence , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Algorithms
4.
Comput Med Imaging Graph ; 95: 102027, 2022 01.
Article in English | MEDLINE | ID: mdl-34959100

ABSTRACT

With the remarkable success of representation learning for prediction problems, we have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches. However, learning over patch-wise features using convolutional neural networks limits the ability of the model to capture global contextual information and comprehensively model tissue composition. The phenotypical and topological distribution of constituent histological entities plays a critical role in tissue diagnosis. As such, graph data representations and deep learning have attracted significant attention for encoding tissue representations, and capturing intra- and inter-entity interactions. In this review, we provide a conceptual grounding for graph analytics in digital pathology, including entity-graph construction and graph architectures, and present their current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction. We provide an overview of these methods in a systematic manner organized by the graph representation of the input image, scale, and organ on which they operate. We also outline the limitations of existing techniques, and suggest potential future research directions in this domain.


Subjects
Deep Learning , Neoplasms , Humans , Machine Learning , Neural Networks, Computer
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2601-2604, 2021 11.
Article in English | MEDLINE | ID: mdl-34891786

ABSTRACT

Inpatient falls are a serious safety issue in hospitals and healthcare facilities. Recent advances in video analytics for patient monitoring provide a non-intrusive avenue to reduce this risk through continuous activity monitoring. However, in-bed fall risk assessment systems have received less attention in the literature. The majority of prior studies have focused on fall event detection, and do not consider the circumstances that may indicate an imminent inpatient fall. Here, we propose a video-based system that can monitor the risk of a patient falling, and alert staff to unsafe behaviour to help prevent falls before they occur. We propose an approach that leverages recent advances in human localisation and skeleton pose estimation to extract spatial features from video frames recorded in a simulated environment. We demonstrate that body positions can be effectively recognised and provide useful evidence for fall risk assessment. This work highlights the benefits of video-based models for analysing behaviours of interest, and demonstrates how such a system could enable sufficient lead time for healthcare professionals to respond and address patient needs, which is necessary for the development of fall intervention programs.
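As a toy illustration of the kind of spatial features that can be derived from skeleton pose estimates, the sketch below computes two cues from 2-D keypoints: trunk inclination and whether the hip centre lies outside a bed region. The keypoint indices, feature choices, and `bed_box` input are hypothetical assumptions, not the paper's method.

```python
import numpy as np

def bed_exit_features(keypoints, bed_box):
    """Toy spatial cues from 2-D skeleton keypoints (hypothetical indices:
    0=head, 1=l_shoulder, 2=r_shoulder, 3=l_hip, 4=r_hip).

    keypoints: (5, 2) array of (x, y) image coordinates.
    bed_box:   (x_min, y_min, x_max, y_max) of the bed region.
    Returns (trunk_angle_deg, hip_outside_bed).
    """
    shoulders = keypoints[1:3].mean(axis=0)
    hips = keypoints[3:5].mean(axis=0)
    trunk = shoulders - hips
    # Angle of the trunk relative to vertical (0 deg = upright in image coords).
    angle = np.degrees(np.arctan2(abs(trunk[0]), abs(trunk[1])))
    x0, y0, x1, y1 = bed_box
    outside = not (x0 <= hips[0] <= x1 and y0 <= hips[1] <= y1)
    return angle, outside

kps = np.array([[50, 10], [40, 30], [60, 30], [45, 80], [55, 80]], float)
angle, outside = bed_exit_features(kps, bed_box=(0, 0, 100, 120))
print(round(angle, 1), outside)  # 0.0 False
```

In a real system such hand-crafted cues would be replaced or complemented by learned features, but they convey why pose estimation, rather than raw pixels, is a natural input for risk assessment.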


Subjects
Accidental Falls , Inpatients , Accidental Falls/prevention & control , Hospitals , Humans , Monitoring, Physiologic , Risk Assessment
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3613-3616, 2021 11.
Article in English | MEDLINE | ID: mdl-34892020

ABSTRACT

Recent advances in deep learning have enabled the development of automated frameworks for analysing medical images and signals, including analysis of cervical cancer. Many previous works focus on the analysis of isolated cervical cells, or do not offer explainable methods to explore and understand how the proposed models reach their classification decisions on multi-cell images. Here, we evaluate various state-of-the-art deep learning models and attention-based frameworks to classify multiple cervical cells. Our aim is to provide interpretable deep learning models by comparing their explainability through gradient visualization. We demonstrate the importance of using images that contain multiple cells over using isolated single-cell images. We show the effectiveness of the residual channel attention model for extracting important features from a group of cells, and demonstrate this model's efficiency for the classification of multiple cervical cells. This work highlights the benefits of attention networks to exploit relations and distributions within multi-cell images for cervical cancer analysis. Such an approach can assist clinicians in understanding a model's prediction by providing interpretable results.


Subjects
Neural Networks, Computer , Uterine Cervical Neoplasms , Female , Humans
7.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300498

ABSTRACT

With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning and specifically deep learning methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
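The idea of edges weighted by temporal associations can be made concrete with a small sketch: build a channel graph from inter-channel correlations, then apply one graph-convolution-style propagation step. This is a generic illustration under assumed shapes and a made-up threshold, not any specific model from the survey.

```python
import numpy as np

def correlation_graph_step(x, w, threshold=0.3):
    """Build a channel graph from temporal correlations, then apply one
    propagation step H = ReLU(A_hat X W) (illustrative sketch).

    x: (N, T) N channels of T samples; w: (T, F) weight matrix.
    """
    a = np.abs(np.corrcoef(x))                 # (N, N) edge weights
    a = np.where(a >= threshold, a, 0.0)       # drop weak temporal associations
    deg = a.sum(axis=1)                        # >= 1 because self-loops remain
    a_hat = a / deg[:, None]                   # row-normalised adjacency
    return np.maximum(a_hat @ x @ w, 0.0)      # aggregate neighbours, project, ReLU

rng = np.random.default_rng(1)
signals = rng.normal(size=(8, 100))            # e.g. 8 physiological channels
weights = rng.normal(size=(100, 4)) * 0.1
h = correlation_graph_step(signals, weights)
print(h.shape)  # (8, 4)
```

For anatomically defined graphs the adjacency would instead come from fixed junctions (e.g. electrode montage or skeletal connections), with the same propagation machinery.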


Subjects
Deep Learning , Attention , Machine Learning , Neural Networks, Computer
8.
IEEE J Biomed Health Inform ; 25(1): 69-76, 2021 01.
Article in English | MEDLINE | ID: mdl-32310808

ABSTRACT

The prospective identification of children likely to develop schizophrenia is a vital tool to support early interventions that can mitigate the risk of progression to clinical psychosis. Electroencephalographic (EEG) patterns from brain activity and deep learning techniques are valuable resources in achieving this identification. We propose automated techniques that can process raw EEG waveforms to identify children who may have an increased risk of schizophrenia compared to typically developing children. We also analyse abnormal features that remain during developmental follow-up over a period of ∼4 years in children with a vulnerability to schizophrenia initially assessed when aged 9 to 12 years. EEG data from participants were captured during the recording of a passive auditory oddball paradigm. We undertake a holistic study to identify brain abnormalities, first by exploring traditional machine learning algorithms using classification methods applied to hand-engineered features (event-related potential components). Then, we compare the performance of these methods with end-to-end deep learning techniques applied to raw data. We demonstrate via average cross-validation performance measures that recurrent deep convolutional neural networks can outperform traditional machine learning methods for sequence modeling. We illustrate the intuitive salient information of the model with the location of the most relevant attributes of a post-stimulus window. This baseline identification system in the area of mental illness supports the evidence of developmental and disease effects in a pre-prodromal phase of psychosis. These results reinforce the benefits of deep learning to support psychiatric classification and neuroscientific research more broadly.


Subjects
Deep Learning , Schizophrenia , Child , Electroencephalography , Humans , Neural Networks, Computer , Prospective Studies , Schizophrenia/diagnosis
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 184-187, 2020 07.
Article in English | MEDLINE | ID: mdl-33017960

ABSTRACT

Recent advances in deep learning have enabled the development of automated frameworks for analysing medical images and signals. For analysis of physiological recordings, models based on temporal convolutional networks and recurrent neural networks have demonstrated encouraging results and an ability to capture complex patterns and dependencies in the data. However, representations that capture the entirety of the raw signal are suboptimal as not all portions of the signal are equally important. As such, attention mechanisms are proposed to divert focus to regions of interest, reducing computational cost and enhancing accuracy. Here, we evaluate attention-based frameworks for the classification of physiological signals in different clinical domains. We evaluated our methodology on three classification scenarios: neurodegenerative disorders, neurological status and seizure type. We demonstrate that attention networks can outperform traditional deep learning models for sequence modelling by identifying the most relevant attributes of an input signal for decision making. This work highlights the benefits of attention-based models for analysing raw data in the field of biomedical research.
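The mechanism of diverting focus to the most relevant portions of a signal can be sketched as attention pooling: score each time step of an encoded sequence, softmax the scores into weights, and return the weighted summary. This is a minimal generic sketch with assumed shapes, not the paper's architecture.

```python
import numpy as np

def attention_pool(h, v):
    """Minimal attention pooling: score each time step of an encoded
    signal with vector v, then return the attention-weighted summary.

    h: (T, D) encoded signal; v: (D,) learnable scoring vector.
    Returns (context (D,), weights (T,)).
    """
    scores = h @ v
    scores = scores - scores.max()             # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ h, alpha                    # weights sum to 1

rng = np.random.default_rng(2)
encoded = rng.normal(size=(50, 16))            # 50 time steps, 16-dim encoding
context, alpha = attention_pool(encoded, rng.normal(size=16))
print(context.shape, round(alpha.sum(), 6))  # (16,) 1.0
```

The weights `alpha` double as an interpretability signal: inspecting which time steps receive the most mass indicates which portions of the input drove the decision.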


Subjects
Attention , Neural Networks, Computer , Databases, Genetic , Humans , Seizures
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 569-575, 2020 07.
Article in English | MEDLINE | ID: mdl-33018053

ABSTRACT

Classification of seizure type is a key step in the clinical process for evaluating an individual who presents with seizures. It determines the course of clinical diagnosis and treatment, and its impact stretches beyond the clinical domain to epilepsy research and the development of novel therapies. Automated identification of seizure type may facilitate understanding of the disease, and seizure detection and prediction have been the focus of recent research that has sought to exploit the benefits of machine learning and deep learning architectures. Nevertheless, there is not yet a definitive solution for automating the classification of seizure type, a task that must currently be performed by an expert epileptologist. Inspired by recent advances in neural memory networks (NMNs), we introduce a novel approach for the classification of seizure type using electrophysiological data. We first explore the performance of traditional deep learning techniques which use convolutional and recurrent neural networks, and enhance these architectures by using external memory modules with trainable neural plasticity. We show that our model achieves a state-of-the-art weighted F1 score of 0.945 for seizure type classification on the TUH EEG Seizure Corpus with the IBM TUSZ preprocessed data. This work highlights the potential of neural memory networks to support the field of epilepsy research, along with biomedical research and signal analysis more broadly.


Subjects
Electroencephalography , Epilepsy , Epilepsy/diagnosis , Humans , Memory , Neural Networks, Computer , Seizures/diagnosis
11.
Neural Netw ; 127: 67-81, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32334342

ABSTRACT

In the domain of machine learning, Neural Memory Networks (NMNs) have recently achieved impressive results in a variety of application areas including visual question answering, trajectory prediction, object tracking, and language modelling. However, we observe that the attention based knowledge retrieval mechanisms used in current NMNs restrict them from achieving their full potential as the attention process retrieves information based on a set of static connection weights. This is suboptimal in a setting where there are vast differences among samples in the data domain; such as anomaly detection where there is no consistent criteria for what constitutes an anomaly. In this paper, we propose a plastic neural memory access mechanism which exploits both static and dynamic connection weights in the memory read, write and output generation procedures. We demonstrate the effectiveness and flexibility of the proposed memory model in three challenging anomaly detection tasks in the medical domain: abnormal EEG identification, MRI tumour type classification and schizophrenia risk detection in children. In all settings, the proposed approach outperforms the current state-of-the-art. Furthermore, we perform an in-depth analysis demonstrating the utility of neural plasticity for the knowledge retrieval process and provide evidence on how the proposed memory model generates sparse yet informative memory outputs.


Subjects
Electroencephalography/methods , Machine Learning , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuronal Plasticity , Attention/physiology , Brain Neoplasms/diagnostic imaging , Databases, Factual/trends , Electroencephalography/trends , Humans , Machine Learning/trends , Magnetic Resonance Imaging/trends , Memory/physiology , Neuronal Plasticity/physiology
12.
Sci Rep ; 9(1): 4729, 2019 03 18.
Article in English | MEDLINE | ID: mdl-30894584

ABSTRACT

Thermal Imaging (Infrared-Imaging-IRI) is a promising new technique for psychophysiological research and application. Unlike traditional physiological measures (like skin conductance and heart rate), it is uniquely contact-free, substantially enhancing its ecological validity. Investigating facial regions and subsequent reliable signal extraction from IRI data is challenging due to head motion artefacts. Exploiting its potential thus depends on advances in analytical methods. Here, we developed a novel semi-automated thermal signal extraction method employing deep learning algorithms for facial landmark identification. We applied this method to physiological responses elicited by a sudden auditory stimulus, to determine if facial temperature changes induced by a loud sound can be detected. We compared thermal responses with the psycho-physiological sensor-based tools of galvanic skin response (GSR) and electrocardiography (ECG). We found that the temperatures of selected facial regions, particularly the nose tip, significantly decreased after the auditory stimulus. Additionally, this response was quite rapid at around 4-5 seconds, starting less than 2 seconds following the GSR changes. These results demonstrate that our methodology offers a sensitive and robust tool to capture facial physiological changes with minimal manual intervention and pre-processing of signals. Newer methodological developments for reliable temperature extraction promise to boost IRI use as an ecologically-valid technique in social and affective neuroscience.


Subjects
Acoustic Stimulation , Deep Learning , Face/physiology , Algorithms , Body Temperature , Electrocardiography , Face/diagnostic imaging , Galvanic Skin Response , Humans , Research Design/standards , Spectroscopy, Near-Infrared/methods
13.
IEEE J Biomed Health Inform ; 23(6): 2583-2591, 2019 11.
Article in English | MEDLINE | ID: mdl-30714935

ABSTRACT

A substantial proportion of patients with functional neurological disorders (FND) are being incorrectly diagnosed with epilepsy because their semiology resembles that of epileptic seizures (ES). Misdiagnosis may lead to unnecessary treatment and its associated complications. Diagnostic errors often result from an overreliance on specific clinical features. Furthermore, the lack of electrophysiological changes in patients with FND can also be seen in some forms of epilepsy, making diagnosis extremely challenging. Therefore, understanding semiology is an essential step for differentiating between ES and FND. Existing sensor-based and marker-based systems require physical contact with the body and are vulnerable to clinical situations such as patient positions, illumination changes, and motion discontinuities. Computer vision and deep learning are advancing to overcome these limitations encountered in the assessment of diseases and patient monitoring; however, they have not been investigated for seizure disorder scenarios. Here, we propose and compare two marker-free deep learning models, a landmark-based and a region-based model, both capable of distinguishing between seizure types from video recordings. We quantify semiology by using either a fusion of reference points and flow fields, or through the complete analysis of the body. Average leave-one-subject-out cross-validation accuracies for the landmark-based and region-based approaches of 68.1% and 79.6% in our dataset collected from 35 patients reveal the benefit of video analytics to support automated identification of semiology in the challenging conditions of a hospital setting.


Subjects
Epilepsy/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Monitoring, Physiologic/methods , Video Recording/methods , Deep Learning , Humans
14.
Seizure ; 65: 65-71, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30616221

ABSTRACT

PURPOSE: The recent explosion of artificial intelligence techniques in video analytics has highlighted the clinical relevance of capturing and quantifying semiology during epileptic seizures; however, we lack an automated anomaly identification system for aberrant behaviors. In this paper, we describe a novel system that is trained with known clinical manifestations from patients with mesial temporal and extra-temporal lobe epilepsy and presents aberrant semiology to physicians. METHODS: We propose a simple end-to-end architecture based on convolutional and recurrent neural networks to extract spatiotemporal representations and to create motion capture libraries from 119 seizures of 28 patients. The cosine similarity distance between a test representation and the libraries from five aberrant seizures separate to the main dataset is subsequently used to identify test seizures with unusual patterns that do not conform to known behavior. RESULTS: Cross-validation evaluations are performed to validate the quantification of motion features and to demonstrate the robustness of the motion capture libraries for identifying epilepsy types. The system to identify unusual epileptic seizures successfully detects out of the five seizures categorized as aberrant cases. CONCLUSIONS: The proposed approach is capable of modeling clinical manifestations of known behaviors in natural clinical settings, and can effectively identify aberrant seizures using a simple strategy based on motion capture libraries of spatiotemporal representations and similarities between hidden states. Detecting anomalies is essential to alert clinicians to the occurrence of unusual events, and we show how this can be achieved using a pre-learned database of semiology stored in health records.
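The library-matching strategy above can be sketched in a few lines: compare a test representation to a library of stored representations by cosine similarity and flag it as aberrant when the best match falls below a threshold. The threshold value and vector shapes here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def is_aberrant(test_rep, library, threshold=0.7):
    """Flag a seizure representation as aberrant when its best cosine
    similarity to a library of known-behaviour representations falls
    below a threshold (sketch; threshold is illustrative).

    test_rep: (D,); library: (N, D) stored spatiotemporal representations.
    """
    sims = library @ test_rep / (
        np.linalg.norm(library, axis=1) * np.linalg.norm(test_rep))
    return sims.max() < threshold, sims.max()

# Two stored "known behaviour" representations (toy 3-D vectors).
lib = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
flag_known, s1 = is_aberrant(np.array([0.9, 0.1, 0.0]), lib)   # close to library
flag_new, s2 = is_aberrant(np.array([0.0, 0.0, 1.0]), lib)     # orthogonal to it
print(flag_known, flag_new)  # False True
```

The appeal of this scheme is that no retraining is needed when new known behaviours are recorded: extending the library with their representations is enough.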


Subjects
Brain/physiopathology , Diagnosis, Computer-Assisted/methods , Epilepsy, Temporal Lobe/diagnosis , Epilepsy, Temporal Lobe/physiopathology , Seizures/diagnosis , Electroencephalography , Female , Humans , Male , Neural Networks, Computer , Reproducibility of Results , Seizures/physiopathology , Video Recording
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2099-2105, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946315

ABSTRACT

In epilepsy, semiology refers to the study of patient behavior and movement, and their temporal evolution during epileptic seizures. Understanding semiology provides clues to the cerebral networks underpinning the epileptic episode and is a vital resource in the pre-surgical evaluation. Recent advances in video analytics have been helpful in capturing and quantifying epileptic seizures. Nevertheless, the automated representation of the evolution of semiology, as examined by neurologists, has not been appropriately investigated. From initial seizure symptoms until seizure termination, motion patterns of isolated clinical manifestations vary over time. Furthermore, epileptic seizures frequently evolve from one clinical manifestation to another, and their understanding cannot be overlooked during a pre-surgical evaluation. Here, we propose a system capable of computing motion signatures from videos of face and hand semiology to provide quantitative information on the motion, and the correlation between motions. Each signature is derived from a sparse saliency representation established by the magnitude of the optical flow field. The developed computer-aided tool provides a novel approach for physicians to analyze semiology as a flow of signals without interfering with the healthcare environment. We detect and quantify semiology using detectors based on deep learning and via a novel signature scheme, which is independent of the amount of data and seizure differences. The system reinforces the benefits of computer vision for non-obstructive clinical applications to quantify epileptic seizures recorded in real-life healthcare conditions.
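The notion of a motion signature built from a sparse saliency representation of optical-flow magnitude can be sketched as follows: per frame, keep only the largest flow magnitudes and average them, yielding one scalar per frame; correlating two such signatures then quantifies the relation between, say, face and hand motion. The `top_fraction` parameter and shapes are illustrative assumptions.

```python
import numpy as np

def motion_signature(flow_seq, top_fraction=0.05):
    """Per-frame motion signature from an optical-flow sequence: keep the
    most salient flow magnitudes (a sparse-saliency sketch) and average.

    flow_seq: (T, H, W, 2) flow fields. Returns a (T,) signature.
    """
    t = flow_seq.shape[0]
    mag = np.linalg.norm(flow_seq, axis=-1).reshape(t, -1)   # (T, H*W)
    k = max(1, int(top_fraction * mag.shape[1]))
    top = np.sort(mag, axis=1)[:, -k:]                       # k largest per frame
    return top.mean(axis=1)

rng = np.random.default_rng(3)
flows = rng.normal(size=(10, 24, 32, 2))                     # 10 frames of flow
sig_a = motion_signature(flows)                              # e.g. face region
sig_b = motion_signature(flows * 2)                          # e.g. hand region (toy)
corr = np.corrcoef(sig_a, sig_b)[0, 1]
print(sig_a.shape, round(corr, 2))  # (10,) 1.0
```

Because the signature discards location and keeps only salient magnitude, it is comparable across recordings of different length and framing, which is what makes correlating motions between body regions meaningful.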


Subjects
Diagnosis, Computer-Assisted , Epilepsy/diagnosis , Movement , Seizures/diagnosis , Electroencephalography , Face , Hand , Humans , Video Recording
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 1625-1629, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946208

ABSTRACT

Epilepsy monitoring involves the study of videos to assess clinical signs (semiology) to assist with the diagnosis of seizures. Recent advances in the application of vision-based approaches to epilepsy analysis have demonstrated significant potential to automate this assessment. Nevertheless, currently proposed computer-vision-based techniques are unable to accurately quantify specific facial modifications, e.g. mouth motions, which are examined by neurologists to distinguish between seizure types. 2D approaches that analyse facial landmarks have been proposed to quantify mouth motions; however, they are unable to fully represent motions in the mouth and cheeks (ictal pouting) due to a lack of landmarks in the cheek regions. Additionally, 2D region-based techniques based on the detection of the mouth have limitations when dealing with large pose variations, and thus make a fair comparison between samples difficult due to the variety of poses present. 3D approaches, on the other hand, retain rich information about the shape and appearance of faces, simplifying alignment for comparison between sequences. In this paper, we propose a novel network method based on a 3D reconstruction of the face and deep learning to detect and quantify mouth semiology in our video dataset of 20 seizures, recorded from patients with mesial temporal and extra-temporal lobe epilepsy. The proposed network is capable of distinguishing between seizures of both types of epilepsy. An average classification accuracy of 89% demonstrates the benefits of computer vision and deep learning for clinical applications of non-contact systems to identify semiology commonly encountered in a natural clinical setting.


Subjects
Epilepsy , Electroencephalography , Face , Humans , Mouth , Seizures
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 6529-6532, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947337

ABSTRACT

Recent breakthroughs in computer vision offer an exciting avenue to develop new remote, non-intrusive patient monitoring techniques. A very challenging topic to address is the automated recognition of breathing disorders during sleep. Due to its complexity, this task has rarely been explored in the literature on real patients using such marker-free approaches. Here, we propose an approach based on deep learning architectures capable of classifying breathing disorders. The classification is performed on depth maps recorded with 3D cameras from 76 patients referred to a sleep laboratory who present a range of breathing disorders. Our system is capable of classifying individual breathing events as normal or abnormal with an accuracy of 61.8%; hence our results show that computer vision and deep learning are viable tools for assessing breathing quality during sleep, either locally or remotely.


Subjects
Deep Learning , Respiration , Humans , Sleep
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 332-335, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440405

ABSTRACT

Electrophysiological observation plays a major role in epilepsy evaluation. However, human interpretation of brain signals is subjective and prone to misdiagnosis. Automating this process, especially seizure detection relying on scalp-based Electroencephalography (EEG) and intracranial EEG, has been the focus of research over recent decades. Nevertheless, its numerous challenges have inhibited a definitive solution. Inspired by recent advances in deep learning, here we describe a new classification approach for EEG time series based on Recurrent Neural Networks (RNNs) via the use of Long Short-Term Memory (LSTM) networks. The proposed deep network effectively learns and models discriminative temporal patterns from EEG sequential data. Notably, the features are automatically discovered from the raw EEG data without any pre-processing step, eliminating the laborious manual feature design task. Our light-weight system has a low computational complexity and reduced memory requirement for large training datasets. On a public dataset, a multi-fold cross-validation scheme of the proposed architecture exhibited an average validation accuracy of 95.54% and an average AUC of 0.9582 of the ROC curve among all sets defined in the experiment. This work reinforces the benefits of deep learning and supports its further adoption in clinical applications and neuroscientific research.
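The recurrent machinery described above can be illustrated with a minimal NumPy LSTM forward pass over a raw 1-D signal: gates are computed from the current input and previous hidden state, and the final hidden state serves as a sequence embedding for a downstream classifier. This is a generic LSTM sketch with assumed dimensions, not the paper's network.

```python
import numpy as np

def lstm_forward(x, wx, wh, b):
    """Single-layer LSTM over a raw 1-D signal (minimal sketch; gate
    ordering i, f, g, o in the packed weight matrices).

    x: (T, D) input sequence; wx: (D, 4H); wh: (H, 4H); b: (4H,).
    Returns the final hidden state (H,).
    """
    hdim = wh.shape[0]
    h = np.zeros(hdim)
    c = np.zeros(hdim)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ wx + h @ wh + b
        i, f, g, o = (z[:hdim], z[hdim:2*hdim],
                      z[2*hdim:3*hdim], z[3*hdim:])
        c = sig(f) * c + sig(i) * np.tanh(g)   # gated cell-state update
        h = sig(o) * np.tanh(c)                # exposed hidden state
    return h

rng = np.random.default_rng(4)
eeg = rng.normal(size=(128, 1))                # 128 raw samples, 1 channel
H = 8
h = lstm_forward(eeg,
                 rng.normal(size=(1, 4 * H)) * 0.1,
                 rng.normal(size=(H, 4 * H)) * 0.1,
                 np.zeros(4 * H))
print(h.shape)  # (8,)
```

Feeding raw samples directly, as here, is what removes the hand-engineered feature step: the gates learn which temporal patterns to retain in the cell state.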


Subjects
Epilepsy, Brain, Electroencephalography, Humans, Neural Networks (Computer), Seizures
19.
Annu Int Conf IEEE Eng Med Biol Soc; 2018: 3578-3581, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441151

ABSTRACT

Visual motion cues such as facial expression and pose are natural semiology features which an epileptologist observes to identify epileptic seizures. However, these cues have not been effectively exploited for automatic detection due to the diverse variations in seizure appearance within and between patients. Here we present a multi-modal analysis approach to quantitatively classify patients with mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE), relying on the fusion of facial expressions and pose dynamics. We propose a new deep learning approach that leverages recent advances in Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to automatically extract spatiotemporal features from facial and pose semiology in recorded videos. A video dataset from 12 patients with MTLE and 6 patients with ETLE in an Australian hospital has been collected for the experiments. Our experiments show that facial semiology and body movements can be effectively recognized and tracked, and that they provide useful evidence to identify the type of epilepsy. A multi-fold cross-validation of the fusion model exhibited an average test accuracy of 92.10%, while a leave-one-subject-out cross-validation scheme, the first in the literature, achieved an accuracy of 58.49%. The proposed approach is capable of modelling semiology features which effectively discriminate between seizures arising from temporal and extratemporal brain areas. It could be used as a virtual assistant that saves time, improves patient safety, and provides objective clinical analysis to assist with clinical decision making.
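The leave-one-subject-out protocol highlighted above can be sketched in a few lines: every fold holds out all clips from one patient, so the model is never evaluated on a subject it saw during training, which is why its accuracy (58.49%) is so much lower than the multi-fold figure (92.10%). The toy subject IDs below are illustrative.

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out folds over per-clip subject labels.
    Yields (held_out_subject, train_indices, test_indices)."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Toy example: six seizure clips recorded from four patients.
clips = ["p1", "p1", "p2", "p3", "p3", "p4"]
for subject, train, test in loso_splits(clips):
    print(subject, train, test)
```

Splitting by subject rather than by clip prevents clips from the same patient leaking between training and test sets, which would otherwise inflate accuracy estimates.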


Subjects
Epilepsy, Seizures, Humans
20.
Epilepsy Behav; 87: 46-58, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30173017

ABSTRACT

During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component of the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and dependent on clinical experience and training. Given this limitation, an automated analysis of semiological patterns, i.e., detection, quantification, and recognition of body movement patterns, has the potential to increase the diagnostic precision of localization. While a few single-modality quantitative approaches are available to assess seizure semiology, the automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature, largely due to complicating variables commonly encountered in the clinical setting, such as analyzing subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses the stepwise/temporal progression of signs reflecting the integration of connected neuronal networks; single signs in isolation are therefore far less informative. Taking this into account, here we describe a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. Our approach can jointly learn semiologic features from facial, body, and hand motions based on computer vision and deep learning architectures.
A dataset collected from an Australian quaternary referral epilepsy unit, comprising 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions, has been used in our system to quantitatively classify these types of epilepsy according to the semiology detected. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards developing a fully automated semiology analysis system to support the assessment of epilepsy.
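One way the per-modality outputs of such a system can be combined is late score-level fusion. The sketch below is only one possible reading of combining the face, body, and hand streams; the equal weights and the 0.5 decision threshold are assumptions, not the authors' hierarchical design, which learns the combination instead.

```python
def fuse_modalities(probs, weights=None):
    """Late fusion: combine per-modality MTLE probabilities (e.g. from
    face, body, and hand classifiers) by a weighted average, then
    threshold the fused score at 0.5."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)  # equal weighting
    score = sum(w * p for w, p in zip(weights, probs))
    return ("MTLE" if score >= 0.5 else "ETLE"), score

# Face, body, and hand streams each output P(MTLE) for one seizure clip.
label, score = fuse_modalities([0.7, 0.6, 0.4])
print(label, round(score, 3))   # MTLE 0.567
```

Fusing at the score level lets a modality that is unreliable for a given clip (e.g. hands occluded by bedding) be outvoted by the others, which is one motivation for multimodal designs in this setting.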


Subjects
Automatism/physiopathology, Deep Learning, Epilepsy, Temporal Lobe/diagnosis, Epilepsy/diagnosis, Face/physiopathology, Hand/physiopathology, Movement/physiology, Neurophysiological Monitoring/methods, Seizures/diagnosis, Biomechanical Phenomena, Datasets as Topic, Humans