Results 1 - 20 of 29
1.
Epilepsy Behav ; 154: 109735, 2024 May.
Article in English | MEDLINE | ID: mdl-38522192

ABSTRACT

Seizure events can manifest as transient disruptions in the control of movements, which may be organized into distinct behavioral sequences, accompanied or not by other observable features such as altered facial expressions. The analysis of these clinical signs, referred to as semiology, is subject to observer variation when specialists evaluate video-recorded events in the clinical setting. To enhance the accuracy and consistency of evaluations, computer-aided video analysis of seizures has emerged as a natural avenue. In the field of medical applications, deep learning and computer vision approaches have driven substantial advancements. Historically, these approaches have been used for disease detection, classification, and prediction using diagnostic data; however, there has been limited exploration of their application to video-based motion detection in the clinical epileptology setting. While vision-based technologies do not aim to replace clinical expertise, they can significantly contribute to medical decision-making and patient care by providing quantitative evidence and decision support. Behavior monitoring tools offer several advantages, such as providing objective information, detecting challenging-to-observe events, reducing documentation effort, and extending assessment capabilities to areas with limited expertise. The main applications of these tools are (1) improved seizure detection methods, and (2) refined semiology analysis for predicting seizure type and cerebral localization. In this paper, we detail the foundational technologies used in vision-based systems for the analysis of seizure videos, highlighting their success in semiology detection and analysis, with a focus on work published in the last 7 years. We systematically present these methods and indicate how the adoption of deep learning for the analysis of video recordings of seizures could be approached. Additionally, we illustrate how existing technologies can be interconnected through an integrated system for video-based semiology analysis; each module can be customized and improved by adopting more accurate and robust deep learning approaches as these evolve. Finally, we discuss challenges and research directions for future studies.
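
To make the integrated-system idea concrete, the following is a minimal sketch (not code from the paper) of a modular video-analysis pipeline in which each stage is a swappable component, so a stronger detector or classifier can be dropped in as models evolve. All stage names and the toy stub components are illustrative assumptions.

    # Modular pipeline sketch: detect the patient, crop, extract motion
    # features, classify semiology. Every module here is a placeholder.
    from dataclasses import dataclass
    from typing import Callable
    import numpy as np

    @dataclass
    class SemiologyPipeline:
        detect_patient: Callable[[np.ndarray], np.ndarray]   # frame -> bounding box
        extract_motion: Callable[[np.ndarray], np.ndarray]   # cropped clip -> features
        classify_semiology: Callable[[np.ndarray], str]      # features -> label

        def run(self, clip: np.ndarray) -> str:
            box = self.detect_patient(clip[0])               # locate patient in first frame
            x0, y0, x1, y1 = box.astype(int)
            cropped = clip[:, y0:y1, x0:x1]                  # crop all frames to the patient
            features = self.extract_motion(cropped)
            return self.classify_semiology(features)

    # Stub components so the sketch runs end to end on random video data.
    pipeline = SemiologyPipeline(
        detect_patient=lambda frame: np.array([0, 0, frame.shape[1], frame.shape[0]]),
        extract_motion=lambda clip: clip.mean(axis=(0, 1, 2)),  # toy per-channel feature
        classify_semiology=lambda f: "automatism" if f.mean() > 0.5 else "tonic",
    )
    print(pipeline.run(np.random.rand(16, 240, 320, 3)))        # 16-frame RGB clip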


Subject(s)
Deep Learning, Seizures, Video Recording, Humans, Seizures/diagnosis, Seizures/physiopathology, Video Recording/methods, Electroencephalography/methods
2.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300498

ABSTRACT

With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods is their focus on grid-like data; however, physiological recordings are often irregular and unordered in structure, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting the implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
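
As a generic illustration of the kind of model this survey covers (not any one paper's architecture), the sketch below builds edge weights from temporal correlations between channels and applies one graph convolution step; the channel count and feature sizes are arbitrary assumptions.

    # One GCN step over a correlation-derived graph: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)
    import torch

    def gcn_layer(H, A, W):
        A_hat = A + torch.eye(A.shape[0])           # add self-loops
        deg = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(deg.pow(-0.5))
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization
        return torch.relu(A_norm @ H @ W)

    # Example: 19 EEG channels; edge weights from temporal association.
    signals = torch.randn(19, 1000)                 # 19 channels x 1000 samples
    A = torch.corrcoef(signals).abs()               # correlation-based adjacency
    A.fill_diagonal_(0)
    H = torch.randn(19, 64)                         # per-node input features
    W = torch.randn(64, 32)                         # learnable weights (random here)
    print(gcn_layer(H, A, W).shape)                 # torch.Size([19, 32])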


Subject(s)
Deep Learning, Attention, Machine Learning, Neural Networks, Computer
3.
Epilepsy Behav ; 82: 17-24, 2018 05.
Article in English | MEDLINE | ID: mdl-29574299

ABSTRACT

Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements poses subjective and intrinsic challenges. In this paper, we develop approaches to automatically extract and classify semiological patterns from facial expressions, addressing a limitation of existing computer-based analytical approaches to epilepsy monitoring, in which facial movements have largely been ignored; this is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) is used to evaluate our proposed approach. Our experiments show that the landmark-based approach achieves promising results in analyzing facial semiology: movements can be effectively marked and tracked when the face is viewed frontally. However, the region-based counterpart, with its spatiotemporal features, achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average area under the ROC curve (AUC) of 0.98. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in accuracy, as the model is affected by data limitations, achieving an average test accuracy of 50.85%. Overall, the proposed deep learning models show promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. Such computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery.
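
To illustrate the landmark-based idea only (the paper's actual landmark model and features are not reproduced here), the sketch below turns tracked facial keypoints into frame-to-frame displacement features; the 68-point layout and the random-walk input are assumptions standing in for a real per-frame detector.

    # Landmark-motion sketch: per-landmark Euclidean displacement between frames.
    import numpy as np

    def landmark_displacements(landmarks: np.ndarray) -> np.ndarray:
        """landmarks: (T, 68, 2) per-frame (x, y) keypoints.
        Returns (T-1, 68) frame-to-frame motion magnitudes."""
        deltas = np.diff(landmarks, axis=0)            # (T-1, 68, 2) displacement vectors
        return np.linalg.norm(deltas, axis=-1)         # motion magnitude per landmark

    # Toy input standing in for a tracked ictal video segment.
    T = 30
    tracked = np.cumsum(np.random.randn(T, 68, 2) * 0.5, axis=0) + 100
    motion = landmark_displacements(tracked)
    print(motion.shape, motion.mean())                 # clip-level motion statistic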


Subject(s)
Biometric Identification/methods, Diagnosis, Computer-Assisted/methods, Epilepsy/diagnosis, Video Recording/methods, Australia/epidemiology, Biometric Identification/standards, Diagnosis, Computer-Assisted/standards, Epilepsy/epidemiology, Epilepsy/physiopathology, Face/anatomy & histology, Face/physiology, Humans, Male, Movement/physiology, Neurologic Examination/methods, Neurologic Examination/standards, Reproducibility of Results, Video Recording/standards
4.
Epilepsy Behav ; 87: 46-58, 2018 10.
Article in English | MEDLINE | ID: mdl-30173017

ABSTRACT

During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component of the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and dependent on clinical experience and training. Given this limitation, an automated analysis of semiological patterns, i.e., the detection, quantification, and recognition of body movement patterns, has the potential to increase the diagnostic precision of localization. While a few single-modal quantitative approaches are available to assess seizure semiology, the automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature. This is largely due to the complicated variables commonly encountered in the clinical setting, such as analyzing subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses a stepwise/temporal progression of signs that reflects the integration of connected neuronal networks; thus, single signs in isolation are far less informative. Taking this into account, here we describe a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. Our approach can jointly learn semiologic features from facial, body, and hand motions based on computer vision and deep learning architectures. A dataset collected from an Australian quaternary referral epilepsy unit, comprising 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions, has been used to quantitatively classify these types of epilepsy according to the semiology detected. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards a fully automated semiology analysis system to support the assessment of epilepsy.
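
As a hedged illustration of multimodal fusion in this spirit (the per-stream networks and feature sizes below are placeholders, not the paper's models), the sketch scores the seizure type from each modality independently and combines the per-stream logits in a small fusion layer.

    # Late-fusion sketch: face, body, and hand feature streams -> joint class score.
    import torch
    import torch.nn as nn

    class LateFusionClassifier(nn.Module):
        def __init__(self, feat_dims=(128, 256, 64), n_classes=2):
            super().__init__()
            # one small head per modality: face, body, hands
            self.heads = nn.ModuleList(nn.Linear(d, n_classes) for d in feat_dims)
            self.fusion = nn.Linear(len(feat_dims) * n_classes, n_classes)

        def forward(self, face, body, hands):
            logits = [h(x) for h, x in zip(self.heads, (face, body, hands))]
            return self.fusion(torch.cat(logits, dim=-1))  # joint MTLE-vs-ETLE score

    model = LateFusionClassifier()
    out = model(torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 64))
    print(out.shape)  # torch.Size([4, 2]) -- batch of 4, two seizure classes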


Subject(s)
Automatism/physiopathology, Deep Learning, Epilepsy, Temporal Lobe/diagnosis, Epilepsy/diagnosis, Face/physiopathology, Hand/physiopathology, Movement/physiology, Neurophysiological Monitoring/methods, Seizures/diagnosis, Biomechanical Phenomena, Datasets as Topic, Humans
5.
Heliyon ; 9(6): e16763, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37303525

ABSTRACT

Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as Autism Spectrum Disorder (ASD). This condition affects children from their early developmental stages onwards, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnostic process is time-consuming, as it requires long-term behavioral observation, and is further constrained by the scarce availability of specialists. We demonstrate the effectiveness of a region-based computer vision system in helping clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in varied environments). The data are pre-processed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose both lightweight and conventional models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. By extensively evaluating feature extraction and learning strategies, we demonstrate that the highest performance is attained with an Inflated 3D ConvNet combined with a Multi-Stage Temporal Convolutional Network. Our model achieved a weighted F1-score of 0.83 for the classification of the three autism-related actions. We also propose a lightweight solution that employs the ESNet backbone with the same action recognition model, achieving a competitive weighted F1-score of 0.71 and enabling potential deployment on embedded systems. Experimental results demonstrate the ability of our proposed models to recognize autism-related actions from videos captured in uncontrolled environments, and thus to assist clinicians in analyzing ASD.
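
To illustrate the temporal-modeling step only, the sketch below applies a single dilated 1D convolution block over a sequence of per-snippet features such as those an Inflated 3D ConvNet might produce; it is a simplified stand-in, not the full Multi-Stage Temporal Convolutional Network, and the channel and sequence sizes are assumptions.

    # Single dilated temporal block over precomputed video features.
    import torch
    import torch.nn as nn

    class DilatedTemporalBlock(nn.Module):
        def __init__(self, channels=1024, dilation=2):
            super().__init__()
            self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=dilation, dilation=dilation)
            self.out = nn.Conv1d(channels, 3, kernel_size=1)  # 3 autism-related actions

        def forward(self, x):             # x: (batch, channels, time)
            h = torch.relu(self.conv(x))  # dilation widens the temporal receptive field
            return self.out(h)            # per-timestep action logits

    feats = torch.randn(1, 1024, 120)     # 120 snippet features from one video
    print(DilatedTemporalBlock()(feats).shape)  # torch.Size([1, 3, 120])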

6.
IEEE J Biomed Health Inform ; 27(2): 968-979, 2023 02.
Article in English | MEDLINE | ID: mdl-36409802

ABSTRACT

Generative Adversarial Networks (GANs) are a revolutionary innovation in machine learning that enables the generation of artificial data. Artificial data synthesis is especially valuable in the medical field, where it is difficult to collect and annotate real data due to privacy issues, limited access to experts, and cost. While adversarial training has led to significant breakthroughs in computer vision, biomedical research has not yet fully exploited the capabilities of generative models for data generation or for more complex tasks such as biosignal modality transfer. We present a broad analysis of adversarial learning on biosignal data. Our study is the first in the machine learning community to focus on synthesizing 1D biosignal data using adversarial models. We consider three types of deep generative adversarial networks: a classical GAN, an adversarial autoencoder (AE), and a modality transfer GAN, individually designed for biosignal synthesis and modality transfer purposes. We evaluate these methods on multiple datasets covering different biosignal modalities, including the phonocardiogram (PCG), electrocardiogram (ECG), vectorcardiogram, and 12-lead electrocardiogram. We follow subject-independent evaluation protocols, assessing the proposed models' performance on completely unseen data to demonstrate generalizability. We achieve superior results in generating biosignals, specifically in conditional generation, by synthesizing realistic samples while preserving domain-relevant characteristics. We also demonstrate insightful results in biosignal modality transfer, which can generate expanded representations from fewer input leads, ultimately making the clinical monitoring setting more convenient for the patient. Furthermore, the longer-duration ECGs we generate maintain clear rhythmic regions, as verified using ad hoc segmentation models.
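
For readers unfamiliar with adversarial training on 1D signals, here is a bare-bones sketch of one alternating update step: a generator maps noise to a fixed-length waveform and a discriminator scores realism. Network sizes, the signal length, and the sine-wave stand-in for real data are arbitrary illustrative choices, not the paper's architectures.

    # Minimal 1D GAN step: D learns real-vs-fake, G learns to fool D.
    import torch
    import torch.nn as nn

    sig_len, z_dim = 256, 64
    G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, sig_len), nn.Tanh())
    D = nn.Sequential(nn.Linear(sig_len, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.sin(torch.linspace(0, 12.56, sig_len)).repeat(32, 1)  # toy waveform batch
    for _ in range(1):                   # one illustrative step of the alternating loop
        fake = G(torch.randn(32, z_dim))
        # discriminator update: real -> 1, fake -> 0
        loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator update: try to make D label fakes as real
        loss_g = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    print(float(loss_d), float(loss_g))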


Subject(s)
Biomedical Research, Deep Learning, Humans, Electrocardiography, Machine Learning, Privacy, Image Processing, Computer-Assisted
7.
IEEE J Biomed Health Inform ; 26(2): 527-538, 2022 02.
Article in English | MEDLINE | ID: mdl-34314363

ABSTRACT

Recently, researchers in the biomedical community have introduced deep learning-based epileptic seizure prediction models using electroencephalograms (EEGs) that can anticipate an epileptic seizure by differentiating between the pre-ictal and interictal stages of the subject's brain. Despite having the appearance of a typical anomaly detection task, this problem is complicated by subject-specific characteristics in EEG data. Therefore, studies that investigate seizure prediction widely employ subject-specific models. However, this approach is not suitable in situations where a target subject has limited (or no) data for training. Subject-independent models can address this issue by learning to predict seizures from multiple subjects, and are therefore of greater value in practice. In this study, we propose a subject-independent seizure predictor using Geometric Deep Learning (GDL). In the first stage of our GDL-based method, we use graphs derived from physical connections in the EEG grid. We subsequently seek to synthesize subject-specific graphs using deep learning. The models proposed in both stages achieve state-of-the-art performance using a one-hour early seizure prediction window on two benchmark datasets (CHB-MIT-EEG: 95.38% with 23 subjects; Siena-EEG: 96.05% with 15 subjects). To the best of our knowledge, this is the first study to propose synthesizing subject-specific graphs for seizure prediction. Furthermore, through model interpretation we outline how this method could potentially contribute towards scalp-EEG-based seizure localization.
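
As an illustration of the first-stage idea only, the sketch below derives edge weights from the physical proximity of scalp electrodes via a Gaussian kernel over pairwise distances; the random 2D coordinates and the kernel bandwidth are assumptions, not a standard montage definition.

    # Distance-based adjacency sketch: closer electrodes get stronger edges.
    import numpy as np

    def distance_adjacency(coords: np.ndarray, sigma: float = 1.0) -> np.ndarray:
        """Gaussian kernel over pairwise electrode distances."""
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        A = np.exp(-(dist ** 2) / (2 * sigma ** 2))
        np.fill_diagonal(A, 0.0)        # no self-edges; self-loops can be added later
        return A

    coords = np.random.rand(23, 2)      # 23 electrode positions (illustrative only)
    A = distance_adjacency(coords, sigma=0.3)
    print(A.shape, A.max())             # (23, 23) symmetric weight matrix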


Subject(s)
Deep Learning, Algorithms, Electroencephalography/methods, Humans, Scalp, Seizures/diagnosis
8.
IEEE J Biomed Health Inform ; 26(7): 2898-2908, 2022 07.
Article in English | MEDLINE | ID: mdl-35061595

ABSTRACT

OBJECTIVE: This paper proposes a novel framework for lung sound event detection, segmenting continuous lung sound recordings into discrete events and performing recognition of each event. METHODS: We propose the use of a multi-branch temporal convolutional network (TCN) architecture and exploit a novel fusion strategy to combine the resultant features from these branches. This not only allows the network to retain the most salient information across different temporal granularities and discard irrelevant information, but also allows it to process recordings of arbitrary length. RESULTS: The proposed method is evaluated on multiple public and in-house benchmarks containing irregular and noisy recordings of the respiratory auscultation process, for the identification of auscultation events including inhalation, crackles, and rhonchi. Moreover, we provide an end-to-end model interpretation pipeline. CONCLUSION: Our analysis of different feature fusion strategies shows that the proposed feature concatenation method leads to better suppression of non-informative features, which drastically reduces the classifier overhead, resulting in a robust, lightweight network. SIGNIFICANCE: Lung sound event detection is a primary diagnostic step for numerous respiratory diseases. The proposed method provides a cost-effective and efficient alternative to exhaustive manual segmentation, and provides more accurate segmentation than existing methods. The end-to-end model interpretability helps to build the trust required for use in clinical settings.
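
To show the multi-branch and concatenation-fusion idea in miniature (branch widths, kernel sizes, and event classes below are illustrative choices, not the paper's configuration), parallel temporal convolutions with different kernel sizes capture different temporal granularities, and a 1x1 convolution fuses the concatenated features into per-timestep event logits.

    # Multi-branch temporal convolution sketch with concatenation fusion.
    import torch
    import torch.nn as nn

    class MultiBranchTCN(nn.Module):
        def __init__(self, in_ch=1, branch_ch=32, kernels=(3, 7, 15), n_events=3):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv1d(in_ch, branch_ch, k, padding=k // 2) for k in kernels
            )
            # concatenation keeps all granularities; the 1x1 conv learns to weigh them
            self.classify = nn.Conv1d(branch_ch * len(kernels), n_events, kernel_size=1)

        def forward(self, x):                       # x: (batch, 1, time), any length
            feats = [torch.relu(b(x)) for b in self.branches]
            return self.classify(torch.cat(feats, dim=1))  # per-timestep event logits

    audio = torch.randn(2, 1, 8000)                 # two recordings of arbitrary length
    print(MultiBranchTCN()(audio).shape)            # torch.Size([2, 3, 8000])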


Asunto(s)
Ruidos Respiratorios , Grabaciones de Sonido , Algoritmos , Auscultación/métodos , Humanos , Pulmón
9.
Comput Med Imaging Graph ; 95: 102027, 2022 01.
Article in English | MEDLINE | ID: mdl-34959100

ABSTRACT

With the remarkable success of representation learning for prediction problems, we have witnessed a rapid expansion in the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches. However, learning over patch-wise features using convolutional neural networks limits the model's ability to capture global contextual information and comprehensively model tissue composition. The phenotypical and topological distribution of constituent histological entities plays a critical role in tissue diagnosis. As such, graph data representations and deep learning have attracted significant attention for encoding tissue representations and capturing intra- and inter-entity interactions. In this review, we provide a conceptual grounding for graph analytics in digital pathology, including entity-graph construction and graph architectures, and present their current success in tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction. We provide an overview of these methods in a systematic manner, organized by the graph representation of the input image, the scale, and the organ on which they operate. We also outline the limitations of existing techniques and suggest potential future research directions in this domain.
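
As a generic sketch of entity-graph construction in this vein (not any specific reviewed method), nuclei centroids become nodes and each node is linked to its k nearest neighbors, encoding the topological distribution of histological entities; the random centroids stand in for the output of a nucleus detector.

    # k-NN cell-graph sketch over detected nuclei centroids.
    import numpy as np
    from scipy.spatial import cKDTree

    def knn_cell_graph(centroids: np.ndarray, k: int = 5) -> np.ndarray:
        """Return an (N, N) binary adjacency linking each nucleus to its k neighbors."""
        tree = cKDTree(centroids)
        _, idx = tree.query(centroids, k=k + 1)    # first neighbor is the point itself
        A = np.zeros((len(centroids), len(centroids)), dtype=np.uint8)
        for i, neighbors in enumerate(idx[:, 1:]): # skip the self-match
            A[i, neighbors] = 1
        return np.maximum(A, A.T)                  # symmetrize the graph

    centroids = np.random.rand(200, 2) * 1000      # 200 detected nuclei in a patch
    A = knn_cell_graph(centroids, k=5)
    print(A.shape, A.sum() // 2)                   # node count and undirected edge count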


Subject(s)
Deep Learning, Neoplasms, Humans, Machine Learning, Neural Networks, Computer
10.
Ecol Evol ; 11(11): 6649-6656, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34141247

ABSTRACT

Drones and machine learning-based automated detection methods are being used by ecologists to conduct wildlife surveys with increasing frequency. When traditional survey methods have been evaluated, a range of factors have been found to influence detection probabilities, including individual differences among conspecific animals, which can introduce biases into survey counts. There has been no such evaluation of drone-based surveys using automated detection in a natural setting. This is important to establish, since any biases in counts made using these methods will need to be accounted for in order to provide accurate data and improve decision-making for threatened species. In this study, a rare opportunity to survey a ground-truthed, individually marked population of 48 koalas in their natural habitat allowed for a direct comparison of the factors impacting detection probability in ground observation and in drone surveys with manual and automated detection. We found that sex and host tree preferences impacted detection in ground surveys and in manual analysis of drone imagery, with female koalas likely to be under-represented and koalas higher in taller trees detected less frequently when present. The tree species composition of a forest stand also impacted detections. In contrast, none of these factors impacted automated detection. This suggests that the combination of drone-captured imagery and machine learning does not suffer from the same biases that affect conventional ground surveys, providing further evidence that drones and machine learning are promising tools for gathering reliable detection data to better inform the management of threatened populations.
