Results 1 - 3 of 3
1.
Behav Res Methods; 56(4): 3300-3314, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38448726

ABSTRACT

Eye movements offer valuable insights for clinical interventions, diagnostics, and understanding visual perception. The process usually involves recording a participant's eye movements and analyzing them in terms of various gaze events. Manual identification of these events is extremely time-consuming. Although the field has seen the development of automatic event detection and classification methods, these methods have primarily focused on distinguishing events when participants remain stationary. With increasing interest in studying gaze behavior in freely moving participants, such as during daily activities like walking, new methods are required to automatically classify events in data collected under unrestricted conditions. Existing methods often rely on additional information from depth cameras or inertial measurement units (IMUs), which are not typically integrated into mobile eye trackers. To address this challenge, we present a framework for classifying gaze events based solely on eye-movement signals and scene video footage. Our approach, the Automatic Classification of gaze Events in Dynamic and Natural Viewing (ACE-DNV), analyzes eye movements in terms of velocity and direction and leverages visual odometry to capture head and body motion. Additionally, ACE-DNV assesses changes in image content surrounding the point of gaze. We evaluate the performance of ACE-DNV using a publicly available dataset and showcase its ability to discriminate between gaze fixation, gaze pursuit, gaze following, and gaze shifting (saccade) events. ACE-DNV exhibits performance comparable to previous methods while eliminating the need for additional devices such as IMUs and depth cameras. In summary, ACE-DNV simplifies the automatic classification of gaze events in natural and dynamic environments. The source code is accessible at https://github.com/arnejad/ACE-DNV.
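
The decision logic described in the abstract can be pictured with a small rule-based sketch. The thresholds, function name, and pre-computed input signals below are illustrative assumptions; the actual ACE-DNV implementation in the linked repository is more involved (it derives head and body motion via visual odometry on the scene video and compares gaze-centered image patches).

```python
# A minimal rule-based sketch of the gaze-event decision logic described above.
# Thresholds, the function name, and the input signals are assumed for
# illustration; the real ACE-DNV code (https://github.com/arnejad/ACE-DNV)
# computes head motion from visual odometry and patch changes from the video.

def classify_gaze_samples(gaze_vel, head_vel, patch_change,
                          saccade_thresh=100.0,  # deg/s, assumed value
                          motion_thresh=5.0,     # deg/s, assumed value
                          patch_thresh=0.3):     # normalized dissimilarity, assumed
    """Label each sample as saccade, fixation, following, or pursuit.

    gaze_vel     -- eye-in-head angular velocity per sample (deg/s)
    head_vel     -- head/body rotational speed from visual odometry (deg/s)
    patch_change -- dissimilarity of the image content around the gaze point
                    between consecutive frames (0 = unchanged, 1 = changed)
    """
    labels = []
    for gv, hv, pc in zip(gaze_vel, head_vel, patch_change):
        if gv > saccade_thresh:
            labels.append("saccade")    # gaze shift: fast eye-in-head movement
        elif pc < patch_thresh and hv < motion_thresh:
            labels.append("fixation")   # stable gaze content, stationary observer
        elif pc < patch_thresh and hv >= motion_thresh:
            labels.append("following")  # gaze stays on target while the observer moves
        else:
            labels.append("pursuit")    # eye tracks a target moving within the scene
    return labels
```

The point of the sketch is only to show how eye velocity, observer motion, and gaze-centered image content jointly disambiguate the four event types when the participant is free to move.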


Subjects
Eye Movements; Eye-Tracking Technology; Fixation, Ocular; Humans; Eye Movements/physiology; Fixation, Ocular/physiology; Visual Perception/physiology; Video Recording/methods; Male; Adult; Female
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3749-3752, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086352

ABSTRACT

Automatic Brain Tumor Segmentation (BraTS) from MRI plays a key role in diagnosing and treating brain tumors. Although 3D U-Nets achieve state-of-the-art results in BraTS, their clinical use is limited because they require high-end GPUs with large memory. To address this limitation, we employ several techniques to customize a memory-efficient yet accurate deep framework based on 2D U-Nets. In the framework, simultaneous multi-label tumor segmentation is decomposed into a fusion of sequential single-label (binary) segmentation tasks. Besides reducing memory consumption, this may also improve segmentation accuracy, since each U-Net focuses on a sub-task that is simpler than the whole BraTS segmentation task. Extensive data augmentation on multi-modal MRI and a batch Dice loss function are also employed to further increase generalization accuracy. Experiments on BraTS 2020 demonstrate that our framework nearly achieves state-of-the-art results: Dice scores of 0.905, 0.903, and 0.822 for whole tumor, tumor core, and enhancing tumor, respectively, are obtained on the test set. Moreover, our customized framework runs on budget GPUs with a minimum requirement of only 2 GB of GPU memory. Clinical relevance: We develop a memory-efficient deep brain tumor segmentation tool that significantly reduces the hardware requirements of tumor segmentation while maintaining comparable accuracy and runtime. These advantages make our framework suitable for widespread use in clinical applications, especially in low-income regions. We plan to release the framework as part of a free clinical brain imaging analysis tool. The code for this framework is publicly available: https://github.com/Nima-Hs/BraTS.
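
As a concrete illustration of one ingredient mentioned above, the sketch below shows a batch Dice loss for the binary (single-label) sub-tasks, written in PyTorch. It is a minimal, assumed implementation, not the exact loss used in the paper's framework (see the linked repository for the actual code).

```python
# Minimal sketch of a batch Dice loss for binary (single-label) segmentation.
# This is an illustrative implementation; the exact loss in the paper's
# framework may differ (see https://github.com/Nima-Hs/BraTS).
import torch

def batch_dice_loss(logits, targets, eps=1e-6):
    """Dice loss pooled over the entire batch.

    logits  -- raw network outputs, shape (N, 1, H, W)
    targets -- binary ground-truth masks, shape (N, 1, H, W)
    """
    probs = torch.sigmoid(logits)
    # Sum over every sample in the batch at once: 2D slices that contain no
    # tumor voxels no longer produce degenerate per-sample Dice values.
    intersection = (probs * targets).sum()
    denominator = probs.sum() + targets.sum()
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice
```

Pooling the sums over the whole batch is a common way to stabilize Dice-based training when many 2D slices contain no foreground, which is exactly the situation created by decomposing the multi-label task into per-structure binary segmentations.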


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Brain/pathology; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neuroimaging
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 2140-2143, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086643

ABSTRACT

Current brain MRI segmentation tools provide quantitative structural information for diagnosing neurological disorders. However, their clinical application is generally limited by high memory usage and long runtimes. Although 3D CNN-based segmentation methods have recently achieved state-of-the-art results with acceptable turnaround times, they require high-memory GPUs. In this paper, we customize a memory-efficient brain structure segmentation framework, named FLBS, based on nnU-Net, which enables our framework to dynamically adapt its architecture to the available GPU memory. To further reduce memory needs, we also reduce multi-label brain segmentation to the fusion of sequential single-label segmentations. In the first step, single-label patches are extracted from the T1w image and segmentation maps by locating the approximate area of each structure on the MNI305 template, including a safety margin. These considerations not only decrease hardware usage but also maintain comparable computational time. Moreover, the target brain structures are customizable for specific clinical applications. We evaluate performance in terms of Dice coefficient, runtime, and GPU requirements on the OASIS-3 and CoRR-BNU1 datasets. The validation results show accuracy comparable to state-of-the-art methods and confirm generalizability to unseen datasets, while significantly reducing GPU requirements and maintaining runtime. Our framework also runs on a budget GPU with a minimum requirement of only 4 GB of GPU memory. Clinical relevance: We develop a memory-efficient deep brain MRI segmentation tool that significantly reduces the hardware requirements of MRI segmentation while maintaining comparable accuracy and runtime. These advantages make FLBS suitable for widespread use in clinical applications, especially for clinics with a limited budget. We plan to release the framework as part of a free clinical brain imaging analysis tool. The code for this framework is publicly available.
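
The fusion step of the sequential single-label segmentations can be sketched as follows. The label IDs, patch bookkeeping, and first-come overlap rule are assumptions made for illustration, not the paper's actual FLBS implementation.

```python
# Illustrative sketch: fuse per-structure binary masks, each predicted inside a
# patch located via the structure's approximate position on the MNI305 template
# (plus a safety margin), back into one multi-label segmentation map.
# Label IDs and the overlap rule are placeholders, not the paper's configuration.
import numpy as np

def fuse_binary_segmentations(volume_shape, structure_results):
    """Combine sequential single-label results into one multi-label volume.

    volume_shape      -- shape of the full T1w volume, e.g. (256, 256, 256)
    structure_results -- iterable of (label_id, patch_slices, binary_mask),
                         where patch_slices is a tuple of slice objects that
                         places the patch in the full volume
    """
    fused = np.zeros(volume_shape, dtype=np.int16)
    for label_id, patch_slices, binary_mask in structure_results:
        region = fused[patch_slices]  # view into the full volume
        # Assign this structure's label where its binary network fired and no
        # earlier structure has already claimed the voxel (first-come fusion).
        region[(binary_mask > 0) & (region == 0)] = label_id
    return fused
```

A hypothetical call would pass, for each target structure, the patch location derived from the MNI305 template and the binary mask predicted by that structure's single-label network; restricting each network to its own patch is what keeps the per-step memory footprint small.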


Subjects
Image Processing, Computer-Assisted; Neuroimaging; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Records