Results 1 - 4 of 4
1.
J Neurosci Methods ; : 110251, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39151656

ABSTRACT

BACKGROUND: Electroencephalography (EEG) and electrocorticography (ECoG) recordings have been used to decode finger movements by analyzing brain activity. Traditional methods focused on single bandpass power changes for movement decoding, utilizing machine learning models requiring manual feature extraction. NEW METHOD: This study introduces a 3D convolutional neural network (3D-CNN) model to decode finger movements using ECoG data. The model employs adaptive, explainable AI (xAI) techniques to interpret the physiological relevance of brain signals. ECoG signals from epilepsy patients during awake craniotomy were processed to extract power spectral density across multiple frequency bands. These data formed a 3D matrix used to train the 3D-CNN to predict finger trajectories. RESULTS: The 3D-CNN model showed significant accuracy in predicting finger movements, with root-mean-square error (RMSE) values of 0.26-0.38 for single finger movements and 0.20-0.24 for combined movements. Explainable AI techniques, Grad-CAM and SHAP, identified the high gamma (HG) band as crucial for movement prediction, showing specific cortical regions involved in different finger movements. These findings highlighted the physiological significance of the HG band in motor control. COMPARISON WITH EXISTING METHODS: The 3D-CNN model outperformed traditional machine learning approaches by effectively capturing spatial and temporal patterns in ECoG data. The use of xAI techniques provided clearer insights into the model's decision-making process, unlike the "black box" nature of standard deep learning models. CONCLUSIONS: The proposed 3D-CNN model, combined with xAI methods, enhances the decoding accuracy of finger movements from ECoG data. This approach offers a more efficient and interpretable solution for brain-computer interface (BCI) applications, emphasizing the HG band's role in motor control.
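The 3D input described above (per-channel power spectral density across multiple frequency bands over time) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the window length, band edges, and plain-periodogram PSD estimator are assumptions for the sake of the example.

```python
import numpy as np

def band_power_matrix(ecog, fs, bands, win):
    """Stack per-band power into a (channels, bands, windows) 3D matrix.

    ecog  : (n_channels, n_samples) array of raw signals
    fs    : sampling rate in Hz
    bands : list of (low_hz, high_hz) band edges
    win   : window length in samples (non-overlapping, for simplicity)
    """
    n_ch, n_samp = ecog.shape
    n_win = n_samp // win
    out = np.zeros((n_ch, len(bands), n_win))
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    for w in range(n_win):
        seg = ecog[:, w * win:(w + 1) * win]
        # Plain periodogram estimate of the power spectral density
        psd = np.abs(np.fft.rfft(seg, axis=1)) ** 2 / (fs * win)
        for b, (lo, hi) in enumerate(bands):
            mask = (freqs >= lo) & (freqs < hi)
            out[:, b, w] = psd[:, mask].mean(axis=1)
    return out

rng = np.random.default_rng(0)
ecog = rng.standard_normal((16, 2000))            # 16 channels, 2 s at 1 kHz
bands = [(8, 12), (13, 30), (30, 70), (70, 170)]  # alpha .. high gamma (HG)
X = band_power_matrix(ecog, fs=1000, bands=bands, win=500)
print(X.shape)  # (16, 4, 4): channels x bands x time windows
```

A matrix of this shape is the kind of spatio-spectro-temporal volume a 3D-CNN can convolve over; the high-gamma band the authors highlight would be one slice along the band axis.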

2.
Phys Med Biol ; 68(24)2023 Dec 13.
Article in English | MEDLINE | ID: mdl-37832565

ABSTRACT

Automated marker-free longitudinal infrared (IR) breast image registration must overcome several challenges: the absence of anatomic fiducial markers on the body surface, blurry boundaries, heat-pattern variation from environmental and physiological factors, and nonrigid deformation. Once registered, the images permit quantitative pixel-wise analysis of heat-energy and heat-pattern changes over the time course of a study. To achieve this, the scale-invariant feature transform (SIFT), Harris corner detection, and the Hessian matrix were employed to generate feature points serving as anatomic fiducial markers, and a hybrid genetic algorithm and particle swarm optimization minimizing the matching error was used to find appropriate corresponding pairs between the 1st and the n-th IR images. Moreover, the IR spectrogram hardware system has a high level of reproducibility. The performance of the proposed longitudinal registration system was evaluated in simulated experiments and a clinical trial. In the simulated experiments, the mean difference of our system is 1.64 mm, a 57.58% accuracy improvement over manual determination and a 17.4% improvement over the previous study. In the clinical trial, IR breast images were captured from 80 patients at several time points during chemotherapy. Most images were well aligned in the spatiotemporal domain; in the few cases with evident heat-pattern dissipation and spatial deviation, the system still provided a reliable comparison of vascular variation. Therefore, the proposed system is accurate and robust and could be considered a reliable tool for longitudinal approaches to breast cancer diagnosis.


Subject(s)
Algorithms , Breast Neoplasms , Humans , Female , Reproducibility of Results , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Fiducial Markers
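Once corresponding feature-point pairs between two IR images are found, a transform aligning them can be estimated. The paper uses a hybrid GA/PSO search; the sketch below instead uses the closed-form Kabsch/Procrustes solution for a rigid (rotation + translation) fit, as a simpler stand-in for the alignment step, with synthetic points as assumed data.

```python
import numpy as np

def estimate_rigid(src, dst):
    """Least-squares rigid alignment of matched 2-D point sets (Kabsch method).

    Returns R, t such that dst ~= src @ R.T + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, (20, 2))         # feature points in the 1st image
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = pts @ R_true.T + np.array([5.0, -3.0])   # same points in a later image
R, t = estimate_rigid(pts, moved)
err = np.abs(pts @ R.T + t - moved).max()  # residual matching error
print(err < 1e-6)
```

A stochastic search such as the paper's GA/PSO hybrid becomes useful when the correspondence itself is unknown or the deformation is nonrigid, where no closed-form solution exists.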
3.
Int J Neural Syst ; 33(10): 2350051, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37632142

ABSTRACT

Complete reaching movements involve target sensing, motor planning, and arm movement execution, and this process requires the integration and communication of various brain regions. Previously, reaching movements have been decoded successfully from the motor cortex (M1) and applied to prosthetic control. However, most studies attempted to decode neural activities from a single brain region, resulting in reduced decoding accuracy during visually guided reaching motions. To enhance the decoding accuracy of visually guided forelimb reaching movements, we propose a parallel computing neural network using both M1 and medial agranular cortex (AGm) neural activities of rats to predict forelimb-reaching movements. The proposed network decodes M1 neural activities into the primary components of the forelimb movement and decodes AGm neural activities into internal feedforward information to calibrate the forelimb movement in a goal-reaching movement. We demonstrate that using AGm neural activity to calibrate M1 predicted forelimb movement can improve decoding performance significantly compared to neural decoders without calibration. We also show that the M1 and AGm neural activities contribute to controlling forelimb movement during goal-reaching movements, and we report an increase in the power of the local field potential (LFP) in beta and gamma bands over AGm in response to a change in the target distance, which may involve sensorimotor transformation and communication between the visual cortex and AGm when preparing for an upcoming reaching movement. The proposed parallel computing neural network with the internal feedback model improves prediction accuracy for goal-reaching movements.


Subject(s)
Goals , Upper Extremity , Animals , Feedback , Forelimb/physiology , Movement/physiology
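The parallel decode-and-calibrate idea above (a primary decoder from M1, plus an AGm-driven correction) can be sketched with two linear decoders in place of the paper's neural network; the feature dimensions, synthetic data, and linear models are all illustrative assumptions.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with a bias column (stand-in for a decoder branch)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

rng = np.random.default_rng(2)
m1 = rng.standard_normal((200, 8))     # hypothetical M1 firing-rate features
agm = rng.standard_normal((200, 8))    # hypothetical AGm features
# Synthetic trajectory: mostly M1-driven, partly AGm-driven
traj = m1 @ rng.standard_normal(8) + 0.5 * (agm @ rng.standard_normal(8))

w1 = fit_linear(m1, traj)              # primary decoder: M1 -> trajectory
resid = traj - predict(m1, w1)
w2 = fit_linear(agm, resid)            # calibration branch: AGm -> residual
final = predict(m1, w1) + predict(agm, w2)

err_primary = np.mean((traj - predict(m1, w1)) ** 2)
err_calibrated = np.mean((traj - final) ** 2)
print(err_calibrated < err_primary)    # calibration reduces decoding error
```

The key structural point carried over from the abstract is that the AGm branch does not predict the movement itself; it predicts a correction to the M1 decoder's output.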
4.
Ann Thorac Surg ; 114(3): 999-1006, 2022 09.
Article in English | MEDLINE | ID: mdl-34454902

ABSTRACT

BACKGROUND: We aimed to establish a radiomic prediction model for tumor spread through air spaces (STAS) in lung adenocarcinoma using radiomic values from high-grade subtypes (solid and micropapillary). METHODS: We retrospectively reviewed 327 patients with lung adenocarcinoma from 2 institutions (cohort 1: 227 patients; cohort 2: 100 patients) between March 2017 and March 2019. STAS was identified in 113 (34.6%) patients. A high-grade likelihood prediction model was constructed based on a historical cohort of 82 patients with "near-pure" pathologic subtype. The STAS prediction model, based on a patch-wise mechanism, identified the high-grade likelihood area for each voxel within the internal border of the tumor. STAS presence was indirectly predicted by a volume percentage threshold of the high-grade likelihood area. Performance was evaluated by receiver operating characteristic curve analysis with 10-repetition, 3-fold cross-validation in cohort 1, and was individually tested in cohort 2. RESULTS: Overall, 227 patients (STAS-positive: 77 [33.9%]) were enrolled for cross-validation (cohort 1) while 100 (STAS-positive: 36 [36.0%]) underwent individual testing (cohort 2). The gray level cooccurrence matrix (variance) and histogram (75th percentile) features were selected to construct the high-grade likelihood prediction model, which was used as the STAS prediction model. The proposed model achieved good performance in cohort 1, with an area under the curve, sensitivity, and specificity of 81.44%, 86.75%, and 62.60%, respectively; correspondingly, in cohort 2, they were 83.16%, 83.33%, and 63.90%. CONCLUSIONS: The proposed computed tomography-based radiomic prediction model could help guide preoperative prediction of STAS in early-stage lung adenocarcinoma and relevant surgeries.


Subject(s)
Adenocarcinoma of Lung , Lung Neoplasms , Adenocarcinoma of Lung/surgery , Humans , Lung Neoplasms/surgery , Neoplasm Invasiveness/pathology , Neoplasm Staging , Prognosis , Retrospective Studies
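The patch-wise decision rule described above (per-voxel high-grade likelihood, then a volume-percentage threshold over the tumor) reduces to a simple aggregation step. The sketch below illustrates that rule only; the likelihood map is random stand-in data, and both thresholds are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def predict_stas(likelihood_map, tumor_mask, grade_thr=0.5, volume_thr=0.25):
    """Flag STAS when the fraction of tumor voxels whose high-grade
    likelihood exceeds grade_thr is above volume_thr.

    Thresholds here are illustrative, not the published model's values.
    """
    voxels = likelihood_map[tumor_mask]
    frac_high_grade = np.mean(voxels > grade_thr)
    return frac_high_grade > volume_thr, frac_high_grade

rng = np.random.default_rng(3)
vol = rng.uniform(0, 1, (32, 32, 32))   # hypothetical per-voxel likelihoods
mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 8:24, 8:24] = True           # voxels inside the tumor border
flag, frac = predict_stas(vol, mask)
print(flag)
```

In the published pipeline the per-voxel likelihoods come from the high-grade model built on GLCM-variance and 75th-percentile histogram features; here they are simulated, since only the aggregation rule is being shown.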