Results 1 - 20 of 6,802
1.
J Biophotonics ; : e202400138, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38952169

ABSTRACT

Neurological disorders such as Parkinson's disease (PD) often adversely affect the vascular system, leading to alterations in blood flow patterns. Functional near-infrared spectroscopy (fNIRS) is used to monitor hemodynamic changes via signal measurement. This study investigated the potential of using resting-state fNIRS data with a convolutional neural network (CNN) to evaluate PD with orthostatic hypotension. The CNN demonstrated significant efficacy in analyzing fNIRS data and outperformed the other machine learning methods. The results indicate that judicious input data selection can raise accuracy to over 85%, while including the correlation matrix as an input further improves accuracy to more than 90%. This study underscores the promising role of CNN-based fNIRS data analysis in the diagnosis and management of PD. This approach enhances diagnostic accuracy, particularly in resting-state conditions, and can reduce the discomfort and risks associated with current diagnostic methods, such as the head-up tilt test.
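The abstract does not publish the authors' pipeline; as a minimal sketch of how a channel correlation matrix can be packaged as a 2-D CNN input (the function name, channel count, and shapes are illustrative assumptions):

```python
import numpy as np

def correlation_image(signals: np.ndarray) -> np.ndarray:
    """Map multichannel resting-state recordings (channels x samples) to a
    channel-by-channel Pearson correlation matrix that a 2-D CNN can
    consume like a one-channel image."""
    corr = np.corrcoef(signals)        # (n_channels, n_channels)
    return corr[np.newaxis, ...]       # add a leading image-channel axis

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 500))      # 8 hypothetical fNIRS channels, 500 samples
img = correlation_image(x)
print(img.shape)  # (1, 8, 8)
```

In practice the matrix would be computed per subject from the measured hemodynamic channels and combined with the raw-signal inputs the study describes.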

2.
J Imaging Inform Med ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955963

ABSTRACT

Abnormalities in adrenal gland size may be associated with various diseases. Monitoring adrenal gland volume can provide a quantitative imaging indicator for conditions such as adrenal hyperplasia, adrenal adenoma, and adrenal cortical adenocarcinoma. However, current adrenal gland segmentation models have notable limitations in sample selection and imaging parameters, particularly insufficient training on low-dose imaging, which limits their generalization ability and restricts widespread application in routine clinical practice. To address these issues, we developed a fully automated adrenal gland volume quantification and visualization tool based on the no new U-Net (nnU-Net) deep learning segmentation model. We built this tool using a large dataset spanning multiple imaging parameters, machine types, radiation doses, slice thicknesses, scanning modes, phases, and adrenal gland morphologies to achieve high accuracy and broad adaptability. The tool can meet clinical needs such as screening, monitoring, and preoperative visualization assistance for adrenal gland diseases. Experimental results demonstrate that our model achieves an overall Dice coefficient of 0.88 on all images and 0.87 on low-dose CT scans. Compared to other deep learning models and nnU-Net-based tools, our model exhibits higher accuracy and broader adaptability in adrenal gland segmentation.
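The reported Dice coefficients of 0.88 and 0.87 follow the standard overlap definition, which can be sketched for binary masks as follows (the array contents are illustrative):

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
print(round(dice(a, b), 3))  # 0.667
```

A Dice of 1.0 means perfect overlap; 0.0 means none.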

3.
Neurosci Bull ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956006

ABSTRACT

Unlocking task-related EEG spectra is crucial for neuroscience. Traditional convolutional neural networks (CNNs) effectively extract these features but face limitations like overfitting due to small datasets. To address this issue, we propose a lightweight CNN and assess its interpretability through the fully connected layer (FCL). Initially tested with two tasks (Task 1: open vs closed eyes, Task 2: interictal vs ictal stage), the CNN demonstrated enhanced spectral features in the alpha band for Task 1 and the theta band for Task 2, aligning with established neurophysiological characteristics. Subsequent experiments on two brain-computer interface tasks revealed a correlation between delta activity (around 1.55 Hz) and hand movement, with consistent results across pericentral electroencephalogram (EEG) channels. Compared to recent research, our method stands out by delivering task-related spectral features through FCL, resulting in significantly fewer trainable parameters while maintaining comparable interpretability. This indicates its potential suitability for a wider array of EEG decoding scenarios.

4.
Sci Rep ; 14(1): 15057, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956224

ABSTRACT

Image segmentation is a critical and challenging task in medicine. Magnetic resonance imaging (MRI) is a helpful method for locating abnormal brain tissue, but diagnosing and classifying a tumor from many images is a difficult undertaking for radiologists. This work develops an intelligent method for accurately identifying brain tumors. The research investigates the identification of brain tumor types from MRI data using convolutional neural networks and optimization strategies. Two novel approaches are presented: the first is a segmentation technique based on firefly optimization (FFO) that assesses segmentation quality against multiple criteria, and the second combines two types of convolutional neural networks to categorize tumor traits and identify the tumor type. These upgrades are intended to raise the overall efficacy of the MRI pipeline and increase identification accuracy. Testing was carried out on MRI scans from the BraTS2018 dataset, and the proposed approach showed improved performance with an average accuracy of 98.6%.


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/classification , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Brain/diagnostic imaging , Brain/pathology
5.
Int J Med Robot ; 20(4): e2664, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38994900

ABSTRACT

BACKGROUND: This study aimed to develop a novel deep convolutional neural network called Dual-path Double Attention Transformer (DDA-Transformer), designed to achieve precise and fast knee joint CT image segmentation, and to validate it in robotic-assisted total knee arthroplasty (TKA). METHODS: The femoral, tibial, patellar, and fibular segmentation performance and speed were evaluated, and the accuracy of component sizing, bone resection, and alignment of the robotic-assisted TKA system built on this deep learning network was clinically validated. RESULTS: Overall, DDA-Transformer outperformed six other networks in terms of the Dice coefficient, intersection over union, average surface distance, and Hausdorff distance. DDA-Transformer exhibited significantly faster segmentation than nnUnet, TransUnet and 3D-Unet (p < 0.01). Furthermore, the robotic-assisted TKA system outperformed the manual group in surgical accuracy. CONCLUSIONS: DDA-Transformer exhibited significantly improved accuracy and robustness in knee joint segmentation, and this convenient and stable knee joint CT image segmentation network significantly improved the accuracy of the TKA procedure.


Subject(s)
Arthroplasty, Replacement, Knee , Deep Learning , Knee Joint , Robotic Surgical Procedures , Tomography, X-Ray Computed , Humans , Arthroplasty, Replacement, Knee/methods , Robotic Surgical Procedures/methods , Tomography, X-Ray Computed/methods , Knee Joint/surgery , Knee Joint/diagnostic imaging , Male , Female , Image Processing, Computer-Assisted/methods , Surgery, Computer-Assisted/methods , Aged , Reproducibility of Results , Middle Aged , Tibia/surgery , Tibia/diagnostic imaging , Algorithms , Femur/surgery , Femur/diagnostic imaging , Imaging, Three-Dimensional/methods
6.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000965

ABSTRACT

Bearing fault features are difficult to extract from vibration signals with strong background noise, and one-dimensional (1D) signals provide limited fault information. To address this, an optimal time-frequency fusion symmetric dot pattern (SDP) method for bearing fault feature enhancement and diagnosis is proposed. Firstly, the vibration signals are transformed into two-dimensional (2D) features by the time-frequency fusion SDP algorithm, which enables multi-scale analysis of small-scale signal fluctuations and enhances bearing fault features. Secondly, the bat algorithm is employed to optimize the SDP parameters adaptively, which effectively improves the distinction between various fault types. Finally, the fault diagnosis model is constructed with a deep convolutional neural network (DCNN). To validate the proposed method, Case Western Reserve University's (CWRU) bearing fault dataset and a bearing fault dataset from a laboratory experimental platform were used. The experimental results show that the fault diagnosis accuracy of the proposed method is 100%, demonstrating its feasibility and effectiveness. Comparison with other 1D-to-2D transformation methods shows that the proposed method achieves the highest bearing fault diagnosis accuracy, validating its superiority.
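The SDP mapping itself is not specified in the abstract; below is a hedged sketch of one common symmetric-dot-pattern formulation (the lag, angular gain `zeta`, and six mirror arms are assumed defaults, and the bat-algorithm tuning of these parameters is not reproduced):

```python
import numpy as np

def sdp_points(x, lag=1, zeta=30.0, arms=6):
    """Map a 1-D signal to symmetric-dot-pattern polar points: radius from
    the normalized current sample, angular offset from the lagged sample,
    mirrored around `arms` symmetry axes."""
    x = np.asarray(x, dtype=float)
    r = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalize to [0, 1]
    r_lag = np.roll(r, -lag)                          # lagged companion sample
    pts = []
    for k in range(arms):
        base = 360.0 * k / arms
        pts.append(np.stack([r[:-lag], base + zeta * r_lag[:-lag]], axis=1))
        pts.append(np.stack([r[:-lag], base - zeta * r_lag[:-lag]], axis=1))
    return np.concatenate(pts)   # rows of (radius, angle in degrees)

pts = sdp_points(np.sin(np.linspace(0, 6.28, 200)))
print(pts.shape)
```

Each (radius, angle) row is one polar dot; rendering all dots yields the 2-D image fed to the DCNN.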

7.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000974

ABSTRACT

Partially automated robotic systems, such as camera holders, represent a pivotal step towards enhancing efficiency and precision in surgical procedures. Therefore, this paper introduces an approach for real-time tool localization in laparoscopy surgery using convolutional neural networks. The proposed model, based on two Hourglass modules in series, can localize up to two surgical tools simultaneously. This study utilized three datasets: the ITAP dataset, alongside two publicly available datasets, namely Atlas Dione and EndoVis Challenge. Three variations of the Hourglass-based models were proposed, with the best model achieving high accuracy (92.86%) and frame rates (27.64 FPS), suitable for integration into robotic systems. An evaluation on an independent test set yielded slightly lower accuracy, indicating limited generalizability. The model was further analyzed using the Grad-CAM technique to gain insights into its functionality. Overall, this work presents a promising solution for automating aspects of laparoscopic surgery, potentially enhancing surgical efficiency by reducing the need for manual endoscope manipulation.


Subject(s)
Laparoscopy , Laparoscopy/methods , Humans , Robotic Surgical Procedures/methods , Algorithms
8.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001035

ABSTRACT

With the rapid development of the Internet of Things (IoT), the sophistication and intelligence of sensors are continually evolving, playing increasingly important roles in smart homes, industrial automation, and remote healthcare. However, these intelligent sensors face many security threats, particularly from malware attacks. Identifying and classifying malware is crucial for preventing such attacks. As the number of sensors and their applications grow, malware targeting sensors proliferates. Processing massive malware samples is challenging due to limited bandwidth and resources in IoT environments. Therefore, compressing malware samples before transmission and classification can improve efficiency. Additionally, sharing malware samples between classification participants poses security risks, necessitating methods that prevent sample exploitation. Moreover, the complex network environments also necessitate robust classification methods. To address these challenges, this paper proposes CSMC (Compressed Sensing Malware Classification), an efficient malware classification method based on compressed sensing. This method compresses malware samples before sharing and classification, thus facilitating more effective sharing and processing. By introducing deep learning, the method can extract malware family features during compression, which classical methods cannot achieve. Furthermore, the irreversibility of the method enhances security by preventing classification participants from exploiting malware samples. Experimental results demonstrate that for malware targeting Windows and Android operating systems, CSMC outperforms many existing methods based on compressed sensing and machine or deep learning. Additionally, experiments on sample reconstruction and noise demonstrate CSMC's capabilities in terms of security and robustness.
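A compressed-sensing front end of the kind described reduces each sample with a fixed random measurement matrix; the sketch below illustrates the idea only (the matrix size, seed, and toy byte string are assumptions, and CSMC's learned compression is not reproduced):

```python
import numpy as np

def compress(sample: np.ndarray, m: int, seed: int = 42) -> np.ndarray:
    """Project an n-element sample to m << n measurements, y = Phi @ x,
    with a fixed random Gaussian matrix Phi. Without Phi and a sparsity
    model the mapping is not invertible, which is the privacy property
    the sharing step relies on."""
    x = sample.astype(float).ravel()
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x

toy_bytes = np.frombuffer(b"MZ\x90\x00" * 256, dtype=np.uint8)  # toy PE-header-like bytes
y = compress(toy_bytes, m=64)
print(y.shape)  # (64,)
```

The compressed vector `y` is what would be shared and classified in place of the raw sample.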

9.
Sensors (Basel) ; 24(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39001094

ABSTRACT

Breathing is one of the body's most basic functions, and abnormal breathing can indicate underlying cardiopulmonary problems. Monitoring respiratory abnormalities can help with early detection and reduce the risk of cardiopulmonary diseases. In this study, a 77 GHz frequency-modulated continuous wave (FMCW) millimetre-wave (mmWave) radar was used to detect different types of respiratory signals from the human body in a non-contact manner for respiratory monitoring (RM). To reduce the effect of everyday environmental noise on the recognition of different breathing patterns, the system processed the breathing signals captured by the millimetre-wave radar in two stages. Firstly, we filtered out most of the static noise using a signal superposition method and designed an elliptic filter to obtain a more accurate image of the breathing waveforms between 0.1 Hz and 0.5 Hz. Secondly, combined with the histogram of oriented gradients (HOG) feature extraction algorithm, K-nearest neighbours (KNN), a convolutional neural network (CNN), and a HOG support vector machine (G-SVM) were used to classify four breathing patterns: normal breathing, slow and deep breathing, quick breathing, and meningitic breathing. The overall accuracy reached up to 94.75%. This study therefore provides effective support for daily medical monitoring.
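A band-pass elliptic filter for the 0.1-0.5 Hz breathing band can be sketched with SciPy (the order, ripple, stopband attenuation, and 20 Hz slow-time sampling rate are assumptions; the paper's exact design is not given):

```python
import numpy as np
from scipy.signal import ellip, sosfiltfilt

fs = 20.0  # slow-time sampling rate of the demodulated radar phase (assumed)

# 4th-order elliptic band-pass: 0.1-0.5 Hz, 1 dB passband ripple, 40 dB stopband
sos = ellip(4, 1, 40, [0.1, 0.5], btype="bandpass", output="sos", fs=fs)

rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths per minute
noisy = breathing + 0.5 * np.sin(2 * np.pi * 3.0 * t) + 0.3 * rng.standard_normal(t.size)
clean = sosfiltfilt(sos, noisy)            # zero-phase filtering, no waveform shift
```

Out-of-band interference (the 3 Hz tone and most of the broadband noise) is suppressed while the 0.25 Hz breathing waveform passes through.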


Subject(s)
Algorithms , Radar , Respiration , Signal Processing, Computer-Assisted , Support Vector Machine , Humans , Monitoring, Physiologic/methods , Monitoring, Physiologic/instrumentation
10.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001155

ABSTRACT

Electrocardiography (ECG) has emerged as a ubiquitous diagnostic tool for the identification and characterization of diverse cardiovascular pathologies. Wearable health monitoring devices, equipped with on-device biomedical artificial intelligence (AI) processors, have revolutionized the acquisition, analysis, and interpretation of ECG data. However, these systems require AI processors that offer flexible configuration and portability and that achieve optimal power consumption and latency across various functionalities. To address these challenges, this study proposes an instruction-driven convolutional neural network (CNN) processor incorporating three key features: (1) an instruction-driven architecture that supports versatile ECG-based applications; (2) a processing element (PE) array design that simultaneously considers parallelism and data reuse; (3) an activation unit based on the CORDIC algorithm, supporting both Tanh and Sigmoid computations. The design was implemented in a 110 nm CMOS process, occupying a die area of 1.35 mm² with 12.94 µW power consumption. It was demonstrated on two typical ECG AI applications: two-class (normal/abnormal) classification and five-class classification. The proposed 1-D CNN achieves 97.95% accuracy on the two-class task and 97.9% on the five-class task.
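The paper's CORDIC datapath is not reproduced here, but the reason a single hyperbolic core can serve both activations is the identity sigmoid(x) = (1 + tanh(x/2)) / 2, checked numerically below:

```python
import math

def tanh_act(x: float) -> float:
    """Stand-in for the shared hyperbolic (CORDIC) core."""
    return math.tanh(x)

def sigmoid_act(x: float) -> float:
    """Sigmoid routed through the same tanh core: sigmoid(x) = (1 + tanh(x/2)) / 2."""
    return 0.5 * (1.0 + tanh_act(x / 2.0))

# the identity holds to machine precision across the input range
for v in (-3.0, -0.5, 0.0, 0.7, 2.5):
    assert abs(sigmoid_act(v) - 1.0 / (1.0 + math.exp(-v))) < 1e-12
```

In hardware, this lets one iterative unit cover both activation functions with only a shift (x/2) and an add.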


Subject(s)
Algorithms , Electrocardiography , Signal Processing, Computer-Assisted , Electrocardiography/methods , Humans , Artificial Intelligence , Wearable Electronic Devices
11.
Sensors (Basel) ; 24(13)2024 Jul 07.
Article in English | MEDLINE | ID: mdl-39001176

ABSTRACT

Directed energy deposition-arc (DED-arc) has garnered considerable research attention for advantages including high deposition rates and low costs. However, defects such as discontinuities and pores may occur during the manufacturing process, and defect identification is the key to monitoring and quality assessment of additive manufacturing. This study proposes a novel acoustic-signal-based defect identification method for DED-arc via wavelet time-frequency diagrams. With the continuous wavelet transform, one-dimensional (1D) acoustic signals acquired in situ during manufacturing are converted into two-dimensional (2D) time-frequency diagrams to train, validate, and test convolutional neural network (CNN) models. Several CNN models were examined and compared, including AlexNet, ResNet-18, VGG-16, and MobileNetV3, with accuracies of 96.35%, 97.92%, 97.01%, and 98.31%, respectively. The findings demonstrate that the energy distributions of normal and abnormal acoustic signals differ significantly in both the time and frequency domains. The proposed method is verified to identify defects effectively during manufacturing and to shorten the identification time.
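A continuous wavelet transform that turns a 1-D signal into a 2-D time-frequency image can be sketched with a complex Morlet wavelet (naive NumPy implementation; the center frequency `w0`, the frequency grid, and the test tone are illustrative assumptions):

```python
import numpy as np

def cwt_scalogram(sig, fs, freqs, w0=6.0):
    """Naive continuous wavelet transform: convolve the signal with scaled
    complex Morlet wavelets and keep the magnitudes, giving a 2-D
    time-frequency image suitable for a CNN."""
    sig = np.asarray(sig, dtype=float)
    out = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)           # samples per dimensionless unit
        t = np.arange(-4 * scale, 4 * scale + 1) / scale
        wavelet = np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(scale)
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return out

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
img = cwt_scalogram(np.sin(2 * np.pi * 50 * t), fs, freqs=np.linspace(20, 100, 40))
print(img.shape)  # (40, 1000)
```

The resulting magnitude image peaks at the row whose wavelet frequency matches the signal, which is what makes such scalograms discriminative inputs for CNNs.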

12.
Diagnostics (Basel) ; 14(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39001229

ABSTRACT

Skin lesion classification is vital for the early detection and diagnosis of skin diseases, facilitating timely intervention and treatment. However, existing classification methods face challenges in managing complex information and long-range dependencies in dermoscopic images. Therefore, this research aims to enhance the feature representation by incorporating local, global, and hierarchical features to improve the performance of skin lesion classification. We introduce a novel dual-track deep learning (DL) model for skin lesion classification. The first track utilizes a modified DenseNet-169 architecture that incorporates a Coordinate Attention Module (CoAM). The second track employs a customized convolutional neural network (CNN) comprising a Feature Pyramid Network (FPN) and a Global Context Network (GCN) to capture multiscale features and global contextual information. The local features from the first track and the global features from the second track are used for precise localization and modeling of the long-range dependencies. By leveraging these architectural advancements within the DenseNet framework, the proposed neural network achieved better performance compared to previous approaches. The network was trained and validated using the HAM10000 dataset, achieving a classification accuracy of 93.2%.

13.
Diagnostics (Basel) ; 14(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39001234

ABSTRACT

This study focuses on developing a model for the precise determination of ultrasound image density and classification using convolutional neural networks (CNNs) for rapid, timely, and accurate identification of hypoxic-ischemic encephalopathy (HIE). Image density is measured by comparing two regions of interest on ultrasound images of the choroid plexus and brain parenchyma using the Delta E CIE76 value. These regions are then combined and serve as input to the CNN model for classification. The classification results of images into three groups (Normal, Moderate, and Intensive) demonstrate high model efficiency, with an overall accuracy of 88.56%, precision of 90% for Normal, 85% for Moderate, and 88% for Intensive. The overall F-measure is 88.40%, indicating a successful combination of accuracy and completeness in classification. This study is significant as it enables rapid and accurate identification of hypoxic-ischemic encephalopathy in newborns, which is crucial for the timely implementation of appropriate therapeutic measures and improving long-term outcomes for these patients. The application of such advanced techniques allows medical personnel to manage treatment more efficiently, reducing the risk of complications and improving the quality of care for newborns with HIE.
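The Delta E CIE76 value mentioned above is simply the Euclidean distance between two points in CIELAB space; a minimal sketch (the ROI mean L*a*b* values are invented for illustration):

```python
import math

def delta_e_cie76(lab1, lab2) -> float:
    """CIE76 colour difference: Euclidean distance between two CIELAB points."""
    return math.dist(lab1, lab2)

plexus = (62.0, 4.1, 9.8)       # mean L*a*b* of a choroid plexus ROI (invented values)
parenchyma = (55.0, 2.9, 7.5)   # mean L*a*b* of a brain parenchyma ROI (invented values)
print(round(delta_e_cie76(plexus, parenchyma), 2))
```

The two ROI means compared this way give the density measure that, together with the ROIs themselves, forms the CNN input described above.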

14.
Diagnostics (Basel) ; 14(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39001307

ABSTRACT

Colon cancer is a prevalent and potentially fatal disease that demands early and accurate diagnosis for effective treatment. Traditional diagnostic approaches for colon cancer often face limitations in accuracy and efficiency, leading to challenges in early detection and treatment. In response to these challenges, this paper introduces an innovative method that leverages artificial intelligence, specifically a convolutional neural network (CNN) and the Fishier Mantis Optimizer, for the automated detection of colon cancer. The utilization of deep learning techniques, specifically CNNs, enables the extraction of intricate features from medical imaging data, providing a robust and efficient diagnostic model. Additionally, the Fishier Mantis Optimizer, a bio-inspired optimization algorithm inspired by the hunting behavior of the mantis shrimp, is employed to fine-tune the parameters of the CNN, enhancing its convergence speed and performance. This hybrid approach aims to address the limitations of traditional diagnostic methods by leveraging the strengths of both deep learning and nature-inspired optimization to enhance the accuracy and effectiveness of colon cancer diagnosis. The proposed method was evaluated on a comprehensive dataset of colon cancer images, and the results demonstrate its superiority over traditional diagnostic approaches. The CNN-Fishier Mantis Optimizer model exhibited high sensitivity, specificity, and overall accuracy in distinguishing between cancerous and non-cancerous colon tissues. The integration of bio-inspired optimization algorithms with deep learning techniques not only contributes to the advancement of computer-aided diagnostic tools for colon cancer but also holds promise for enhancing the early detection and diagnosis of this disease, thereby facilitating timely intervention and improved patient prognosis. Various CNN designs, such as GoogLeNet and ResNet-50, were employed to capture features associated with colon diseases. However, the abundance of features introduced inaccuracies in both feature extraction and data classification. To address this issue, feature reduction was implemented using the Fishier Mantis Optimizer, outperforming alternative methods such as Genetic Algorithms and simulated annealing. Encouraging results were obtained across diverse metrics: sensitivity, specificity, accuracy, and F1-Score were found to be 94.87%, 96.19%, 97.65%, and 96.76%, respectively.
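The four reported metrics follow the standard confusion-matrix definitions, which can be sketched as follows (the counts are illustrative, not the study's):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, accuracy and F1-score from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # recall on the positive class
    specificity = tn / (tn + fp)                  # recall on the negative class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

sens, spec, acc, f1 = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(sens, spec, acc)  # 0.9 0.95 0.925
```

F1 is the harmonic mean of precision and sensitivity, which is why it penalizes an imbalance between the two.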

15.
Heliyon ; 10(12): e32733, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975150

ABSTRACT

Current noninvasive methods of clinical practice often cannot identify the causes of conductive hearing loss due to pathologic changes in the middle ear with sufficient certainty. Wideband acoustic immittance (WAI) measurement is noninvasive, inexpensive, and objective. It is very sensitive to pathologic changes in the middle ear and therefore promising for diagnosis. However, evaluation of the data is difficult because of large interindividual variations. Machine learning methods like convolutional neural networks (CNNs), which might be able to deal with these overlapping patterns, require a large amount of labeled measurement data for training and validation. This is difficult to provide given the low prevalence of many middle-ear pathologies. Therefore, this study proposes an approach in which the WAI training data for the CNN are simulated with a finite-element ear model and the Monte Carlo method. With this approach, virtual populations of normal, otosclerotic, and disarticulated ears were generated, consistent with the averaged data of measured populations and well representing the qualitative characteristics of individuals. The CNN trained with the virtual data achieved, for otosclerosis, an AUC of 91.1%, a sensitivity of 85.7%, and a specificity of 85.2%. For disarticulation, an AUC of 99.5%, a sensitivity of 100%, and a specificity of 93.1% were achieved. Furthermore, it was estimated that specificity could potentially be increased to about 99% in both pathological cases if stapes reflex threshold measurements were used to confirm the diagnosis. The procedure's performance is thus comparable to classifiers from other studies trained with real measurement data, and it therefore offers great potential for the diagnosis of rare or early-stage pathologies. The clinical potential of these preliminary results remains to be evaluated on more measurement data and additional pathologies.

16.
Heliyon ; 10(12): e32400, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975160

ABSTRACT

Pests are a significant challenge in paddy cultivation, resulting in a global loss of approximately 20% of rice yield. Early detection of paddy insects can help avoid these potential losses. Several approaches have been suggested for identifying and categorizing insects in paddy fields, employing a range of advanced, noninvasive, and portable technologies, but none have successfully incorporated feature optimization techniques with deep learning and machine learning. Hence, the current research provides a framework utilizing these techniques to detect and categorize images of paddy insects promptly. First, the image dataset is gathered and categorized into two groups: images without paddy insects and images with paddy insects. Various pre-processing techniques, such as augmentation and image filtering, are then applied to enhance the quality of the dataset and eliminate unwanted noise. To extract and analyze deep image characteristics, the proposed architecture incorporates 5 pre-trained Convolutional Neural Network models. Following that, feature selection techniques, including Principal Component Analysis (PCA), Recursive Feature Elimination (RFE), Linear Discriminant Analysis (LDA), and an optimization algorithm called Lion Optimization, were utilized to further reduce the redundant features collected for the study. Subsequently, paddy insects are identified by employing 7 ML algorithms. Finally, experimental data analysis was conducted to achieve the objectives, and the proposed approach demonstrates that the extracted feature vectors of ResNet50 with Logistic Regression and PCA achieved the highest accuracy, precisely 99.28%. The present approach can significantly impact how paddy insects are diagnosed in the field.
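PCA-style reduction of deep feature vectors, as used in the winning ResNet50 + Logistic Regression + PCA combination, can be sketched with a NumPy SVD (the feature dimensions and random stand-in features are assumptions; the classifier step is omitted):

```python
import numpy as np

def pca_reduce(features: np.ndarray, k: int) -> np.ndarray:
    """Project feature vectors onto their top-k principal components
    (numpy SVD; a stand-in for a library PCA implementation)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(3)
deep_features = rng.standard_normal((120, 2048))   # e.g. ResNet50 penultimate-layer outputs
reduced = pca_reduce(deep_features, k=32)
print(reduced.shape)  # (120, 32)
```

The reduced vectors would then be passed to the downstream classifier (here, logistic regression) in place of the full 2048-dimensional features.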

17.
IEEE Access ; 12: 49122-49133, 2024.
Article in English | MEDLINE | ID: mdl-38994038

ABSTRACT

There is a tendency for object detection systems using off-the-shelf algorithms to fail when deployed in complex scenes. The present work describes a case for detecting facial expression in post-surgical neonates (newborns) as a modality for predicting and classifying severe pain in the Neonatal Intensive Care Unit (NICU). Our initial testing showed that both an off-the-shelf face detector and a machine learning algorithm trained on adult faces failed to detect facial expression of neonates in the NICU. We improved accuracy in this complex scene by training a state-of-the-art "You-Only-Look-Once" (YOLO) face detection model using the USF-MNPAD-I dataset of neonate faces. At run-time our trained YOLO model showed a difference of 8.6% mean Average Precision (mAP) and 21.2% Area under the ROC Curve (AUC) for automatic classification of neonatal pain compared with manual pain scoring by NICU nurses. Given the challenges, time and effort associated with collecting ground truth from the faces of post-surgical neonates, here we share the weights from training our YOLO model with these facial expression data. These weights can facilitate the further development of accurate strategies for detecting facial expression, which can be used to predict the time to pain onset in combination with other sensory modalities (body movements, crying frequency, vital signs). Reliable predictions of time to pain onset in turn create a therapeutic window of time wherein NICU nurses and providers can implement safe and effective strategies to mitigate severe pain in this vulnerable patient population.

18.
Front Artif Intell ; 7: 1424190, 2024.
Article in English | MEDLINE | ID: mdl-39015365

ABSTRACT

Human motion detection technology holds significant potential in medicine, health care, and physical exercise. This study introduces a novel approach to human activity recognition (HAR) using convolutional neural networks (CNNs) designed for individual sensor types to enhance the accuracy and address the challenge of diverse data shapes from accelerometers, gyroscopes, and barometers. Specific CNN models are constructed for each sensor type, enabling them to capture the characteristics of their respective sensors. These adapted CNNs are designed to effectively process varying data shapes and sensor-specific characteristics to accurately classify a wide range of human activities. The late-fusion technique is employed to combine predictions from various models to obtain comprehensive estimates of human activity. The proposed CNN-based approach is compared to a standard support vector machine (SVM) classifier using the one-vs-rest methodology. The late-fusion CNN model showed significantly improved performance, with validation and final test accuracies of 99.35 and 94.83% compared to the conventional SVM classifier at 87.07 and 83.10%, respectively. These findings provide strong evidence that combining multiple sensors and a barometer and utilizing an additional filter algorithm greatly improves the accuracy of identifying different human movement patterns.
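The late-fusion step can be sketched as an average of the per-sensor class-probability vectors (equal sensor weights and the probability values are illustrative assumptions):

```python
import numpy as np

def late_fusion(prob_list):
    """Late fusion: average the per-sensor class-probability vectors
    and pick the argmax as the fused activity label."""
    fused = np.mean(prob_list, axis=0)
    return fused, int(np.argmax(fused))

accel = np.array([0.7, 0.2, 0.1])   # accelerometer CNN output (illustrative)
gyro  = np.array([0.4, 0.5, 0.1])   # gyroscope CNN output (illustrative)
baro  = np.array([0.6, 0.3, 0.1])   # barometer CNN output (illustrative)
fused, label = late_fusion([accel, gyro, baro])
print(label)  # 0
```

Because each sensor keeps its own CNN, fusion happens only at the probability level, so sensors with different data shapes never need a shared input format.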

19.
Front Comput Neurosci ; 18: 1397819, 2024.
Article in English | MEDLINE | ID: mdl-39015744

ABSTRACT

Many studies have shown that the human visual system has two major functionally distinct cortical visual pathways: a ventral pathway, thought to be important for object recognition, and a dorsal pathway, thought to be important for spatial cognition. According to our and others' previous studies, artificial neural networks with two segregated pathways can determine objects' identities and locations more accurately and efficiently than one-pathway artificial neural networks. In addition, we showed that these two segregated artificial cortical visual pathways can each process identity and spatial information of visual objects independently and differently. However, when using such networks to process multiple objects' identities and locations, a binding problem arises because the networks may not associate each object's identity with its location correctly. In a previous study, we constrained the binding problem by training the artificial identity pathway to retain relative location information of objects. This design uses a location map to constrain the binding problem. One limitation of that study was that we only considered two attributes of our objects (identity and location) and only one possible map (location) for binding. However, typically the brain needs to process and bind many attributes of an object, and any of these attributes could be used to constrain the binding problem. In our current study, using visual objects with multiple attributes (identity, luminance, orientation, and location) that need to be recognized, we tried to find the best map (among an identity map, a luminance map, an orientation map, or a location map) to constrain the binding problem. We found that in our experimental simulations, when visual attributes are independent of each other, a location map is always a better choice than the other kinds of maps examined for constraining the binding problem. Our findings agree with previous neurophysiological findings showing that the organization or map in many visual cortical areas is primarily retinotopic or spatial.

20.
Comput Biol Med ; 179: 108857, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39018882

ABSTRACT

Emotion recognition based on electroencephalogram (EEG) signals is crucial in understanding human affective states. Current research has limitations in extracting local features, whose representation capability is limited, making it difficult to comprehensively capture emotional information. In this study, a novel approach is proposed to enhance local representation learning through global-local integration with functional connectivity for EEG-based emotion recognition. By leveraging the functional connectivity of brain regions, EEG signals are divided into global embeddings that represent comprehensive brain connectivity patterns throughout the entire process and local embeddings that reflect dynamic interactions within specific brain functional networks at particular moments. Firstly, a convolutional feature extraction branch based on the residual network is designed to extract local features from the global embedding. To further improve the representation ability and accuracy of local features, a multidimensional collaborative attention (MCA) module is introduced. Secondly, the local features and the patch-embedded local embeddings are integrated into the feature coupling module (FCM), which utilizes hierarchical connections and enhanced cross-attention to couple region-level features, thereby enhancing local representation learning. Experimental results on three public datasets show that, compared with other methods, this method improves accuracy by 4.92% on DEAP, by 1.11% on SEED, and by 7.76% on SEED-IV, demonstrating its superior performance in emotion recognition tasks.
