Results 1 - 20 of 43
1.
Hum Brain Mapp ; 45(5): e26555, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38544418

ABSTRACT

Novel features derived from imaging and artificial intelligence systems are commonly coupled to construct computer-aided diagnosis (CAD) systems that are intended as clinical support tools or for the investigation of complex biological patterns. This study used sulcal patterns from structural images of the brain as the basis for classifying patients with schizophrenia from unaffected controls. Statistical, machine learning, and deep learning techniques were applied sequentially as a demonstration of how a CAD system might be comprehensively evaluated in the absence of prior empirical work or extant literature to guide development, and with only small-sample datasets available. Sulcal features of the entire cerebral cortex were derived from 58 schizophrenia patients and 56 healthy controls. No similar CAD system has been reported that uses sulcal features from the entire cortex. We considered all the stages in a CAD system workflow: preprocessing, feature selection and extraction, and classification. The explainable AI techniques Local Interpretable Model-agnostic Explanations and SHapley Additive exPlanations were applied to detect the relevance of features to classification. At each stage, alternatives were compared in terms of their performance in the context of a small sample. Differentiating sulcal patterns were located in temporal and precentral areas, as well as the collateral fissure. We also verified the benefits of applying dimensionality reduction techniques and validation methods, such as resubstitution with upper-bound correction, to optimize performance.
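The small-sample validation strategy sketched above, with dimensionality reduction fitted inside each cross-validation fold to avoid leakage, can be illustrated with a toy pipeline. Everything here is synthetic and hypothetical (random stand-in "sulcal" features, a plain PCA plus nearest-centroid classifier), not the study's actual data or models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 114 subjects x 200 sulcal features
# (the study's real features come from brain MRI; these are synthetic).
X = rng.normal(size=(114, 200))
y = np.array([0] * 56 + [1] * 58)          # 56 controls, 58 patients
X[y == 1] += 0.3                           # inject a weak group effect

def pca_fit(X, k):
    """Return the mean and top-k principal axes via SVD."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nearest_centroid_loo(X, y, k=10):
    """Leave-one-out CV with PCA reduction refit inside each fold."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        mu, axes = pca_fit(X[mask], k)     # fit PCA on the training fold only
        Z, zi = (X[mask] - mu) @ axes.T, (X[i] - mu) @ axes.T
        cents = np.stack([Z[y[mask] == c].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(np.linalg.norm(cents - zi, axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_loo(X, y)
print(f"LOO accuracy: {acc:.2f}")
```

Fitting the reduction step inside each fold is what keeps the small-sample estimate honest; fitting it once on all subjects would leak test information.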


Subject(s)
Artificial Intelligence , Schizophrenia , Humans , Schizophrenia/diagnostic imaging , Neuroimaging , Machine Learning , Computer-Assisted Diagnosis
2.
Pharmacol Res ; 197: 106984, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37940064

ABSTRACT

The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we examine the transformative impact of ML and DL in this domain. First, we briefly analyse how these algorithms have evolved and which are the most widely applied. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease-progression evaluation systems, all enabled by their ability to analyse complex patterns and relationships within imaging data and to extract quantitative, objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.


Subject(s)
Deep Learning , X-Ray Computed Tomography , Positron-Emission Tomography/methods , Single-Photon Emission Computed Tomography/methods , Machine Learning
3.
IEEE Sens J ; 22(18): 17573-17582, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36346095

ABSTRACT

(Aim) The COVID-19 pandemic has caused a substantial death toll to date. Chest CT is an effective imaging modality for accurate diagnosis. (Method) This article proposes a novel seven-layer convolutional neural network-based smart diagnosis model for COVID-19 (7L-CNN-CD). We propose a 14-way data augmentation scheme to enhance the training set and introduce stochastic pooling to replace traditional pooling methods. (Results) Ten runs of 10-fold cross-validation show that our 7L-CNN-CD approach achieves a sensitivity of 94.44 ± 0.73%, a specificity of 93.63 ± 1.60%, and an accuracy of 94.03 ± 0.80%. (Conclusion) The proposed 7L-CNN-CD is effective in diagnosing COVID-19 in chest CT images and outperforms several state-of-the-art algorithms. The data augmentation and stochastic pooling methods are shown to be effective.
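Stochastic pooling, mentioned in the Method, replaces max or average pooling by sampling one activation per window with probability proportional to its magnitude. A minimal sketch on 2×2 windows with synthetic post-ReLU activations (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_pool(fmap, size=2):
    """Stochastic pooling: sample one activation per window with
    probability proportional to its (non-negative) value."""
    h, w = fmap.shape
    out = np.empty((h // size, w // size))
    for i in range(0, h, size):
        for j in range(0, w, size):
            win = fmap[i:i+size, j:j+size].ravel()
            # Fall back to uniform sampling if the window is all zeros.
            p = win / win.sum() if win.sum() > 0 else np.full(win.size, 1 / win.size)
            out[i // size, j // size] = rng.choice(win, p=p)
    return out

fmap = np.abs(rng.normal(size=(4, 4)))   # post-ReLU style activations
pooled = stochastic_pool(fmap)
print(pooled.shape)
```

At training time the sampling acts as a regularizer, much like dropout; at test time implementations typically switch to the probability-weighted average of each window.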

4.
Int J Intell Syst ; 37(2): 1572-1598, 2022 Feb.
Article in English | MEDLINE | ID: mdl-38607823

ABSTRACT

COVID-19 pneumonia emerged in December 2019 and has caused large numbers of casualties and huge economic losses. In this study, we aimed to develop an artificial-intelligence-based computer-aided diagnosis system to automatically identify COVID-19 in chest computed tomography images. We utilized transfer learning to obtain the image-level representation (ILR) from a backbone deep convolutional neural network. A novel neighboring-aware representation (NAR) was then proposed to exploit the neighboring relationships between the ILR vectors. To obtain the neighboring information in the feature space of the ILRs, an ILR graph was generated with the k-nearest neighbors algorithm, linking each ILR to its k nearest neighboring ILRs. The NARs were then computed by fusing the ILRs and the graph. On the basis of this representation, a novel end-to-end COVID-19 classification architecture called the neighboring-aware graph neural network (NAGNN) was proposed. Private and public datasets were used for evaluation. Results revealed that our NAGNN outperformed all 10 state-of-the-art methods in terms of generalization ability. The proposed NAGNN is therefore effective in detecting COVID-19 and can be used in clinical diagnosis.
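The ILR graph construction described above, linking each image-level representation to its k nearest neighbours and then fusing, can be sketched in a few lines. The data and the simple neighbour-averaging fusion rule here are illustrative assumptions, not the NAGNN's actual operators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical image-level representations (ILRs): 12 scans x 8-dim features.
ilr = rng.normal(size=(12, 8))
k = 3

# Build a k-nearest-neighbour graph over the ILRs (Euclidean distance).
d = np.linalg.norm(ilr[:, None, :] - ilr[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                # no self-edges
nbrs = np.argsort(d, axis=1)[:, :k]        # indices of each node's k NNs

# One simple "neighboring-aware" fusion: mix each ILR with the mean of
# its neighbours (the paper's exact fusion rule may differ).
nar = 0.5 * ilr + 0.5 * ilr[nbrs].mean(axis=1)
print(nar.shape)
```

In a graph neural network this neighbour mixing would be a learned message-passing layer rather than a fixed 50/50 average.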

5.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200287

ABSTRACT

In this paper, a novel medical image encryption method based on multi-mode synchronization of hyper-chaotic systems is presented. The synchronization of hyper-chaotic systems is of great significance in secure communication tasks such as image encryption. Multi-mode synchronization is a novel and highly complex problem, especially in the presence of uncertainty and disturbance. In this work, an adaptive-robust controller is designed for multi-mode synchronized chaotic systems with variable and unknown parameters, despite bounded disturbance and uncertainty with a known function, in two modes: in the first, a main system drives several response systems; in the second, synchronization is circular. It is proved that the two synchronization methods are equivalent. Using Lyapunov's method, we show that both the synchronization error and the parameter estimation error converge to zero. New laws for updating the time-varying parameters and estimating the disturbance and uncertainty bounds are proposed such that the stability of the system is guaranteed. To assess the performance of the proposed synchronization method, various statistical analyses were carried out on the encrypted medical images and standard benchmark images. The results show the effective performance of the proposed synchronization technique in medical image encryption for telemedicine applications.
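The idea of using a shared chaotic trajectory as an encryption keystream can be illustrated with a much simpler chaotic system than the paper's. This sketch uses a one-dimensional logistic map and XOR on a toy image; the hyper-chaotic, multi-mode synchronization scheme above is far more elaborate, and the map parameters here are arbitrary illustration choices:

```python
import numpy as np

def logistic_keystream(n, x0=0.6137, r=3.99):
    """Byte keystream from a logistic map in its chaotic regime."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy "image"

ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks            # encrypt by XOR with the keystream
restored = cipher ^ ks       # the same keystream decrypts
```

Synchronization matters because sender and receiver must regenerate identical keystreams; in the paper this shared state is maintained by the adaptive-robust controller rather than by sharing `x0` directly.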


Subject(s)
Algorithms , Nonlinear Dynamics , Communication , Computer Simulation , Uncertainty
6.
Inf Fusion ; 67: 208-229, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33052196

ABSTRACT

(Aim) COVID-19 is an infectious disease that has spread worldwide. In this study, we develop an artificial-intelligence-based tool for diagnosis on chest CT images. (Method) On one hand, we extract features from a self-created convolutional neural network (CNN) to learn individual image-level representations. The proposed CNN employs several new techniques, such as rank-based average pooling and multiple-way data augmentation. On the other hand, relation-aware representations are learnt from a graph convolutional network (GCN). Deep feature fusion (DFF) is developed in this work to fuse the individual image-level features from the CNN with the relation-aware features from the GCN. The best model is named FGCNet. (Results) The experiment first chose the best of eight proposed network models and then compared it with 15 state-of-the-art approaches. (Conclusion) The proposed FGCNet model is effective and outperforms all 15 state-of-the-art methods. Thus, our proposed FGCNet model can assist radiologists in rapidly detecting COVID-19 from chest CT images.

7.
Inf Fusion ; 64: 149-187, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32834795

ABSTRACT

Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and other sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) the current medical applications of fusion for specific neurological diseases, (3) strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) the applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research among engineers, researchers, and clinicians will benefit the field of multimodal neuroimaging.

8.
Hum Brain Mapp ; 38(3): 1208-1223, 2017 03.
Article in English | MEDLINE | ID: mdl-27774713

ABSTRACT

Neuroimaging studies have reported structural and physiological differences that could help explain the causes and development of Autism Spectrum Disorder (ASD). Many rely on multisite designs, with the recruitment of larger samples increasing statistical power. However, recent large-scale studies have called some findings into question, considering the results to be strongly dependent on the database used, and demonstrating the substantial heterogeneity within this clinically defined category. One major source of variance may be the acquisition of the data at multiple centres. In this work, we analysed the differences found in the multisite, multimodal neuroimaging database of the UK Medical Research Council Autism Imaging Multicentre Study (MRC AIMS) in terms of both diagnosis and acquisition site. Since the dissimilarities between sites were larger than those between diagnostic groups, we developed a technique called Significance Weighted Principal Component Analysis (SWPCA) to reduce the undesired intensity variance due to acquisition site and to increase the statistical power in detecting group differences. After eliminating site-related variance, statistically significant group differences were found, including in Broca's area and the temporo-parietal junction. However, discriminative power was not sufficient to classify diagnostic groups, yielding accuracies close to chance. Our work supports recent claims that ASD is a highly heterogeneous condition that is difficult to characterize globally by neuroimaging, and that different (and more homogeneous) subgroups should therefore be defined to obtain a deeper understanding of ASD. Hum Brain Mapp 38:1208-1223, 2017. © 2016 Wiley Periodicals, Inc.
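The core idea behind SWPCA, weighting principal components by how strongly they associate with acquisition site and removing the site-dominated ones, can be sketched on synthetic data. The weighting rule below (absolute point-biserial correlation with a hard threshold) is a simplification of the published method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multisite data: 60 subjects x 20 features, with a strong site offset.
site = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 20)) + np.outer(site, np.full(20, 2.0))

# PCA via SVD on centred data.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
scores = U * S                              # component scores per subject

# Weight each component by how strongly it separates the sites (|corr|),
# then subtract the site-dominated components.
w = np.abs([np.corrcoef(scores[:, i], site)[0, 1] for i in range(scores.shape[1])])
remove = w > 0.5
X_clean = X - scores[:, remove] @ Vt[remove]

gap = np.abs(X_clean[site == 0].mean() - X_clean[site == 1].mean())
print(f"residual site gap: {gap:.3f}")
```

After removal, the mean intensity gap between sites collapses while variance unrelated to site is left untouched, which is what restores power for detecting diagnostic effects.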


Subject(s)
Autistic Disorder/pathology , Brain Mapping , Brain/pathology , Principal Component Analysis , Adolescent , Adult , Autistic Disorder/diagnostic imaging , Autistic Disorder/genetics , Brain/diagnostic imaging , Female , Humans , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Male , Young Adult
9.
Int J Neural Syst ; 34(8): 2450043, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38770651

ABSTRACT

Neurodegenerative diseases pose a formidable challenge to medical research, demanding a nuanced understanding of their progressive nature. Latent generative models can be used effectively for data-driven modeling of different dimensions of neurodegeneration, framed within the context of the manifold hypothesis. This paper proposes a joint framework for a multi-modal, common latent generative model to address the need for a more comprehensive understanding of the neurodegenerative landscape in the context of Parkinson's disease (PD). The proposed architecture uses coupled variational autoencoders (VAEs) to jointly model a common latent space for both neuroimaging and clinical data from the Parkinson's Progression Markers Initiative (PPMI). Alternative loss functions, different normalization procedures, and the interpretability and explainability of latent generative models are addressed, leading to a model that predicted clinical symptomatology in the test set, as measured by the Unified Parkinson's Disease Rating Scale (UPDRS), with R² up to 0.86 for same-modality prediction and 0.441 for cross-modality prediction (using solely neuroimaging). The findings provide a foundation for further advances in clinical research and practice, with potential applications in decision-making processes for PD. The study also highlights the limitations and capabilities of the proposed model, emphasizing its direct interpretability and potential impact on understanding and interpreting neuroimaging patterns associated with PD symptomatology.


Subject(s)
Deep Learning , Disease Progression , Neuroimaging , Parkinson Disease , Parkinson Disease/diagnostic imaging , Parkinson Disease/physiopathology , Humans , Neuroimaging/methods , Supervised Machine Learning , Multimodal Imaging , Male , Female
10.
Sensors (Basel) ; 13(9): 11797-817, 2013 Sep 05.
Article in English | MEDLINE | ID: mdl-24013490

ABSTRACT

Ellipsoid fitting algorithms are widely used to calibrate Magnetic Angular Rate and Gravity (MARG) sensors. These algorithms minimize an error function to optimize the parameters of a mathematical sensor model that is subsequently applied to calibrate the raw data. The convergence of such algorithms to a correct solution is very sensitive to the input data. Input calibration datasets must be properly distributed in space so that the data can be accurately fitted to the theoretical ellipsoid model. Gathering a well-distributed set is not an easy task, as it is difficult for the operator carrying out the maneuvers to keep a visual record of all the positions that have already been covered, as well as the remaining ones. It would therefore be desirable to have a system that gives feedback to the operator when the dataset is ready, or that enables the calibration process in auto-calibrated systems. In this work, we propose two different algorithms that analyze the goodness of the distributions by computing four different indicators. The first approach is based on a thresholding algorithm that uses only one indicator as its input; the second is based on a Fuzzy Logic System (FLS) that estimates the calibration error for a given calibration set using a weighted combination of two indicators. Very accurate classification between valid and invalid datasets is achieved, with an average Area Under the Curve (AUC) of up to 0.98.
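In the isotropic special case, the ellipsoid fit above reduces to a linear least-squares sphere fit, since |p|² = 2c·p + (R² − |c|²) is linear in the centre c and the combined constant. A sketch with a synthetic hard-iron offset (a full MARG calibration fits a general ellipsoid with scale and cross-axis terms as well):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy magnetometer readings: points on a sphere of radius 50 centred at
# (10, -5, 3) -- a hard-iron offset -- plus measurement noise.
n = 400
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # well-distributed directions
pts = 50 * u + np.array([10.0, -5.0, 3.0]) + 0.1 * rng.normal(size=(n, 3))

# Linear least squares on |p|^2 = 2 c.p + (R^2 - |c|^2).
A = np.hstack([2 * pts, np.ones((n, 1))])
b = (pts ** 2).sum(axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
centre = sol[:3]
radius = np.sqrt(sol[3] + (centre ** 2).sum())
print(centre.round(1), round(radius, 1))
```

The fit degrades badly when the directions `u` cluster in one region of the sphere, which is exactly the coverage problem the paper's distribution-goodness indicators are designed to detect before fitting.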


Subject(s)
Accelerometry/instrumentation , Accelerometry/methods , Algorithms , Gravitation , Magnetometry/instrumentation , Magnetometry/methods , Accelerometry/standards , Calibration , Equipment Design , Equipment Failure Analysis , Magnetometry/standards
11.
J Imaging ; 9(7)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37504824

ABSTRACT

Artificial intelligence (AI) refers to the field of computer science theory and technology [...].

12.
Front Syst Neurosci ; 16: 838822, 2022.
Article in English | MEDLINE | ID: mdl-35720439

ABSTRACT

Aims: Brain diseases include intracranial inflammation, vascular diseases, tumors, degeneration, malformations, genetic diseases, immune diseases, nutritional and metabolic diseases, poisoning, trauma, and parasitic diseases, among others. Taking Alzheimer's disease (AD) as an example, the number of patients is increasing dramatically in developed countries. By 2025, the number of patients with AD aged 65 and over will reach 7.1 million, an increase of nearly 29% over the 5.5 million patients of the same age in 2018. Unless medical breakthroughs are made, the number of AD patients may increase from 5.5 million to 13.8 million by 2050, almost three times the current figure. Researchers have focused on developing complex machine learning (ML) algorithms, i.e., convolutional neural networks (CNNs), containing millions of parameters. However, CNN models need many training samples, and a small training set may lead to overfitting. As CNN research has continued, other networks have been proposed, such as randomized neural networks (RNNs); the Schmidt neural network (SNN), the random vector functional link (RVFL), and the extreme learning machine (ELM) are three types of RNN. Methods: We propose three novel models to classify brain diseases and cope with these problems: DenseNet-based SNN (DSNN), DenseNet-based RVFL (DRVFL), and DenseNet-based ELM (DELM). The backbone of the three proposed models is a pre-trained, customized DenseNet, fine-tuned on the empirical dataset. Finally, the last five layers of the fine-tuned DenseNet are substituted by the SNN, ELM, and RVFL, respectively. Results: Overall, the DSNN achieves the best classification performance among the three proposed models. We evaluate the proposed DSNN by five-fold cross-validation. The accuracy, sensitivity, specificity, precision, and F1-score of the proposed DSNN on the test set are 98.46% ± 2.05%, 100.00% ± 0.00%, 85.00% ± 20.00%, 98.36% ± 2.17%, and 99.16% ± 1.11%, respectively. The proposed DSNN is compared with restricted DenseNet, a spiking neural network, and other state-of-the-art methods, and obtains the best results among all models. Conclusions: DSNN is an effective model for classifying brain diseases.
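Of the three randomized heads above, the ELM is the simplest to sketch: a fixed random hidden layer whose output weights are solved in closed form by least squares. The features below are random stand-ins for the fine-tuned DenseNet features, and the class boundary is synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_fit(X, y_onehot, hidden=64):
    """Extreme learning machine: random hidden layer, output weights
    solved in closed form via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                 # fixed random features
    beta = np.linalg.pinv(H) @ y_onehot    # only these weights are "trained"
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy stand-in for DenseNet features: two separable classes in 32-D.
X = rng.normal(size=(200, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]

W, b, beta = elm_fit(X[:150], Y[:150])
acc = (elm_predict(X[150:], W, b, beta) == y[150:]).mean()
print(f"test accuracy: {acc:.2f}")
```

Because only the output weights are solved, training takes one pseudoinverse instead of backpropagation, which is why such heads suit small medical datasets.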

13.
Int J Neural Syst ; 32(3): 2250001, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34931938

ABSTRACT

Implantable high-density multichannel neural recording microsystems provide simultaneous recording of brain activities. Wireless transmission of the entire recorded data stream causes high bandwidth usage, which is not tolerable for implantable applications. As a result, a hardware-friendly compression module is required to reduce the amount of data before transmission. This paper presents a novel compression approach that utilizes a spike extractor and a vector quantization (VQ)-based spike compressor. In this approach, extracted spikes are vector quantized using an unsupervised learning process, providing a high spike compression ratio (CR) of 10-80. Combining spike extraction and compression yields a significant data reduction while preserving the spike waveshapes. The compression performance of the proposed approach was evaluated under various conditions. We also developed new architectures so that the hardware blocks of our approach can be implemented more efficiently. The compression module was implemented in a 180-nm standard CMOS process, achieving an SNDR of 14.49 dB and a classification accuracy (CA) of 99.62% at a CR of 20, while consuming 4 µW of power and 0.16 mm² of chip area per channel.
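Vector quantization of extracted spikes can be sketched with plain k-means: each waveform is transmitted as a single codebook index, and the receiver reconstructs it from a shared codebook. The toy spike shapes, codebook size, and SNR measure below are illustrative, not the paper's hardware pipeline:

```python
import numpy as np

rng = np.random.default_rng(9)

def kmeans(X, k, iters=20):
    """Plain k-means, used here to learn a VQ codebook of spike shapes."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
    return C, lab

# Toy spikes: 300 waveforms of 48 samples, two underlying shapes plus noise.
t = np.linspace(0, 1, 48)
shapes = np.stack([np.sin(2 * np.pi * t), -np.exp(-((t - 0.3) ** 2) / 0.01)])
spikes = shapes[rng.integers(0, 2, 300)] + 0.05 * rng.normal(size=(300, 48))

codebook, labels = kmeans(spikes, k=8)
# Each 48-sample spike is sent as one index, so the CR here is 48 per spike
# (before counting the one-off codebook transfer; the paper reports 10-80).
recon = codebook[labels]
snr = 10 * np.log10((spikes ** 2).sum() / ((spikes - recon) ** 2).sum())
print(f"reconstruction SNR: {snr:.1f} dB")
```

The unsupervised codebook learning is what makes this hardware-friendly: classification and reconstruction both reduce to a nearest-codeword lookup.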


Subject(s)
Data Compression , Computer-Assisted Signal Processing , Action Potentials , Algorithms , Data Compression/methods
14.
Biology (Basel) ; 11(1)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35053131

ABSTRACT

As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-aided diagnosis (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which could otherwise take weeks if only radiologists were involved. Some of these CAD systems require pectoral muscle segmentation to separate the breast region from the pectoral muscle for specific analysis tasks. Therefore, accurate and efficient breast pectoral muscle segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, code-named PeMNet, for pectoral muscle segmentation in mammography images. In the proposed PeMNet, we integrate a novel attention module called the Global Channel Attention Module (GCAM), which can effectively improve the segmentation performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after parallel global average pooling and global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs of the next feature level. By iteratively repeating this procedure, the global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from the early stages of a deep convolutional network are effectively passed on to later stages, leading to better information usage. Experiments on a dataset merged from INbreast and OPTIMAM showed that PeMNet greatly outperformed state-of-the-art methods, achieving an IoU of 97.46%, a global pixel accuracy of 99.48%, a Dice similarity coefficient of 96.30%, and a Jaccard index of 93.33%.
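The GCAM described above belongs to the SE/CBAM family of channel-attention modules: pool each channel globally (average and max), pass the pooled vectors through a shared MLP, and rescale the channels with a sigmoid gate. This sketch uses untrained random MLP weights and is a generic channel-attention block, not the exact GCAM:

```python
import numpy as np

rng = np.random.default_rng(4)

def channel_attention(fmaps, r=4):
    """Generic channel attention: global average and max pooling per
    channel, a shared 2-layer MLP, then per-channel sigmoid rescaling."""
    C = fmaps.shape[0]
    avg = fmaps.mean(axis=(1, 2))
    mx = fmaps.max(axis=(1, 2))
    # Shared MLP with hypothetical random (untrained) weights; r is the
    # bottleneck reduction ratio.
    W1 = rng.normal(size=(C, C // r))
    W2 = rng.normal(size=(C // r, C))
    mlp = lambda v: np.maximum(v @ W1, 0) @ W2
    gate = 1 / (1 + np.exp(-(mlp(avg) + mlp(mx))))   # sigmoid, in (0, 1)
    return fmaps * gate[:, None, None]

fmaps = rng.normal(size=(8, 16, 16))    # C x H x W feature maps
out = channel_attention(fmaps)
print(out.shape)
```

GCAM's distinguishing step, per the abstract, is propagating these gates across feature levels by iterated elementwise multiplication rather than applying them within a single stage.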

15.
Comput Biol Med ; 149: 106053, 2022 10.
Article in English | MEDLINE | ID: mdl-36108415

ABSTRACT

Epilepsy is a brain disorder characterized by frequent seizures. Symptoms of a seizure include confusion, abnormal staring, and rapid, sudden, uncontrollable hand movements. Epileptic seizure detection methods involve neurological exams, blood tests, neuropsychological tests, and neuroimaging modalities. Among these, neuroimaging modalities have received considerable attention from specialist physicians. One way to facilitate accurate and fast diagnosis of epileptic seizures is to employ computer-aided diagnosis systems (CADS) based on deep learning (DL) and neuroimaging modalities. This paper presents a comprehensive overview of DL methods employed for epileptic seizure detection and prediction using neuroimaging modalities. First, DL-based CADS for epileptic seizure detection and prediction are discussed, including descriptions of the various datasets, preprocessing algorithms, and DL models that have been used. Research on rehabilitation tools is then presented, covering brain-computer interfaces (BCI), cloud computing, the internet of things (IoT), hardware implementation of DL techniques on field-programmable gate arrays (FPGAs), and more. In the discussion section, research on epileptic seizure detection is compared with research on prediction. The challenges of detection and prediction using neuroimaging modalities and DL models are described, and possible directions for future work, specifically for addressing challenges in datasets, DL, rehabilitation, and hardware models, are proposed. The final section summarizes the significant findings of the paper.


Subject(s)
Deep Learning , Epilepsy , Algorithms , Electroencephalography/methods , Epilepsy/diagnostic imaging , Humans , Neuroimaging , Seizures/diagnostic imaging
16.
J Imaging ; 7(4)2021 Apr 20.
Article in English | MEDLINE | ID: mdl-34460524

ABSTRACT

Over recent years, deep learning (DL) has established itself as a powerful tool across a broad spectrum of imaging domains [...].

17.
Front Cell Dev Biol ; 9: 813996, 2021.
Article in English | MEDLINE | ID: mdl-35047515

ABSTRACT

Aims: Most blood diseases, such as chronic anemia, leukemia (commonly known as blood cancer), and hematopoietic dysfunction, are caused by environmental pollution, substandard decoration materials, radiation exposure, or the long-term use of certain drugs. It is therefore imperative to classify blood cell images. Most cell classification is based on manual features, machine learning classifiers, or deep convolutional neural network models. However, manual feature extraction is a very tedious process, and the results are usually unsatisfactory. On the other hand, deep convolutional neural networks are usually composed of many layers, each with many parameters, so each network needs a long time to produce results. Another problem is that medical datasets are relatively small, which may lead to overfitting. Methods: To address these problems, we propose seven models for the automatic classification of blood cells: BCARENet, BCR5RENet, BCMV2RENet, BCRRNet, BCRENet, BCRSNet, and BCNet, of which BCNet is the best. The backbone model in our method is ResNet-18, pre-trained on the ImageNet set. To improve performance, we replace the last four layers of the transferred ResNet-18 model with three randomized neural networks (RNNs): RVFL, ELM, and SNN. The final outputs of our BCNet are generated by an ensemble of the predictions from the three randomized neural networks via majority voting. We use four multi-classification indexes to evaluate our model. Results: The accuracy, average precision, average F1-score, and average recall are 96.78%, 97.07%, 96.78%, and 96.77%, respectively. Conclusion: We compare our model with state-of-the-art methods; the results of the proposed BCNet model are much better than those of the other state-of-the-art methods.

18.
Complex Intell Systems ; 7(3): 1295-1310, 2021.
Article in English | MEDLINE | ID: mdl-34804768

ABSTRACT

Ductal carcinoma in situ (DCIS) is a pre-cancerous lesion in the ducts of the breast, and early diagnosis is crucial for optimal therapeutic intervention. Thermography is a non-invasive imaging tool that can be utilized for the detection of DCIS; although it has high accuracy (~88%), its sensitivity can still be improved. We therefore aimed to develop an automated artificial-intelligence-based system for improved detection of DCIS in thermographs. This study proposed a novel system based on a convolutional neural network (CNN), termed CNN-BDER, using a multisource dataset containing 240 DCIS images and 240 healthy breast images. Batch normalization, dropout, exponential linear units, and rank-based weighted pooling were integrated into the CNN, along with L-way data augmentation. Ten runs of tenfold cross-validation were used to report unbiased performance. Our proposed method achieved a sensitivity of 94.08 ± 1.22%, a specificity of 93.58 ± 1.49%, and an accuracy of 93.83 ± 0.96%. The proposed method outperforms eight state-of-the-art approaches and manual diagnosis. The trained model could serve as a visual question answering system and improve diagnostic accuracy.

19.
Cancers (Basel) ; 13(19)2021 Oct 06.
Article in English | MEDLINE | ID: mdl-34638493

ABSTRACT

Predicting functional outcomes after surgery and early adjuvant treatment is difficult due to the complex, extended, interlocking brain networks that underpin cognition. The aim of this study was to test glioma functional interactions with the rest of the brain, thereby identifying the risk factors of cognitive recovery or deterioration. Seventeen patients with diffuse non-enhancing glioma (aged 22-56 years) were longitudinally MRI scanned and cognitively assessed before and after surgery and during a 12-month recovery period (55 MRI scans in total after exclusions). We initially found, and then replicated in an independent dataset, that the spatial correlation pattern between regional and global BOLD signals (also known as global signal topography) was associated with tumour occurrence. We then estimated the coupling between the BOLD signal from within the tumour and the signal extracted from different brain tissues. We observed that the normative global signal topography is reorganised in glioma patients during the recovery period. Moreover, we found that the BOLD signal within the tumour and lesioned brain was coupled with the global signal and that this coupling was associated with cognitive recovery. Nevertheless, patients did not show any apparent disruption of functional connectivity within canonical functional networks. Understanding how tumour infiltration and coupling are related to patients' recovery represents a major step forward in prognostic development.

20.
Front Neuroinform ; 15: 777977, 2021.
Article in English | MEDLINE | ID: mdl-34899226

ABSTRACT

Schizophrenia (SZ) is a mental disorder in which, owing to the secretion of specific chemicals in the brain, the function of some brain regions becomes imbalanced, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis via electroencephalography (EEG) signals, and compares the results with those of conventional intelligent methods. The dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, was used. First, EEG signals were divided into 25 s time frames and then normalized by z-score or L2 norm. In the classification step, two different approaches were considered for SZ diagnosis. EEG signals were first classified with conventional machine learning methods, e.g., support vector machine, k-nearest neighbors, decision tree, naïve Bayes, random forest, extremely randomized trees, and bagging. Various proposed DL models, namely long short-term memory networks (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs, were then implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture performed best, using the ReLU activation function with combined z-score and L2 normalization. The proposed CNN-LSTM model achieved an accuracy of 99.25%, better than the results of most previous studies in this field. All simulations used k-fold cross-validation with k = 5.
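The preprocessing described above, 25 s frames followed by per-frame z-score normalization, is straightforward to sketch. The sampling rate and channel count below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(11)

fs = 250                                  # assumed sampling rate (Hz)
eeg = rng.normal(size=(19, fs * 60))      # 19 channels, 60 s of toy EEG

def window_and_zscore(sig, fs, win_s=25):
    """Split a multichannel record into win_s-second frames and z-score
    each frame per channel, mirroring the preprocessing described above."""
    n = sig.shape[1] // (fs * win_s)
    frames = sig[:, :n * fs * win_s].reshape(sig.shape[0], n, fs * win_s)
    frames = frames.transpose(1, 0, 2)    # (frame, channel, samples)
    mu = frames.mean(axis=-1, keepdims=True)
    sd = frames.std(axis=-1, keepdims=True)
    return (frames - mu) / sd

frames = window_and_zscore(eeg, fs)
print(frames.shape)
```

Normalizing per frame (rather than over the whole recording) keeps slow amplitude drift from dominating the features handed to the classifiers.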
