ABSTRACT
As the brain ages, it almost invariably accumulates vascular pathology, which differentially affects the cerebral white matter. A rich body of research has investigated the link between vascular risk factors and the brain, but a less studied question is which of the various modifiable vascular risk factors is most debilitating for white matter health. A white matter-specific brain age was developed to evaluate overall white matter health from diffusion-weighted imaging, using a three-dimensional convolutional neural network deep learning model in both cross-sectional UK Biobank participants (n = 37,327) and a longitudinal subset (n = 1,409). The white matter brain age gap (WMBAG) was the difference between the white matter age and the chronological age. Participants with one, two, and three or more vascular risk factors, compared to those without any, showed an elevated WMBAG of 0.54, 1.23, and 1.94 years, respectively. Diabetes was most strongly associated with an increased WMBAG (1.39 years, p < 0.001) among all risk factors, followed by hypertension (0.87 years, p < 0.001) and smoking (0.69 years, p < 0.001). Baseline WMBAG was significantly associated with processing speed, executive function, and global cognition. Significant associations of diabetes and hypertension with poor processing speed and executive function were found to be mediated through the WMBAG. A white matter-specific brain age can thus be targeted for examining the most relevant risk factors and cognition, and for tracking an individual's cerebrovascular ageing process. It also provides a clinical basis for better management of specific risk factors.
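The gap statistic itself is simple once a model has produced an age estimate. A minimal sketch (not the authors' code; `predicted_age` stands in for the output of the paper's 3D CNN):

```python
# Sketch: computing the white matter brain age gap (WMBAG) from a model's
# predicted "white matter age" and the participant's chronological age.

def wmbag(predicted_age, chronological_age):
    """WMBAG = predicted white matter age - chronological age (years)."""
    return predicted_age - chronological_age

def mean_gap_by_group(records):
    """Average WMBAG per number of vascular risk factors.

    records: iterable of (n_risk_factors, predicted_age, chronological_age).
    """
    sums, counts = {}, {}
    for n, pred, chron in records:
        sums[n] = sums.get(n, 0.0) + wmbag(pred, chron)
        counts[n] = counts.get(n, 0) + 1
    return {n: sums[n] / counts[n] for n in sums}
```

Grouping gaps by risk-factor count, as above, is how elevated averages like 0.54, 1.23, and 1.94 years would be tabulated.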
ABSTRACT
In recent decades, many governmental and nongovernmental organizations have used lie detection for various purposes, including verifying the honesty of criminal confessions. Such assessments are typically performed with a polygraph machine, but the polygraph has limitations and is not sufficiently reliable. This study introduces a new model for detecting lies using electroencephalogram (EEG) signals. An EEG database of 20 study participants was created to accomplish this goal. The study also used a six-layer graph convolutional network and type-2 fuzzy (TF-2) sets for feature selection/extraction and automatic classification. The classification results show that the proposed deep model effectively distinguishes between truths and lies: even in a noisy environment (SNR = 0 dB), classification accuracy remains above 90%. The proposed strategy outperforms existing research and algorithms, and its superior performance makes it suitable for a wide range of practical applications.
Subject(s)
Algorithms, Electroencephalography, Fuzzy Logic, Neural Networks, Computer, Electroencephalography/methods, Humans, Lie Detection, Signal Processing, Computer-Assisted, Male, Female, Adult, Young Adult
ABSTRACT
Anomaly detection tasks involving time-series signal processing have been important research topics for decades. In many real-world anomaly detection applications, no specific distribution fits the data, and the characteristics of anomalies differ. Under these circumstances, the detection algorithm requires an excellent ability to learn the data's features. Transformers, which apply the self-attention mechanism, have shown outstanding performance in modelling long-range dependencies. Although Transformer-based models have good prediction performance, they may be influenced by noise and ignore unusual details that are significant for anomaly detection. In this paper, a novel temporal context fusion framework, the Temporal Context Fusion Transformer (TCF-Trans), is proposed for anomaly detection tasks with applications to time series. The original feature-transmitting structure in the decoder of Informer is replaced with the proposed feature fusion decoder to fully utilise the features extracted from shallow and deep decoder layers. This strategy prevents the decoder from missing unusual anomaly details while maintaining robustness to noise in the data. In addition, we propose a temporal context fusion module to adaptively fuse the generated auxiliary predictions. Extensive experiments on public and collected transportation datasets validate that the proposed framework is effective for anomaly detection in time series. Additionally, an ablation study and a series of parameter sensitivity experiments show that the proposed method maintains high performance under various experimental settings.
ABSTRACT
Wearable exoskeleton robots have become a promising technology for supporting human motion in multiple tasks. Real-time activity recognition provides useful information to enhance the robot's control assistance for daily tasks. This work implements a real-time activity recognition system based on the activity signals of an inertial measurement unit (IMU) and a pair of rotary encoders integrated into the exoskeleton robot. Five deep learning models were trained and evaluated for activity recognition, and a subset of optimized models was transferred to an edge device for real-time evaluation in a continuous-action environment using eight common human tasks: stand, bend, crouch, walk, sit-down, sit-up, ascend stairs, and descend stairs. These eight activities of the robot's wearer are recognized with an average accuracy of 97.35% in real-time tests, with an inference time under 10 ms and an overall latency of 0.506 s per recognition on the selected edge device.
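In continuous real-time recognition, raw per-window predictions are often smoothed to suppress spurious label flips between activities. This is a hypothetical post-processing sketch, not described in the abstract; the labels and window size are illustrative:

```python
# Majority-vote smoothing over a sliding window of recent activity labels.
from collections import Counter, deque

def smooth_predictions(labels, vote_window=5):
    """Replace each prediction with the majority label of recent windows."""
    recent = deque(maxlen=vote_window)
    smoothed = []
    for lab in labels:
        recent.append(lab)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed
```

The vote window trades responsiveness (latency) against stability, the same trade-off reflected in the reported 0.506 s overall latency.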
Subject(s)
Deep Learning, Exoskeleton Device, Robotics, Wearable Electronic Devices, Humans, Human Activities
ABSTRACT
Mass spectrometry (MS)-based quantitative proteomics experiments typically assay a subset of up to 60% of the ≈20,000 human protein-coding genes. Computational methods for imputing the missing values using RNA expression data usually allow imputation only for proteins measured in at least some of the samples; in silico methods for comprehensively estimating abundances across all proteins are still missing. Here, a novel method is proposed that uses deep learning to extrapolate the observed protein expression values in label-free MS experiments to all proteins, leveraging gene functional annotations and RNA measurements as key predictive attributes. The method is tested on four datasets, including human cell lines and human and mouse tissues, and predicts protein expression values with average R2 scores between 0.46 and 0.54, significantly better than predictions based on correlations using the RNA expression data alone. Moreover, it is demonstrated that the derived models can be "transferred" across experiments and species; for instance, the model derived from human tissues gave an R2 of 0.51 when applied to mouse tissue data. It is concluded that protein abundances generated in label-free MS experiments can be computationally predicted using functionally annotated attributes and can be used to highlight aberrant protein abundance values.
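The R2 scores quoted above are the standard coefficient of determination. A sketch of the evaluation metric only, not the authors' model:

```python
# Coefficient of determination: R^2 = 1 - SS_res / SS_tot, used to score
# predicted vs. observed protein abundances.

def r_squared(observed, predicted):
    """R^2 of `predicted` against `observed` (1.0 = perfect fit)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

Note that R2 can be negative when predictions are worse than simply predicting the mean, which is why values of 0.46-0.54 represent a substantive gain over the correlation baseline.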
Subject(s)
Deep Learning, Animals, Mass Spectrometry, Mice, Molecular Sequence Annotation, Proteins, Proteomics
ABSTRACT
BACKGROUND: Nucleosomes wrap the DNA in the nucleus of the eukaryotic cell and regulate its transcription phase. Several studies indicate that nucleosome positioning is determined by the combined effects of several factors, including DNA sequence organization. Interestingly, the identification of nucleosomes on a genomic scale has been successfully performed by computational methods using the DNA sequence as input data. RESULTS: In this work, we propose CORENup, a deep learning model for nucleosome identification. CORENup processes a DNA sequence as input using a one-hot representation and combines, in parallel, a fully convolutional neural network and a recurrent layer. These two parallel branches are devoted to capturing both non-periodic and periodic DNA string features, and a dense layer combines them to give the final classification. CONCLUSIONS: Results computed on public datasets of different organisms show that CORENup is a state-of-the-art methodology for nucleosome positioning identification based on a deep neural network architecture. The comparisons were carried out using two groups of datasets, currently adopted by the best performing methods, and CORENup showed top performance both in terms of classification metrics and elapsed computation time.
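The one-hot input encoding mentioned above is a standard construction; a minimal sketch (the dictionary layout is one common convention, not necessarily CORENup's exact ordering):

```python
# One-hot encoding of a DNA sequence: each base A, C, G, T becomes a
# 4-dimensional indicator vector, yielding a (length x 4) matrix.

ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
           "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def one_hot_encode(seq):
    """Encode a DNA string as a list of 4-dim one-hot vectors."""
    return [ONE_HOT[base] for base in seq.upper()]
```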
Subject(s)
Genomics/methods, Neural Networks, Computer, Nucleosomes/metabolism, Humans
ABSTRACT
Detection of abnormalities in wireless capsule endoscopy (WCE) images is a challenging task. Typically, these images suffer from low contrast, complex backgrounds, and variations in lesion shape and color, which affect the accuracy of their segmentation and subsequent classification. This research proposes an automated system for the detection and classification of ulcers in WCE images based on state-of-the-art deep learning networks. Deep learning techniques, and in particular convolutional neural networks (CNNs), have recently become popular in the analysis and recognition of medical images. The medical image datasets used in this study were obtained from WCE video frames. Two milestone CNN architectures, AlexNet and GoogLeNet, are extensively evaluated for classifying objects as ulcer or non-ulcer. Furthermore, we examine and analyze the images identified as containing ulcer objects to evaluate the efficiency of the utilized CNNs. Extensive experiments show that CNNs deliver superior performance, surpassing traditional machine learning methods by large margins, which supports their effectiveness as automated diagnosis tools.
Subject(s)
Capsule Endoscopy/methods, Neural Networks, Computer, Ulcer/diagnostic imaging, Deep Learning, Humans, Image Interpretation, Computer-Assisted, Image Processing, Computer-Assisted, Machine Learning
ABSTRACT
BACKGROUND: Nucleosomes are DNA-histone complexes, each wrapping about 150 base pairs of double-stranded DNA. Their function is fundamental to one of the primary roles of chromatin, i.e., packing the DNA into the nucleus of eukaryotic cells. Several biological studies have shown that nucleosome positioning influences the regulation of cell type-specific gene activities. Moreover, computational studies have shown evidence of sequence specificity in the DNA fragments wrapped into nucleosomes, clearly underlined by the organization of particular DNA substrings. As a consequence, the identification of nucleosomes on a genomic scale has been successfully performed by computational methods using a sequence-feature representation. RESULTS: In this work, we propose a deep learning model for nucleosome identification. Our model stacks convolutional layers and Long Short-Term Memory (LSTM) layers to automatically extract features from short- and long-range dependencies in a sequence. Using this model, we are able to avoid the feature extraction and selection steps while improving the classification performance. CONCLUSIONS: Results computed on eleven datasets from five different organisms, from yeast to human, show the superiority of the proposed method with respect to the state of the art recently presented in the literature.
Subject(s)
Deep Learning, Nucleosomes/metabolism, Animals, Base Sequence, Databases, Nucleic Acid, Humans, Neural Networks, Computer, ROC Curve, Reproducibility of Results, Saccharomyces cerevisiae/genetics
ABSTRACT
Cardiovascular diseases remain one of the main threats to human health, significantly affecting quality of life and life expectancy, so effective and prompt recognition of these diseases is crucial. This research aims to develop an effective novel hybrid method for automatically detecting dangerous arrhythmias from cardiac patients' short electrocardiogram (ECG) fragments. This study suggests using a continuous wavelet transform (CWT) to convert ECG signals into images (scalograms) and examines the task of categorizing short 2-s segments of ECG signals into four groups of dangerous shockable arrhythmias: ventricular flutter (C1), ventricular fibrillation (C2), ventricular tachycardia torsade de pointes (C3), and high-rate ventricular tachycardia (C4). We propose a novel hybrid neural network with a deep learning architecture to classify these dangerous arrhythmias. This work utilizes actual ECG data obtained from the PhysioNet database, alongside artificial ECG data produced by the Synthetic Minority Over-sampling Technique (SMOTE) to address the imbalanced class distribution and obtain an accurately trained model. Experimental results demonstrate that the proposed approach achieves an accuracy, sensitivity, specificity, precision, and F1-score of 97.75%, 97.75%, 99.25%, 97.75%, and 97.75%, respectively, in classifying all four shockable classes of arrhythmias, and is superior to traditional methods. Our work possesses significant clinical value in real-life scenarios, since it has the potential to substantially enhance the diagnosis and treatment of life-threatening arrhythmias in individuals with cardiac disease. Furthermore, our model has also demonstrated adaptability and generality on two other datasets.
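The core idea of SMOTE, used above for class balancing, is to synthesize minority-class samples by interpolating between a sample and one of its minority-class neighbours. A deliberately simplified single-sample sketch (not the canonical implementation; parameters and the seeded RNG are illustrative):

```python
# Simplified SMOTE step: pick a minority sample, find its k nearest
# minority neighbours, and interpolate a synthetic point toward one of them.
import random

def smote_sample(minority, k=3, rng=None):
    """Generate one synthetic sample from a list of feature vectors."""
    rng = rng or random.Random(0)
    base = rng.choice(minority)
    # k nearest neighbours by squared Euclidean distance (excluding base)
    neighbours = sorted(
        (x for x in minority if x is not base),
        key=lambda x: sum((a - b) ** 2 for a, b in zip(x, base)),
    )[:k]
    neigh = rng.choice(neighbours)
    gap = rng.random()  # interpolation factor in [0, 1)
    return [a + gap * (b - a) for a, b in zip(base, neigh)]
```

Because each synthetic point is a convex combination of two real minority samples, it always lies inside the minority class's feature envelope.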
ABSTRACT
Leukemia is a malignant disease that specifically affects blood cells, leading to life-threatening infections and premature mortality. State-of-the-art machine-enabled technologies and sophisticated deep learning algorithms can assist clinicians in early-stage disease diagnosis. This study introduces an advanced end-to-end approach for the automated diagnosis of the acute leukemia classes acute lymphocytic leukemia (ALL) and acute myeloid leukemia (AML). The study gathered a complete database of 44 patients, comprising 670 ALL and AML images. The proposed deep model's architecture consisted of a fusion of graph theory and a convolutional neural network (CNN), with six graph convolutional layers and a softmax layer. The model achieved a classification accuracy of 99% and a kappa coefficient of 0.85 for the ALL and AML classes. The suggested model was also assessed in noisy conditions and demonstrated strong resilience: its accuracy remained above 90% even at a signal-to-noise ratio (SNR) of 0 dB. The proposed approach was evaluated against contemporary methodologies and research, demonstrating encouraging outcomes. Accordingly, the suggested deep model can serve as a tool for clinicians to identify specific forms of acute leukemia.
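The kappa coefficient reported alongside accuracy corrects for chance agreement, which matters when classes (ALL vs. AML) are imbalanced. A sketch of the metric only, not the authors' network:

```python
# Cohen's kappa from true and predicted labels:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
from collections import Counter

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    expected = sum(true_counts[c] * pred_counts.get(c, 0)
                   for c in true_counts) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

A kappa of 0.85 with 99% accuracy is consistent with strong but not chance-free agreement on a skewed class distribution.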
ABSTRACT
Nowadays, virtual learning environments have become widespread as a way to avoid time and space constraints and to share high-quality learning resources, and student behaviors are recorded instantly through human-computer interaction. This work aims to design an educational recommendation system based on an individual's interests in educational resources, evaluated through the user's clicks on and downloads of resources, so that appropriate resources can be suggested. In online tutorials, beyond the problem of choosing the right resource, we face the challenge of accounting for the diversity of users' preferences and tastes, especially their short-term interests at the beginning of a session. We assume that a user's interests consist of two parts: (1) long-term interests, the user's constant interests based on the history of their dynamic activities, and (2) short-term interests, which reflect the user's current interests. Thanks to BiLSTM networks and their gradual learning capability, the proposed model supports learners' behavioral changes. An average accuracy of 0.9978 and a loss of 0.0051 offer more appropriate recommendations than similar works.
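The two-part user model above can be expressed, at its simplest, as a weighted blend of a long-term profile score and a short-term session score per item. This is an illustrative sketch with hypothetical names and weights, not the paper's BiLSTM model:

```python
# Blend long-term and short-term interest scores, then rank items.

def blend_scores(long_term, short_term, alpha=0.7):
    """score(item) = alpha * long-term + (1 - alpha) * short-term."""
    items = set(long_term) | set(short_term)
    return {i: alpha * long_term.get(i, 0.0)
               + (1 - alpha) * short_term.get(i, 0.0) for i in items}

def recommend(long_term, short_term, top_n=2, alpha=0.7):
    scores = blend_scores(long_term, short_term, alpha)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Lowering `alpha` shifts recommendations toward the current session, mirroring the short-term/long-term trade-off the abstract describes.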
ABSTRACT
Emotion is an intricate cognitive state that, when identified, can serve as a crucial component of a brain-computer interface. This study examines the identification of two categories of emotion, positive and negative, through the development and implementation of a dry-electrode electroencephalogram (EEG). To achieve this objective, a dry EEG electrode is created using the silver-copper sintering technique and assessed through Scanning Electron Microscope (SEM) and Energy Dispersive X-ray Analysis (EDXA) evaluations. Subsequently, a database is generated using the designed electrode, based on musical stimuli. The collected data are fed into an improved deep network for automatic feature selection/extraction and classification, with an architecture that combines type-2 fuzzy sets (FT2) and deep convolutional graph networks. The fabricated electrode demonstrated superior performance, efficiency, and affordability compared to the other electrodes (both wet and dry) in this study, and it remained robust in noisy environments across a diverse range of signal-to-noise ratios (SNRs). Moreover, the proposed model achieved a classification accuracy of 99% for distinguishing between positive and negative emotions, an improvement of approximately 2% over previous studies. The manufactured dry EEG electrode is also very economical in terms of manufacturing costs compared to recent studies. The proposed deep network, combined with the fabricated dry EEG electrode, can be used in real-time applications for long-term recordings that do not require gel.
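Robustness evaluations like the one above rest on corrupting a clean signal at a controlled SNR. A sketch of that standard procedure, not the authors' pipeline (the seeded RNG is illustrative):

```python
# Add Gaussian noise scaled so the corrupted signal has a target SNR (dB),
# then verify the achieved SNR from the clean/noisy pair.
import math
import random

def add_noise_at_snr(signal, snr_db_target, rng=None):
    rng = rng or random.Random(42)
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db_target / 10))
    sigma = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sigma) for s in signal]

def snr_db(signal, noisy):
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum((n - s) ** 2 for s, n in zip(signal, noisy)) / len(signal)
    return 10 * math.log10(p_signal / p_noise)
```

SNR = 0 dB, the hardest condition cited in the abstract, means the noise power equals the signal power.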
ABSTRACT
BACKGROUND: The Prostate Imaging Reporting and Data System (PI-RADS) is an established reporting scheme for multiparametric magnetic resonance imaging (mpMRI) to distinguish clinically significant prostate cancer (csPCa). Deep learning (DL) holds great potential for automating csPCa classification on mpMRI. METHOD: To compare the performance of a DL algorithm against PI-RADS categorization in PCa detection and csPCa classification, we included 1,729 consecutive patients who underwent radical prostatectomy or biopsy at Tongji Hospital. We developed DL models by integrating individual mpMRI sequences and employing an ensemble approach for distinguishing between csPCa and CiSPCa (specifically defined as PCa of Gleason group 1 or benign prostate disease); the training cohort comprised 1,285 patients and the external testing cohort 315 patients. RESULTS: DL-based models exhibited higher csPCa detection rates than PI-RADS categorization (area under the curve [AUC]: 0.902; sensitivity: 0.728; specificity: 0.906 vs. AUC: 0.759; sensitivity: 0.761; specificity: 0.756) (P < 0.001). Notably, DL networks showed particular strength in the subgroup with prostate-specific antigen (PSA) < 10 ng/ml compared with PI-RADS assessment (AUC: 0.788; sensitivity: 0.588; specificity: 0.883 vs. AUC: 0.618; sensitivity: 0.379; specificity: 0.763) (P = 0.041). CONCLUSIONS: We developed DL-based mpMRI ensemble models for csPCa classification with improved sensitivity, specificity, and accuracy compared with clinical PI-RADS assessment. In the PSA-stratified analysis, the DL ensemble model outperformed PI-RADS in csPCa detection in both the high- and low-PSA groups.
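The headline AUC values above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A sketch of the metric only, not the study's model:

```python
# AUC via the rank statistic: fraction of positive/negative pairs in which
# the positive case receives the higher score (ties count half).

def auc(labels, scores):
    """labels: 1 = csPCa, 0 = not; scores: model outputs."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.902 therefore means the DL ensemble ranks a csPCa case above a non-csPCa case about 90% of the time.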
Subject(s)
Deep Learning, Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Male, Humans, Prostatic Neoplasms/pathology, Prostate-Specific Antigen, Magnetic Resonance Imaging/methods, Retrospective Studies, Image-Guided Biopsy/methods
ABSTRACT
Accurately segmenting the fetal head (FH) and performing biometry measurements, including head circumference (HC) estimation, is a vital requirement for assessing abnormal fetal growth during pregnancy, a task normally requiring experienced radiologists reading ultrasound (US) images. However, accurate segmentation and measurement are challenging due to image artifacts, incomplete ellipse fitting, and fluctuations in FH dimensions across trimesters. The process is also highly time-consuming, and the absence of specialized features leads to low segmentation accuracy. To address these challenges, we propose an automatic density regression approach that incorporates appearance and shape priors into a deep learning-based network model (DR-ASPnet) with robust ellipse fitting on fetal US images. Initially, we employed multiple pre-processing steps to remove unwanted distortions and variable fluctuations and to obtain a clear view of significant features in the US images. Augmentation operations were then applied to increase the diversity of the dataset. Next, we proposed the hierarchical density regression deep convolutional neural network (HDR-DCNN) model, which combines three network models to determine the complex location of the FH for accurate segmentation during training and testing. We then applied post-processing operations, using contrast enhancement filtering with a morphological operation model, to smooth the region and remove unnecessary artifacts from the segmentation results. After post-processing, we passed the smoothed segmentation result to the robust ellipse fitting-based least squares (REFLS) method for HC estimation. Experimentally, the DR-ASPnet model achieves a Dice similarity coefficient (DSC) of 98.86% for segmentation and an absolute distance (AD) of 1.67 mm for measurement, outperforming other state-of-the-art methods. Finally, we achieved a correlation coefficient (CC) of 0.99 between the measured and predicted HC values on the HC18 dataset.
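Once an ellipse has been fitted to the head contour, the circumference follows from its semi-axes. A sketch of this final step only, using Ramanujan's perimeter approximation as a stand-in; the abstract does not specify which perimeter formula REFLS-based HC estimation uses:

```python
# Ramanujan's approximation to the perimeter of an ellipse with semi-axes
# a and b: P ~ pi*(a+b) * (1 + 3h / (10 + sqrt(4 - 3h))), h = ((a-b)/(a+b))^2.
import math

def ellipse_circumference(a, b):
    """Approximate perimeter of an ellipse (same units as a and b)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circle (a = b) the formula reduces exactly to 2*pi*r, a quick sanity check.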
ABSTRACT
This study presents wrapper-based metaheuristic deep learning networks (WBM-DLNets) feature optimization algorithms for brain tumor diagnosis using magnetic resonance imaging. Herein, 16 pretrained deep learning networks are used to compute the features. Eight metaheuristic optimization algorithms, namely, the marine predator algorithm, atom search optimization algorithm (ASOA), Harris hawks optimization algorithm, butterfly optimization algorithm, whale optimization algorithm, grey wolf optimization algorithm (GWOA), bat algorithm, and firefly algorithm, are used to evaluate the classification performance using a support vector machine (SVM)-based cost function. A deep-learning network selection approach is applied to determine the best deep-learning network. Finally, all deep features of the best deep learning networks are concatenated to train the SVM model. The proposed WBM-DLNets approach is validated based on an available online dataset. The results reveal that the classification accuracy is significantly improved by utilizing the features selected using WBM-DLNets relative to those obtained using the full set of deep features. DenseNet-201-GWOA and EfficientNet-b0-ASOA yield the best results, with a classification accuracy of 95.7%. Additionally, the results of the WBM-DLNets approach are compared with those reported in the literature.
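The wrapper principle above, scoring candidate feature subsets with a classifier-based cost function, can be sketched with a simple greedy forward search. This is illustrative only: the paper uses metaheuristics (e.g., GWOA, ASOA) rather than greedy search, and `score_fn` stands in for the SVM cost function:

```python
# Greedy forward wrapper selection: repeatedly add the feature that most
# improves the subset score, stopping when no candidate improves it.

def greedy_wrapper_select(features, score_fn, max_features=None):
    selected, best_score = [], float("-inf")
    max_features = max_features or len(features)
    while len(selected) < max_features:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        trial_scores = {f: score_fn(selected + [f]) for f in candidates}
        best_f = max(trial_scores, key=trial_scores.get)
        if trial_scores[best_f] <= best_score:
            break  # no candidate improves the score
        selected.append(best_f)
        best_score = trial_scores[best_f]
    return selected, best_score
```

Metaheuristics explore the subset space globally instead of greedily, but the evaluation loop (subset in, classifier score out) is the same.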
ABSTRACT
We conducted a systematic review and meta-analysis of the diagnostic performance of current deep learning algorithms for the diagnosis of lung cancer. We searched major databases up to June 2022 to include studies that used artificial intelligence to diagnose lung cancer, using the histopathological analysis of true positive cases as a reference. The quality of the included studies was assessed independently by two authors based on the revised Quality Assessment of Diagnostic Accuracy Studies. Six studies were included in the analysis. The pooled sensitivity and specificity were 0.93 (95% CI 0.85-0.98) and 0.68 (95% CI 0.49-0.84), respectively. Despite the significantly high heterogeneity for sensitivity (I2 = 94%, p < 0.01) and specificity (I2 = 99%, p < 0.01), most of it was attributed to the threshold effect. The pooled SROC curve with a bivariate approach yielded an area under the curve (AUC) of 0.90 (95% CI 0.86-0.92). The diagnostic odds ratio (DOR) for the studies was 26.7 (95% CI 19.7-36.2), and heterogeneity was 3% (p = 0.40). In this systematic review and meta-analysis, we found that, using the summary point from the SROC, the pooled sensitivity and specificity of DL algorithms for the diagnosis of lung cancer were 93% and 68%, respectively.
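The DOR combines sensitivity and specificity into a single odds ratio. A sketch of the formula only; the review's pooled DOR of 26.7 comes from a bivariate meta-analytic model, so applying this point-estimate formula to the pooled sensitivity and specificity gives a similar but not identical value:

```python
# Diagnostic odds ratio from sensitivity and specificity:
# DOR = (sens / (1 - sens)) * (spec / (1 - spec)).

def diagnostic_odds_ratio(sensitivity, specificity):
    """Odds of a positive test in the diseased vs. the non-diseased."""
    return (sensitivity / (1 - sensitivity)) * (specificity / (1 - specificity))
```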
ABSTRACT
COVID-19, a deadly and highly contagious disease, has caused the deaths of millions of individuals around the world. Early detection can reduce transmission and the fatality rate. Many deep learning (DL)-based COVID-19 detection methods have been proposed, but most are trained on small, incomplete, noisy, or imbalanced datasets, often with few COVID-19 samples. This study tackles these concerns by introducing DL-based solutions for COVID-19 diagnosis using computed tomography (CT) images and 12 cutting-edge DL pre-trained models with acceptable Top-1 accuracy. All models are trained on 9,000 COVID-19 samples and 5,000 normal images, more COVID-19 images than used in most studies. In addition, while most of the research used X-ray images for training, this study used CT images, since CT scans capture arteries, bones, and soft tissues more effectively than X-rays. The proposed techniques were evaluated, and the results show that NASNetLarge produced the best classification accuracy, followed by InceptionResNetV2 and DenseNet169: the three models achieved accuracies of 99.86%, 99.79%, and 99.71%, respectively. Moreover, DenseNet121 and VGG16 achieved the best sensitivity (99.94%), while InceptionV3 and InceptionResNetV2 achieved the best specificity (100%). The models were compared with those designed in three existing studies and produced better results. These results show that deep neural networks have potential for computer-assisted COVID-19 diagnosis. We hope this study will be valuable in improving the decisions and accuracy of medical practitioners when diagnosing COVID-19, and that it will assist future researchers in minimizing repeated analysis and identifying the ideal network for their tasks.
ABSTRACT
PURPOSES: Accurate and efficient spine registration is crucial to the success of spine image guidance. However, changes in spine pose cause intervertebral motion that can lead to significant registration errors. In this study, we develop a geometrical rectification technique via nonlinear principal component analysis (NLPCA) to achieve level-wise vertebral registration that is robust to large changes in spine pose. METHODS: We used explanted porcine spines and live pigs to develop and test our technique. Each sample was scanned with preoperative CT (pCT) in an initial pose and rescanned with intraoperative stereovision (iSV) in a different surgical posture. Patient registration rectified arbitrary spinal postures in pCT and iSV into a common, neutral pose through a parameterized moving-frame approach. Topologically encoded depth-projection 2D images were then generated to establish invertible point-to-pixel correspondences. Level-wise point correspondences between pCT and iSV vertebral surfaces were generated via 2D image registration. Finally, closed-form vertebral level-wise rigid registration was obtained by directly mapping 3D surface point pairs. Implanted mini-screws were used as fiducial markers to measure registration accuracy. RESULTS: In seven explanted porcine spines and two live animal surgeries (maximum in-spine pose change of 87.5 mm and 32.7 degrees, averaged over all spines), average target registration errors (TRE) of 1.70 ± 0.15 mm and 1.85 ± 0.16 mm were achieved, respectively. The automated spine rectification took 3-5 min, followed by an additional 30 s for depth image projection and level-wise registration. CONCLUSIONS: The accuracy and efficiency of the proposed level-wise spine registration support its application in human open spine surgeries. The registration framework itself may also be applicable to other intraoperative imaging modalities, such as ultrasound and MRI, which may expand the utility of the approach in spine registration in general.
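The accuracy metric above, target registration error at the fiducial mini-screws, is simply the mean distance between corresponding registered and reference points. A sketch of the metric only; the registration itself is not reproduced here:

```python
# Target registration error (TRE): mean Euclidean distance between
# registered fiducial positions and their reference positions.
import math

def target_registration_error(registered, reference):
    """registered, reference: matched lists of 3D points (x, y, z)."""
    dists = [math.dist(p, q) for p, q in zip(registered, reference)]
    return sum(dists) / len(dists)
```

Values such as 1.70 ± 0.15 mm in the abstract are averages of exactly this kind of per-fiducial distance.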
Subject(s)
Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Spinal Diseases/diagnosis, Spine/diagnostic imaging, Surgery, Computer-Assisted/methods, Ultrasonography/methods, Animals, Disease Models, Animal, Fiducial Markers, Humans, Spinal Diseases/surgery, Spine/surgery, Swine
ABSTRACT
The purpose of this study was to test the usefulness of deep learning artificial neural networks and statistical modeling in predicting the strength of bone cements with defects. The defects are related to the introduction of admixtures, such as blood or saline, as contaminants into the cement at the preparation stage. Given the wide range of applications of deep learning, including speech recognition, bioinformatics, and drug design, we examined the extent to which information related to the prediction of the compressive strength of bone cements can be obtained. Development and improvement of deep learning network (DLN) algorithms and statistical modeling in the analysis of changes in the mechanical parameters of the tested materials will make it possible to determine an acceptable margin of error, during surgery or cement preparation, relative to the expected strength of the material used to fill bone cavities. The use of these computational methods may therefore play a significant role in the initial qualitative assessment of the outcomes of procedures and thus help mitigate errors that result in failure to maintain the required mechanical parameters and in patient dissatisfaction.
ABSTRACT
Artificial Intelligence has found many applications in the last decade due to increased computing power. Artificial Neural Networks are inspired by the brain's structure and consist of artificial neurons interconnected through artificial synapses in so-called Deep Neural Networks (DNNs). Training these systems requires huge amounts of data and, after the network is trained, it can recognize unforeseen data and provide useful information. As far as training is concerned, we can distinguish between supervised and unsupervised learning. The former requires labelled data and is based on iterative minimization of the output error using the stochastic gradient descent method, followed by recalculation of the strength of the synaptic connections (weights) with the backpropagation algorithm. Unsupervised learning, on the other hand, does not require data labeling and is not based on explicit output-error minimization. Conventional ANNs can work with supervised learning algorithms (perceptrons, multi-layer perceptrons, convolutional networks, etc.) but also with unsupervised learning rules (Kohonen networks, self-organizing maps, etc.). Besides these, another type of neural network is the so-called Spiking Neural Network (SNN), in which learning takes place through the superposition of voltage spikes launched by the neurons. Their behavior is much closer to the brain's functioning mechanisms, and they can be used with supervised and unsupervised learning rules. Since learning and inference are based on short voltage spikes, energy efficiency improves substantially. Up to now, all these ANNs (spiking and conventional) have been implemented as software tools running on conventional computing units based on the von Neumann architecture. However, this approach reaches important limits due to the required computing power, physical size, and energy consumption. This is particularly true for applications at the edge of the internet.
Thus, there is an increasing interest in developing AI tools directly implemented in hardware for this type of application. The first hardware demonstrations have been based on Complementary Metal-Oxide-Semiconductor (CMOS) circuits and specific communication protocols. However, to further increase training speed and energy efficiency while reducing system size, the combination of CMOS neuron circuits with memristor synapses is now being explored. It has also been pointed out that the short-term non-volatility of some memristors may even allow fabricating purely memristive ANNs. The memristor is a new device (first demonstrated in solid state in 2008) that behaves as a resistor with memory and has been shown to have potentiation and depression properties similar to those of biological synapses. In this Special Issue, we explore the state of the art of neuromorphic circuits implementing neural networks with memristors for AI applications.
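The potentiation/depression behaviour attributed to memristive synapses above can be abstracted in software as a bounded conductance raised or lowered by voltage pulses. An illustrative toy model only, not taken from any paper in the Special Issue; the class name, bounds, and step size are assumptions:

```python
# Toy memristive synapse: conductance g is potentiated (+) or depressed (-)
# in fixed steps by programming pulses and clamped to [g_min, g_max];
# a read at voltage V returns the Ohmic current I = g * V.

class MemristorSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.1):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def pulse(self, polarity):
        """polarity +1 potentiates (raises g), -1 depresses (lowers g)."""
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))
        return self.g

    def current(self, voltage):
        """Ohmic read: I = g * V."""
        return self.g * voltage
```

The clamping models conductance saturation; real devices show gradual, non-linear weight updates, which is one reason analog memristive training is an active research topic.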