Results 1 - 20 of 20
1.
Entropy (Basel) ; 26(5)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38785617

ABSTRACT

Learning in neural networks with locally tuned neuron models such as Radial Basis Function (RBF) networks is often seen as unstable, in particular when multi-layered architectures are used. Furthermore, universal approximation theorems for single-layered RBF networks are very well established; therefore, deeper architectures are theoretically not required. Consequently, RBFs are mostly used in a single-layered manner. However, deep neural networks have proven their effectiveness on many different tasks. In this paper, we show that deeper RBF architectures with multiple radial basis function layers can be designed, together with efficient learning schemes. We introduce an initialization scheme for deep RBF networks based on k-means clustering and covariance estimation. We further show how to use convolutions to speed up the calculation of the Mahalanobis distance in a partially connected way, similar to convolutional neural networks (CNNs). Finally, we evaluate our approach on image classification as well as speech emotion recognition tasks. Our results show that deep RBF networks perform very well, with results comparable to other deep neural network types, such as CNNs.
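
A minimal sketch of the described initialization, assuming Gaussian RBF units whose activations are computed from the squared Mahalanobis distance; the function names, the regularization term, and all hyperparameters are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def init_rbf_layer(X, n_units, reg=1e-3):
    """Initialize RBF centers and inverse covariances via k-means clustering."""
    km = KMeans(n_clusters=n_units, n_init=10).fit(X)
    centers, inv_covs = km.cluster_centers_, []
    for k in range(n_units):
        Xk = X[km.labels_ == k]
        cov = np.cov(Xk, rowvar=False) + reg * np.eye(X.shape[1])  # regularized covariance
        inv_covs.append(np.linalg.inv(cov))
    return centers, np.stack(inv_covs)

def rbf_forward(X, centers, inv_covs):
    """Gaussian RBF activations from squared Mahalanobis distances."""
    acts = []
    for c, P in zip(centers, inv_covs):
        d = X - c
        acts.append(np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, P, d)))
    return np.stack(acts, axis=1)  # shape (N, n_units)

X = np.random.randn(500, 8)
C, P = init_rbf_layer(X, n_units=10)
H = rbf_forward(X, C, P)  # these activations could feed a second RBF layer
```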

2.
PLoS One ; 18(11): e0293615, 2023.
Article in English | MEDLINE | ID: mdl-37930947

ABSTRACT

Breast ultrasound medical images often have low imaging quality along with unclear target boundaries. These issues make it challenging for physicians to accurately identify and outline tumors when diagnosing patients. Since precise segmentation is crucial for diagnosis, there is a strong need for an automated method to enhance segmentation accuracy, which can serve as a technical aid in diagnosis. Recently, the U-Net and its variants have shown great success in medical image segmentation. In this study, drawing inspiration from the U-Net concept, we propose a new variant of the U-Net architecture, called DBU-Net, for tumor segmentation in breast ultrasound images. To enhance the feature extraction capabilities of the encoder, we introduce a novel approach involving two distinct encoding paths. The first path uses the original image, while the second uses an image created with the Roberts edge filter, in which edges are highlighted. This dual-branch encoding strategy helps to extract semantically rich information through a mutually informative learning process. At each level of the encoder, both branches independently undergo two convolutional layers followed by a pooling layer. To facilitate cross-learning between the branches, a weighted addition scheme is implemented, whose weights are dynamically learned from the gradient with respect to the loss function. We evaluate the performance of our proposed DBU-Net model on two datasets, BUSI and UDIAT, and our experimental results demonstrate superior performance compared to state-of-the-art models.
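
As an illustration of the dual-branch input and the weighted addition described above, here is a hedged sketch (not the authors' code); the `weighted_fusion` helper and its softmax normalization are assumptions, and in training the weights w would be updated from the loss gradient:

```python
import numpy as np
from skimage.filters import roberts

def weighted_fusion(feat_a, feat_b, w):
    """Cross-learning fusion: softmax-normalized weights combine the two branches."""
    a, b = np.exp(w) / np.exp(w).sum()
    return a * feat_a + b * feat_b

image = np.random.rand(128, 128)   # stand-in for a breast ultrasound slice
edge_input = roberts(image)        # input to the second encoding path
w = np.array([0.0, 0.0])           # fusion weights, learned from the loss gradient
fused = weighted_fusion(image, edge_input, w)
```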


Subject(s)
Mammary Neoplasms, Animal , Mammary Ultrasonography , Humans , Female , Animals , Ultrasonography , Cognition , Learning , Image Processing, Computer-Assisted
3.
Sensors (Basel) ; 23(19)2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37837158

ABSTRACT

Cardiovascular diseases (CVDs) are a major global health concern, causing significant morbidity and mortality. AI's integration with healthcare offers promising solutions, with data-driven techniques, including ECG analysis, emerging as powerful tools. However, privacy concerns pose a major barrier to distributing healthcare data for addressing data-driven CVD classification. To address confidentiality issues related to sensitive health data distribution, we propose leveraging artificially synthesized data generation. Our contribution introduces a novel diffusion-based model coupled with a State Space Augmented Transformer. This synthesizes conditional 12-lead electrocardiograms based on the 12 multilabeled heart rhythm classes of the PTB-XL dataset, with each lead depicting the heart's electrical activity from different viewpoints. Recent advances establish diffusion models as groundbreaking generative tools, while the State Space Augmented Transformer captures long-term dependencies in time series data. The quality of generated samples was assessed using metrics like Dynamic Time Warping (DTW) and Maximum Mean Discrepancy (MMD). To evaluate authenticity, we assessed the similarity of performance of a pre-trained classifier on both generated and real ECG samples.
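
For context on the sample-quality check mentioned above, a compact numpy sketch of MMD with a Gaussian kernel follows; the kernel, its bandwidth, and the flattened-ECG stand-in data are assumptions rather than details from the paper:

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased estimate of MMD^2 between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

real = np.random.randn(64, 100)        # stand-ins for flattened ECG segments
generated = np.random.randn(64, 100)
print(mmd_rbf(real, generated))        # lower values indicate closer distributions
```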


Subject(s)
Algorithms , Cardiovascular Diseases , Humans , Electrocardiography/methods , Heart Rate
4.
Sensors (Basel) ; 23(15)2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37571564

ABSTRACT

Pulmonary tuberculosis (PTB) is a bacterial infection that affects the lungs, and it remains one of the infectious diseases with the highest global mortality. Chest radiography is a technique often employed in the diagnosis of PTB. Radiologists identify the severity and stage of PTB by inspecting radiographic features in the patient's chest X-ray (CXR). The most common radiographic features seen on CXRs include cavitation, consolidation, masses, pleural effusion, calcification, and nodules. Identifying these CXR features helps physicians diagnose a patient. However, identifying these radiographic features for intricate disorders is challenging, and the accuracy depends on the radiologist's experience and level of expertise. Researchers have therefore proposed deep learning (DL) techniques to detect and mark areas of tuberculosis infection in CXRs. DL models have been proposed in the literature because of their inherent capacity to detect diseases and segment the manifestation regions from medical images. However, fully supervised semantic segmentation requires many pixel-by-pixel labeled images, and the annotation of such a large amount of data by trained physicians poses several challenges. First, the annotation requires a significant amount of time. Second, the cost of hiring trained physicians is high. In addition, the subjectivity of medical data makes standardized annotation difficult. As a result, there is increasing interest in weak localization techniques. Therefore, in this review, we identify methods employed in the weakly supervised segmentation and localization of radiographic manifestations of pulmonary tuberculosis from chest X-rays. First, we identify the most commonly used public chest X-ray datasets for tuberculosis identification. Following that, we discuss the approaches for weakly localizing tuberculosis radiographic manifestations in chest X-rays. Weakly supervised localization of PTB can highlight the region of the chest X-ray image that contributed the most to the DL model's classification output and help pinpoint the diseased area. Finally, we discuss the limitations and challenges of weakly supervised techniques in localizing TB manifestation regions in chest X-ray images.
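
One standard mechanism behind such weak localization is the class activation map (CAM), which weights the last convolutional feature maps by the weights tied to the predicted class. The sketch below is a generic illustration of that idea, not a method from any specific reviewed paper; all shapes and names are hypothetical:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W); class_weights: (C,) tied to one output unit."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                                 # keep positive evidence
    return cam / (cam.max() + 1e-8)                          # normalize to [0, 1]

fmaps = np.random.rand(512, 7, 7)   # e.g., last conv block of a CXR classifier
w_tb = np.random.rand(512)          # weights feeding the "TB" output unit
heatmap = class_activation_map(fmaps, w_tb)  # upsampled to CXR size, it highlights regions
```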


Subject(s)
Pulmonary Tuberculosis , Tuberculosis , Humans , X-Rays , Thoracic Radiography/methods , Pulmonary Tuberculosis/diagnostic imaging , Radiography
5.
Sensors (Basel) ; 22(24)2022 Dec 14.
Article in English | MEDLINE | ID: mdl-36560204

ABSTRACT

The orchestration of software-defined networks (SDN) and the Internet of Things (IoT) has revolutionized the computing fields. These include the broad spectrum of connectivity to sensors and electronic appliances beyond standard computing devices. However, these networks are still vulnerable to botnet attacks such as distributed denial of service, network probing, backdoors, information stealing, and phishing attacks. These attacks can disrupt and sometimes cause irreversible damage to several sectors of the economy. As a result, several machine learning-based solutions have been proposed to improve the real-time detection of botnet attacks in SDN-enabled IoT networks. The aim of this review is to investigate research studies that applied machine learning techniques for deterring botnet attacks in SDN-enabled IoT networks. First, the major botnet attacks in SDN-IoT networks are thoroughly discussed. Second, the commonly used machine learning techniques for detecting and mitigating botnet attacks in SDN-IoT networks are discussed. Finally, the performance of these machine learning techniques in detecting and mitigating botnet attacks is presented in terms of commonly used performance metrics. Both classical machine learning (ML) and deep learning (DL) techniques have comparable performance in botnet attack detection. However, classical ML techniques require extensive feature engineering to achieve optimal features for efficient botnet attack detection, and they fall short of detecting unforeseen botnet attacks. Furthermore, timely detection, real-time monitoring, and adaptability to new types of attacks remain challenging for classical ML techniques, mainly because they use signatures of already known malware both in training and after deployment.


Subject(s)
Internet of Things , Benchmarking , Electronics , Machine Learning , Software
6.
Entropy (Basel) ; 24(8)2022 Aug 13.
Article in English | MEDLINE | ID: mdl-36010780

ABSTRACT

In this paper, we study the learnability of the Boolean inner product through a systematic simulation study. The Boolean inner product function is known to be representable by a network of threshold neurons of depth 3 with only 2n+1 units (n being the input dimension), whereas an exact representation by a depth 2 network cannot possibly be of polynomial size. This result can be seen as a strong argument for deep neural network architectures. In our study, we found that this depth 3 architecture of the Boolean inner product is difficult to train, much harder than the depth 2 network, at least for the small input size scenarios n≤16. Nonetheless, the accuracy of the deep architecture increased with the dimension of the input space, reaching 94% on average, although multiple restarts are needed to find the compact depth 3 architecture. Replacing the fully connected first layer with a partially connected layer (a kind of convolutional layer sparsely connected with weight sharing) can significantly improve the learning performance, up to 99% accuracy in simulations. The learnability of the compact depth 3 representation of the inner product can also be improved by adding just a few additional units to the first hidden layer.
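
To make the simulation setup concrete, here is a hedged sketch assuming the Boolean inner product IP(x, y) = <x, y> mod 2 on pairs of n-bit vectors and a two-hidden-layer network holding the compact 2n+1 unit budget; the tanh surrogate for threshold units, the solver, and all hyperparameters are guesses rather than the paper's settings:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n = 8
X = np.random.randint(0, 2, size=(20000, 2 * n))
y = (X[:, :n] * X[:, n:]).sum(axis=1) % 2     # Boolean inner product label

# Depth 3 counterpart of the compact representation: n + (n+1) = 2n+1 hidden units.
net = MLPClassifier(hidden_layer_sizes=(n, n + 1), activation='tanh',
                    max_iter=500, random_state=0).fit(X, y)
print(net.score(X, y))  # multiple restarts (new seeds) are often needed, as noted above
```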

7.
Expert Syst Appl ; 206: 117812, 2022 Nov 15.
Article in English | MEDLINE | ID: mdl-35754941

ABSTRACT

The rapid outbreak of COVID-19 has affected the lives and livelihoods of a large part of society. Hence, to confine the rapid spread of this virus, early detection of COVID-19 is extremely important. One of the most common ways of detecting COVID-19 is by using chest X-ray images. In the literature, most research activities applied convolutional neural network (CNN) models in which the features generated by the last convolutional layer were directly passed to the classification models. In this paper, a convolutional long short-term memory (ConvLSTM) layer is used to encode the spatial dependency among the feature maps obtained from the last convolutional layer of the CNN and to improve the image representational capability of the model. Additionally, the squeeze-and-excitation (SE) block, a channel attention mechanism, is used to allocate weights to important local features. These two mechanisms are employed on three popular CNN models - VGG19, InceptionV3, and MobileNet - to improve their classification strength. Finally, a Sugeno fuzzy integral-based ensemble method is used on these classifiers' outputs to enhance the detection accuracy further. For the experiments, three chest X-ray datasets that are very prevalent for COVID-19 detection are considered. For all three datasets, the results obtained by the proposed method are comparable to state-of-the-art methods. The code, along with the pre-trained models, can be found at https://github.com/colabpro123/CovidConvLSTM.
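
As a sketch of the fusion step, the following applies a Sugeno fuzzy integral to one class's confidence scores from three classifiers via a lambda-fuzzy measure; the equal fuzzy densities and all numbers are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lambda_measure(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the lambda of a fuzzy measure."""
    poly = np.poly1d([1.0])
    for g in densities:
        poly = poly * np.poly1d([g, 1.0])   # multiply the (g*lam + 1) factors
    poly = poly - np.poly1d([1.0, 1.0])     # subtract (lam + 1)
    roots = poly.roots
    real = roots[np.isreal(roots)].real
    lam = real[(real > -1.0) & (np.abs(real) > 1e-9)]
    return lam[0] if lam.size else 0.0

def sugeno_fuse(scores, densities):
    """Sugeno integral of classifier confidences for a single class."""
    lam = lambda_measure(densities)
    order = np.argsort(scores)[::-1]        # confidences in descending order
    g, fused = 0.0, 0.0
    for i in order:
        g = g + densities[i] + lam * g * densities[i]  # grow the fuzzy measure
        fused = max(fused, min(scores[i], g))
    return fused

scores = np.array([0.9, 0.6, 0.7])  # e.g., VGG19, InceptionV3, MobileNet for one class
print(sugeno_fuse(scores, [0.4, 0.4, 0.4]))
```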

8.
Diagnostics (Basel) ; 13(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36611403

ABSTRACT

Heart disease is one of the leading causes of mortality throughout the world. Among the different heart diagnosis techniques, the electrocardiogram (ECG) is the least expensive non-invasive procedure. However, several challenges remain: the scarcity of medical experts, the complexity of ECG interpretation, the similar manifestations of different heart diseases in ECG signals, and heart disease comorbidity. Machine learning algorithms are viable alternatives for the traditional diagnosis of heart disease from ECG signals. However, the black-box nature of complex machine learning algorithms and the difficulty of explaining a model's outcomes keep medical practitioners from having confidence in machine learning models. This observation paves the way for interpretable machine learning (IML) models as diagnostic tools that can build a physician's trust and provide evidence-based diagnoses. Therefore, in this systematic literature review, we studied and analyzed the research landscape of interpretable machine learning techniques, focusing on heart disease diagnosis from ECG signals. The contribution of our work is manifold: first, we present an elaborate discussion of interpretable machine learning techniques. In addition, we identify and characterize ECG signal recording datasets that are readily available for machine learning-based tasks. Furthermore, we identify the progress that has been achieved in ECG signal interpretation using IML techniques. Finally, we discuss the limitations and challenges of IML techniques in interpreting ECG signals.

9.
Front Physiol ; 12: 720464, 2021.
Article in English | MEDLINE | ID: mdl-34539444

ABSTRACT

Traditional pain assessment approaches, ranging from self-reporting methods to observational scales, rely on the ability of an individual to accurately assess and successfully report observed or experienced pain episodes. Automatic pain assessment tools are therefore more than desirable in cases where this specific ability is negatively affected by various psycho-physiological dispositions, as well as by distinct physical traits, as in the case of professional athletes, who usually have a higher pain tolerance than regular individuals. Hence, several approaches have been proposed over the past decades for the implementation of an autonomous and effective pain assessment system. These approaches range from conventional supervised and semi-supervised learning techniques applied to a set of carefully hand-designed feature representations, to deep neural networks applied to preprocessed signals. Some of the most prominent advantages of deep neural networks are the ability to automatically learn relevant features, as well as the inherent adaptability of trained deep neural networks to related inference tasks. Yet, significant drawbacks remain, such as the need for large amounts of training data and the risk of over-fitting. Both of these problems are especially relevant in pain intensity assessment, where labeled data is scarce and generalization is of utmost importance. In the following work, we address these shortcomings by introducing several novel multi-modal deep learning approaches (characterized by specific supervised, as well as self-supervised, learning techniques) for the assessment of pain intensity based on measurable bio-physiological data. While the proposed supervised deep learning approach attains state-of-the-art inference performance, our self-supervised approach significantly improves the data efficiency of the proposed architecture by automatically generating physiological data and simultaneously fine-tuning the architecture, which has previously been trained on a significantly smaller amount of data.

10.
J Imaging ; 7(9)2021 Sep 06.
Article in English | MEDLINE | ID: mdl-34564105

ABSTRACT

A brain magnetic resonance imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view. Therefore, manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, so it avoids biopsy and makes the diagnosis process safer. Since the late nineties and the beginning of this millennium, the effort of the research community to come up with automatic brain tumor segmentation and classification methods has been tremendous. As a result, there is ample literature in the area focusing on segmentation using region growing, traditional machine learning, and deep learning methods. Similarly, a number of works have addressed brain tumor classification into respective histological types, and impressive performance results have been obtained. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three recently proposed major brain tumor segmentation and classification techniques, namely region growing, shallow machine learning, and deep learning. The established works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models' performance evaluation metrics.

11.
J Imaging ; 7(2)2021 Feb 01.
Article in English | MEDLINE | ID: mdl-34460621

ABSTRACT

A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. A brain tumor is a mass of tissue that propagates out of control of the normal forces that regulate growth inside the brain. It appears when one type of cell changes from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or inside the skull, which can be cancerous or non-cancerous, has been a cause of death among adults in developed countries and children in developing countries like Ethiopia. Studies have shown that region growing algorithms initialize the seed point either manually or semi-manually, which affects the segmentation result. In this paper, we therefore propose an enhanced region-growing algorithm with automatic seed-point initialization. The proposed approach's performance was compared with state-of-the-art deep learning algorithms using a common dataset, BRATS2015. In the proposed approach, we applied a thresholding technique to strip the skull from each input brain image. After the skull is stripped, the brain image is divided into 8 blocks. Then, for each block, we computed the mean intensity, and the five blocks with the maximum mean intensities were selected out of the eight. Next, each of the five maximum-mean-intensity blocks was used separately as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated using the proposed approach were evaluated using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc) against the ground truth (GT), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning algorithms and region-based segmentation algorithms in terms of DSS. Our proposed approach was validated in three different experimental setups. In the first setup, 15 randomly selected brain images were used for testing, and a DSS value of 0.89 was achieved. In the second and third setups, the proposed approach scored DSS values of 0.90 and 0.80 for 12 randomly selected and 800 brain images, respectively. The average DSS value over the three experimental setups was 0.86.
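
A hedged numpy sketch of the seed-initialization steps described above (thresholded skull stripping, an eight-block split, and picking the five highest-mean blocks); the threshold value and the choice of each block's brightest pixel as the concrete seed coordinate are assumptions:

```python
import numpy as np

def candidate_seeds(brain, threshold=0.2, grid=(2, 4), n_seeds=5):
    stripped = np.where(brain > threshold, brain, 0.0)      # crude skull strip
    rows = np.array_split(stripped, grid[0], axis=0)
    blocks = [b for r in rows for b in np.array_split(r, grid[1], axis=1)]
    means = np.array([b.mean() for b in blocks])
    seeds = []
    for i in np.argsort(means)[::-1][:n_seeds]:             # five highest mean intensities
        r, c = divmod(i, grid[1])
        y, x = np.unravel_index(np.argmax(blocks[i]), blocks[i].shape)
        seeds.append((y + r * blocks[i].shape[0], x + c * blocks[i].shape[1]))
    return seeds

img = np.random.rand(240, 240)
print(candidate_seeds(img))   # each seed starts a separate region-growing pass
```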

12.
Sensors (Basel) ; 21(16)2021 Aug 11.
Article in English | MEDLINE | ID: mdl-34450866

ABSTRACT

Sleep apnea is a breathing disorder that occurs during sleep, and older people suffer from it the most. Timely diagnosis of apnea is needed, which can be achieved through a proper health monitoring system. In this work, we focus on Obstructive Sleep Apnea (OSA) detection from Electrocardiogram (ECG) signals obtained through body sensors. Our work mainly consists of an experimental study of different ensemble techniques applied to three deep learning models: two Convolutional Neural Network (CNN) based models and a combination of CNN and Long Short-Term Memory (LSTM) models, all previously proposed in the OSA detection domain. For our case study, we have chosen four ensemble techniques: majority voting, the sum rule, Choquet integral-based fuzzy fusion, and a trainable ensemble using a Multi-Layer Perceptron (MLP). All experiments are conducted on the benchmark PhysioNet Apnea-ECG Database. We achieved the highest OSA detection accuracy, 85.58%, using the MLP-based ensemble approach; our best result also surpasses many state-of-the-art methods.
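
Two of the studied fusion rules are easy to state in a few lines. In this minimal sketch, the per-class probabilities of the three base models are stand-ins; the trainable MLP ensemble would instead be fit on such stacked outputs:

```python
import numpy as np

probs = np.array([     # (n_models, n_classes): e.g., CNN1, CNN2, CNN-LSTM outputs
    [0.7, 0.3],
    [0.4, 0.6],
    [0.8, 0.2],
])

sum_rule = probs.sum(axis=0).argmax()                # class with the highest summed score
votes = probs.argmax(axis=1)                         # each model's predicted class
majority = np.bincount(votes, minlength=2).argmax()  # most frequent prediction
print(sum_rule, majority)
```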


Subject(s)
Deep Learning , Sleep Apnea Syndromes , Obstructive Sleep Apnea , Aged , Electrocardiography , Humans , Neural Networks (Computer) , Polysomnography , Sleep Apnea Syndromes/diagnosis , Obstructive Sleep Apnea/diagnosis
13.
Sensors (Basel) ; 21(11)2021 May 23.
Article in English | MEDLINE | ID: mdl-34071029

ABSTRACT

Breast cancer, like most forms of cancer, is a fatal disease that claims more than half a million lives every year. In 2020, breast cancer overtook lung cancer as the most commonly diagnosed form of cancer. Though the disease is extremely deadly, the survival rate and longevity increase substantially with early detection and diagnosis. The treatment protocol also varies with the stage of breast cancer. Diagnosis is typically done using histopathological slides, from which it is possible to determine whether the tissue is in the Ductal Carcinoma In Situ (DCIS) stage, in which the cancerous cells have not spread into the encompassing breast tissue, or in the Invasive Ductal Carcinoma (IDC) stage, wherein the cells have penetrated the neighboring tissues. IDC detection is extremely time-consuming and challenging for physicians; hence, it can be modeled as an image classification task where pattern recognition and machine learning can aid doctors and medical practitioners in making such crucial decisions. In the present paper, we use an IDC Breast Cancer dataset that contains 277,524 images (78,786 IDC-positive and 198,738 IDC-negative) to classify the images into IDC(+) and IDC(-). To that end, we use feature extractors, including textural features, such as SIFT, SURF, and ORB, and statistical features, such as Haralick texture features. These features are combined to yield a dataset of 782 features. The features are ensembled by stacking using various machine learning classifiers, such as Random Forest, Extra Trees, XGBoost, AdaBoost, CatBoost, and Multi-Layer Perceptron, followed by feature selection using the Pearson Correlation Coefficient to yield a dataset with four features that are then used for classification. From our experimental results, we found that CatBoost yielded the highest accuracy (92.55%), which is on par with other state-of-the-art results, most of which employ deep learning architectures. The source code is available in the GitHub repository.
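
The Pearson-based selection step can be sketched as follows: rank the stacked meta-features by absolute correlation with the label and keep the top four. The data here are stand-ins, and treating "four features" as a fixed top-k cut-off is an assumption:

```python
import numpy as np

def top_features_by_pearson(X, y, k=4):
    """Rank columns of X by |Pearson r| with the binary label y; keep the top k."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    r = (Xc * yc[:, None]).sum(0) / (
        np.sqrt((Xc ** 2).sum(0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    return np.argsort(np.abs(r))[::-1][:k]

X = np.random.rand(1000, 6)   # stand-in for stacked classifier outputs
y = (X[:, 0] > 0.5).astype(int)
print(top_features_by_pearson(X, y))   # indices of the four retained features
```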


Subject(s)
Breast Neoplasms , Noninfiltrating Intraductal Carcinoma , Breast Neoplasms/diagnosis , Computers , Humans , Machine Learning , Neural Networks (Computer)
14.
Sensors (Basel) ; 20(8)2020 Apr 17.
Article in English | MEDLINE | ID: mdl-32316626

ABSTRACT

In this paper, we present a multimodal dataset for affective computing research acquired in a human-computer interaction (HCI) setting. An experimental mobile and interactive scenario was designed and implemented based on a gamified generic paradigm for the induction of dialog-based, HCI-relevant emotional and cognitive load states. It consists of six experimental sequences, inducing Interest, Overload, Normal, Easy, Underload, and Frustration. Each sequence is followed by subjective feedback to validate the induction, a respiration baseline to level off the physiological reactions, and a summary of results. Further, prior to the experiment, three questionnaires related to emotion regulation (ERQ), emotional control (TEIQue-SF), and personality traits (TIPI) were collected from each subject to evaluate the stability of the induction paradigm. Based on this HCI scenario, the University of Ulm Multimodal Affective Corpus (uulmMAC), consisting of two homogeneous samples of 60 participants and 100 recording sessions, was generated. We recorded 16 sensor modalities, including 4 × video, 3 × audio, and 7 × biophysiological, depth, and pose streams. Additional labels and annotations were also collected. After recording, all data were post-processed and checked for technical and signal quality, resulting in the final uulmMAC dataset of 57 subjects and 95 recording sessions. The evaluation of the reported subjective feedback shows significant differences between the sequences, consistent with the induced states, and the analysis of the questionnaires shows stable results. In summary, our uulmMAC database is a valuable contribution to the field of affective computing and multimodal data analysis: acquired in a mobile interactive scenario close to real HCI, it consists of a large number of subjects and allows transtemporal investigations. Validated via subjective feedback and checked for quality issues, it can be used for affective computing and machine learning applications.


Subject(s)
Visual Pattern Recognition/physiology , User-Computer Interface , Emotions/physiology , Humans , Machine Learning
15.
Sensors (Basel) ; 20(3)2020 Feb 04.
Article in English | MEDLINE | ID: mdl-32033240

ABSTRACT

Several approaches have been proposed for the analysis of pain-related facial expressions. These approaches range from common classification architectures based on a set of carefully designed handcrafted features, to deep neural networks characterised by the autonomous extraction of relevant facial descriptors and the simultaneous optimisation of a classification architecture. In the current work, an end-to-end approach based on attention networks for the analysis and recognition of pain-related facial expressions is proposed. The method combines both spatial and temporal aspects of facial expressions through a weighted aggregation of attention-based neural networks' outputs, based on sequences of Motion History Images (MHIs) and Optical Flow Images (OFIs). Each input stream is fed into a specific attention network consisting of a Convolutional Neural Network (CNN) coupled to a Bidirectional Long Short-Term Memory (BiLSTM) Recurrent Neural Network (RNN). An attention mechanism generates a single weighted representation of each input stream (MHI sequence and OFI sequence), which is subsequently used to perform specific classification tasks. Simultaneously, a weighted aggregation of the classification scores specific to each input stream is performed to generate a final classification output. The assessment conducted on both the BioVid Heat Pain Database (Part A) and the SenseEmotion Database points to the relevance of the proposed approach, as its classification performance is on par with state-of-the-art classification approaches proposed in the literature.
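
A simplified numpy sketch of the two weighting steps described above: temporal attention over per-frame recurrent outputs, then a weighted aggregation of the two streams' scores. All shapes, parameters, and the scalar stand-in scores are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def temporal_attention(H, w):
    """H: (T, d) BiLSTM outputs; returns one weighted representation of the sequence."""
    alpha = softmax(H @ w)   # one attention weight per time step
    return alpha @ H         # (d,) weighted sum

H_mhi = np.random.randn(25, 64)   # recurrent outputs for the MHI stream
H_ofi = np.random.randn(25, 64)   # recurrent outputs for the OFI stream
w = np.random.randn(64)           # attention parameters (learned during training)

score_mhi = temporal_attention(H_mhi, w).sum()   # stand-in per-stream class score
score_ofi = temporal_attention(H_ofi, w).sum()
beta = softmax(np.array([0.3, 0.7]))             # learned stream-aggregation weights
final = beta @ np.array([score_mhi, score_ofi])  # final classification score
```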


Subject(s)
Pain Measurement/methods , Automated Pattern Recognition , Video Recording , Algorithms , Attention , Calibration , Factual Databases , Computer-Assisted Diagnosis/methods , Face , Facial Expression , Healthy Volunteers , Humans , Computer-Assisted Image Processing/methods , Machine Learning , Short-Term Memory , Statistical Models , Neural Networks (Computer) , Pain , Reproducibility of Results , Temperature
16.
J Imaging ; 6(11)2020 Nov 10.
Article in English | MEDLINE | ID: mdl-34460565

ABSTRACT

Deep learning algorithms have become the first choice for medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRIs for brain tumors. The result of the review process indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning through freezing some layers of the deep learning network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancer cases has been studied by researchers affiliated with academic and medical institutes in economically developed countries, whereas the topic has received little attention in Africa despite the dramatic rise in cancer risk across the continent.

17.
Sensors (Basel) ; 19(20)2019 Oct 17.
Article in English | MEDLINE | ID: mdl-31627305

ABSTRACT

Standard feature engineering involves manually designing measurable descriptors based on some expert knowledge in the domain of application, followed by the selection of the best performing set of designed features for the subsequent optimisation of an inference model. Several studies have shown that this whole manual process can be efficiently replaced by deep learning approaches which are characterised by the integration of feature engineering, feature selection and inference model optimisation into a single learning process. In the following work, deep learning architectures are designed for the assessment of measurable physiological channels in order to perform an accurate classification of different levels of artificially induced nociceptive pain. In contrast to previous works, which rely on carefully designed sets of hand-crafted features, the current work aims at building competitive pain intensity inference models through autonomous feature learning, based on deep neural networks. The assessment of the designed deep learning architectures is based on the BioVid Heat Pain Database (Part A) and experimental validation demonstrates that the proposed uni-modal architecture for the electrodermal activity (EDA) and the deep fusion approaches significantly outperform previous methods reported in the literature, with respective average performances of 84.57% and 84.40% for the binary classification experiment consisting of the discrimination between the baseline and the pain tolerance level (T0 vs. T4) in a Leave-One-Subject-Out (LOSO) cross-validation evaluation setting. Moreover, the experimental results clearly show the relevance of the proposed approaches, which also offer more flexibility in the case of transfer learning due to the modular nature of deep neural networks.
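
The LOSO protocol used in this evaluation can be sketched with scikit-learn's LeaveOneGroupOut; the placeholder data, the logistic-regression stand-in for the deep EDA model, and the 10-trials-per-subject layout are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

X = np.random.randn(870, 16)             # stand-in EDA descriptors, one row per trial
y = np.random.randint(0, 2, 870)         # T0 vs. T4 labels
subjects = np.repeat(np.arange(87), 10)  # subject ID per trial

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())                     # average accuracy over held-out subjects
```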


Subject(s)
Machine Learning , Biological Models , Neural Networks (Computer) , Nociceptive Pain/physiopathology , Factual Databases , Deep Learning , Humans , Computer-Assisted Image Processing
18.
Sci Data ; 6(1): 196, 2019 10 09.
Article in English | MEDLINE | ID: mdl-31597919

ABSTRACT

From a computational viewpoint, emotions continue to be intriguingly hard to understand. In research, direct and real-time inspection in realistic settings is not possible; discrete, indirect, post-hoc recordings are therefore the norm. As a result, proper emotion assessment remains a problematic issue. The Continuously Annotated Signals of Emotion (CASE) dataset provides a solution, as it focusses on real-time continuous annotation of emotions, as experienced by the participants, while watching various videos. For this purpose, a novel, intuitive joystick-based annotation interface was developed that allows simultaneous reporting of valence and arousal, which are otherwise often annotated independently. In parallel, eight high-quality, synchronized physiological recordings (1000 Hz, 16-bit ADC) were obtained from ECG, BVP, EMG (3x), GSR (or EDA), respiration, and skin temperature sensors. The dataset consists of the physiological and annotation data from 30 participants, 15 male and 15 female, who watched several validated video stimuli. The validity of the emotion induction, as exemplified by the annotation and physiological data, is also presented.


Subject(s)
Affect/physiology , Emotions/physiology , Adult , Arousal , Electrocardiography , Electromyography , Female , Humans , Male , Middle Aged , Photoplethysmography , Respiratory Rate , Skin Temperature
19.
Front Robot AI ; 6: 6, 2019.
Article in English | MEDLINE | ID: mdl-33501023

ABSTRACT

Research on artificial development, reinforcement learning, and intrinsic motivations like curiosity could profit from the recently developed framework of multi-objective reinforcement learning. The combination of these ideas may lead to more realistic artificial models for life-long learning and goal-directed behavior in animals and humans.

20.
Neural Netw ; 23(4): 497-509, 2010 May.
Article in English | MEDLINE | ID: mdl-19783119

ABSTRACT

Supervised learning requires a large amount of labeled data, but the data labeling process can be expensive and time-consuming, as it requires the effort of human experts. Co-Training is a semi-supervised learning method that can reduce the amount of required labeled data by exploiting the available unlabeled data to improve classification accuracy. It is assumed that the patterns are represented by two or more redundantly sufficient feature sets (views) and that these views are independent given the class. On the other hand, most real-world pattern recognition tasks involve a large number of categories, which may make the task difficult. The tree-structured approach is an output space decomposition method in which a complex multi-class problem is decomposed into a set of binary sub-problems. In this paper, we propose two learning architectures to combine the merits of the tree-structured approach and Co-Training. We show that our architectures are especially useful for classification tasks that involve a large number of classes and a small amount of labeled data, where the single-view tree-structured approach does not perform well alone; combined with Co-Training, however, it can effectively exploit the independent views and the unlabeled data to improve recognition accuracy.
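
A compact Co-Training sketch under the two-view assumption discussed above; the Gaussian naive Bayes base learners, the per-round quota, and the confidence ranking are illustrative choices, not the paper's architectures:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled, rounds=10, per_round=5):
    """X1, X2: the two views; y is trusted only on the `labeled` indices."""
    labeled, y_work = set(labeled), y.copy()
    unlabeled = set(range(len(y))) - labeled
    for _ in range(rounds):
        idx = sorted(labeled)
        c1 = GaussianNB().fit(X1[idx], y_work[idx])
        c2 = GaussianNB().fit(X2[idx], y_work[idx])
        for clf, X in ((c1, X1), (c2, X2)):
            pool = sorted(unlabeled)
            if not pool:
                break
            conf = clf.predict_proba(X[pool]).max(axis=1)
            for j in np.argsort(conf)[::-1][:per_round]:  # most confident points
                i = pool[j]
                if i in unlabeled:                        # label them with the prediction
                    y_work[i] = clf.predict(X[i:i + 1])[0]
                    unlabeled.discard(i)
                    labeled.add(i)
    return c1, c2

X1, X2 = np.random.randn(200, 5), np.random.randn(200, 5)
y = np.random.randint(0, 2, 200)
c1, c2 = co_train(X1, X2, y, labeled=range(20))
```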


Subject(s)
Data Mining/methods , Learning , Automated Pattern Recognition/methods , User-Computer Interface , Algorithms , Artificial Intelligence , Computer Simulation , Humans , Visual Pattern Recognition