Results 1 - 20 of 92
1.
BMC Genomics ; 25(1): 151, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38326777

ABSTRACT

BACKGROUND: The subcellular localization of mRNA has a substantial impact on the regulation of gene expression, cellular migration, and adaptation. However, the methods employed for experimental determination of this localization are arduous, time-intensive, and costly. METHODS: In this research article, we tackle the essential challenge of predicting the subcellular location of messenger RNAs (mRNAs) through the Unified mRNA Subcellular Localization Predictor (UMSLP), a machine learning (ML) based approach. We embrace an in silico strategy that incorporates four distinct feature sets: k-mer, pseudo k-tuple nucleotide composition, nucleotide physicochemical attributes, and a 3D sequence depiction achieved via Z-curve transformation, to predict subcellular localization on a benchmark dataset across five distinct subcellular locales: nucleus, cytoplasm, extracellular region (ExR), mitochondria, and endoplasmic reticulum (ER). RESULTS: The proposed ML model UMSLP attains cutting-edge outcomes in predicting mRNA subcellular localization. On an independent testing dataset, UMSLP achieved over 87% precision, 94% specificity, and 94% accuracy. Compared to other existing tools, UMSLP outperformed mRNALocator, mRNALoc, and SubLocEP by 11%, 21%, and 32%, respectively, on average prediction accuracy across all five locales. SHapley Additive exPlanations analysis highlights the dominance of k-mer features in predicting cytoplasm, nucleus, ER, and ExR localizations, while Z-curve based features play pivotal roles in detecting mitochondrial subcellular localization. AVAILABILITY: We have shared the datasets, code, and Docker API on GitHub at: https://github.com/smusleh/UMSLP .
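The k-mer feature set mentioned above is straightforward to illustrate. The sketch below is an illustrative reconstruction, not the UMSLP code (that is available at the linked GitHub repository); the function name and the choice k=3 are assumptions.

```python
from itertools import product

def kmer_features(seq: str, k: int = 3) -> dict:
    """Normalized k-mer frequencies of an RNA sequence.

    A minimal sketch of a k-mer feature set; real pipelines
    typically stack several k values (e.g. k = 1..5).
    """
    alphabet = "ACGU"
    counts = {"".join(p): 0 for p in product(alphabet, repeat=k)}
    total = max(len(seq) - k + 1, 1)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:  # skip windows containing ambiguous bases such as N
            counts[kmer] += 1
    return {kmer: c / total for kmer, c in counts.items()}

# Toy sequence; a real input would be a full mRNA transcript.
features = kmer_features("AUGGCUAUGG", k=3)
```

For k=3 this yields a fixed 64-dimensional vector regardless of sequence length, which is what makes it convenient as an ML input.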


Subjects
Endoplasmic Reticulum, Mitochondria, RNA, Messenger/genetics, Mitochondria/genetics, Computational Biology/methods, Machine Learning, Nucleotides
2.
Plant Dis ; 108(3): 711-724, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37755420

ABSTRACT

Rhizoctonia crown and root rot (RCRR), caused by Rhizoctonia solani, can cause severe yield and quality losses in sugar beet. The most common strategy to control the disease is the development of resistant varieties. In the breeding process, field experiments with artificial inoculation are carried out to evaluate the performance of genotypes and varieties. The phenotyping process in breeding trials requires constant monitoring and scoring by skilled experts. This work is time-demanding and shows bias and heterogeneity depending on the experience and capacity of each individual rater. Optical sensors and artificial intelligence have demonstrated great potential to achieve higher accuracy than human raters and to standardize phenotyping applications. A workflow combining red-green-blue and multispectral imagery acquired from an unmanned aerial vehicle (UAV) with machine learning techniques was applied to score diseased plants and plots affected by RCRR. Georeferenced annotation of UAV-orthorectified images was carried out. With the annotated images, five convolutional neural networks were trained to score individual plants. The training was carried out with different image analysis strategies and data augmentation. The custom convolutional neural network trained from scratch, together with the pretrained MobileNet, showed the best precision in scoring RCRR (0.73 to 0.85). The per-plot average of the spectral information was used to score the plots, and the benefit of adding the information obtained from the scores of individual plants was compared. For this purpose, machine learning models were trained together with data management strategies, and the best-performing model was chosen. A combined pipeline of random forest and k-nearest neighbors showed the best weighted precision (0.67). This research provides a reliable workflow for detecting and scoring RCRR based on aerial imagery. RCRR is often distributed heterogeneously in trial plots; therefore, considering the information from individual plants of the plots showed a significant improvement in UAV-based automated monitoring routines.


Subjects
Beta vulgaris, Unmanned Aerial Devices, Humans, Rhizoctonia, Artificial Intelligence, Plant Breeding, Machine Learning, Vegetables, Sugars
3.
Biometrics ; 79(2): 964-974, 2023 06.
Article in English | MEDLINE | ID: mdl-35426119

ABSTRACT

Multivariate time-series (MTS) data are prevalent in diverse domains and are often high dimensional. We propose new random projection ensemble classifiers for high-dimensional MTS. The method first applies dimension reduction in the time domain by randomly projecting the time-series variables into a low-dimensional space, and then measures, via a novel base classifier, the disparity between the data and the candidate generating processes in the projected space. Our contributions are twofold: (i) we derive optimal weighted majority voting schemes for pooling information from the base classifiers for multiclass classification, and (ii) we introduce new frequency-domain base classifiers based on the Whittle likelihood (WL), Kullback-Leibler (KL) divergence, eigen-distance (ED), and Chernoff (CH) divergence. Simulations for both binary and multiclass problems, together with an electroencephalogram (EEG) application, demonstrate the efficacy of the proposed methods in constructing accurate classifiers for high-dimensional MTS.
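The time-domain dimension-reduction step can be sketched with a Gaussian random projection. This is a minimal illustration of the general technique, not the authors' implementation; the scaling by 1/sqrt(d) is a common convention rather than a detail taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(x: np.ndarray, d: int, rng) -> np.ndarray:
    """Project a (T, p) multivariate time series onto d random directions.

    Each base classifier in the ensemble would receive one such
    independently drawn projection of the original p variables.
    """
    T, p = x.shape
    A = rng.standard_normal((p, d)) / np.sqrt(d)  # random projection matrix
    return x @ A                                   # (T, d) projected series

x = rng.standard_normal((500, 100))  # toy series: T=500 time points, p=100 variables
y = random_projection(x, d=5, rng=rng)
```

Drawing many such projections and voting over the resulting base classifiers is what turns this into an ensemble method.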


Subjects
Algorithms, Time Factors
4.
Plant Dis ; 107(1): 188-200, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35581914

ABSTRACT

Disease incidence (DI) and metrics of disease severity are relevant parameters for decision making in plant protection and plant breeding. To develop automated and sensor-based routines, a sugar beet variety trial was inoculated with Cercospora beticola and monitored with a multispectral camera system mounted on an unmanned aerial vehicle (UAV) over the vegetation period. A pipeline based on machine learning methods was established for image data analysis and extraction of disease-relevant parameters. Features based on the digital surface model, vegetation indices, shadow condition, and image resolution improved classification performance by 12% and 6% in diseased and soil regions, respectively, in comparison with using single multispectral channels. With a postprocessing step, area-related parameters were computed after classification. Results of this pipeline also included extraction of DI and disease severity (DS) from UAV data. The calculated area under the disease progress curve of DS was 2,810.4 to 7,058.8 %·days for human visual scoring and 1,400.5 to 4,343.2 %·days for UAV-based scoring. Moreover, a sharper differentiation of varieties compared with visual scoring was observed in area-related parameters such as the area of complete foliage (AF), the area of healthy foliage (AH), and the mean area of lesion per unit of foliage ([Formula: see text]). These advantages provide the option to replace the laborious work of visual disease assessments in the field with a more precise, nondestructive assessment via multispectral data acquired by UAV flights. [Formula: see text] Copyright © 2023 The Author(s). This is an open access article distributed under the CC BY-NC-ND 4.0 International license.


Subjects
Beta vulgaris, Cercospora, Humans, Incidence, Plant Breeding, Vegetables, Sugars
5.
Sensors (Basel) ; 23(21)2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37960661

ABSTRACT

With the rapid growth of social media networks and internet accessibility, most businesses are becoming vulnerable to a wide range of threats and attacks. Thus, intrusion detection systems (IDSs) are considered one of the most essential components for securing organizational networks. They are the first line of defense against online threats and are responsible for quickly identifying potential network intrusions. Mainly, IDSs analyze network traffic to detect any malicious activities in the network. Today, networks are expanding tremendously as the demand for network services grows. This expansion leads to diverse data types and complexities in the network, which may limit the applicability of the developed algorithms. Moreover, viruses and malicious attacks are changing in both quantity and quality. Therefore, several security researchers have recently developed IDSs using innovative techniques, including artificial intelligence methods. This work proposes a support vector machine (SVM)-based deep learning system that classifies the data extracted from servers to determine intrusion incidents on social media. To implement deep learning-based IDSs for multiclass classification, the CSE-CIC-IDS 2018 dataset was used for system evaluation. The CSE-CIC-IDS 2018 dataset was subjected to several preprocessing techniques to prepare it for the training phase. The proposed model was applied to 100,000 instances of a sample dataset. This study demonstrated that the accuracy, true-positive recall, precision, specificity, false-positive recall, and F-score of the proposed model were 100%, 100%, 100%, 100%, 0%, and 100%, respectively.

6.
Sensors (Basel) ; 23(21)2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37960485

ABSTRACT

The Internet of Vehicles (IoV) employs vehicle-to-everything (V2X) technology to establish intricate interconnections among the Internet, the IoT network, and in-vehicle networks (IVNs), forming a complex vehicle communication network. However, the vehicle communication network is very vulnerable to attacks. The implementation of an intrusion detection system (IDS) is therefore an essential requisite to ensure the security of in-vehicle/inter-vehicle communication in IoV. Within this context, the imbalanced nature of network traffic data and the diversity of network attacks stand as pivotal factors in IDS performance. On the one hand, network traffic data often suffer heavily from class imbalance, which impairs detection performance. To address this issue, this paper employs a hybrid approach combining the Synthetic Minority Over-sampling Technique (SMOTE) and RandomUnderSampler to achieve a balanced class distribution. On the other hand, the diversity of network attacks constitutes another significant factor contributing to poor intrusion detection model performance. Most current machine learning-based IDSs mainly perform binary classification and deal poorly with multiclass classification. This paper proposes an adaptive tree-based ensemble network as the intrusion detection engine for the IDS in IoV. This engine employs a deep-layer structure in which diverse ML models are stacked as layers and interconnected in a cascading manner, which enables accurate and efficient multiclass classification, facilitating the precise identification of diverse network attacks. Moreover, a machine learning-based approach is used for feature selection to reduce feature dimensionality, substantially alleviating the computational overhead. Finally, we evaluate the proposed IDS performance on various cyber-attacks from the in-vehicle and external networks in IoV by using the network intrusion detection dataset CICIDS2017 and the vehicle security dataset Car-Hacking.
The experimental results demonstrate remarkable performance, with an F1-score of 0.965 on the CICIDS2017 dataset and an F1-score of 0.9999 on the Car-Hacking dataset. These scores demonstrate that our IDS can achieve efficient and precise multiclass classification. This research provides a valuable reference for ensuring the cybersecurity of IoV.

7.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687976

ABSTRACT

(1) Background: in the field of motor-imagery brain-computer interfaces (MI-BCIs), obtaining discriminative features among multiple MI tasks poses a significant challenge. Typically, features are extracted from single electroencephalography (EEG) channels, neglecting their interconnections, which leads to limited results. To address this limitation, there has been growing interest in leveraging functional brain connectivity (FC) as a feature in MI-BCIs. However, the high inter- and intra-subject variability has so far limited its effectiveness in this domain. (2) Methods: we propose a novel signal processing framework that addresses this challenge. We extracted translation-invariant features (TIFs) obtained from a scattering convolution network (SCN) and brain connectivity features (BCFs). Through a feature fusion approach, we combined features extracted from selected channels and functional connectivity features, capitalizing on the strength of each component. Moreover, we employed a multiclass support vector machine (SVM) model to classify the extracted features. (3) Results: using a public dataset (IIa of the BCI Competition IV), we demonstrated that the feature fusion approach outperformed existing state-of-the-art methods. Notably, we found that the best results were achieved by merging TIFs with BCFs, rather than considering TIFs alone. (4) Conclusions: our proposed framework could be the key for improving the performance of a multiclass MI-BCI system.


Subjects
Brain-Computer Interfaces, Brain, Electroencephalography, Imagery, Psychotherapy, Signal Processing, Computer-Assisted
8.
Sensors (Basel) ; 23(18)2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37765976

ABSTRACT

Vehicle make and model recognition (VMMR) is an important aspect of intelligent transportation systems (ITS). In VMMR systems, surveillance cameras capture vehicle images for real-time vehicle detection and recognition. These captured images pose challenges, including shadows, reflections, changes in weather and illumination, occlusions, and perspective distortion. Another significant challenge in VMMR is multiclass classification. This scenario has two main categories: (a) multiplicity and (b) ambiguity. Multiplicity concerns the issue of different forms among car models manufactured by the same company, while the ambiguity problem arises when multiple models from the same manufacturer have visually similar appearances or when vehicle models of different makes have visually comparable rear/front views. This paper introduces a novel and robust VMMR model that can address the above-mentioned issues with accuracy comparable to state-of-the-art methods. Our proposed hybrid CNN model selects the best descriptive fine-grained features with the help of Fisher Discriminative Least Squares Regression (FDLSR). These features are extracted from a deep CNN model fine-tuned on the fine-grained vehicle datasets Stanford-196 and BoxCars21k. Using ResNet-152 features, our proposed model outperformed the SVM and FC layers in accuracy by 0.5% and 4% on Stanford-196 and by 0.4% and 1% on BoxCars21k, respectively. Moreover, this model is well suited for small-scale fine-grained vehicle datasets.

9.
Sensors (Basel) ; 23(11)2023 May 25.
Article in English | MEDLINE | ID: mdl-37299779

ABSTRACT

The use of Riemannian geometry decoding algorithms in classifying electroencephalography-based motor-imagery brain-computer interface (BCI) trials is relatively new and promises to outperform the current state-of-the-art methods by overcoming the noise and nonstationarity of electroencephalography signals. However, the related literature shows high classification accuracy only on relatively small BCI datasets. The aim of this paper is to study the performance of a novel implementation of the Riemannian geometry decoding algorithm on large BCI datasets. In this study, we apply several Riemannian geometry decoding algorithms on a large offline dataset using four adaptation strategies: baseline, rebias, supervised, and unsupervised. Each of these adaptation strategies is applied in motor execution and motor imagery for two scenarios: 64 electrodes and 29 electrodes. The dataset is composed of four-class bilateral and unilateral motor imagery and motor execution of 109 subjects. We ran several classification experiments, and the results show that the best classification accuracy is obtained in the scenario where the baseline minimum distance to the Riemannian mean is used. The mean accuracy reaches values of up to 81.5% for motor execution and up to 76.4% for motor imagery. The accurate classification of EEG trials helps to realize successful BCI applications that allow effective control of devices.
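A minimum-distance-to-mean classifier of the kind evaluated above can be sketched using the log-Euclidean metric, a computationally simple stand-in for the affine-invariant Riemannian metric typically used in this literature; the toy trial covariance matrices below are hypothetical.

```python
import numpy as np

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite (SPD) matrix,
    e.g. an EEG trial covariance matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def mdm_fit(covs, labels):
    """Per-class mean covariance in the log-Euclidean domain, a common
    approximation of the Riemannian mean used by minimum distance to mean."""
    return {c: np.mean([spd_log(C) for C, l in zip(covs, labels) if l == c], axis=0)
            for c in set(labels)}

def mdm_predict(means, C):
    """Assign the class whose mean is closest to C in the log domain."""
    L = spd_log(C)
    return min(means, key=lambda c: np.linalg.norm(L - means[c]))

I = np.eye(4)
covs = [I, 1.2 * I, 4 * I, 4.4 * I]  # hypothetical trial covariances (4 channels)
labels = [0, 0, 1, 1]
means = mdm_fit(covs, labels)
```

The adaptation strategies mentioned in the abstract (rebias, supervised, unsupervised) would update these class means as new trials arrive.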


Subjects
Algorithms, Brain-Computer Interfaces, Humans, Electroencephalography/methods, Imagery, Psychotherapy
10.
Sensors (Basel) ; 23(13)2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37447662

ABSTRACT

Essential oils are valuable in various industries, but their easy adulteration can cause adverse health effects. Electronic nose sensors offer a solution for adulteration detection. This article proposes a new system for characterising essential oils based on low-cost sensor networks and machine learning techniques. The sensors used belong to the MQ family (MQ-2, MQ-3, MQ-4, MQ-5, MQ-6, MQ-7, and MQ-8). Six essential oils were used, including Cistus ladanifer, Pinus pinaster, and Cistus ladanifer oil adulterated with Pinus pinaster, Melaleuca alternifolia, tea tree, and red fruits. A total of up to 7100 measurements were included, with more than 118 h of measurements of 33 different parameters. These data were used to train and compare five machine learning algorithms: discriminant analysis, support vector machine, k-nearest neighbours, neural network, and naive Bayes, applied both to the individual data and to hourly mean values. To evaluate the performance of the included machine learning algorithms, accuracy, precision, recall, and F1-score were considered. The study found that with k-nearest neighbours, the accuracy, recall, F1-score, and precision values were 1, 0.99, 0.99, and 1, respectively. The accuracy reached 100% with k-nearest neighbours using only 2 parameters for averaged data or 15 parameters for individual data.
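The k-nearest-neighbours rule that performed best above can be sketched in a few lines of NumPy; the toy two-cluster data are hypothetical and stand in for the MQ-sensor parameter vectors.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test sample by majority vote among its k nearest
    training samples under Euclidean distance (the k-NN rule)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Two hypothetical oil classes forming well-separated clusters in feature space.
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
preds = knn_predict(X_train, y_train, np.array([[0.2, 0.2], [5.5, 5.5]]), k=3)
```

With only 7 sensors and hourly averaging, a distance-based rule like this is cheap enough to run on the acquisition hardware itself.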


Subjects
Oils, Volatile, Bayes Theorem, Machine Learning, Algorithms, Neural Networks, Computer, Support Vector Machine
11.
J Environ Manage ; 344: 118594, 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37473555

ABSTRACT

Modern wastewater treatment plants base their biological processes on advanced control systems which ensure compliance with discharge limits and minimize energy consumption responding to information from on-line probes. The correct readings of probes are particularly crucial for intermittent aeration controllers, which rely on real-time measurements of ammonia and oxygen in biological tanks. These data are also an important resource for developing artificial intelligence algorithms that can identify process or sensor anomalies, thus guiding the choices of plant operators and automatic process controllers. However, using anomaly detection and classification algorithms in real-time wastewater treatment is challenging because of the noisy nature of sensor measurements, the difficulty of obtaining labeled real-plant data, and the complex and interdependent mechanisms that govern biological processes. This work aims at thoroughly exploring the performance of machine learning methods in detecting and classifying the main anomalies in plants operating with intermittent aeration. Using oxygen, ammonia and aeration power measurements from a set of plants in Italy, we perform both binary and multiclass classification, and we compare them through a rigorous validation procedure that includes a test on an unknown dataset, proposing a new evaluation protocol. The classification methods explored are support vector machine, multilayer perceptron, random forest, and two gradient boosting methods (LightGBM and XGBoost). The best performance was achieved using the gradient boosting ensemble algorithms, with up to 96% of anomalies detected and up to 84% and 62% of anomalies classified correctly on the first and second datasets respectively.


Subjects
Artificial Intelligence, Water Purification, Ammonia, Machine Learning, Neural Networks, Computer, Algorithms, Support Vector Machine
12.
Eur J Neurosci ; 56(1): 3613-3644, 2022 07.
Article in English | MEDLINE | ID: mdl-35445438

ABSTRACT

Tracking how individual human brains change over extended timescales is crucial to clinical scenarios ranging from stroke recovery to healthy aging. The use of resting state (RS) activity for tracking is a promising possibility. However, it is unresolved how a person's RS activity over time can be decoded to distinguish neurophysiological changes from confounding cognitive variability. Here, we develop a method to screen RS activity changes for these confounding effects by formulating it as a problem of change classification. We demonstrate a novel solution to change classification by linking individual-specific change to inter-individual differences. Individual RS-electroencephalography (EEG) was acquired over 5 consecutive days including task states devised to simulate the effects of inter-day cognitive variation. As inter-individual differences are shaped by neurophysiological differences, the inter-individual differences in RS activity on 1 day were analysed (using machine learning) to identify distinctive configurations in each individual's RS activity. Using this configuration as a decision rule, an individual could be re-identified from 2-s samples of the instantaneous oscillatory power spectrum acquired on a different day both from RS and confounded RS with a limited loss in accuracy. Importantly, the low loss in accuracy in cross-day versus same-day classification was achieved with classifiers that combined information from multiple frequency bands at channels across the scalp (with a concentration at characteristic fronto-central and occipital zones). Taken together, these findings support the technical feasibility of screening RS activity for confounding effects and the suitability of longitudinal RS for robust individualized inferences about neurophysiological change in health and disease.


Subjects
Brain, Electroencephalography, Brain/physiology, Humans, Machine Learning
13.
Article in English | MEDLINE | ID: mdl-36284449

ABSTRACT

OBJECTIVES: This study aimed to develop a classification model to detect and distinguish apathy and depression based on text, audio, and video features and to make use of the SHapley Additive exPlanations (SHAP) toolkit to increase the model's interpretability. METHODS: Subjective scales and objective experiments were conducted on 319 mild cognitive impairment (MCI) patients to measure apathy and depression. The MCI patients were classified into four groups: depression only, apathy only, depressed-apathetic, and normal. Speech, facial, and text features were extracted using open-source data analysis toolkits. Multiclass classification and the SHAP toolkit were used to develop a classification model and explain the contribution of specific features. RESULTS: The macro-averaged F1 score and accuracy for the overall model were 0.91 and 0.90, respectively. The accuracies for the apathetic, depressed, depressed-apathetic, and normal groups were 0.98, 0.88, 0.93, and 0.82, respectively. The SHAP toolkit identified speech features (Mel-frequency cepstral coefficient (MFCC) 4, spectral slopes, F0, F1), facial features (action units (AU) 14, 26, 28, 45), and a text feature (text 6 semantic) associated with apathy. Meanwhile, speech features (spectral slopes, shimmer, F0) and facial expressions (AU 2, 6, 7, 10, 14, 26, 45) were associated with depression. Apart from the shared features mentioned above, new speech (MFCC 2, loudness) and facial (AU 9) features were observed in the depressed-apathetic group. CONCLUSIONS: Apathy and depression shared some verbal and facial features while also exhibiting distinct features. A combination of text, audio, and video could be used to improve the early detection and differential diagnosis of apathy and depression in MCI patients.
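The macro-averaged F1 reported above weights each of the four diagnostic groups equally, regardless of group size. A minimal sketch of the metric; the group labels and toy predictions are hypothetical.

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 computed one-vs-rest,
    then averaged with equal weight per class."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(labels)

groups = ["depression", "apathy", "both", "normal"]
score = macro_f1(["apathy", "apathy", "normal", "both"],
                 ["apathy", "normal", "normal", "both"],
                 groups)
```

Equal per-class weighting is what makes this metric informative when, as here, the four groups have very different sizes.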


Subjects
Apathy, Cognitive Dysfunction, Humans, Aged, Depression/diagnosis, Depression/psychology, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/psychology, Neuropsychological Tests
14.
Sensors (Basel) ; 22(18)2022 Sep 07.
Article in English | MEDLINE | ID: mdl-36146114

ABSTRACT

Quantum entanglement is a unique phenomenon of quantum mechanics which has no classical counterpart and gives quantum systems their advantage in computing, communication, sensing, and metrology. In quantum sensing and metrology, utilizing an entangled probe state enhances the achievable precision beyond that of its classical counterpart. Noise in the probe state preparation step can cause the system to output unentangled states, which might no longer be a useful resource. Hence, an effective method for the detection and classification of tripartite entanglement is required at that step. However, current mathematical methods cannot robustly classify multiclass entanglement in tripartite quantum systems, especially in the case of mixed states. In this paper, we explore the utility of artificial neural networks for classifying the entanglement of tripartite quantum states into fully separable, biseparable, and fully entangled states. We employed Bell's inequality for the dataset of tripartite quantum states and trained a deep neural network for multiclass classification. This entanglement classification method is computationally efficient because it uses a small number of measurements. At the same time, it maintains generalization by covering a large Hilbert space of tripartite quantum states.

15.
Sensors (Basel) ; 22(21)2022 Oct 22.
Article in English | MEDLINE | ID: mdl-36365802

ABSTRACT

A new approach to the estimation and classification of nonlinear frequency modulated (NLFM) signals is presented in the paper. These problems are crucial in electronic reconnaissance systems whose role is to indicate what signals are being received and recognized by the intercepting receiver. NLFM signals offer a variety of useful properties not available for signals with linear frequency modulation (LFM). In particular, NLFM signals can ensure the desired reduction of sidelobes of an autocorrelation (AC) function and desired power spectral density (PSD); therefore, such signals are more frequently used in modern radar and echolocation systems. Due to their nonlinear properties, the discussed signals are difficult to recognize and therefore require sophisticated methods of analysis, estimation and classification. NLFM signals with frequency content varying with time are mainly analyzed by time-frequency algorithms. However, the methods presented in the paper belong to time-chirp domain, which is relatively rarely cited in the literature. It is proposed to use polynomial approximations of nonlinear frequency and phase functions describing signals. This allows for applying the cubic phase function (CPF) as an estimator of phase polynomial coefficients. Originally, the CPF involved only third-order nonlinearities of the phase function. The extension of the CPF using nonuniform sampling is used to analyse the higher order polynomial phase. In this paper, a sixth order polynomial is considered. It is proposed to estimate the instantaneous frequency using a polynomial with coefficients calculated from the coefficients of the phase polynomial obtained by CPF. The determined coefficients also constitute the set of distinctive features for a classification task. The proposed CPF-based classification method was examined for three common NLFM signals and one LFM signal. 
Two types of neural network classifiers, learning vector quantization (LVQ) and multilayer perceptron (MLP), are considered for the classification problem defined in this way. The performance of both the estimation and classification processes was analyzed using Monte Carlo simulation studies for different SNRs. The results of the simulation research revealed good estimation performance and error-free classification for the SNR range encountered in practical applications.


Subjects
Algorithms, Signal Processing, Computer-Assisted, Animals, Neural Networks, Computer, Computer Simulation, Monte Carlo Method
16.
Sensors (Basel) ; 22(12)2022 Jun 18.
Article in English | MEDLINE | ID: mdl-35746389

ABSTRACT

Alzheimer's Disease (AD) is a health concern of significant proportions that is negatively impacting the ageing population globally. It is characterized by neuronal loss and the formation of structures such as neurofibrillary tangles and amyloid plaques in both the early and later stages of the disease. Neuroimaging modalities are routinely used in clinical practice to capture brain alterations associated with AD. On the other hand, deep learning methods are routinely used to recognize patterns in underlying data distributions effectively. This work uses Convolutional Neural Network (CNN) architectures in both 2D and 3D domains to classify the initial stages of AD into the AD, Mild Cognitive Impairment (MCI) and Normal Control (NC) classes using the positron emission tomography neuroimaging modality, deploying data augmentation in a random zoomed in/out scheme. We used novel concepts such as the blurring-before-subsampling principle and distant domain transfer learning to build 2D CNN architectures. We performed three binary classification tasks (AD/NC, AD/MCI, and MCI/NC) and one multiclass classification task (AD/NC/MCI). The statistical comparison revealed that the 3D-CNN architecture performed best, achieving an accuracy of 89.21% on AD/NC, 71.70% on AD/MCI, 62.25% on NC/MCI and 59.73% on AD/NC/MCI classification tasks using a five-fold cross-validation hyperparameter selection approach. Data augmentation helps in achieving superior performance on the multiclass classification task. The obtained results support the application of deep learning models towards early recognition of AD.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Neuroimaging/methods, Positron-Emission Tomography/methods
17.
Sensors (Basel) ; 21(12)2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34200521

ABSTRACT

The early detection of melanoma is the most efficient way to reduce its mortality rate. Dermatologists achieve this task with the help of dermoscopy, a non-invasive tool allowing the visualization of patterns of skin lesions. Computer-aided diagnosis (CAD) systems developed on dermoscopic images are needed to assist dermatologists. These systems rely mainly on multiclass classification approaches. However, the multiclass classification of skin lesions by an automated system remains a challenging task. Decomposing a multiclass problem into a binary problem can reduce the complexity of the initial problem and increase the overall performance. This paper proposes a CAD system to classify dermoscopic images into three diagnosis classes: melanoma, nevi, and seborrheic keratosis. We introduce a novel ensemble scheme of convolutional neural networks (CNNs), inspired by decomposition and ensemble methods, to improve the performance of the CAD system. Unlike conventional ensemble methods, we use a directed acyclic graph to aggregate binary CNNs for the melanoma detection task. On the ISIC 2018 public dataset, our method achieves the best balanced accuracy (76.6%) among multiclass CNNs, an ensemble of multiclass CNNs with classical aggregation methods, and other related works. Our results reveal that the directed acyclic graph is a meaningful approach to develop a reliable and robust automated diagnosis system for the multiclass classification of dermoscopic images.


Subjects
Melanoma, Skin Neoplasms, Dermoscopy, Diagnosis, Computer-Assisted, Humans, Melanoma/diagnostic imaging, Neural Networks, Computer, Skin Neoplasms/diagnostic imaging
18.
Hum Factors ; 63(5): 772-787, 2021 08.
Article in English | MEDLINE | ID: mdl-33538624

ABSTRACT

OBJECTIVE: This paper aimed to investigate the robustness of driver cognitive workload detection based on electrocardiogram (ECG) when considering temporal variation and individual differences in cognitive workload. BACKGROUND: Cognitive workload is a critical component to be monitored for error prevention in human-machine systems. It may fluctuate instantaneously over time, even within the same task, and differ across individuals. METHOD: A driving simulation study was conducted to classify driver cognitive workload under four experimental conditions (baseline, N-back, texting, and N-back + texting distraction) in two repeated 1-hr blocks. Heart rate (HR) and heart rate variability (HRV) were compared among the experimental conditions and between the blocks. Random forests were built on HR and HRV to classify cognitive workload in different blocks and for different individuals. RESULTS: HR and HRV were significantly different between repeated blocks in the study, demonstrating time-induced variation in cognitive workload. The performance of cognitive workload classification across blocks and across individuals was significantly improved after normalizing HR and HRV in each block by the corresponding baseline. CONCLUSION: The temporal variation and individual differences in cognitive workload affect ECG-based cognitive workload detection, but normalization approaches relying on appropriately chosen baselines help compensate for these effects. APPLICATION: The findings provide insight into the value and limitations of ECG-based driver cognitive workload monitoring during prolonged driving for individual drivers.
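The baseline-normalization step that improved cross-block and cross-individual classification can be sketched as dividing each block's task-period HR/HRV features by the mean of that block's own baseline recording, before feeding them to the random forest. This is a minimal sketch of one plausible ratio-to-baseline normalization; the study's exact procedure may differ.

```python
import numpy as np

def baseline_normalize(block_features, block_baseline):
    """Express each HR/HRV feature relative to its block-specific baseline.

    block_features: (n_samples, n_features) task-period feature matrix.
    block_baseline: (n_baseline_samples, n_features) resting recording
    from the same block and the same driver. Dividing by the baseline
    mean compensates for temporal drift and individual differences.
    """
    return block_features / block_baseline.mean(axis=0)
```

A value of 1.2 then reads as "20% above this driver's resting level in this block", which is comparable across blocks and drivers in a way the raw beats-per-minute value is not.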


Subjects
Automobile Driving, Individuality, Automobile Driving/psychology, Cognition/physiology, Electrocardiography, Heart Rate/physiology, Humans, Workload
19.
Molecules ; 26(4)2021 Feb 19.
Article in English | MEDLINE | ID: mdl-33669834

ABSTRACT

Applied datasets can vary from a few hundred to thousands of samples in typical quantitative structure-activity/property relationship (QSAR/QSPR) and classification studies. However, the size of the datasets and the train/test split ratios can greatly affect the outcome of the models, and thus the classification performance itself. We compared several combinations of dataset sizes and split ratios with five different machine learning algorithms to find the differences or similarities and to select the best parameter settings in nonbinary (multiclass) classification. It is also known that models are ranked differently according to the performance merit(s) used. Here, 25 performance parameters were calculated for each model, and factorial ANOVA was then applied to compare the results. The results clearly show differences not just between the applied machine learning algorithms but also between the dataset sizes and, to a lesser extent, the train/test split ratios. The XGBoost algorithm outperformed the others, even in multiclass modeling. The performance parameters reacted differently to the change of sample set size; some were much more sensitive to this factor than others. Moreover, significant differences could be detected between train/test split ratios as well, exerting a great effect on the test validation of our models.
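The size-by-ratio experimental design described above can be sketched as a seeded shuffle-and-hold-out split applied across a grid of dataset sizes and test ratios, yielding one (size, ratio) cell per model run for the factorial ANOVA. The specific sizes, ratios, and placeholder data below are illustrative assumptions, not the paper's settings.

```python
import random

def train_test_split(items, test_ratio, seed=0):
    """Shuffle, then hold out round(n * test_ratio) samples for testing."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical factor grid: dataset sizes x train/test split ratios.
sizes = [100, 500, 1000]
test_ratios = [0.2, 0.25, 1 / 3]

results = []
for n in sizes:
    data = list(range(n))  # stand-in for a QSAR dataset of n compounds
    for r in test_ratios:
        train, test = train_test_split(data, r, seed=42)
        # In the real study, each cell would train five ML algorithms
        # and record 25 performance parameters for the factorial ANOVA.
        results.append((n, r, len(train), len(test)))
```

Fixing the seed per cell keeps the splits reproducible, so the ANOVA compares the factors (size, ratio, algorithm) rather than the randomness of the shuffle.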


Subjects
Algorithms, Databases as Topic, Quantitative Structure-Activity Relationship, Confidence Intervals, Machine Learning
20.
Entropy (Basel) ; 23(7)2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34206624

ABSTRACT

Features of any data type have a categorical nature that is revealed through histograms. A contingency table framed by two histograms affords directional and mutual associations based on rescaled conditional Shannon entropies for any feature pair. The heatmap of the mutual association matrix of all features becomes a roadmap showing which features are highly associative with which others. We develop our data analysis paradigm, called categorical exploratory data analysis (CEDA), with this heatmap as a foundation. CEDA is demonstrated to provide new resolutions for two topics: multiclass classification (MCC) with a single categorical response variable and response manifold analytics (RMA) with multiple response variables. We compute visible and explainable information contents with multiscale, heterogeneous, deterministic and stochastic structures in both topics. MCC involves all feature-group-specific mixing geometries of labeled high-dimensional point clouds. For each identified feature group, we devise an indirect distance measure, a robust label embedding tree (LET), and a series of tree-based binary competitions to discover and present asymmetric mixing geometries. A chain of complementary feature groups then offers a collection of mixing geometric pattern categories with multiple perspective views. RMA studies a system's regulating principles via multidimensional manifolds jointly constituted by targeted multiple response features and selected major covariate features. This manifold is marked with categorical localities reflecting major effects, and diverse minor effects are checked and identified across all localities for heterogeneity. Both MCC and RMA information contents are computed for the data's information content, with predictive inferences as by-products. We illustrate CEDA developments via the Iris data and demonstrate its applications on data taken from the PITCHf/x database.
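A directional association from a contingency table can be sketched with a rescaled conditional Shannon entropy such as 1 - H(Y|X)/H(Y), which is 0 when the row feature X tells nothing about the column feature Y and 1 when X fully determines Y. This is one common rescaling, assumed here for illustration; CEDA's exact rescaling may differ.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector; zero entries skipped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def directional_association(table):
    """1 - H(Y|X)/H(Y) for a contingency table with X as rows, Y as columns."""
    table = np.asarray(table, dtype=float)
    joint = table / table.sum()
    px = joint.sum(axis=1)           # marginal of the row feature X
    py = joint.sum(axis=0)           # marginal of the column feature Y
    h_y = entropy(py)
    h_y_given_x = sum(px[i] * entropy(joint[i] / px[i])
                      for i in range(len(px)) if px[i] > 0)
    return 1.0 - h_y_given_x / h_y
```

Computing this in both directions for every feature pair fills the (generally asymmetric) association matrix whose heatmap the paper uses as a roadmap.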
