Results 1 - 8 of 8
1.
Entropy (Basel); 25(4), 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37190359

ABSTRACT

The Korean film market has been growing rapidly, and the importance of explainable artificial intelligence (XAI) in the film industry is increasing with it. In this highly competitive market, where producing a movie incurs substantial costs, it is crucial for film industry professionals to make informed decisions. To assist these professionals, we propose DRECE (short for Dimension REduction, Clustering, and classification for Explainable artificial intelligence), an XAI-powered box office classification and trend analysis model that provides valuable insights and data-driven decision-making opportunities for the Korean film industry. The DRECE framework first transforms multi-dimensional data into two dimensions through dimensionality reduction, then groups similar data points through K-means clustering, and finally classifies the movie clusters with machine-learning models. The XAI techniques used in the model make the decision-making process transparent, providing insights that film industry professionals can use to improve box office performance and maximize profits. With DRECE, the Korean film market can be understood from new perspectives, and decision-makers can make informed choices to achieve success.
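As an illustration of the three-stage pipeline this abstract describes, here is a minimal Python sketch assuming t-SNE for the two-dimensional reduction and a random forest as the cluster classifier; the paper may use different components, and the data here are synthetic stand-ins for movie features.

```python
# Minimal DRECE-style sketch: reduce, cluster, classify (assumed components).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-in for multi-dimensional movie features

# Stage 1: dimensionality reduction to two dimensions.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

# Stage 2: group similar movies with K-means.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_2d)

# Stage 3: train a classifier to assign cluster labels from the raw features,
# so new movies can be placed without rerunning the reduction.
X_tr, X_te, y_tr, y_te = train_test_split(X, clusters, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("cluster-classification accuracy:", clf.score(X_te, y_te))
```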

2.
Sensors (Basel); 21(5), 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33652726

ABSTRACT

Multistep-ahead prediction has recently attracted much attention in electric load forecasting because it can capture sudden changes in power consumption caused by events such as fires and heat waves over the day ahead. Recurrent neural networks (RNNs), including long short-term memory (LSTM) and gated recurrent unit (GRU) networks, carry information from previous time steps into the prediction of the current one; owing to this property, they have been widely used for multistep-ahead prediction. The GRU model is simple and easy to implement, but its prediction performance is limited because it considers all input variables equally. In this paper, we propose a short-term load forecasting model using an attention-based GRU that focuses on the most crucial variables, and we demonstrate that this achieves significant performance improvements, especially when the input sequence of the RNN is long. Through extensive experiments, we show that the proposed model outperforms other recent multistep-ahead prediction models in building-level power consumption forecasting.
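A minimal PyTorch sketch of the attention-over-GRU idea described above: a GRU encodes the load sequence and a learned attention layer weights its hidden states before the forecasting head. The layer sizes, the 24-step horizon, and the single-layer attention are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class AttentionGRU(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, horizon: int = 24):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each time step
        self.head = nn.Linear(hidden, horizon)  # multistep-ahead forecast

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        out, _ = self.gru(x)                    # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(out), dim=1)  # attention over time
        context = (weights * out).sum(dim=1)    # attention-weighted summary
        return self.head(context)               # (batch, horizon)

model = AttentionGRU(n_features=8)
forecast = model(torch.randn(32, 168, 8))       # one week of hourly inputs
print(forecast.shape)                           # torch.Size([32, 24])
```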

3.
Sensors (Basel); 20(6), 2020 Mar 23.
Article in English | MEDLINE | ID: mdl-32210112

ABSTRACT

For efficient and effective energy management, accurate energy consumption forecasting is required in energy management systems (EMSs). Several artificial intelligence-based techniques have recently been proposed for accurate electric load forecasting, and complete energy consumption data are critical for the prediction. However, for diverse reasons such as device malfunctions and signal transmission errors, missing data are frequently observed in practice. Many imputation methods have been proposed to compensate for missing values, but they have achieved limited success on electric energy consumption data because the missing periods are long and the dependency on historical data is high. In this study, we propose a novel missing-value imputation scheme for electricity consumption data. The proposed scheme uses a bagging ensemble of multilayer perceptrons (MLPs), called a softmax ensemble network, wherein the ensemble weight of each MLP is determined by a softmax function. This ensemble network learns electric energy consumption data with explanatory variables and imputes the missing values. To evaluate the performance of our scheme, we performed diverse experiments on real electric energy consumption data and confirmed that the proposed scheme delivers superior performance compared to other imputation methods.
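The softmax-weighted bagging idea can be sketched as follows. Weighting each MLP by a softmax over its negative validation error is one plausible reading of the abstract, not the paper's exact definition, and the synthetic data merely stand in for real consumption records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # explanatory variables
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=1000)  # consumption
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

models, errors = [], []
for seed in range(5):                            # bagging: bootstrap resamples
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    m.fit(X_tr[idx], y_tr[idx])
    models.append(m)
    errors.append(np.mean((m.predict(X_val) - y_val) ** 2))

w = np.exp(-np.array(errors))                    # softmax over negative errors
w /= w.sum()

def impute(X_missing):
    """Softmax-weighted ensemble prediction for rows with missing values."""
    return sum(wi * m.predict(X_missing) for wi, m in zip(w, models))

print(impute(X_val[:3]))                         # imputed values for 3 rows
```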

4.
Heliyon; 10(12): e32934, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39021936

ABSTRACT

Gait recognition is the identification of individuals based on how they walk. It can identify an individual of interest without their intervention, making it well suited for surveillance from afar. Computer-aided silhouette-based gait analysis is frequently employed because of its efficiency and effectiveness. However, covariate conditions strongly influence individual recognition because they conceal essential features that help recognize individuals by their walking style. To address this, we propose a novel deep-learning framework that tackles covariate conditions in gait by proposing the regions subject to them; features extracted from those regions are neglected, with custom kernels keeping the model's performance effective. The proposed technique separates static and dynamic areas of interest, where the static areas contain the covariates, and features are then learned from the dynamic regions unaffected by covariates to recognize individuals effectively. The features are extracted using three customized kernels, and the results are concatenated to produce a fused feature map. Afterward, a CNN learns and extracts the features from the proposed regions to recognize an individual. The suggested approach is an end-to-end system that eliminates the need for manual region proposal and feature extraction, improving gait-based identification in real-world scenarios. Experiments were performed on the publicly available CASIA A and CASIA C datasets. The findings indicate that subjects wearing bags produced 90% accuracy and subjects wearing coats produced 58% accuracy. Likewise, recognizing individuals with different walking speeds also exhibited excellent results, with an accuracy of 94% for fast and 96% for slow-paced walking patterns, an improvement over previous deep-learning methods.
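A hedged sketch of the fused-kernel idea: three parallel convolutions with different kernel sizes over a silhouette, concatenated into a fused feature map that a small CNN classifies, with an optional mask keeping only the dynamic (covariate-free) regions. The kernel sizes, the masking mechanics, and the class name are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class FusedKernelGaitNet(nn.Module):
    def __init__(self, n_subjects: int = 20):
        super().__init__()
        # three customized kernels applied in parallel
        self.k1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.k2 = nn.Conv2d(1, 8, kernel_size=5, padding=2)
        self.k3 = nn.Conv2d(1, 8, kernel_size=7, padding=3)
        self.cnn = nn.Sequential(
            nn.Conv2d(24, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, n_subjects)

    def forward(self, x, dynamic_mask=None):     # x: (batch, 1, H, W) silhouettes
        if dynamic_mask is not None:             # keep only covariate-free regions
            x = x * dynamic_mask
        fused = torch.cat([self.k1(x), self.k2(x), self.k3(x)], dim=1)
        return self.fc(self.cnn(fused).flatten(1))

net = FusedKernelGaitNet()
logits = net(torch.rand(4, 1, 64, 64))
print(logits.shape)                              # torch.Size([4, 20])
```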

5.
IEEE Trans Image Process; 31: 4622-4636, 2022.
Article in English | MEDLINE | ID: mdl-35776807

ABSTRACT

In this paper, we address the Online Unsupervised Domain Adaptation (OUDA) problem and propose a novel multi-stage framework for real-world situations in which the target data are unlabeled and arrive online sequentially in batches. Most traditional manifold-based methods for the OUDA problem focus on transforming each arriving target batch to the source domain without sufficiently considering the temporal coherency and cumulative statistics among the arriving target data. To project the data from the source and target domains to a common subspace and manipulate the projected data in real time, our framework introduces a novel method, the Incremental Computation of Mean-Subspace (ICMS) technique, which computes an approximation of the mean target subspace on a Grassmann manifold and is proven to be a close approximation of the Karcher mean. Furthermore, the transformation matrix computed from the mean target subspace is applied to the next target batch in the recursive-feedback stage, aligning the target data closer to the source domain. The computation of the transformation matrix and the prediction of the next target subspace improve the performance of the recursive-feedback stage by considering the cumulative temporal dependency among the flow of target subspaces on the Grassmann manifold. The labels of the transformed target data are predicted by the pre-trained source classifier, and the classifier is then updated with the transformed data and predicted labels. Extensive experiments on six datasets were conducted to investigate in depth the effect and contribution of each stage of the framework and its performance over previous approaches in terms of classification accuracy and computational speed. In addition, experiments on traditional manifold-based learning models and neural-network-based learning models demonstrated the applicability of the framework to various types of learning models.


Subjects
Algorithms; Neural Networks, Computer; Feedback
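A rough sketch of incrementally tracking a mean subspace as target batches arrive: a running average of projection matrices followed by an eigendecomposition serves here as a simple stand-in for the paper's ICMS recursion on the Grassmann manifold, not its actual algorithm.

```python
import numpy as np

def subspace(X, d):                    # top-d PCA basis of a batch, (features, d)
    U, _, _ = np.linalg.svd(X.T @ X)
    return U[:, :d]

rng = np.random.default_rng(0)
d, P_mean, n_seen = 3, None, 0
for t in range(10):                    # target batches arriving online
    X_t = rng.normal(size=(50, 8))
    U_t = subspace(X_t, d)
    P_t = U_t @ U_t.T                  # projection matrix of this batch's subspace
    n_seen += 1
    P_mean = P_t if P_mean is None else P_mean + (P_t - P_mean) / n_seen

# mean subspace: dominant eigenvectors of the running-average projection
eigvals, eigvecs = np.linalg.eigh(P_mean)
U_mean = eigvecs[:, -d:]               # basis of the approximate mean subspace
print(U_mean.shape)                    # (8, 3)
```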
6.
J Supercomput; 78(17): 19246-19271, 2022.
Article in English | MEDLINE | ID: mdl-35754515

ABSTRACT

Population size has made disease monitoring a major concern in the healthcare system, and automatic detection has therefore become a top priority. Intelligent disease-detection frameworks enable doctors to recognize illnesses, provide stable and accurate results, and lower mortality rates. Coronavirus disease (COVID-19), an acute and severe disease, suddenly became a global health crisis, and the fastest way to limit its spread is to implement an automated detection approach. In this study, explainable COVID-19 detection in CT scans and chest X-rays is established using a combination of deep-learning and machine-learning classification algorithms. A convolutional neural network (CNN) collects deep features from the collected images, and these features are then fed into a machine-learning ensemble for COVID-19 assessment. To identify COVID-19 from images, an ensemble model is developed that includes Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The overall behavior of the proposed method is interpreted using Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). The proposed method is evaluated on two datasets containing 1,646 and 2,481 CT scan images gathered from COVID-19 patients, respectively. Comparisons with state-of-the-art approaches are also reported: the proposed approach beats existing models with 98.5% accuracy, 99% precision, and 99% recall. Further, t-SNE and explainable artificial intelligence (AI) experiments were conducted to validate the proposed approach.
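The two-stage design can be sketched as follows: random vectors stand in for the CNN's deep features, and the six listed classifiers are combined in a soft-voting ensemble. The abstract does not state the voting scheme, so that choice is an assumption.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feats = rng.normal(size=(600, 128))            # stand-in for CNN deep features
labels = (feats[:, :4].sum(axis=1) > 0).astype(int)  # COVID-19 vs. normal
X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)

ensemble = VotingClassifier([
    ("gnb", GaussianNB()), ("svm", SVC(probability=True)),
    ("dt", DecisionTreeClassifier()), ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()), ("rf", RandomForestClassifier()),
], voting="soft")                              # soft voting needs predict_proba
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```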

7.
Comput Intell Neurosci; 2022: 6892995, 2022.
Article in English | MEDLINE | ID: mdl-35178079

ABSTRACT

Daily peak load forecasting (DPLF) and total daily load forecasting (TDLF) are essential for optimal power system operation from one day to one week ahead. This study develops a Cubist-based incremental learning model to perform accurate and interpretable DPLF and TDLF. To this end, we employ time-series cross-validation to effectively reflect recent electrical load trends and patterns when constructing the model. We also analyze variable importance to identify the most crucial factors in the Cubist model. In the experiments, we used two publicly available building datasets and three educational building cluster datasets. The results showed that the proposed model yielded averages of 7.77 and 10.06 for the mean absolute percentage error and the coefficient of variation of the root mean square error, respectively. We also confirmed that temperature and holiday information are significant external factors, and that the electrical loads of one day and one week earlier are significant internal factors.


Subjects
Electricity; Neural Networks, Computer; Forecasting; Temperature
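A sketch of the walk-forward (time-series) cross-validation setup this abstract relies on; scikit-learn's gradient boosting stands in for Cubist, which has no canonical scikit-learn implementation, and the lag features mimic the abstract's internal and external factors on synthetic data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
n = 365
# synthetic daily peak load with a weekly pattern
load = 100 + 10 * np.sin(2 * np.pi * np.arange(n) / 7) + rng.normal(size=n)
temp = 20 + 5 * rng.normal(size=n)               # external factor: temperature
holiday = rng.integers(0, 2, size=n)             # external factor: holiday flag

# internal factors: load one day and one week earlier
X = np.column_stack([load[6:-1], load[:-7], temp[7:], holiday[7:]])
y = load[7:]                                     # next-day peak load target

mape = []
for tr, te in TimeSeriesSplit(n_splits=5).split(X):  # walk-forward folds
    model = GradientBoostingRegressor(random_state=0).fit(X[tr], y[tr])
    pred = model.predict(X[te])
    mape.append(np.mean(np.abs((y[te] - pred) / y[te])) * 100)
print("MAPE per fold (%):", np.round(mape, 2))
```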
8.
J Pers Med; 12(2), 2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35207676

ABSTRACT

With the development of big data and cloud computing technologies, the importance of pseudonymized information has grown. However, tools for verifying whether a de-identification methodology has been correctly applied, ensuring both data confidentiality and usability, are insufficient. This paper proposes a verification approach for de-identification techniques applied to personal healthcare information that considers data confidentiality and usability. Data are generated and preprocessed from actual statistical data, personal-information datasets, and de-identification datasets based on medical data, so that the de-identification technique can be represented as a numeric dataset. Five tree-based regression models (decision tree, random forest, gradient boosting machine, extreme gradient boosting, and light gradient boosting machine) are constructed on the de-identification dataset to effectively discover nonlinear relationships between the dependent and independent variables. The most effective model is then selected for the personal-information data, for which pseudonym processing is essential before the data can be used. Shapley additive explanations (SHAP), an explainable artificial intelligence technique, are applied to the most effective model to establish pseudonym-processing policies and to present a machine-learning process that selects an appropriate de-identification methodology.
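A sketch of the select-then-explain step: fit tree-based regressors, pick the best on a validation split, and explain it with the third-party shap package. Synthetic data stand in for the de-identification dataset, and scikit-learn's boosted trees replace XGBoost/LightGBM here to keep dependencies minimal.

```python
import numpy as np
import shap  # pip install shap
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                   # numeric de-identification features
y = X[:, 0] ** 2 + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=400)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

models = {"dt": DecisionTreeRegressor(random_state=0),
          "rf": RandomForestRegressor(random_state=0),
          "gbm": GradientBoostingRegressor(random_state=0)}
scores = {k: m.fit(X_tr, y_tr).score(X_val, y_val) for k, m in models.items()}
best = max(scores, key=scores.get)              # most effective model

explainer = shap.TreeExplainer(models[best])    # Shapley additive explanations
shap_values = explainer.shap_values(X_val)
print(best, round(scores[best], 3), shap_values.shape)
```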
