Results 1 - 20 of 22
1.
Bioinform Biol Insights ; 18: 11779322241263674, 2024.
Article in English | MEDLINE | ID: mdl-39091283

ABSTRACT

Small non-coding RNAs (sRNAs) regulate the synthesis of virulence factors and other pathogenic traits, enabling bacteria to survive and proliferate after host infection. While high-throughput sequencing data have proved useful in identifying sRNAs from the intergenic regions (IGRs) of the genome, presenting a complete genome-wide map of sRNA expression remains a challenge. Moreover, existing methodologies require multiple dependencies to execute their algorithms and lack a targeted approach for de novo sRNA identification. We developed an Isolation Forest algorithm-based method and the tool Prediction Of sRNAs using Isolation Forest for the de novo identification of sRNAs from available bacterial sRNA-seq data (http://posif.ibab.ac.in/). Using this framework, we predicted 1120 sRNAs and 46 small proteins in Mycobacterium tuberculosis. In addition, we highlight the context-dependent expression of novel sRNAs, their probable synthesis, and their potential relevance in stress response mechanisms manifested by M. tuberculosis.
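The abstract does not give implementation details, but its core idea (flagging intergenic windows whose expression profile is isolated from the genomic background) can be sketched with scikit-learn's IsolationForest. The window features and the numbers below are illustrative assumptions, not taken from the tool itself:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-window features for intergenic regions:
# mean read coverage, coverage variance, and window length.
background = rng.normal(loc=[5.0, 1.0, 150.0], scale=[1.0, 0.3, 20.0], size=(500, 3))
expressed = rng.normal(loc=[50.0, 8.0, 120.0], scale=[5.0, 1.0, 15.0], size=(10, 3))
windows = np.vstack([background, expressed])

# Windows whose coverage profile is isolated from the background
# are candidate sRNA loci (fit_predict label -1 = outlier).
clf = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
labels = clf.fit_predict(windows)
candidates = np.where(labels == -1)[0]
print(len(candidates), "candidate sRNA windows")
```

In this toy setup the ten strongly expressed windows (rows 500-509) dominate the flagged set.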

2.
Heliyon ; 10(15): e35243, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39166090

ABSTRACT

Intelligent fault detection is of paramount importance in Power Electronics Systems (PELS) to ensure operational reliability in the face of rising complexity and critical application demands. However, many existing methods struggle to provide accurate detection and diagnosis in real-world scenarios. In this regard, ResFaultyMan, a novel unsupervised isolation forest-based model, is presented in this paper for real-world fault/anomaly detection in PELS. Capitalizing on the dynamics of faults, ResFaultyMan utilizes a tree-based structure for effective anomaly isolation, demonstrating adaptability to diverse fault scenarios. The test bench, comprising a load, Triac switch, resistor, voltage source, and Pyboard microcontroller, provides a dynamic setting for performance evaluation. The integration of a Pyboard microcontroller and a Python-to-Python interface facilitates fast data transfer and sampling, enhancing the efficiency of ResFaultyMan in real-time fault detection scenarios. A comparative analysis with OneClassSVM and LocalOutlierFactor, using the Key Performance Indicators (KPIs) of Accuracy, Precision, Recall, and F1 Score, demonstrates ResFaultyMan's fault detection capabilities in PELS and its performance in related applications.
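A minimal sketch of the kind of comparison the abstract describes: an isolation forest scoring fault samples against nominal operating data, evaluated with the same KPIs. The voltage/frequency features and fault magnitudes are invented for illustration, not taken from the ResFaultyMan test bench:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(12)

# Hypothetical sampled load features (voltage in V, frequency in Hz);
# faults appear as deviations from the nominal operating region.
normal = rng.normal([230.0, 50.0], [2.0, 0.1], size=(900, 2))
faulty = rng.normal([180.0, 48.0], [10.0, 0.5], size=(100, 2))
X = np.vstack([normal, faulty])
y_true = np.r_[np.zeros(900), np.ones(100)]

# Unsupervised, tree-based anomaly isolation; -1 marks an outlier.
clf = IsolationForest(n_estimators=100, contamination=0.1, random_state=12)
y_pred = (clf.fit_predict(X) == -1).astype(int)

# Two of the KPIs used in the paper's comparison.
print("precision:", precision_score(y_true, y_pred))
print("recall:", recall_score(y_true, y_pred))
```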

3.
Sci Rep ; 14(1): 14409, 2024 06 22.
Article in English | MEDLINE | ID: mdl-38909127

ABSTRACT

Type II diabetes mellitus (T2DM) is a rising global health burden due to its rapidly increasing prevalence worldwide, and can result in serious complications. Therefore, it is of utmost importance to identify individuals at risk as early as possible to avoid long-term T2DM complications. In this study, we developed an interpretable machine learning model leveraging baseline levels of biomarkers of oxidative stress (OS), inflammation, and mitochondrial dysfunction (MD) for identifying individuals at risk of developing T2DM. In particular, Isolation Forest (iForest) was applied as an anomaly detection algorithm to address class imbalance. iForest was trained on the control group data to detect cases of high risk for T2DM development as outliers. Two iForest models were trained and evaluated through ten-fold cross-validation, the first on traditional biomarkers (BMI, blood glucose levels (BGL) and triglycerides) alone and the second including the additional aforementioned biomarkers. The second model outperformed the first across all evaluation metrics, particularly for F1 score and recall, which were increased from 0.61 ± 0.05 to 0.81 ± 0.05 and 0.57 ± 0.06 to 0.81 ± 0.08, respectively. The feature importance scores identified a novel combination of biomarkers, including interleukin-10 (IL-10), 8-isoprostane, humanin (HN), and oxidized glutathione (GSSG), which were revealed to be more influential than the traditional biomarkers in the outcome prediction. These results reveal a promising method for simultaneously predicting and understanding the risk of T2DM development and suggest possible pharmacological intervention to address inflammation and OS early in disease progression.
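The control-only training scheme described above can be sketched with scikit-learn's IsolationForest: fit on healthy controls, then treat outliers as high-risk profiles. All biomarker values below are synthetic stand-ins, and the fold loop only illustrates the cross-validation idea, not the study's exact protocol:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Hypothetical standardized biomarker panel (columns are illustrative:
# BMI, blood glucose, triglycerides, IL-10, 8-isoprostane, humanin, GSSG).
controls = rng.normal(0.0, 1.0, size=(300, 7))   # healthy profile
at_risk = rng.normal(2.5, 1.0, size=(60, 7))     # shifted profile

# Train on controls only; individuals scoring as outliers (-1)
# are flagged as at risk of developing T2DM.
scores = []
for train_idx, _ in KFold(n_splits=10, shuffle=True, random_state=1).split(controls):
    clf = IsolationForest(n_estimators=200, random_state=1)
    clf.fit(controls[train_idx])
    scores.append(clf.predict(at_risk))

# A profile is flagged when the majority of folds label it -1.
flagged = (np.mean(scores, axis=0) < 0)
print(f"{flagged.mean():.0%} of at-risk profiles flagged")
```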


Subject(s)
Biomarkers; Diabetes Mellitus, Type 2; Machine Learning; Oxidative Stress; Humans; Biomarkers/blood; Male; Female; Middle Aged; Risk Assessment/methods; Risk Factors; Blood Glucose/analysis; Blood Glucose/metabolism; Inflammation; Algorithms
4.
Sensors (Basel) ; 24(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676138

ABSTRACT

Soft sensors have been extensively utilized to approximate real-time power in wind power generation, which is challenging to measure instantaneously. Short-term wind power forecasting aims at providing a reference for intraday power grid dispatch. This study proposes a soft sensor model based on the Long Short-Term Memory (LSTM) network, combining data preprocessing with Variational Modal Decomposition (VMD) to improve wind power prediction accuracy. It adopts the isolation forest algorithm for anomaly detection on the original wind power series and handles missing data by multiple imputation. Based on the processed data samples, VMD is used to decompose the power data and reduce noise. The LSTM network is introduced to predict each modal component separately, and the component predictions are then summed to reconstruct the final wind power forecast. The experimental results show that the LSTM network trained with the Adam optimizer has better convergence accuracy. The VMD method exhibited superior decomposition outcomes due to its inherent Wiener filter capabilities, which effectively mitigate noise and forestall modal aliasing. The Mean Absolute Percentage Error (MAPE) was reduced by 9.3508%, which indicates that the LSTM network combined with the VMD method has better prediction accuracy.
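The preprocessing front end (isolation-forest cleaning followed by imputation of the removed points) can be sketched as below. The synthetic series, the (value, gradient) features, and the use of scikit-learn's IterativeImputer as a simple stand-in for multiple imputation are all assumptions; the VMD and LSTM stages are omitted:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)

# Synthetic wind power series with a daily-like cycle and injected faults.
t = np.arange(1000)
power = 50 + 20 * np.sin(2 * np.pi * t / 144) + rng.normal(0, 2, 1000)
power[[100, 400, 700]] = 500.0  # sensor fault spikes

# Step 1: flag anomalous samples with an isolation forest on
# (value, first difference) features, then blank them out.
feats = np.column_stack([power, np.gradient(power)])
mask = IsolationForest(contamination=0.01, random_state=2).fit_predict(feats) == -1
cleaned = power.astype(float).copy()
cleaned[mask] = np.nan

# Step 2: fill the gaps by regression-based imputation, using lagged
# copies of the series as predictors (a stand-in for multiple imputation).
lagged = np.column_stack([np.roll(cleaned, k) for k in range(3)])
imputed = IterativeImputer(random_state=2).fit_transform(lagged)[:, 0]
print("injected faults removed:", not np.any(imputed > 200))
```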

5.
Sensors (Basel) ; 24(5)2024 Mar 03.
Article in English | MEDLINE | ID: mdl-38475188

ABSTRACT

Hyperspectral anomaly detection is used to recognize unusual patterns or anomalies in hyperspectral data. Many spectral-spatial detection methods have been proposed in a cascaded manner; however, they often neglect the complementary characteristics of the spectral and spatial dimensions, which easily leads to a high false alarm rate. To alleviate this issue, a spectral-spatial information fusion (SSIF) method is designed for hyperspectral anomaly detection. First, an isolation forest is exploited to obtain the spectral anomaly map, in which the object-level feature is constructed with an entropy rate segmentation algorithm. Then, a local spatial saliency detection scheme is proposed to produce the spatial anomaly result. Finally, the spectral and spatial anomaly scores are integrated, followed by domain transform recursive filtering, to generate the final detection result. Experiments on five hyperspectral datasets covering ocean and airport scenes prove that the proposed SSIF produces superior detection results over other state-of-the-art detection techniques.
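The spectral stage alone, without the segmentation and spatial-saliency components, can be sketched by scoring every pixel spectrum with an isolation forest. The cube dimensions and anomaly signature below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Hypothetical 32x32 scene with 50 spectral bands; two pixels carry
# an anomalous spectral signature (e.g. a small object on water).
H, W, B = 32, 32, 50
cube = rng.normal(0.3, 0.02, size=(H, W, B))
cube[5, 5] += 0.2
cube[20, 12] += 0.2

# Score each pixel spectrum; score_samples is higher for normal
# pixels, so negate it to obtain an anomaly map.
pixels = cube.reshape(-1, B)
clf = IsolationForest(n_estimators=100, random_state=3).fit(pixels)
anomaly_map = (-clf.score_samples(pixels)).reshape(H, W)

top2 = np.argsort(anomaly_map.ravel())[-2:]
print(sorted(divmod(int(i), W) for i in top2))
```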

6.
Int J Mol Sci ; 25(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38542533

ABSTRACT

Proteomic analysis of extracellular vesicles presents several challenges due to the unique nature of these small membrane-bound structures. Alternative analyses can reveal outcomes hidden from standard statistics and help develop new biological hypotheses that may have been overlooked during the initial evaluation of the data. An analysis sequence focusing on deviating protein expression in donors' primary cells was performed, leveraging machine-learning techniques suited to small datasets, and it was applied to evaluate the protein content of extracellular vesicles gathered from mesenchymal stem cells cultured on bioactive glass discs doped or not with metal ions. The goal was to provide additional opportunities for detecting differences between experimental conditions that are not entirely revealed by classic statistical inference, offering further insights into the experimental design and assisting researchers in interpreting the outcomes. The methodology extracted a set of EV-related proteins whose differences between conditions were only partially explainable with statistics, suggesting the presence of other factors involved in the bioactive glasses' interactions with tissues. Outlier identification of extracellular vesicle protein expression levels related to biomaterial preparation was instrumental in improving the interpretation of the experimental outcomes.


Subject(s)
Extracellular Vesicles; Mesenchymal Stem Cells; Proteomics/methods; Extracellular Vesicles/metabolism; Glass
7.
Sensors (Basel) ; 24(3)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339461

ABSTRACT

In this study, we present a novel machine learning framework for web server anomaly detection that uniquely combines the Isolation Forest algorithm with expert evaluation, focusing on individual user activities within NGINX server logs. Our approach addresses the limitations of traditional methods by effectively isolating and analyzing subtle anomalies in vast datasets. Initially, the Isolation Forest algorithm was applied to extensive NGINX server logs, successfully identifying outlier user behaviors that conventional methods often overlook. We then employed DBSCAN for detailed clustering of these anomalies, categorizing them based on user request times and types. A key innovation of our methodology is the incorporation of post-clustering expert analysis. Cybersecurity professionals evaluated the identified clusters, adding a crucial layer of qualitative assessment. This enabled the accurate distinction between benign and potentially harmful activities, leading to targeted responses such as access restrictions or web server configuration adjustments. Our approach demonstrates a significant advancement in network security, offering a more refined understanding of user behavior. By integrating algorithmic precision with expert insights, we provide a comprehensive and nuanced strategy for enhancing cybersecurity measures. This study not only advances anomaly detection techniques but also emphasizes the critical need for a multifaceted approach in protecting web server infrastructures.
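The two algorithmic stages described above (isolation forest to flag outlier users, then DBSCAN to group the flagged behaviours for expert review) can be sketched as follows. The per-user log features and the injected behaviour groups are illustrative assumptions, not drawn from the study's NGINX data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)

# Hypothetical per-user features from NGINX access logs:
# requests/minute and mean inter-request time (seconds).
normal_users = np.column_stack([rng.normal(5, 1, 500), rng.normal(12, 2, 500)])
scanners = np.column_stack([rng.normal(120, 3, 8), rng.normal(0.5, 0.1, 8)])
night_batch = np.column_stack([rng.normal(60, 3, 6), rng.normal(1.0, 0.2, 6)])
users = np.vstack([normal_users, scanners, night_batch])

# Stage 1: isolate outlier user behaviour.
outliers = users[IsolationForest(contamination=0.03, random_state=4).fit_predict(users) == -1]

# Stage 2: group the outliers so an expert can review each cluster
# (e.g. restrict access, or whitelist a legitimate batch job).
clusters = DBSCAN(eps=15.0, min_samples=3).fit_predict(outliers)
print("outlier clusters found:", len(set(clusters) - {-1}))
```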

8.
Sensors (Basel) ; 24(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38257586

ABSTRACT

We aimed to improve the detection accuracy of laser methane sensors in wide-temperature application environments. In this paper, a large-scale dataset of sensor concentration measurements at different temperatures is established, and a temperature compensation model based on the ISSA-BP neural network is proposed. On the data side, a dataset of 15,810 samples from laser methane sensors at different temperatures and concentrations was established, and an Improved Isolation Forest algorithm was used to clean the data and remove outliers. On the modeling side, quasi-reflective learning, the chameleon swarm algorithm, Lévy flight, and artificial rabbits optimization are utilized to improve the initialization of the sparrow population, the explorer position, the anti-predator position, and the position of individual sparrows in each generation, respectively, thereby improving the global optimization-seeking ability of the standard sparrow search algorithm. The ISSA-BP temperature compensation model far outperforms the SVM, RF, BP, and PSO-BP models on evaluation metrics such as MAE, MAPE, RMSE, and R-squared for both the training and test sets. The results show that the proposed algorithm can significantly improve the detection accuracy of laser methane sensors in wide-temperature application environments.

9.
Med Biol Eng Comput ; 62(2): 521-535, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37943419

ABSTRACT

Long-term electroencephalogram (Long-Term EEG) has the capacity to monitor over a long period, making it a valuable tool in medical institutions. However, due to the large volume of patient data, selecting clean data segments from raw Long-Term EEG for further analysis is an extremely time-consuming and labor-intensive task. Furthermore, the varied actions of patients during recording make it difficult to denoise parts of the EEG data algorithmically, and thus lead to the rejection of these data. Therefore, tools for the quick rejection of heavily corrupted epochs in Long-Term EEG records are highly beneficial. In this paper, a new reliable and fast automatic artifact rejection method for Long-Term EEG based on Isolation Forest (IF) is proposed. Specifically, the IF algorithm is repeatedly applied to detect outliers in the EEG data, and the boundary of inliers is promptly adjusted using a statistical indicator so that the algorithm proceeds iteratively. The iteration terminates when the distance metric between clean epochs and artifact-corrupted epochs remains unchanged. Six statistical indicators (min, max, median, mean, kurtosis, and skewness) were evaluated as the centroid for adjusting the boundary during iteration, and the proposed method was compared with several state-of-the-art methods on a retrospectively collected dataset. The experimental results indicate that using the min value of the data as the centroid yields the best performance, and that the proposed method is highly efficacious and reliable for the automatic artifact rejection of Long-Term EEG, as it significantly improves overall data quality. Furthermore, the proposed method surpasses the compared methods on most data segments with poor data quality, demonstrating its superior capacity to enhance the quality of heavily corrupted data. Besides, owing to the linear time complexity of IF, the proposed method is much faster than the other methods, providing an advantage when dealing with extensive datasets.
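A simplified sketch of the iterative scheme: re-run the isolation forest on the surviving epochs, track the per-feature minimum as the boundary centroid (the paper's best-performing indicator), and stop when that centroid stabilises. The epoch features, the injected artifacts, and the fixed iteration cap are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)

# Hypothetical EEG epochs summarised by two features:
# variance and peak-to-peak amplitude (units are illustrative).
clean = np.column_stack([rng.normal(1.0, 0.1, 300), rng.normal(4.0, 0.5, 300)])
artifacts = np.column_stack([rng.normal(8.0, 2.0, 30), rng.normal(40.0, 8.0, 30)])
epochs = np.vstack([clean, artifacts])

keep = np.ones(len(epochs), dtype=bool)
prev = None
for _ in range(5):  # capped for safety in this sketch
    labels = IsolationForest(contamination=0.05, random_state=5).fit_predict(epochs[keep])
    idx = np.flatnonzero(keep)
    keep[idx[labels == -1]] = False          # reject this pass's outliers
    centroid = epochs[keep].min(axis=0)      # min-value centroid
    if prev is not None and np.allclose(centroid, prev, rtol=0.01):
        break                                # boundary has stabilised
    prev = centroid

print("artifact epochs rejected:", int((~keep[300:]).sum()), "of 30")
```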


Subject(s)
Artifacts; Signal Processing, Computer-Assisted; Humans; Retrospective Studies; Algorithms; Electroencephalography/methods
10.
Comput Biol Med ; 168: 107784, 2024 01.
Article in English | MEDLINE | ID: mdl-38042100

ABSTRACT

The use of machine learning in biomedical research has surged in recent years thanks to advances in devices and artificial intelligence. Our aim is to expand this body of knowledge by applying machine learning to pulmonary auscultation signals. Despite improvements in digital stethoscopes and attempts to find synergy between them and artificial intelligence, solutions for their use in clinical settings remain scarce. Physicians continue to infer initial diagnoses with less sophisticated means, resulting in low accuracy, leading to suboptimal patient care. To arrive at a correct preliminary diagnosis, the auscultation diagnostics need to be of high accuracy. Due to the large number of auscultations performed, data availability opens up opportunities for more effective sound analysis. In this study, digital 6-channel auscultations of 45 patients were used in various machine learning scenarios, with the aim of distinguishing between normal and abnormal pulmonary sounds. Audio features (such as fundamental frequencies F0-4, loudness, HNR, DFA, as well as descriptive statistics of log energy, RMS and MFCC) were extracted using the Python library Surfboard. Windowing, feature aggregation, and concatenation strategies were used to prepare data for machine learning algorithms in unsupervised (fair-cut forest, outlier forest) and supervised (random forest, regularized logistic regression) settings. The evaluation was carried out using 9-fold stratified cross-validation repeated 30 times. Decision fusion by averaging the outputs for a subject was also tested and found to be helpful. Supervised models showed a consistent advantage over unsupervised ones, with random forest achieving a mean AUC ROC of 0.691 (accuracy 71.11%, Kappa 0.416, F1-score 0.675) in side-based detection and a mean AUC ROC of 0.721 (accuracy 68.89%, Kappa 0.371, F1-score 0.650) in patient-based detection.
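The decision-fusion step (averaging window-level outputs per subject) is the most transferable part of the pipeline above; a minimal sketch follows. The feature dimensions, subject counts, and use of a random forest are assumptions for illustration, not the study's exact configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(11)

# Hypothetical windowed audio features: several windows per subject,
# with a subject-level label (0 = normal, 1 = abnormal sounds).
n_subjects, windows_per, n_feats = 40, 6, 12
y_subject = rng.integers(0, 2, n_subjects)
X = rng.normal(0, 1, (n_subjects, windows_per, n_feats)) + y_subject[:, None, None] * 0.8

X_flat = X.reshape(-1, n_feats)
y_flat = np.repeat(y_subject, windows_per)

clf = RandomForestClassifier(n_estimators=200, random_state=11).fit(X_flat, y_flat)

# Decision fusion: average window-level probabilities per subject,
# then threshold the fused score.
probs = clf.predict_proba(X_flat)[:, 1].reshape(n_subjects, windows_per)
fused = probs.mean(axis=1) > 0.5
print("subject-level training accuracy:", (fused == y_subject).mean())
```

In practice the fused score would be evaluated on held-out folds, as in the study's repeated stratified cross-validation.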


Subject(s)
Artificial Intelligence; Auscultation; Humans; Auscultation/methods; Algorithms; Machine Learning; Lung
11.
Sensors (Basel) ; 23(19)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37836852

ABSTRACT

As the world progresses toward a digitally connected and sustainable future, the integration of semi-supervised anomaly detection into wastewater treatment processes (WWTPs) promises to become an essential tool for preserving water resources and assuring the continuous effectiveness of plants. When these complex and dynamic systems are coupled with limited historical anomaly data or complex anomalies, it is crucial to have powerful tools capable of detecting subtle deviations from normal behavior to enable the early detection of equipment malfunctions. To address this challenge, in this study we analyzed five semi-supervised machine learning (SSL) techniques, namely Isolation Forest (IF), Local Outlier Factor (LOF), One-Class Support Vector Machine (OCSVM), Multilayer Perceptron Autoencoder (MLP-AE), and Convolutional Autoencoder (Conv-AE), for detecting different anomalies (complete, concurrent, and complex) of the Dissolved Oxygen (DO) sensor and aeration valve in a WWTP. The best results are obtained with the Conv-AE algorithm: an accuracy of 98.36% for complete faults, 97.81% for concurrent faults, and 98.64% for complex faults (a combination of incipient and concurrent faults). Additionally, we developed an anomaly detection system for the most effective semi-supervised technique, which reports the detection delay time and generates a fault alarm for each considered anomaly.

12.
Sensors (Basel) ; 22(24)2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36560054

ABSTRACT

Dynamic data (including environmental, traffic, and sensor data) were recently recognized as an important part of Open Government Data (OGD). Although these data are of vital importance in the development of data intelligence applications, such as business applications that exploit traffic data to predict traffic demand, they are prone to data quality errors produced by, e.g., sensor failures and network faults. This paper explores the quality of Dynamic Open Government Data. To that end, a single case is studied using traffic data from the official Greek OGD portal. The portal uses an Application Programming Interface (API), which is essential for effective dynamic data dissemination. Our research approach includes assessing data quality using statistical and machine learning methods to detect missing values and anomalies. Traffic flow-speed correlation analysis, seasonal-trend decomposition, and the unsupervised Isolation Forest (iForest) algorithm are used to detect anomalies. iForest anomalies are classified as sensor faults or unusual traffic conditions. The iForest algorithm is also trained on additional features, and the model is explained using explainable artificial intelligence. There are 20.16% missing traffic observations, and 50% of the sensors have 15.5% to 33.43% missing values. The average percentage of anomalies per sensor is 71.1%, with only a few sensors having less than 10% anomalies. Seasonal-trend decomposition detected 12.6% anomalies in the data of these sensors, and iForest 11.6%, with very few overlaps. To the authors' knowledge, this is the first study to explore the quality of dynamic OGD.
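The two quality checks described above (quantifying missing values, then flagging anomalies such as stuck sensors with iForest) can be sketched on a synthetic traffic series. The daily cycle, fault pattern, and (value, hour-of-day) feature pair are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(6)

# Hypothetical hourly traffic flow with a daily cycle, missing
# observations, and a stuck-at-zero sensor fault.
t = np.arange(24 * 60)
flow = 200 + 120 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, t.size)
flow[500:530] = 0.0                                      # stuck sensor
flow[rng.choice(t.size, 100, replace=False)] = np.nan    # missing values

missing_rate = np.isnan(flow).mean()

# Score the observed points on (value, hour-of-day) so the daily
# seasonality is available to the forest.
obs = ~np.isnan(flow)
X = np.column_stack([flow[obs], t[obs] % 24])
labels = IsolationForest(contamination=0.05, random_state=6).fit_predict(X)
print(f"missing: {missing_rate:.1%}, anomalies: {(labels == -1).mean():.1%}")
```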


Subject(s)
Artificial Intelligence; Machine Learning; Algorithms; Government
13.
Sensors (Basel) ; 22(23)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36502064

ABSTRACT

This study addressed the problem of localization in an ultrawide-band (UWB) network, where the positions of both the access points and the tags need to be estimated. We considered a fully wireless UWB localization system, comprising both software and hardware and featuring easy plug-and-play usability for the consumer, primarily targeting sport and leisure applications. Anchor self-localization was addressed by two-way ranging, embedding a Gauss-Newton algorithm for the estimation and compensation of antenna delays and a modified isolation forest algorithm working with a low-dimensional set of measurements for outlier identification and removal. This approach avoids time-consuming calibration procedures, and it enables accurate tag localization by multilateration of time-difference-of-arrival measurements. To assess performance and compare different algorithms, we considered an experimental campaign with data gathered by a proprietary UWB localization system.


Subject(s)
Sports; Wireless Technology; Algorithms; Computers; Technology
14.
Diagnostics (Basel) ; 12(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36552991

ABSTRACT

This paper introduces an unsupervised deep learning-driven scheme for mental task recognition using EEG signals. To this end, a multichannel Wiener filter was first applied to the EEG signals as an artifact removal algorithm to achieve robust recognition. Then, a quadratic time-frequency distribution (QTFD) was applied to extract an effective time-frequency representation of the EEG signals and capture their spectral variations over time to improve the recognition of mental tasks. The QTFD time-frequency features are employed as input to the proposed deep belief network (DBN)-driven Isolation Forest (iF) scheme to classify the EEG signals. A single DBN-based iF detector is constructed from each class's training data, with the class's samples as inliers and all other samples as anomalies (i.e., one-vs.-rest). The DBN learns pertinent information without assumptions on the data distribution, and the iF scheme is used for data discrimination. This approach is assessed using experimental data comprising five mental tasks from a publicly available database from the Graz University of Technology. Compared to the DBN-based Elliptical Envelope, the Local Outlier Factor, and state-of-the-art EEG-based classification methods, the proposed DBN-based iF detector offers superior discrimination performance on mental tasks.

15.
Entropy (Basel) ; 24(5)2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35626495

ABSTRACT

Outlier detection is an important research direction in the field of data mining. Aiming at the unstable detection results and low efficiency caused by the random division of features in the Isolation Forest algorithm, an algorithm called CIIF (Cluster-based Improved Isolation Forest) that combines clustering with Isolation Forest is proposed. CIIF first uses the k-means method to cluster the data set, selects a specific cluster to construct a selection matrix based on the clustering results, and implements the algorithm's selection mechanism through this matrix; it then builds multiple isolation trees. Finally, outlier scores are calculated from the average search length of each sample across the isolation trees, and the top-n objects with the highest scores are regarded as outliers. Comparative experiments against six algorithms on eleven real data sets show that CIIF performs better. Compared to the standard Isolation Forest algorithm, the average AUC (Area Under the ROC Curve) of CIIF is improved by 7%.
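The cluster-guided idea can be approximated in a few lines: run k-means first, then let the forest see the cluster structure. The sketch below stands in for CIIF's selection-matrix mechanism by appending the distance to the nearest centroid as an extra feature, which is an assumption of this illustration rather than the paper's actual construction:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Two dense clusters plus a handful of scattered outliers.
a = rng.normal([0, 0], 0.5, size=(200, 2))
b = rng.normal([10, 10], 0.5, size=(200, 2))
outliers = rng.uniform(-5, 15, size=(10, 2))
X = np.vstack([a, b, outliers])

# Cluster first, then give the forest a cluster-aware feature.
km = KMeans(n_clusters=2, n_init=10, random_state=7).fit(X)
dist = np.min(km.transform(X), axis=1, keepdims=True)
Xa = np.hstack([X, dist])
clf = IsolationForest(n_estimators=100, random_state=7).fit(Xa)
scores = -clf.score_samples(Xa)

# Top-n highest scores are reported as outliers.
top10 = np.argsort(scores)[-10:]
print(sorted(int(i) for i in top10))
```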

16.
JMIR Mhealth Uhealth ; 9(8): e25415, 2021 08 12.
Article in English | MEDLINE | ID: mdl-34387554

ABSTRACT

BACKGROUND: With the development and promotion of wearable devices and their mobile health (mHealth) apps, physiological signals have become a research hotspot. However, noise is complex in signals obtained in daily life, making it difficult to analyze the signals automatically and resulting in a high false alarm rate. At present, screening out the high-quality segments of signals from huge-volume data with few labels remains a problem. Signal quality assessment (SQA) is essential and can advance the mining of valuable information from signals. OBJECTIVE: The aims of this study were to design an SQA algorithm based on the unsupervised isolation forest model to classify signal quality into 3 grades: good, acceptable, and unacceptable; to validate the algorithm on labeled data sets; and to apply the algorithm to real-world data to evaluate its efficacy. METHODS: Data used in this study were collected by a wearable device (SensEcho) from healthy individuals and patients. The observation windows for electrocardiogram (ECG) and respiratory signals were 10 and 30 seconds, respectively. In the experimental procedure, the unlabeled training set was used to train the models. The validation and test sets were labeled according to preset criteria and used to evaluate the classification performance quantitatively. The validation set consisted of 3460 and 2086 windows of ECG and respiratory signals, respectively, whereas the test set was made up of 4686 and 3341 windows of signals, respectively. The algorithm was also compared with self-organizing maps (SOMs) and 4 classic supervised models (logistic regression, random forest, support vector machine, and extreme gradient boosting). One validation case is illustrated to show the application effect. The algorithm was then applied to 1144 cases of ECG signals collected from patients, and the detected arrhythmia false alarms were calculated. RESULTS: The quantitative results showed that the ECG SQA model achieved 94.97% and 95.58% accuracy on the validation and test sets, respectively, whereas the respiratory SQA model achieved 81.06% and 86.20% accuracy on the validation and test sets, respectively. The algorithm was superior to SOM and achieved moderate performance when compared with the supervised models. The example case showed that the algorithm was able to correctly classify signal quality even when there were complex pathological changes in the signals. The application results indicated that some specific types of arrhythmia false alarms, such as tachycardia, atrial premature beats, and ventricular premature beats, could be significantly reduced with the help of the algorithm. CONCLUSIONS: This study verified the feasibility of applying the unsupervised anomaly detection model to SQA. The application scenarios include reducing the false alarm rate of the device and selecting signal segments that can be used for further research.
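Grading signal quality into three classes from a single unsupervised model amounts to placing two thresholds on the isolation forest score. In the sketch below the window features are synthetic and the cut points are training-score quantiles, standing in for thresholds tuned on a labeled validation set as in the study:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(8)

# Hypothetical 10 s ECG windows summarised by two illustrative quality
# features (e.g. baseline-wander power and high-frequency noise power).
train = rng.normal([1.0, 1.0], 0.2, size=(2000, 2))  # unlabeled, mostly good

clf = IsolationForest(n_estimators=200, random_state=8).fit(train)
s_train = clf.score_samples(train)

# Two score thresholds split quality into three grades.
t_good, t_acc = np.quantile(s_train, [0.10, 0.01])

def grade(windows):
    s = clf.score_samples(windows)
    return np.select([s >= t_good, s >= t_acc], ["good", "acceptable"], "unacceptable")

print(grade(np.array([[1.0, 1.0], [1.7, 2.0], [8.0, 9.0]])))
```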


Subject(s)
Electrocardiography; Wearable Electronic Devices; Algorithms; Arrhythmias, Cardiac; Humans; Support Vector Machine
17.
Sensors (Basel) ; 21(15)2021 Jul 31.
Article in English | MEDLINE | ID: mdl-34372436

ABSTRACT

In this study, we proposed a data-driven approach to condition monitoring of marine engines. Although several unsupervised methods exist in the maritime industry, they share a common limitation: they do not explain why the model classifies specific data instances as anomalies. This study combines explainable AI techniques with an anomaly detection algorithm to overcome that limitation. As the explainable AI method, this study adopts Shapley Additive exPlanations (SHAP), which is theoretically solid and compatible with any kind of machine learning algorithm. SHAP enables us to measure the marginal contribution of each sensor variable to an anomaly, so one can easily identify which sensor is responsible for a specific anomaly. To illustrate our framework, an actual sensor stream collected from a cargo vessel over 10 months was analyzed. In this analysis, we performed hierarchical clustering with transformed SHAP values to interpret and group common anomaly patterns. We showed that anomaly interpretation and segmentation using SHAP values provide more useful interpretations than the case without SHAP values.


Subject(s)
Algorithms; Machine Learning; Rivers
18.
Sensors (Basel) ; 21(11)2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34199809

ABSTRACT

This paper proposes a new diagnostic method for sensor signals collected during semiconductor manufacturing. These signals provide important information for predicting the quality and yield of the finished product. Much of the data gathered during this process is time series data for fault detection and classification (FDC) in real time. This means that time series classification (TSC) must be performed during fabrication. With advances in semiconductor manufacturing, the distinction between normal and abnormal data has become increasingly significant as new challenges arise in their identification. One challenge is that an extremely high FDC performance is required, which directly impacts productivity and yield. However, general classification algorithms can have difficulty separating normal and abnormal data because of subtle differences. Another challenge is that the frequency of abnormal data is remarkably low. Hence, engineers can use only normal data to develop their models. This study presents a method that overcomes these problems and improves the FDC performance; it consists of two phases. Phase I has three steps: signal segmentation, feature extraction based on local outlier factors (LOF), and one-class classification (OCC) modeling using the isolation forest (iF) algorithm. Phase II, the test stage, consists of three steps: signal segmentation, feature extraction, and anomaly detection. The performance of the proposed method is superior to that of other baseline methods.
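The Phase I/Phase II structure (LOF-derived features feeding a one-class isolation forest trained on normal data only) can be sketched as below. The segment features, the novelty-mode LOF feature, and the magnitude of the abnormal shift are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(10)

# Hypothetical sensor-trace segment features: train on normal data
# only, since abnormal wafers are too rare to learn from.
normal_train = rng.normal(0, 1, size=(400, 6))
normal_test = rng.normal(0, 1, size=(50, 6))
abnormal_test = rng.normal(0, 1, size=(50, 6)) + 1.5  # subtle shift

# Phase I: LOF in novelty mode supplies a density-based feature,
# appended to the raw features before one-class iF modeling.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(normal_train)

def featurize(X):
    return np.column_stack([X, -lof.score_samples(X)])

clf = IsolationForest(n_estimators=200, random_state=10).fit(featurize(normal_train))

# Phase II: anomaly scoring on unseen segments (higher = more normal).
s_normal = clf.score_samples(featurize(normal_test))
s_abnormal = clf.score_samples(featurize(abnormal_test))
print("mean score normal vs abnormal:", s_normal.mean(), s_abnormal.mean())
```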


Subject(s)
Algorithms; Semiconductors; Diffusion
19.
Sensors (Basel) ; 21(1)2020 Dec 27.
Article in English | MEDLINE | ID: mdl-33375503

ABSTRACT

This paper proposes an indoor positioning method based on iBeacon technology that combines anomaly detection with a weighted Levenberg-Marquardt (LM) algorithm. The proposed solution uses the isolation forest algorithm for anomaly detection on the Received Signal Strength Indicator (RSSI) data collected from different iBeacon base stations, and calculates the anomaly rate of each signal source while eliminating abnormal signals. A weight matrix is then built from each anomaly ratio and the RSSI values that remain after eliminating the abnormal signals. Finally, the constructed weight matrix and the weighted LM algorithm are combined to solve for the positioning coordinates. An Android smartphone was used to verify the proposed positioning method in an indoor scene. This experimental scenario revealed an average positioning error of 1.540 m and a root mean square error (RMSE) of 1.748 m; a large majority (85.71%) of the positioning point errors were less than 3 m. Furthermore, the RMSE of the proposed method was, respectively, 38.69%, 36.60%, and 29.52% lower than that of three other methods used for comparison. The experimental results show that the proposed iBeacon-based indoor positioning method can improve the precision of indoor positioning and is highly practical.
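The pipeline above (per-anchor outlier removal and anomaly-rate weighting, then a weighted damped Gauss-Newton/LM solve) can be sketched on simulated ranges. The anchor geometry, noise levels, multipath spikes, and the simple scalar weighting below are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(9)

# Four iBeacon anchors and a true tag position (metres).
anchors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
true_pos = np.array([3.0, 2.0])

# Simulated RSSI-derived range measurements per anchor; one anchor
# suffers occasional multipath spikes.
ranges, weights = [], []
for i, a in enumerate(anchors):
    d = np.linalg.norm(true_pos - a)
    r = d + rng.normal(0, 0.1, 50)
    if i == 3:
        r[::10] += 4.0  # abnormal readings
    labels = IsolationForest(random_state=9).fit_predict(r.reshape(-1, 1))
    anomaly_rate = (labels == -1).mean()
    ranges.append(r[labels == 1].mean())   # cleaned mean range
    weights.append(1.0 - anomaly_rate)     # down-weight unreliable anchors

ranges, w = np.array(ranges), np.array(weights)

# Weighted damped Gauss-Newton (LM-style) solve on range residuals.
x, lam = np.array([4.0, 4.0]), 1e-3
for _ in range(20):
    d = np.linalg.norm(x - anchors, axis=1)
    res = w * (d - ranges)
    J = w[:, None] * (x - anchors) / d[:, None]
    x = x + np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ res)

print("estimated position:", np.round(x, 2))
```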

20.
Sensors (Basel) ; 20(22)2020 Nov 18.
Article in English | MEDLINE | ID: mdl-33218186

ABSTRACT

Visual perception-based methods are a promising means of capturing the surface damage state of wire ropes and hence provide a potential way to monitor their condition. Previous methods mainly concentrated on handcrafted feature-based flaw representation, with a classifier constructed to realize fault recognition. However, the appearance of outdoor wire ropes is seriously affected by noise such as lubricating oil, dust, and light. In addition, in real applications it is difficult to prepare a sufficient amount of flaw data to train a fault classifier. In the context of these issues, this study proposes a new flaw detection method based on a convolutional denoising autoencoder (CDAE) and Isolation Forest (iForest). The CDAE is first trained using an image reconstruction loss. Then, it is fine-tuned to minimize a cost function that penalizes the iForest-based flaw score difference between normal data and flaw data. Real hauling rope images of mine cableways were used to test the effectiveness and advantages of the newly developed method. Comparisons of various methods showed that the CDAE-iForest method performed better in discriminative feature learning and flaw isolation with a small amount of flaw training data.
