Results 1 - 20 of 138
1.
Stat Med ; 43(16): 3051-3061, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-38803077

ABSTRACT

The matrix profile is a fundamental tool for finding similar patterns within a time series. Existing matrix profile algorithms have been developed primarily for the normalized Euclidean distance, which may not be a proper distance measure in many settings. The methodology of this paper was motivated by statistical analysis of beat-to-beat interval (BBI) data collected from smartwatches to monitor e-cigarette users' heart rate change patterns, for which the original Euclidean distance (L2-norm) is a more suitable choice. Yet incorporating the Euclidean distance into existing matrix profile algorithms turns out to be computationally challenging, especially when the time series is long with extended query sequences. We propose a novel methodology to efficiently compute the matrix profile for long time series data based on the Euclidean distance. This methodology involves four key steps: (1) projection of the time series onto an eigenspace; (2) enhanced singular value decomposition (SVD) computation; (3) an early-abandon strategy; and (4) lower bounds derived from the first left singular vector. Simulation studies based on BBI data from the motivating example demonstrated remarkable reductions in computational time, ranging from one-fourth to one-twentieth of the time required by the conventional method. Unlike the conventional method, whose performance deteriorates sharply as the time series length or query sequence length increases, the proposed method performs consistently well across a wide range of time series and query sequence lengths.
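As a concrete reference point, the brute-force matrix profile under the plain (non-normalized) Euclidean distance can be sketched in a few lines of NumPy. This is only the naive baseline that the paper accelerates; none of the four steps (eigenspace projection, enhanced SVD computation, early abandoning, singular-vector lower bounds) are reproduced here:

```python
import numpy as np

def matrix_profile_euclidean(ts, m):
    """Naive matrix profile under the plain Euclidean distance: for each
    length-m subsequence, the distance to its nearest non-trivial match."""
    n = len(ts) - m + 1
    subs = np.lib.stride_tricks.sliding_window_view(ts, m)   # shape (n, m)
    profile = np.full(n, np.inf)
    excl = m // 2          # exclusion zone to skip trivial self-matches
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf
        profile[i] = d.min()
    return profile

# A repeated motif at positions 0 and 8 yields profile values of 0 there.
ts = np.array([0., 1., 0., 1., 0., 5., 6., 5., 0., 1., 0., 1.])
mp = matrix_profile_euclidean(ts, 4)
```

The quadratic cost of this loop over all subsequence pairs is exactly what makes long series with long queries expensive, motivating the paper's speedups.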


Subject(s)
Algorithms , Heart Rate , Humans , Heart Rate/physiology , Electronic Nicotine Delivery Systems , Statistical Models , Statistical Data Interpretation
2.
Mikrochim Acta ; 191(6): 327, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740592

ABSTRACT

In the ratiometric fluorescent (RF) strategy, the choice of fluorophores and their relative ratios enables visual quantitative detection of target analytes. This study presents a framework for optimizing ratiometric probes, employing both two-component and three-component RF designs. In a two-component ratiometric nanoprobe designed for detecting methyl parathion (MP), an organophosphate pesticide, yellow-emissive thioglycolic acid-capped CdTe quantum dots (Y-QDs, analyte-responsive) and blue-emissive carbon dots (CDs, internal reference) were utilized. Mathematical polynomial equations modeled the emission profiles of CDs and Y-QDs in the absence of MP, as well as the emission colors of Y-QDs in the presence of MP, separately. In further two-/three-component examples, the detection of dopamine hydrochloride (DA) was investigated using an RF design based on blue-emissive carbon dots (B-CDs, internal reference) and N-acetyl L-cysteine functionalized CdTe quantum dots with red/green emission colors (R-QDs/G-QDs, analyte-responsive). The colors of binary/ternary mixtures in the absence and presence of MP/DA were predicted using fitted equations and additive color theory. Finally, the Euclidean distance in the normalized CIE XYZ color space was computed between predicted colors, with the maximum distance defining the optimal concentration of fluorophores. This strategy offers a more efficient and precise method for determining optimal probe concentrations than a trial-and-error approach, and the model's effectiveness was confirmed through experimental validation.
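The final optimization step, choosing the fluorophore ratio that maximizes the Euclidean distance between the predicted colors with and without analyte, can be sketched as follows. The color coordinates and ratios below are hypothetical placeholders, not data from the study:

```python
import numpy as np

def color_distance(c1, c2):
    """Euclidean distance between two colors; the study works in a
    normalized CIE XYZ color space."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

def best_ratio(ratios, colors_without, colors_with):
    """Pick the fluorophore ratio whose predicted colors with and without
    the analyte are farthest apart, i.e. most visually distinguishable."""
    d = [color_distance(a, b) for a, b in zip(colors_without, colors_with)]
    return ratios[int(np.argmax(d))]

# Hypothetical predicted colors for three candidate fluorophore ratios:
ratios = [0.25, 0.50, 0.75]
without = [(0.30, 0.35, 0.35), (0.32, 0.34, 0.34), (0.31, 0.36, 0.33)]
with_mp = [(0.31, 0.35, 0.34), (0.45, 0.30, 0.25), (0.33, 0.35, 0.32)]
optimal = best_ratio(ratios, without, with_mp)
```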

3.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204960

ABSTRACT

Sleep is a vital physiological process for human health, and accurately detecting sleep states is crucial for diagnosing sleep disorders. This study presents a novel algorithm for identifying sleep stages from EEG signals that is more efficient and accurate than state-of-the-art methods. The key innovation lies in a piecewise linear data reduction technique in the time domain called the Halfwave method. This method simplifies EEG signals into a piecewise linear form of reduced complexity while preserving sleep stage characteristics. A feature vector of six statistical features is then built from parameters of the reduced piecewise linear function. We tested the proposed method on the MIT-BIH Polysomnographic Database, which includes more than 80 hours of recordings of different biomedical signals covering six main sleep classes. Among the classifiers evaluated, the K-Nearest Neighbor classifier performed best with the proposed method. Experimentally, the average sensitivity, specificity, and accuracy of the proposed algorithm on the Polysomnographic Database over eight records were estimated at 94.82%, 96.65%, and 95.73%, respectively. The algorithm is also computationally efficient, making it suitable for real-time applications such as sleep monitoring devices. Its robust performance across sleep classes suggests potential for widespread clinical adoption, advancing the detection and management of sleep problems.
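The classification stage, a K-Nearest Neighbor vote over a six-dimensional statistical feature vector, can be sketched as follows. The features shown are generic stand-ins; the paper's actual features come from parameters of its Halfwave piecewise-linear reduction, which is not reproduced here:

```python
import numpy as np

def six_stat_features(segment):
    # Illustrative placeholder features; the paper derives its six features
    # from its Halfwave piecewise-linear reduction of the EEG signal.
    segment = np.asarray(segment, float)
    return np.array([segment.mean(), segment.std(), segment.min(),
                     segment.max(), np.median(segment), np.ptp(segment)])

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training vectors
    under the Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.bincount(y_train[nearest]).argmax())
```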


Subject(s)
Algorithms , Electroencephalography , Polysomnography , Computer-Assisted Signal Processing , Sleep Stages , Sleep-Wake Disorders , Humans , Electroencephalography/methods , Sleep Stages/physiology , Sleep-Wake Disorders/diagnosis , Sleep-Wake Disorders/physiopathology , Polysomnography/methods , Female , Male , Adult , Factual Databases
4.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066027

ABSTRACT

Strip steel plays a crucial role in modern industrial production, where enhancing the accuracy and real-time capability of surface defect classification is essential. However, acquiring and annotating defect samples for training deep learning models is challenging, and these samples often contain redundant information. These issues hinder the classification of strip steel surface defects. To address them, this paper introduces ODNet (Orthogonal Decomposition Network), a highly real-time network designed for few-shot strip steel surface defect classification. ODNet uses ResNet as its backbone and incorporates orthogonal decomposition to reduce feature redundancy. It further integrates skip connections to preserve essential correlation information in the samples and prevent excessive elimination. The model improves parameter efficiency by employing a Euclidean-distance classifier, and the orthogonal decomposition not only reduces redundant image information but also satisfies that classifier's requirement for orthogonal input. Extensive experiments on the FSC-20 benchmark demonstrate that ODNet achieves superior real-time performance, accuracy, and generalization compared to alternative methods, effectively addressing the challenges of few-shot strip steel surface defect classification.
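A Euclidean-distance classifier in few-shot settings is commonly a nearest-prototype rule: embed the support samples, average them per class, and assign a query to the closest class mean. A minimal sketch of that rule (not ODNet's actual implementation, which operates on orthogonally decomposed ResNet features):

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign `query` to the class whose prototype (mean of its support
    embeddings) is nearest in Euclidean distance."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    d = np.linalg.norm(protos - query, axis=1)
    return int(classes[d.argmin()])
```

Because the rule has no trainable classifier weights beyond the embedding itself, it adds no parameters, which is the parameter-efficiency argument the abstract makes.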

5.
Entropy (Basel) ; 26(3)2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38539709

ABSTRACT

One of the crucial steps in multi-criteria decision analysis is establishing the importance of the criteria and the relationships between them. This paper proposes an extended Hellwig's method (H_EM) that uses entropy-based weights and the Mahalanobis distance to address this issue. By incorporating the concept of entropy, weights are determined from the information content of the matrix data, while the Mahalanobis distance handles interdependencies among criteria, improving the performance of the proposed framework. To illustrate its relevance and effectiveness, this study applies the extended H_EM method to assess progress toward Sustainable Development Goal 4 (education) of the 2030 Agenda among European Union countries in 2021. Results obtained with the extended Hellwig's method are compared against its other variants. The comparison reveals that the ranking of EU countries in the education area depends strongly on the choice of distance measure (Euclidean or Mahalanobis) and the weighting scheme (equal or entropy-based). Overall, this study highlights the potential of the proposed method for complex decision-making scenarios with interdependent criteria.
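Two ingredients of the method can be sketched directly: entropy-based weights computed from the decision matrix, and the Mahalanobis distance that replaces the Euclidean one to account for criterion interdependence. The full Hellwig aggregation is not reproduced here:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based criterion weights: criteria whose values are spread
    less evenly across alternatives carry more information and receive
    larger weights. X holds positive values; rows are alternatives."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    E = -k * np.sum(P * np.log(P, where=P > 0, out=np.zeros_like(P)), axis=0)
    d = 1.0 - E                      # degree of divergence per criterion
    return d / d.sum()

def mahalanobis(x, y, cov):
    """Mahalanobis distance, which accounts for correlations between
    criteria (unlike the Euclidean distance)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

With an identity covariance matrix the Mahalanobis distance reduces to the Euclidean one, which is exactly the comparison axis the study explores.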

6.
Biol Sport ; 41(3): 15-28, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38952897

ABSTRACT

To improve soccer performance, coaches should be able to replicate the physical efforts of a match during training sessions, and small-sided games (SSGs) are widely used for this goal. The main purpose of this study was to develop similarity and overload scores quantifying the degree to which an SSG replicates match intensity. GPS units were used to collect external load data, which were grouped into three vectors (kinematic, metabolic, and mechanical). The Euclidean distance between training and match vectors was computed and converted into a similarity score, while the average pairwise difference between vectors was used to derive the overload scores. Three similarity scores (Simkin, Simmet, Simmec) and three overload scores (OVERkin, OVERmet, OVERmec) were defined for the kinematic, metabolic, and mechanical vectors. Simmet and OVERmet were excluded from further analysis because they showed a very large correlation (r > 0.7, p < 0.01) with Simkin and OVERkin. The scores were then analysed by team level (First team vs. U19 team) and SSG characteristics across playing roles. An independent-sample t-test showed (p < 0.01) that the First team presented greater Simkin (d = 0.91), OVERkin (d = 0.47), and OVERmec (d = 0.35) scores. Moreover, a generalized linear mixed model (GLMM) was employed to evaluate differences according to SSG characteristics. The results suggest that a given SSG format can yield different similarity and overload scores depending on playing position. This process could simplify data interpretation and allow SSGs to be categorized by their scores.
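The construction of the scores can be sketched as follows. The paper does not publish its exact distance-to-similarity conversion, so the 1/(1+d) mapping below is a hypothetical choice used only for illustration:

```python
import numpy as np

def similarity_score(train_vec, match_vec):
    """Convert the Euclidean distance between a training-session load
    vector and a match load vector into a bounded similarity in (0, 1].
    The 1/(1+d) mapping is an assumed, illustrative conversion."""
    d = np.linalg.norm(np.asarray(train_vec, float) -
                       np.asarray(match_vec, float))
    return 1.0 / (1.0 + d)

def overload_score(train_vec, match_vec):
    """Average pairwise difference between vector components: positive
    values indicate the SSG exceeded match intensity on average."""
    return float(np.mean(np.asarray(train_vec, float) -
                         np.asarray(match_vec, float)))
```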

7.
Environ Sci Technol ; 57(8): 3434-3444, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36537350

ABSTRACT

Machine learning (ML) provides an efficient means of rapidly predicting the life-cycle environmental impacts of chemicals, but low prediction accuracy and poor model interpretability remain challenges. To address these issues, we focused on data processing, using a mutual information-permutation importance (MI-PI) feature selection method to filter irrelevant molecular descriptors from the input data. This improved model interpretability by preserving the physicochemical meanings of the original molecular descriptors without generating new variables. We also applied a weighted Euclidean distance method to mine the data most relevant to the predicted targets by quantifying the contribution of each feature, thereby improving prediction accuracy. On this basis, we developed artificial neural network (ANN) models for predicting the life-cycle environmental impacts of chemicals, with R2 values of 0.81, 0.81, 0.84, 0.75, 0.73, and 0.86 for global warming, human health, metal depletion, freshwater ecotoxicity, particulate matter formation, and terrestrial acidification, respectively. The ML models were interpreted with the Shapley additive explanation method by quantifying the contribution of each input molecular descriptor to the environmental impact categories. This work suggests that combining MI-PI feature selection with weighted Euclidean distance-based source data selection has promising potential to improve the accuracy and interpretability of models predicting the life-cycle environmental impacts of chemicals.
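The weighted Euclidean distance used for source-data selection can be sketched as below; the selection helper is a hypothetical illustration of mining the samples most relevant to a target, not the paper's exact procedure:

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance: each feature's squared difference is
    scaled by a non-negative importance weight before summing, so features
    deemed more relevant to the target dominate the distance."""
    x, y, w = (np.asarray(v, float) for v in (x, y, w))
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

def select_most_relevant(X_source, x_target, w, n):
    """Hypothetical helper: pick the n source samples closest to the
    target under the weighted distance."""
    d = np.array([weighted_euclidean(row, x_target, w) for row in X_source])
    return np.argsort(d)[:n]
```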


Subject(s)
Environment , Global Warming , Humans , Neural Networks (Computer) , Fresh Water , Machine Learning
8.
Sensors (Basel) ; 23(24)2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38139754

ABSTRACT

Face verification, crucial for identity authentication and access control in our digital society, faces significant challenges when comparing images taken in diverse environments that vary in distance, angle, and lighting conditions. These disparities often reduce accuracy due to significant resolution changes. This paper introduces an adaptive face verification solution tailored to diverse conditions, focusing on Unmanned Aerial Vehicle (UAV)-based public safety applications. Our approach features an innovative adaptive verification threshold algorithm and an optimised operation pipeline, designed to accommodate varying distances between the UAV and the human subject. The proposed solution was implemented on a UAV platform and empirically compared with several state-of-the-art solutions, showing a 15% improvement in accuracy.

9.
Knowl Inf Syst ; 65(2): 855-868, 2023.
Article in English | MEDLINE | ID: mdl-36373008

ABSTRACT

The most straightforward ways to assess the similarity and difference between two sets are distance and cosine similarity metrics. Cosine similarity is the cosine of the angle between two n-dimensional vectors in n-dimensional space. Even when two vectors differ greatly in magnitude, cosine similarity can readily find commonalities, since it depends only on the angle between them; it is widely used because it is simple, well suited to sparse data, and concerned with direction rather than magnitude. The distance function, in turn, is an elegant and canonical quantitative tool for measuring the similarity or difference between two sets. This work presents new distance and cosine similarity metrics for Fermatean fuzzy sets. First, the definitions of the new measures on Fermatean fuzzy sets are presented and their properties explored. Since the cosine measure does not satisfy the axioms of a similarity measure, we propose a method to construct similarity measures between Fermatean fuzzy sets based on the proposed cosine similarity and Euclidean distance measures that does satisfy those axioms. Furthermore, we obtain a cosine distance measure between Fermatean fuzzy sets from the relationship between similarity and distance measures, and extend the technique for order of preference by similarity to ideal solution (TOPSIS) to the proposed cosine distance measure, which can handle the related decision-making problems from both geometric and algebraic points of view. Finally, a practical example illustrates the reasonableness and effectiveness of the proposed method, which is also compared with existing methods.
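The contrast drawn in the opening sentences, that cosine similarity ignores magnitude while Euclidean distance does not, is easy to see on plain real vectors (the paper's actual measures operate on Fermatean fuzzy grades, which are not modeled here):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; insensitive to magnitude."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    """Straight-line distance; sensitive to magnitude."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

# Two vectors pointing the same way but differing tenfold in size:
u, v = [1.0, 2.0], [10.0, 20.0]
```

Here `cosine_similarity(u, v)` is 1 (identical direction) even though the Euclidean distance between them is large, which is precisely why the two measures can rank pairs differently.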

10.
BMC Nurs ; 22(1): 422, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-37950226

ABSTRACT

BACKGROUND: The quality of care a medical institution provides to patients is directly affected by the job satisfaction of its nurses. Job satisfaction is driven, among other things, by employees' subjective expectations of what their work should provide in return. The aim of the study is to evaluate and compare the job satisfaction of hospital nurses in the Czech Republic in 2011 and 2021 by identifying differences between their personal preferences and perceived saturation. METHODS: The respondents were hospital nurses in the Czech Republic in 2011 and 2021. A purpose-developed questionnaire was used to determine the job satisfaction factors, which were ranked by personal preference, by perceived saturation, and by the differences between the two. The Euclidean distance model was used for evaluation; it captures both the ranking of factors and their significance, given by the distances between them. RESULTS: At the top of hospital nurses' personal preferences, salary and patient care occupy the first two places at a similar distance. Salary was the factor most preferred by hospital nurses in both evaluated periods, and at the same time showed the greatest discrepancy between personal preference and perceived saturation. By contrast, image of the profession and working conditions were sufficiently saturated by the employer in both periods, but nurses do not strongly prefer these factors. CONCLUSIONS: Salary and patient care (i.e., the mission of the nurse's work itself) stand at the top of hospital nurses' personal preferences, holding an exclusive position among the factors; we consider it important that hospital management emphasize them when managing hospital nurses. At the same time, patient care is perceived by hospital nurses as one of the most saturated factors, in contrast to salary, which sits at the opposite pole as the least saturated factor and therefore emerges from the comparison as the factor with the greatest divergence. These conclusions hold for both compared periods. The new method of data evaluation was successfully tested.

11.
Entropy (Basel) ; 25(3)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36981399

ABSTRACT

Because the Fuzzy C-Means algorithm cannot account for the influence of different features and exponential constraints on high-dimensional, complex data, a fuzzy clustering algorithm based on a non-Euclidean distance combining feature weights and entropy weights is proposed. The proposed algorithm builds on the Fuzzy C-Means soft clustering algorithm to handle high-dimensional, complex data. Its objective function is modified with two different entropy terms and a non-Euclidean distance computation, and the distance formula improves the efficiency of extracting the contribution of each feature. The first entropy term helps minimize cluster dispersion and maximize negative entropy to control the clustering process, which also promotes association between samples. The second entropy term controls the feature weights, since different features carry different weights in the clustering process. Experiments on real-world datasets indicate that the proposed algorithm gives better clustering results than other algorithms, and demonstrate its robustness through parameter sensitivity analysis and comparison of distance formulas. In summary, the improved algorithm raises classification performance under noise and on high-dimensional datasets, increases computational efficiency, performs well on real-world high-dimensional datasets, and encourages the development of robust, noise-resistant high-dimensional fuzzy clustering algorithms.

12.
Remote Sens Environ ; 273: 112958, 2022 May.
Article in English | MEDLINE | ID: mdl-36081832

ABSTRACT

The unprecedented availability of optical satellite data in cloud-based computing platforms, such as Google Earth Engine (GEE), opens new possibilities for developing crop trait retrieval models from the local to the planetary scale. Hybrid retrieval models are attractive for these platforms because they combine the advantages of physically-based radiative transfer models (RTM) with the flexibility of machine learning regression algorithms. Previous research with GEE primarily relied on processing bottom-of-atmosphere (BOA) reflectance data, which requires atmospheric correction. In the present study, we implemented hybrid models directly in GEE for processing Sentinel-2 (S2) Level-1C (L1C) top-of-atmosphere (TOA) reflectance data into crop traits. To achieve this, a training dataset was generated using the leaf-canopy RTM PROSAIL in combination with the atmospheric model 6SV. Gaussian process regression (GPR) retrieval models were then established for eight essential crop traits: leaf chlorophyll content, leaf water content, leaf dry matter content, fractional vegetation cover, leaf area index (LAI), and the upscaled leaf variables (canopy chlorophyll content, canopy water content, and canopy dry matter content). An important prerequisite for implementation in GEE is that the models be sufficiently lightweight for efficient, fast processing. The training dataset was successfully reduced by 78% using the active learning technique of Euclidean distance-based diversity (EBD). With the EBD-GPR models, highly accurate validation results for LAI and the upscaled leaf variables were obtained against in situ field data from the Munich-North-Isar (MNI) validation site, with normalized root mean square errors (NRMSE) from 6% to 13%. On an independent validation dataset of similar crop types (Italian Grosseto test site), the retrieval models showed moderate to good performance for canopy-level variables, with NRMSE from 14% to 50%, but failed for the leaf-level estimates. Maps obtained over the MNI site were further compared against Sentinel-2 Level 2 Prototype Processor (SL2P) vegetation estimates generated with the ESA Sentinel Application Platform (SNAP) Biophysical Processor, showing high consistency between the two retrievals (R2 from 0.80 to 0.94). Finally, thanks to GEE's seamless processing capability, the TOA-based mapping was applied over the whole of Germany at 20 m spatial resolution, including prediction uncertainty. The obtained maps provide confidence in the developed EBD-GPR retrieval models for integration in the GEE framework and national-scale mapping from S2-L1C imagery. In summary, the proposed retrieval workflow demonstrates the feasibility of routinely processing S2 TOA data into crop trait maps anywhere on Earth, as required for operational agricultural applications.
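The EBD reduction step can be sketched as greedy farthest-point selection: repeatedly keep the candidate whose Euclidean distance to the already-selected set is largest, so the retained training samples stay diverse. This is a generic sketch; the exact EBD variant used in the study may differ in initialization and stopping rule:

```python
import numpy as np

def ebd_select(X, n_select):
    """Greedy Euclidean distance-based diversity selection: starting from
    the first sample, repeatedly add the candidate farthest from the set
    already selected (maximin criterion)."""
    selected = [0]
    while len(selected) < n_select:
        # distance of every sample to its nearest already-selected sample
        d = np.min(np.linalg.norm(X[:, None, :] - X[None, selected, :],
                                  axis=2), axis=1)
        d[selected] = -np.inf          # never re-select a sample
        selected.append(int(d.argmax()))
    return selected
```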

13.
Sensors (Basel) ; 22(21)2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36366045

ABSTRACT

Advances in neural networks have garnered growing interest in applications of machine vision to livestock management, but simpler landmark-based approaches suitable for small, early-stage exploratory studies still represent a critical stepping stone towards these more sophisticated analyses. While such approaches are well validated for calibrated images, the practical limitations of calibrated imaging systems restrict their applicability in working farm environments. The aim of this study was to validate novel algorithmic approaches for improving the reliability of scale-free image biometrics acquired from uncalibrated images of minimally restrained livestock. Using a database of 551 facial images from 108 dairy cows, we demonstrate that a simple geometric projection-based approach to metric extraction, leveraging a priori knowledge, yields more intuitive and reliable morphometric measurements than conventional informationally complete Euclidean distance matrix analysis. Where uncontrolled variations in image annotation, camera position, and animal pose could not be fully controlled through morphometric design, we further demonstrate how modern unsupervised machine learning tools can exploit the systematic error structures created by such lurking variables to generate bias correction terms, which in turn improve the reliability of downstream statistical analyses and dimension reduction.


Subject(s)
Livestock , Unsupervised Machine Learning , Animals , Female , Cattle , Reproducibility of Results , Neural Networks (Computer) , Mathematics
14.
Biom J ; 64(6): 1023-1039, 2022 08.
Article in English | MEDLINE | ID: mdl-35561036

ABSTRACT

Hepatocellular carcinoma (HCC) is the most common primary cancer of the liver, and finding new biomarkers for its early detection is of high clinical importance. As with many other diseases, cancer is progressive in nature, and in cancer biomarker studies the true disease status of recruited individuals often comprises more than two classes. The receiver operating characteristic (ROC) surface is a well-known statistical tool for assessing a biomarker's discriminatory ability in trichotomous settings, and the volume under the ROC surface (VUS) is an overall measure of that ability. In practice, clinicians often need cutoffs for decision-making. A popular approach for computing such cutoffs is the Youden index and its recent three-class generalization; a drawback of this method is that it treats the data in a pairwise fashion rather than considering all the data simultaneously. Minimizing the Euclidean distance from the perfection corner to the ROC surface (the "closest to perfection" method) is an alternative to the Youden index that may be preferable in some settings. When this method is employed, inferences are needed around the resulting true class rates/fractions that correspond to the optimal operating point. In this paper, we provide an inferential framework for deriving marginal confidence intervals (CIs) and joint confidence spaces (CSs) around the corresponding true class fractions in trichotomous settings. We explore parametric and nonparametric approaches for constructing such CIs and CSs, evaluate them through extensive simulations, and apply them to a real data set of HCC patients.
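The closest-to-perfection criterion picks the pair of cutoffs whose true class fractions (TCF1, TCF2, TCF3) lie nearest, in Euclidean distance, to the perfection corner (1, 1, 1). A brute-force empirical sketch for three ordered classes (the paper's inferential framework around this point is not reproduced):

```python
import numpy as np

def distance_to_perfection(tcf1, tcf2, tcf3):
    """Euclidean distance of the operating point to the corner (1, 1, 1)."""
    return float(np.sqrt((1 - tcf1) ** 2 + (1 - tcf2) ** 2 + (1 - tcf3) ** 2))

def closest_to_perfection_cutoffs(x1, x2, x3):
    """Grid-search cutoffs c1 < c2 minimizing the distance to perfection.
    x1, x2, x3: marker values for the low, middle, and high classes."""
    grid = np.unique(np.concatenate([x1, x2, x3]))
    best_d, best_c = np.inf, None
    for i, c1 in enumerate(grid):
        for c2 in grid[i + 1:]:
            tcfs = (np.mean(x1 <= c1),
                    np.mean((x2 > c1) & (x2 <= c2)),
                    np.mean(x3 > c2))
            d = distance_to_perfection(*tcfs)
            if d < best_d:
                best_d, best_c = d, (float(c1), float(c2))
    return best_d, best_c
```

Unlike a pairwise Youden-style search, the criterion evaluates all three class fractions jointly at each candidate cutoff pair.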


Subject(s)
Hepatocellular Carcinoma , Liver Neoplasms , Biomarkers , Hepatocellular Carcinoma/diagnosis , Humans , Liver Neoplasms/diagnosis , ROC Curve
15.
Stat Med ; 40(20): 4522-4539, 2021 09 10.
Article in English | MEDLINE | ID: mdl-34080733

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC) is an aggressive type of cancer with a 5-year survival rate of less than 5%. As in many other diseases, its diagnosis may involve progressive stages, and biomarker studies of PDAC commonly recruit three groups: healthy individuals, patients with chronic pancreatitis, and PDAC patients. Early detection and accurate classification of disease state are crucial for successful treatment. ROC analysis is the most popular way to evaluate the performance of a biomarker, and the Youden index is commonly employed for cutoff derivation. In the three-class case, however, the so-called generalized Youden index has the drawback of not accommodating the full data set when estimating the optimal cutoffs. In this article, we explore the use of the Euclidean distance from the ROC to the perfection corner for deriving cutoffs in trichotomous settings. We construct an inferential framework involving both parametric and nonparametric techniques. Our methods accommodate the full information of a given data set and thus provide more accurate estimates of the decision-making cutoffs than a Youden-based strategy. We evaluate our approaches through extensive simulations and illustrate them on a PDAC biomarker study.
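For intuition, the contrast between a Youden-type cutoff and a distance-to-perfection cutoff can be written down in the simpler two-class case; the article itself works in the three-class setting, so this is only an illustrative analogue:

```python
import numpy as np

def youden_cutoff(neg, pos):
    """Two-class Youden cutoff: maximize J = sensitivity + specificity - 1."""
    grid = np.unique(np.concatenate([neg, pos]))
    j = [np.mean(pos > c) + np.mean(neg <= c) - 1.0 for c in grid]
    return float(grid[int(np.argmax(j))])

def closest_to_perfection_cutoff(neg, pos):
    """Cutoff minimizing the Euclidean distance of the ROC point
    (FPR, TPR) to the perfection corner (0, 1)."""
    grid = np.unique(np.concatenate([neg, pos]))
    d = [np.hypot(np.mean(neg > c), 1.0 - np.mean(pos > c)) for c in grid]
    return float(grid[int(np.argmin(d))])
```

On well-separated data the two criteria agree; they can diverge on overlapping distributions, which is where the choice between them matters.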


Subject(s)
Pancreatic Neoplasms , Biomarkers , Confidence Intervals , Humans , Pancreatic Neoplasms/diagnosis , ROC Curve
16.
Proc Natl Acad Sci U S A ; 115(23): 5914-5919, 2018 06 05.
Article in English | MEDLINE | ID: mdl-29784801

ABSTRACT

Change-point detection has been carried out in terms of the Euclidean minimum spanning tree (MST) and shortest Hamiltonian path (SHP), with successful applications in determining the authorship of a classic novel, detecting change in a network over time, detecting cell divisions, and more. However, these Euclidean graph-based tests may fail if a dataset contains random interference. To solve this problem, we present a powerful non-Euclidean SHP-based test that is consistent and distribution-free. Simulations show that the test is more powerful than both the Euclidean MST- and SHP-based tests and the non-Euclidean MST-based test. Its applicability is illustrated by detecting both landing and departure times in video data of bees' flower visits.

17.
Sensors (Basel) ; 21(24)2021 Dec 09.
Article in English | MEDLINE | ID: mdl-34960322

ABSTRACT

The inertial navigation system has high short-term positioning accuracy but suffers from cumulative error. WiFi fingerprint localization has no cumulative error, but mismatching is common. A popular technique therefore integrates inertial navigation with WiFi fingerprint matching. The particle filter uses dead reckoning as the state-transfer equation and the difference between inertial navigation and WiFi fingerprint matching as the observation equation. Floor map information is introduced to detect whether particles cross walls; if so, their weight is set to zero. For the remaining particles, an adaptive particle filter is proposed that considers the distance between current and historical particles: the adaptive factor increases the weight of highly trusted particles and reduces the weight of less trusted ones. This paper also proposes a multidimensional Euclidean distance algorithm to reduce WiFi fingerprint mismatching. Experimental results indicate that the proposed algorithm achieves high positioning accuracy.
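The fingerprint-matching step reduces to a nearest-neighbor search under the Euclidean distance between RSSI vectors. A minimal sketch; the paper's multidimensional variant, which augments the fingerprint vector to curb mismatching, is not reproduced:

```python
import numpy as np

def match_fingerprint(rssi, fingerprints, locations):
    """Return the stored location whose RSSI fingerprint vector is closest
    (in Euclidean distance) to the observed RSSI vector."""
    d = np.linalg.norm(np.asarray(fingerprints, float) -
                       np.asarray(rssi, float), axis=1)
    return locations[int(np.argmin(d))]

# Hypothetical radio map: one RSSI vector (dBm, three access points) per spot.
fps = np.array([[-40., -70., -60.],
                [-80., -45., -55.]])
locs = ["room A", "room B"]
```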

18.
Entropy (Basel) ; 23(3)2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33668760

ABSTRACT

In this paper we propose a novel transform-domain steganography technique: hiding a message in the components of a linear combination of high-order eigenface vectors. By high-order we mean eigenvectors responsible for dimensions with a low share of overall image variance, which usually correspond to high-frequency image details. The study found that when the method was trained on large enough data sets, image quality was nearly unaffected by modification of some of the linear combination coefficients used as PCA-based features. The proposed method is limited to facial images, but in the era of social media, the hundreds of thousands of selfies uploaded every day do not arouse suspicion as a potential steganographic communication channel. To the best of our knowledge, no popular steganography method utilizes the eigenface image domain. We therefore performed an extensive evaluation of our method, using at least 200,000 facial images for training and robustness evaluation. The obtained results are very promising, and a numerical comparison with other state-of-the-art algorithms showed that eigenface-based steganography is among the most robust methods against compression attacks. The research is reproducible: we use a publicly accessible data set, and our implementation is available for download.
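The embedding idea, overwriting the coefficients of the lowest-variance eigenfaces and reconstructing, can be sketched with an orthonormal basis. The random orthogonal matrix below stands in for a PCA-trained eigenface basis, and none of the paper's quantization or robustness machinery is included:

```python
import numpy as np

def embed_in_high_order_coeffs(img_vec, mean_face, eigenfaces, payload):
    """Project a flattened face onto an orthonormal eigenface basis (rows
    of `eigenfaces`), overwrite the last len(payload) coefficients (the
    high-order, low-variance directions), and reconstruct the image."""
    coeffs = eigenfaces @ (img_vec - mean_face)
    coeffs[-len(payload):] = payload
    return mean_face + eigenfaces.T @ coeffs

def extract_from_high_order_coeffs(stego_vec, mean_face, eigenfaces, k):
    """Recover the payload by re-projecting the stego image."""
    return (eigenfaces @ (stego_vec - mean_face))[-k:]

# Demo: a random orthogonal matrix standing in for trained eigenfaces.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
mean_face, img = rng.normal(size=8), rng.normal(size=8)
payload = np.array([0.5, -0.25])
stego = embed_in_high_order_coeffs(img, mean_face, Q, payload)
recovered = extract_from_high_order_coeffs(stego, mean_face, Q, 2)
```

Because the low-variance directions carry little perceptual information, overwriting their coefficients changes the reconstructed face only slightly, which is the property the paper exploits.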

19.
Appl Intell (Dordr) ; 51(6): 3844-3864, 2021.
Article in English | MEDLINE | ID: mdl-34764570

ABSTRACT

The accuracy of graph-based learning techniques relies on the underlying topological structure and the affinity between data points, which are assumed to lie on a smooth Riemannian manifold. However, the assumption of local linearity in a neighborhood does not always hold, so a Euclidean distance-based affinity that determines the graph edges may fail to represent the true connectivity strength between data points. Moreover, the affinity between data points is influenced by the distribution of the data around them, which should be reflected in the affinity measure. In this paper, we propose two techniques, CCGA_L and CCGA_N, that use cross-covariance-based graph affinity (CCGA) to represent the relation between data points in a local region. CCGA_L also explores the additional connectivity between data points that share a common local neighborhood. CCGA_N considers the influence of the respective neighborhoods of two immediately connected data points, further enhancing the affinity measure. Manifold learning experiments on synthetic datasets show that CCGA represents the affinity between data points more accurately, yielding better low-dimensional representations. Manifold regularization experiments on a standard image dataset further indicate that the proposed CCGA-based affinity accurately identifies and includes the influence of data points and their common neighborhood, increasing classification accuracy. The proposed method outperforms existing state-of-the-art manifold regularization methods by a significant margin.

20.
BMC Bioinformatics ; 21(Suppl 17): 481, 2020 Dec 14.
Article in English | MEDLINE | ID: mdl-33308142

ABSTRACT

BACKGROUND: Predicting patient outcome in medical intensive care units (ICU) may help in developing and investigating early interventional strategies. Several ICU scoring systems have been developed and are used to predict the clinical outcome of ICU patients; these scores are calculated from clinical, physiological, and biochemical characteristics of patients. Heart rate variability (HRV) is a correlate of cardiac autonomic regulation and has been shown to be a marker of poor clinical prognosis. HRV can be measured non-invasively from the electrocardiogram and monitored in real time, and has been identified as a promising 'electronic biomarker' of disease severity. Patients with traumatic brain injury (TBI) form a subset of critically ill ICU patients with significant morbidity and mortality and often difficult-to-predict outcomes. Changes in HRV in brain-injured patients have been reported in several studies. This study aimed to use continuous HRV collected over the first 24 h after ICU admission in severe TBI patients to develop a patient outcome prediction system. RESULTS: A feature extraction strategy was applied to measure HRV fluctuation over time, and a prediction model was developed from HRV measures with a genetic algorithm for feature selection. The result (AUC: 0.77) compared favourably with earlier reported scoring systems (highest AUC: 0.76), encouraging further development and practical application. CONCLUSIONS: The prediction models built with different feature sets indicate that HRV-based parameters may predict brain injury patient outcome better than the previously adopted illness severity scores.


Subject(s)
Traumatic Brain Injuries/diagnosis , Heart Rate/physiology , Algorithms , Area Under Curve , Traumatic Brain Injuries/pathology , Electrocardiography , Humans , Intensive Care Units , Logistic Models , Prognosis , ROC Curve , Severity of Illness Index