Results 1 - 7 of 7
1.
Sensors (Basel); 22(22), 2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36433399

ABSTRACT

Estimating sleep quality and diagnosing clinical sleep stages in a timely manner and at home is essential, because poor sleep is closely related to, and an important cause of, chronic diseases and impairments in daily life. However, the existing "gold standard" for diagnosis, polysomnography (PSG) with electroencephalogram (EEG) measurements, is almost infeasible to deploy ubiquitously at home, and training clinicians to diagnose sleep conditions is costly. In this paper, we propose a novel technical and systematic attempt to overcome these barriers: first, we monitor and sense sleep conditions using infrared (IR) camera videos synchronized with the EEG signal; second, we propose a novel cross-modal retrieval system, termed Cross-modal Contrastive Hashing Retrieval (CCHR), to build the relationship between EEG and IR videos, retrieving the most relevant EEG signal for a given infrared video. CCHR is novel in two respects. First, to eliminate the large cross-modal semantic gap between EEG and IR data, we design a joint cross-modal representation learning strategy that uses a memory-enhanced hard-negative mining design under the contrastive learning framework. Second, because the sleep monitoring data are large-scale (8 h per subject), we propose a contrastive hashing module that transforms the joint cross-modal features into discriminative binary hash codes, enabling efficient storage and inference. Extensive experiments on our collected cross-modal sleep condition dataset validate that the proposed CCHR achieves superior performance compared with existing cross-modal hashing methods.
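
As a rough illustration of the contrastive hashing idea described above, the sketch below (PyTorch) pairs two modality-specific encoders with an InfoNCE-style loss and a tanh relaxation for binary codes. The architectures, feature dimensions, and temperature are assumptions made for illustration, and the memory-enhanced hard-negative mining is omitted; this is not the authors' CCHR implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashEncoder(nn.Module):
    """Maps one modality's features to a relaxed binary code via tanh."""
    def __init__(self, in_dim, hash_bits=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, hash_bits), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z_eeg, z_ir, temperature=0.1):
    """InfoNCE-style loss: matched EEG/IR pairs are positives, all other pairs negatives."""
    z_eeg, z_ir = F.normalize(z_eeg, dim=1), F.normalize(z_ir, dim=1)
    logits = z_eeg @ z_ir.t() / temperature        # pairwise cross-modal similarities
    targets = torch.arange(z_eeg.size(0))          # the i-th EEG segment matches the i-th IR clip
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage with random pre-extracted features (128-d EEG, 512-d video; both dimensions assumed).
eeg_enc, ir_enc = HashEncoder(128), HashEncoder(512)
eeg_feat, ir_feat = torch.randn(8, 128), torch.randn(8, 512)
loss = contrastive_loss(eeg_enc(eeg_feat), ir_enc(ir_feat))
codes = torch.sign(ir_enc(ir_feat))                # binary hash codes used at retrieval time
```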


Subjects
Electroencephalography, Sleep Wake Disorders, Humans, Polysomnography, Sleep, Learning
2.
Sensors (Basel); 19(4), 2019 Feb 18.
Article in English | MEDLINE | ID: mdl-30781668

ABSTRACT

Accurate stride-length estimation is a fundamental component in numerous applications, such as pedestrian dead reckoning, gait analysis, and human activity recognition. Existing stride-length estimation algorithms work relatively well when a pedestrian walks in a straight line at normal speed, but their error grows rapidly in complex scenes, and inaccurate walking-distance estimation leads to large accumulated positioning errors in pedestrian dead reckoning. This paper proposes TapeLine, an adaptive stride-length estimation algorithm that automatically estimates a pedestrian's stride length and walking distance using the low-cost inertial sensors embedded in a smartphone. TapeLine consists of a Long Short-Term Memory module and denoising autoencoders that sanitize the noise in raw inertial-sensor data. In addition to accelerometer and gyroscope readings over each stride interval, higher-level features derived from earlier studies are also fed to the proposed network model for stride-length estimation. To train the model and evaluate its performance, we designed a platform that simultaneously collects inertial-sensor measurements from a smartphone as training data and pedestrian step events, actual stride length, and cumulative walking distance from a foot-mounted inertial navigation system module as training labels. We conducted extensive experiments to verify the performance of the proposed algorithm and compared it with state-of-the-art stride-length estimation (SLE) algorithms. The experimental results demonstrate that the proposed algorithm outperforms the existing methods and achieves good estimation accuracy, with a stride-length error rate of 4.63% and a walking-distance error rate of 1.43%, using only the inertial sensors embedded in a smartphone, without depending on any additional infrastructure or pre-collected database, when a pedestrian walks through complex indoor and outdoor environments (stairs, spiral stairs, escalators, and elevators) with natural motion patterns (fast walking, normal walking, slow walking, running, jumping).
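
The sketch below (PyTorch) illustrates the general shape of such a pipeline: a small denoising autoencoder cleans raw IMU windows and an LSTM regresses one stride length per stride. The layer sizes, the 6-channel layout (3-axis accelerometer plus 3-axis gyroscope), and the 100-sample window are assumptions for illustration, not TapeLine's actual configuration, and the hand-crafted higher-level features are omitted.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Per-sample denoising autoencoder over the sensor channels."""
    def __init__(self, n_channels=6, hidden=32):
        super().__init__()
        self.enc = nn.Linear(n_channels, hidden)
        self.dec = nn.Linear(hidden, n_channels)

    def forward(self, x, noise_std=0.05):
        # add noise so the AE learns to strip sensor noise (use noise_std=0 at inference)
        corrupted = x + noise_std * torch.randn_like(x)
        return self.dec(torch.relu(self.enc(corrupted)))

class StrideLengthLSTM(nn.Module):
    """LSTM that regresses one stride length per stride window."""
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):
        _, (h, _) = self.lstm(seq)        # final hidden state summarizes the stride
        return self.head(h[-1]).squeeze(-1)

dae, model = DenoisingAE(), StrideLengthLSTM()
imu_window = torch.randn(4, 100, 6)       # batch of 4 strides, 100 samples, 6 IMU channels
stride_len = model(dae(imu_window, noise_std=0.0))   # predicted stride length per stride
```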

3.
Sensors (Basel); 18(10), 2018 Oct 03.
Article in English | MEDLINE | ID: mdl-30282938

ABSTRACT

Accurate indoor positioning technology provides location-based services for a variety of applications. However, most existing indoor localization approaches (e.g., Wi-Fi and Bluetooth-based methods) rely heavily on positioning infrastructure, which prevents their large-scale deployment and limits the range over which they are applicable. Here, we propose an infrastructure-free indoor positioning and tracking approach, termed LiMag, which uses the ubiquitous magnetic field and ambient light (e.g., fluorescent, incandescent, and light-emitting diode (LED) sources) carrying no modulated information. We conducted an in-depth study of both the advantages and the challenges of leveraging magnetic field and ambient light intensity for indoor localization. Based on the insights from this study, we established a hybrid observation model that takes full advantage of both the magnetic field and the ambient light signals. To address the low discernibility of the hybrid observation model, LiMag first generates a single-step fingerprint model by vectorizing consecutive hybrid observations within each step. To track users accurately, a lightweight single-step tracking algorithm based on the single-step fingerprints and the particle filter framework was designed. LiMag leverages users' walking information and several single-step fingerprints to generate long-trajectory fingerprints that exhibit much higher location-differentiation ability than a single-step fingerprint. To accelerate particle convergence and eliminate the accumulated error of the single-step tracking algorithm, a long-trajectory calibration scheme based on the long-trajectory fingerprints is also introduced, and an undirected weighted graph model is constructed to reduce the computational overhead of long-trajectory matching. In addition to typical indoor scenarios, including offices, shopping malls, and parking lots, we also conducted experiments in more challenging scenarios, including large open-plan areas and environments with strong sunlight. The proposed algorithm achieved 75th-percentile localization accuracies of 1.8 m and 2.2 m in the office and shopping mall scenarios, respectively. In conclusion, LiMag provides infrastructure-free location-based services with significantly improved localization accuracy and coverage, as well as satisfactory robustness in complex indoor environments.
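
A minimal single-step particle-filter update over a hybrid magnetic/light observation is sketched below (NumPy). The synthetic fingerprint map, motion-noise model, and Gaussian weighting kernel are simplified assumptions for illustration rather than LiMag's actual design, and the long-trajectory calibration is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def fingerprint(pos):
    """Hypothetical map: magnetic magnitude (uT) and light intensity (lux) at a 2-D position."""
    x, y = pos[..., 0], pos[..., 1]
    return np.stack([45 + 3 * np.sin(x), 300 + 50 * np.cos(y)], axis=-1)

def step_update(particles, weights, step_len, heading, observation, sigma=(1.0, 20.0)):
    # propagate each particle by the detected step, with some motion noise
    motion = step_len * np.array([np.cos(heading), np.sin(heading)])
    particles = particles + motion + rng.normal(scale=[0.1, 0.05], size=particles.shape)
    # re-weight by similarity between the predicted and the observed hybrid fingerprint
    diff = (fingerprint(particles) - observation) / np.asarray(sigma)
    weights = weights * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))
    return particles, weights / weights.sum()

particles = rng.uniform(0, 20, size=(500, 2))   # 500 position hypotheses in a 20 m x 20 m area
weights = np.full(500, 1 / 500)
obs = np.array([46.0, 320.0])                   # measured magnetic magnitude and light level
particles, weights = step_update(particles, weights, 0.7, np.pi / 4, obs)
estimate = weights @ particles                  # weighted mean position estimate
print(estimate)
```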

4.
IEEE Trans Pattern Anal Mach Intell ; 46(8): 5288-5305, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38315607

ABSTRACT

Multi-Source-Free Unsupervised Domain Adaptation (MSFUDA) requires aggregating knowledge from multiple source models and adapting it to the target domain. Two challenges remain: 1) suboptimal, coarse-grained (domain-level) aggregation of multiple source models, and 2) risky semantics propagation based on local structures. In this article, we propose an evidential learning method for MSFUDA in which we formulate two uncertainties, Evidential Prediction Uncertainty (EPU) and Evidential Adjacency-Consistent Uncertainty (EAU), to address these two challenges respectively. The former, EPU, captures the uncertainty of a sample fitted to a source model, which suggests the preferences of target samples for different source models. Based on this, we develop an EPU-Based Multi-Source Aggregation module to achieve fine-grained, instance-level aggregation of source knowledge. The latter, EAU, provides a robust measure of consistency among adjacent samples in the target domain. Using it, we develop an EAU-Guided Local Structure Mining module to ensure the trustworthy propagation of semantics. The two modules are integrated into the Evidential Aggregation and Adaptation Framework (EAAF), which achieves state-of-the-art performance on three MSFUDA benchmarks.
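
The sketch below (PyTorch) illustrates the general evidential-uncertainty idea, where network outputs are read as Dirichlet evidence and the vacuity serves as a per-sample uncertainty, together with a simple uncertainty-weighted fusion of several source models' predictions. The softmax-over-negative-uncertainty weighting is an illustrative stand-in, not the exact EPU-based aggregation used in EAAF, and the EAU-guided structure mining is not shown.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Per-sample uncertainty from Dirichlet evidence (subjective-logic vacuity)."""
    evidence = F.softplus(logits)          # non-negative evidence per class
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    k = logits.size(-1)
    return k / alpha.sum(dim=-1)           # in (0, 1]; high value = the model knows little

def aggregate_sources(source_logits):
    """Instance-level fusion: weight each source model by its per-sample confidence."""
    probs = torch.stack([F.softmax(l, dim=-1) for l in source_logits])      # (S, B, C)
    unc = torch.stack([evidential_uncertainty(l) for l in source_logits])   # (S, B)
    weights = F.softmax(-unc, dim=0).unsqueeze(-1)                          # favour confident sources
    return (weights * probs).sum(dim=0)                                     # (B, C) fused prediction

# Toy usage: two source models' logits on a batch of 16 target samples, 10 classes.
logits_a, logits_b = torch.randn(16, 10), torch.randn(16, 10)
fused = aggregate_sources([logits_a, logits_b])
```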

5.
IEEE Trans Image Process ; 32: 2033-2048, 2023.
Article in English | MEDLINE | ID: mdl-37030696

ABSTRACT

Source-free unsupervised domain adaptation (SFUDA) aims to learn a target-domain model using unlabeled target data and the knowledge of a well-trained source-domain model. Most previous SFUDA works focus on inferring the semantics of target data based on the source knowledge. Without measuring the transferability of the source knowledge, these methods exploit it insufficiently and fail to identify the reliability of the inferred target semantics, while existing transferability measurements require either source data or target labels, which are unavailable in SFUDA. To this end, we first propose a novel Uncertainty-induced Transferability Representation (UTR), which uses uncertainty as a tool to analyse the channel-wise transferability of the source encoder in the absence of source data and target labels. The domain-level UTR unravels how transferable the encoder channels are to the target domain, and the instance-level UTR characterizes the reliability of the inferred target semantics. Second, based on the UTR, we propose a novel Calibrated Adaption Framework (CAF) for SFUDA, including i) a source knowledge calibration module that guides the target model to learn the transferable source knowledge and discard the non-transferable knowledge, and ii) a target semantics calibration module that calibrates the unreliable semantics. With the help of the calibrated source knowledge and target semantics, the model adapts to the target domain safely and, ultimately, better. Experimental results verify the effectiveness of our method and demonstrate that it achieves state-of-the-art performance on three SFUDA benchmarks. Code is available at https://github.com/SPIresearch/UTR.
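
As a loose illustration of using instance-level uncertainty to gate unreliable target semantics, the sketch below (PyTorch) filters pseudo-labels by normalized prediction entropy. Entropy, the threshold, and the gating rule are stand-ins chosen for illustration; they are not the paper's UTR definition or its CAF calibration modules.

```python
import torch
import torch.nn.functional as F

def entropy(probs, eps=1e-8):
    """Shannon entropy of each row of a probability matrix."""
    return -(probs * (probs + eps).log()).sum(dim=-1)

def select_reliable_pseudo_labels(logits, threshold=0.5):
    """Keep pseudo-labels only for target samples with low (normalized) prediction uncertainty."""
    probs = F.softmax(logits, dim=-1)
    max_ent = torch.log(torch.tensor(float(logits.size(-1))))
    unc = entropy(probs) / max_ent          # normalize uncertainty to [0, 1]
    pseudo = probs.argmax(dim=-1)
    mask = unc < threshold                  # low-uncertainty samples are treated as reliable
    return pseudo[mask], mask

# Toy usage: predictions of the adapted model on 32 unlabeled target samples, 12 classes.
target_logits = torch.randn(32, 12)
labels, keep = select_reliable_pseudo_labels(target_logits)
```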

6.
Med Biol Eng Comput ; 56(4): 571-582, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28836083

ABSTRACT

Microwave-based breast cancer detection has been proposed as a complementary approach to compensate for some drawbacks of existing breast cancer detection techniques. Among existing microwave breast cancer detection methods, machine-learning-type algorithms have recently become more popular. These focus on detecting the existence of breast tumours rather than performing imaging to identify the exact tumour position. A key component of the machine-learning approaches is feature extraction. One of the most widely used feature extraction methods is principal component analysis (PCA); however, it can be sensitive to signal misalignment. This paper proposes feature extraction methods based on time-frequency representations of microwave data, including the wavelet transform and empirical mode decomposition (EMD). Time-invariant statistics can be generated from these representations to provide features that are more robust to data misalignment. We validate the results using clinical data sets combined with numerically simulated tumour responses. Experimental results show that features extracted from the decompositions produced by the wavelet transform and EMD improve the detection performance when combined with an ensemble-selection-based classifier.
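
The sketch below shows one way such misalignment-robust features can be computed from a wavelet decomposition: per-subband statistics of a backscattered trace change little under a small time shift. It assumes the PyWavelets package; the wavelet family, decomposition level, chosen statistics, and the synthetic pulse standing in for real microwave data are all illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db4", level=4):
    """Per-subband statistics of a multilevel wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [approximation, detail_L, ..., detail_1]
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)),      # average magnitude of the subband
                  np.std(c),               # spread of the subband
                  np.sum(c ** 2)]          # subband energy
    return np.array(feats)

t = np.linspace(0, 1, 512)
trace = np.exp(-40 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 60 * t)   # synthetic backscatter pulse
feats_aligned = wavelet_features(trace)
feats_shifted = wavelet_features(np.roll(trace, 5))                 # mildly misaligned copy
# the two feature vectors remain close even though the raw samples are shifted
print(np.abs(feats_aligned - feats_shifted))
```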


Subjects
Breast Neoplasms/diagnostic imaging, Microwaves/therapeutic use, Wavelet Analysis, Breast/diagnostic imaging, Female, Humans, ROC Curve
7.
Comput Intell Neurosci ; 2017: 8501683, 2017.
Article in English | MEDLINE | ID: mdl-29270197

ABSTRACT

Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar and every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest-neighbor-graph (K-NNG) based anomaly detectors. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of the high-dimensional dataset and represent it in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets randomly sampled from the whole dataset, and the final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
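
A compact sketch of this kind of hybrid pipeline is given below (PyTorch and scikit-learn): an autoencoder compresses the data, and an ensemble of k-NN detectors built on random subsets of the compressed training set scores new points by neighbor distance. The layer sizes, k, subset size, and number of detectors are illustrative choices, not the paper's exact configuration, and distance-based k-NN scoring stands in for the K-NNG construction.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50)).astype(np.float32)          # nominal high-dimensional training data

# 1) train a small autoencoder so the data can be represented in a compact subspace
ae = nn.Sequential(nn.Linear(50, 8), nn.ReLU(), nn.Linear(8, 50))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
xb = torch.from_numpy(X)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(xb), xb)
    loss.backward()
    opt.step()

def encode(a):
    with torch.no_grad():
        return ae[1](ae[0](torch.from_numpy(a))).numpy()    # 8-d encoded representation

# 2) build several k-NN detectors on random subsets of the encoded training set
Z = encode(X)
detectors = [NearestNeighbors(n_neighbors=5).fit(Z[rng.choice(len(Z), 200, replace=False)])
             for _ in range(5)]

def anomaly_score(x_new):
    z = encode(x_new)
    # average distance to the 5th nearest neighbour across the ensemble; larger = more anomalous
    return np.mean([d.kneighbors(z)[0][:, -1] for d in detectors], axis=0)

print(anomaly_score((5 * rng.normal(size=(3, 50))).astype(np.float32)))   # far-off points score high
```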


Subjects
Algorithms; Artificial Intelligence; Datasets as Topic; Models, Theoretical; Pattern Recognition, Automated/methods; Discriminant Analysis; Humans