ABSTRACT
BACKGROUND: Continuous assessment and remote monitoring of cognitive function in individuals with mild cognitive impairment (MCI) enable tracking of therapeutic effects and treatment adjustment to achieve better clinical outcomes. While standardized neuropsychological tests are inconvenient for this purpose, wearable sensor technology collecting physiological and behavioral data appears promising as a source of proxy measures of cognitive function. The objective of this study was to evaluate the ability of digital physiological features, derived from sensor data from wrist-worn wearables, to predict neuropsychological test scores in individuals with MCI. METHODS: We used a dataset collected in a 10-week single-arm clinical trial in older adults (50-70 years old) diagnosed with amnestic MCI (N = 30) who received a digitally delivered multidomain therapeutic intervention. Cognitive performance was assessed before and after the intervention using the Neuropsychological Test Battery (NTB), from which composite scores were calculated (executive function, processing speed, immediate memory, delayed memory, and global cognition). The Empatica E4, a wrist-worn medical-grade device, was used to collect physiological data, including blood volume pulse, electrodermal activity, and skin temperature. We processed the sensor data and extracted a range of physiological features. We used NTB scores interpolated over 10-day intervals to test the predictability of scores over short periods and to make maximal use of the available wearable data. In addition, we used individually centered data representing deviations from personal baselines. Supervised machine learning was used to train models predicting NTB scores from digital physiological features and demographics. Performance was evaluated using leave-one-subject-out and leave-one-interval-out cross-validation. RESULTS: The final sample included 96 aggregated data intervals from 17 individuals. In total, 106 digital physiological features were extracted. Physiological features, especially measures of heart rate variability (HRV), correlated more strongly with executive function than with the other cognitive composites. The model predicted actual executive function scores with a correlation of r = 0.69 and intra-individual changes in executive function scores with r = 0.61. CONCLUSIONS: Our findings demonstrate that wearable-based physiological measures, primarily HRV, have the potential to be used for continuous assessment of cognitive function in individuals with MCI.
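To make the evaluation protocol concrete, the following is a minimal sketch of leave-one-subject-out cross-validation in Python, using synthetic stand-ins for the features, scores, and subject IDs; the model choice and all names are illustrative, not the study's exact pipeline.

```python
# Hypothetical sketch: leave-one-subject-out evaluation of a regressor that
# maps wearable-derived physiological features to an executive-function score.
# Feature values, subject assignment, and the model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_intervals, n_features = 96, 106               # sample sizes reported above
X = rng.normal(size=(n_intervals, n_features))  # digital physiological features
y = rng.normal(size=n_intervals)                # interpolated NTB composite score
subjects = rng.integers(0, 17, size=n_intervals)  # subject ID per 10-day interval

logo = LeaveOneGroupOut()
preds = np.empty_like(y)
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

r, _ = pearsonr(y, preds)   # correlation between actual and predicted scores
print(f"leave-one-subject-out r = {r:.2f}")
```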
Subjects
Cognitive Dysfunction, Wearable Electronic Devices, Aged, Humans, Middle Aged, Cognition, Cognitive Dysfunction/diagnosis, Machine Learning, Neuropsychological Tests, Clinical Trials as Topic
ABSTRACT
Milk composition, particularly the milk fatty acid profile, has been extensively studied as an indicator of the metabolic status of dairy cows during early lactation. In addition to milk biomarkers, on-farm sensor data also hold potential to provide insights into the metabolic health status of cows. While numerous studies have explored the collection of a wide range of sensor data from cows, the combination of milk biomarkers and on-farm sensor data remains relatively underexplored. This study therefore aims, first, to identify associations between metabolic blood variables, milk variables, and various on-farm sensor data and, second, to examine the supplementary or substitutive potential of these data sources. To this end, data on metabolic status and on-farm data from 85 lactations were collected from 3 wk before calving up to 5 wk after calving. Blood samples were taken on d 3, 6, 9, and 21 after calving for determination of β-hydroxybutyrate (BHB), nonesterified fatty acids (NEFA), glucose, insulin-like growth factor-1 (IGF-1), insulin, and fructosamine. Milk samples were taken during the first 3 wk in lactation and analyzed by mid-infrared spectroscopy for fat, protein, lactose, urea, milk fatty acids, and BHB. Walking activity, feed intake, and body condition score (BCS) were monitored throughout the study. Linear mixed-effects models were used to study the associations between the blood variables and (1) milk variables (i.e., milk models); (2) on-farm data (i.e., on-farm models), consisting of activity and dry matter intake analyzed during the dry period ([D]) and lactation ([L]) and BCS analyzed only during the dry period ([D]); and (3) the combination of both. In addition, to assess whether milk variables can explain variation left unexplained by the on-farm model and vice versa, Pearson marginal residuals from the milk and on-farm models were extracted and related to the on-farm and milk variables, respectively. The milk models had higher coefficients of determination (R2) than the on-farm models, except for IGF-1 and fructosamine. The highest marginal R2 values were found for BHB, glucose, and NEFA (0.508, 0.427, and 0.303 vs. 0.468, 0.358, and 0.225 for the milk models and on-farm models, respectively). Combining milk and on-farm data particularly increased the R2 values of models assessing blood BHB, glucose, and NEFA concentrations, with the fixed effects of the milk and on-farm variables jointly reaching marginal R2 values of 0.608, 0.566, and 0.327, respectively. Milk C18:1 was confirmed as an important milk variable in all models, particularly for blood NEFA prediction. On-farm data were considerably more capable of describing the IGF-1 concentration than milk data (marginal R2 of 0.192 vs. 0.086), mainly due to dry matter intake before calving. BCS [D] was the most important on-farm variable in relation to blood BHB and NEFA and could explain additional variation in blood BHB concentration compared with models based solely on milk variables. This study has shown that on-farm data combined with milk data can provide additional information on the metabolic health status of dairy cows. On-farm data are of interest for further study in predictive modeling, particularly because early-warning predictions using milk data are highly challenging or even unavailable.
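As an illustration of the modeling approach, here is a hedged sketch of a linear mixed-effects model with a random intercept per cow, using statsmodels on synthetic data; the column names and effect sizes are assumptions, not the study's fitted model.

```python
# Illustrative sketch (not the authors' exact model): a linear mixed-effects
# model relating a blood variable to milk variables, with cow as random effect
# to account for repeated sampling on d 3, 6, 9, and 21.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "cow": rng.integers(0, 85, n),          # 85 lactations, repeated measures
    "milk_fat": rng.normal(4.2, 0.5, n),    # assumed % fat
    "milk_c181": rng.normal(0.8, 0.2, n),   # assumed C18:1 content
})
df["bhb_blood"] = (0.3 + 0.1 * df["milk_fat"] + 0.5 * df["milk_c181"]
                   + rng.normal(0, 0.2, n))

model = smf.mixedlm("bhb_blood ~ milk_fat + milk_c181", df, groups=df["cow"])
result = model.fit()
print(result.summary())
```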
Subjects
Insulin-Like Growth Factor I, Milk, Female, Cattle, Animals, Milk/metabolism, Insulin-Like Growth Factor I/metabolism, Fatty Acids, Nonesterified, Farms, Fructosamine/metabolism, Energy Metabolism, Lactation, Fatty Acids/metabolism, Glucose/metabolism, Biomarkers/metabolism, 3-Hydroxybutyric Acid, Postpartum Period
ABSTRACT
BACKGROUND: Pervasive technologies are used to investigate various phenomena outside the laboratory setting, providing valuable insights into real-world human behavior and interaction with the environment. However, conducting longitudinal field trials in natural settings remains challenging because of factors such as low recruitment success, high dropout rates driven by participation burden, and data quality issues with wireless sensing in changing environments. OBJECTIVE: This study gathers insights and lessons from 3 real-world longitudinal field studies assessing human behavior and derives the factors that affected their research success. We aim to categorize challenges, observe how they were managed, and offer recommendations for designing and conducting studies involving human participants and pervasive technology in natural settings. METHODS: We developed a qualitative coding framework to categorize and address the unique challenges encountered in real-life studies, relating to influential factor identification, stakeholder management, data harvesting and management, and analysis and interpretation. We applied inductive reasoning to identify issues and related mitigation actions in 3 separate field studies carried out between 2018 and 2022. These 3 field studies relied on gathering annotated sensor data. The topics involved stress and environmental assessment in an office and a school, collecting self-reports, wrist device data, and environmental sensor data from 27 participants for 3.5 to 7 months; work activity recognition at a construction site, collecting observations and wearable sensor data from 15 participants for 3 months; and stress recognition in location-independent knowledge work, collecting self-reports and computer use data from 57 participants for 2 to 5 months. Our key extension of the coding framework used a stakeholder identification method to identify the type and role of the involved stakeholder groups, evaluating the nature and degree of their involvement and their influence on field trial success. RESULTS: Our analysis identifies 17 key lessons related to planning, implementing, and managing a longitudinal, sensor-based field study on human behavior. The findings highlight the importance of recognizing different stakeholder groups, including those not directly involved but whose areas of responsibility are affected by the study and who therefore have the power to influence it. In general, customizing communication strategies to engage stakeholders on their terms and addressing their concerns and expectations is essential, while planning for dropouts, offering incentives to participants, conducting field tests to identify problems, and using tools for quality assurance are relevant to successful outcomes. CONCLUSIONS: Our findings suggest that field trial implementation should include additional effort to clarify the expectations of stakeholders and to communicate with them throughout the process. Our framework provides a structured approach that can be adopted by other researchers in the field, facilitating robust and comparable studies across different contexts. Continuously managing possible challenges will lead to greater success in longitudinal field trials and in developing future technology-based solutions.
Subjects
Stakeholder Participation, Humans, Longitudinal Studies, Behavior, Female, Male
ABSTRACT
This research demonstrates an efficient scheme for the early detection of cardiorespiratory complications in pandemics that uses wearable electrocardiogram (ECG) sensors for pattern generation and convolutional neural networks (CNN) for decision analytics. In health-related outbreaks, timely and early diagnosis of such complications is decisive in reducing mortality rates and alleviating the burden on healthcare facilities. Existing methods rely on clinical assessments, medical history reviews, and hospital-based monitoring, which are valuable but have limitations in accessibility, scalability, and timeliness, particularly during pandemics. The proposed scheme commences by deploying wearable ECG sensors on the patient's body. These sensors collect data by continuously monitoring the patient's cardiac activity and respiratory patterns. The collected raw data are transmitted securely over a wireless link to a centralized server and stored in a database. The stored data are then passed through a preprocessing stage that extracts relevant, informative features such as heart rate variability and respiratory rate. The preprocessed data are fed into the CNN model for the classification of normal and abnormal cardiorespiratory patterns. To achieve high accuracy in abnormality detection, the CNN model is trained on labeled data with optimized parameters. The performance of the proposed scheme is evaluated under different scenarios, showing robust detection of abnormal cardiorespiratory patterns with a sensitivity of 95% and a specificity of 92%. Prominent observations that highlight the potential for early intervention include subtle changes in heart rate variability and early signs of respiratory distress. These findings show the significance of wearable ECG technology in improving pandemic management strategies and informing public health policies, enhancing preparedness and resilience in the face of emerging health threats.
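A minimal sketch of the classification stage follows, assuming PyTorch and a fixed-length single-lead ECG window; the architecture and dimensions are illustrative, not the trained network reported above.

```python
# Minimal sketch, not the paper's trained network: a 1-D CNN that classifies
# fixed-length ECG-derived windows as normal vs. abnormal. Window length,
# channel counts, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ECGCNN(nn.Module):
    def __init__(self, n_samples=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 16), 2),  # normal vs. abnormal logits
        )

    def forward(self, x):          # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))

model = ECGCNN()
dummy = torch.randn(8, 1, 1000)    # batch of 8 ECG windows
logits = model(dummy)
print(logits.shape)                # torch.Size([8, 2])
```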
Subjects
Early Diagnosis, Electrocardiography, Neural Networks, Computer, Wearable Electronic Devices, Humans, Electrocardiography/instrumentation, COVID-19/diagnosis
ABSTRACT
With the continuous advancement of sensing technology, applying large amounts of sensor data to practical prediction tasks with artificial intelligence methods has become an important direction of development. In sensed images and remote sensing meteorological data, the dynamic changes of prediction targets relative to their background information often exhibit pronounced dynamic characteristics. Previous prediction methods did not specifically analyze the dynamic change information of prediction targets at multiple spatiotemporal scales. This paper therefore proposes a neural prediction network based on perceiving multi-scale spatiotemporal dynamic changes (PMSTD-Net). A Multi-Scale Space Motion Change Attention Unit (MCAU) is designed to perceive the local appearance and spatial displacement dynamics of prediction targets at different scales, ensuring that attention adequately captures dynamic information along the spatial dimensions. On this basis, the paper proposes a Multi-Scale Spatiotemporal Evolution Attention (MSEA) unit, which further integrates the spatial change features perceived by MCAU units at higher channel dimensions and learns spatiotemporal evolution characteristics at different scales, effectively predicting the dynamic characteristics and regularities of targets in sensor data. In experiments on standard spatiotemporal prediction benchmarks such as Moving MNIST, the KTH video prediction dataset, and Human3.6M, PMSTD-Net achieves prediction performance surpassing previous methods. We construct a GPM satellite remote sensing precipitation dataset, demonstrating the network's advantages in perceiving multi-scale spatiotemporal dynamic changes in remote sensing meteorological data. Finally, extensive ablation experiments validate the contribution of each module in PMSTD-Net.
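For orientation only, the following generic PyTorch block illustrates the multi-scale idea: parallel convolutions with different receptive fields, reweighted by a learned attention gate. It is not the actual MCAU/MSEA design, whose details are in the paper; all kernel sizes and dimensions are assumptions.

```python
# A generic illustration of multi-scale spatial feature extraction with a
# simple learned attention over scales -- not the PMSTD-Net architecture.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Parallel branches perceive structure/motion at different scales.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)
        ])
        # Per-branch attention scores from globally pooled features.
        self.gate = nn.Sequential(nn.Linear(channels * 3, 3), nn.Softmax(dim=-1))

    def forward(self, x):                      # x: (B, C, H, W)
        feats = [b(x) for b in self.branches]  # three scale-specific maps
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)
        w = self.gate(pooled)                  # (B, 3) attention over scales
        return sum(w[:, i, None, None, None] * feats[i] for i in range(3))

x = torch.randn(2, 64, 32, 32)
print(MultiScaleAttention()(x).shape)          # torch.Size([2, 64, 32, 32])
```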
ABSTRACT
This paper discusses the design and implementation of a portable IoT station. Communication and data synchronization issues across several installations are addressed, making possible a detailed analysis of the entire system during operation. The system operator requires a synchronized data stream that combines multiple communication protocols under a single time stamp. The hardware selected for the portable IoT station complies with International Electrotechnical Commission (IEC) industrial standards. A short discussion of interface customization shows how easily the hardware can be modified to integrate with almost any system. A programmable logic controller enables Node-RED to be used; this open-source middleware defines operations for each global variable mapped in the Modbus register. Two applications are presented and discussed, each with a distinct methodology for publishing and visualizing the acquired data. The portable IoT station is highly customizable, built around a modular structure, and provides a solid platform for future research and development of dedicated algorithms. The paper also demonstrates how the portable IoT station can be implemented in systems where time-based data synchronization is essential, while offering seamless implementation and operation.
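As a rough sketch of the synchronization idea, the snippet below polls several Modbus holding registers and stamps each batch with a single shared timestamp, assuming pymodbus 3.x, a PLC at a placeholder address, and invented register offsets and variable names.

```python
# A loose sketch, assuming pymodbus 3.x and a PLC at 192.168.0.10: values read
# over Modbus are stamped with one shared timestamp so downstream consumers
# (e.g., Node-RED flows) see a single synchronized record per cycle.
import json
import time
from pymodbus.client import ModbusTcpClient

VARIABLES = {"flow_rate": 0, "pressure": 1, "temperature": 2}  # assumed offsets

client = ModbusTcpClient("192.168.0.10")
client.connect()
while True:
    stamp = time.time()                       # one time stamp for the batch
    record = {"timestamp": stamp}
    for name, address in VARIABLES.items():
        rr = client.read_holding_registers(address, count=1, slave=1)
        record[name] = rr.registers[0]
    print(json.dumps(record))                 # hand off to middleware
    time.sleep(1.0)
```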
ABSTRACT
Noise in sensor data has a substantial impact on the reliability and accuracy of machine learning (ML) algorithms. A comprehensive framework is proposed to analyze the effects of diverse noise inputs in sensor data on the accuracy of ML models. Through extensive experimentation and evaluation, this research examines the resilience of a LightGBM model to ten different noise models, including Flicker, Impulse, Gaussian, Brown, and Periodic noise. A thorough analytical approach using various statistical metrics in a Monte Carlo simulation setting was followed. Gaussian and Colored noise were found to be detrimental, whereas Flicker and Brown were identified as safe noise categories. Interestingly, a safe threshold for noise intensity was found in the case of Gaussian noise that was absent for the other noise types. This work employed the use case of changeover detection on computer numerical control (CNC) manufacturing machines and the corresponding data from the publicly funded research project OBerA.
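A minimal Monte Carlo sketch of the noise-injection experiment might look as follows, using a synthetic dataset rather than the OBerA changeover data; the noise levels and repetition counts are illustrative.

```python
# Monte Carlo sketch: inject zero-mean Gaussian noise of increasing intensity
# into test features and track LightGBM accuracy. Synthetic data, not OBerA.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = lgb.LGBMClassifier(random_state=0).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.1, 0.5, 1.0, 2.0):        # noise intensity sweep
    accs = [
        accuracy_score(y_te, model.predict(X_te + rng.normal(0, sigma, X_te.shape)))
        for _ in range(20)                     # Monte Carlo repetitions
    ]
    print(f"sigma={sigma:.1f}  mean acc={np.mean(accs):.3f}")
```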
ABSTRACT
To date, significant progress has been made in railway anomaly detection using technologies such as real-time data analytics, the Internet of Things, and machine learning. As technology continues to evolve, the ability to detect and respond to anomalies in railway systems is once again in the spotlight. However, railway anomaly detection faces challenges related to vast infrastructure, dynamic conditions, aging assets, and adverse environmental conditions on the one hand, and the scale, complexity, and critical safety implications of railway systems on the other. Our study is underpinned by three objectives: to identify time series anomaly detection methods applied to railway sensor device data, to recognize the advantages and disadvantages of these methods, and to evaluate their effectiveness. To address these objectives, the first part of the study involved a systematic literature review, for which we adopted well-established guidelines to structure and visualize the review. In the second part, we investigated the effectiveness of selected machine learning methods in a series of controlled experiments. To evaluate the predictive performance of each method, a five-fold cross-validation approach was applied to obtain reliable estimates of accuracy and generality. Based on the calculated accuracy, the top three methods are CatBoost (96%), Random Forest (91%), and XGBoost (90%), whereas the lowest accuracies are observed for One-Class Support Vector Machines (48%), Local Outlier Factor (53%), and Isolation Forest (55%). As the industry moves toward a zero-defect paradigm on a global scale, ongoing research efforts focus on improving existing methods and developing new ones that contribute to the safety and quality of rail transportation. In this sense, at least three avenues for future research are worth considering: testing richer data sets, hyperparameter optimization, and implementing methods not included in the current study.
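To make the evaluation setup concrete, here is a hedged sketch of five-fold cross-validated accuracy for three of the compared methods on placeholder data; it reproduces the protocol, not the reported railway results.

```python
# Five-fold cross-validated accuracy for three of the compared classifiers on
# synthetic imbalanced data standing in for the railway sensor datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

X, y = make_classification(n_samples=1000, n_features=15, weights=[0.9],
                           random_state=0)     # anomalies as the rare class
models = {
    "CatBoost": CatBoostClassifier(verbose=0, random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "XGBoost": XGBClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```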
ABSTRACT
Tunnel fires are generally detected using various sensors, including those measuring temperature, CO concentration, and smoke concentration. To address the ambiguity and inconsistency in multi-sensor data, this paper proposes a tunnel fire detection method based on an improved Dempster-Shafer (DS) evidence theory for multi-sensor data fusion. To resolve the problem of evidence conflict in DS theory, a two-level multi-sensor data fusion framework is adopted. The first level performs feature fusion on data of the same sensor type, removing ambiguous data to obtain characteristic data and calculating the basic probability assignment (BPA) function from the feature intervals. The second level derives basic probability numbers from the BPA, calculates the degree of evidence conflict, normalizes the BPA to obtain the relative conflict degree, and optimizes the BPA using a trust coefficient. Classical DS evidence theory is then used to integrate the evidence and obtain the probability of tunnel fire occurrence. Six fire scenarios are formed by varying the heat release rate, tunnel wind speed, and fire location. Sensor monitoring data under each simulated condition are extracted and fused using the improved DS evidence theory. The results show detection probabilities of 67.5%, 83.5%, 76.8%, 83%, 79.6%, and 84.1% at the moment fire occurs in the six scenarios, respectively, with fire identified in approximately 2.4 s, an improvement of 64.7% to 70% over traditional methods. This demonstrates the feasibility and superiority of the proposed method and its significant importance for ensuring personnel safety.
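For reference, a self-contained sketch of classical Dempster-Shafer combination over the frame {fire, no_fire} is shown below; the BPA values are invented, and the improved conflict-handling steps described above are omitted.

```python
# Classical Dempster's rule of combination for two sensors reporting on the
# frame {fire, no_fire}. Mass functions are given as {frozenset: mass} dicts.
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions; returns (fused BPA, conflict K)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # K: total conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

F, N = frozenset({"fire"}), frozenset({"no_fire"})
U = F | N                                       # ignorance: the whole frame
m_temp  = {F: 0.6, N: 0.1, U: 0.3}              # temperature sensor BPA (invented)
m_smoke = {F: 0.7, N: 0.2, U: 0.1}              # smoke sensor BPA (invented)

fused, k = dempster(m_temp, m_smoke)
print({tuple(s): round(v, 3) for s, v in fused.items()}, "conflict:", round(k, 3))
```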
ABSTRACT
Digital image compression is applied to reduce camera bandwidth and storage requirements, but real-time lossless compression on a high-speed, high-resolution camera is a challenging task. This article presents a hardware implementation of a Bayer colour filter array lossless image compression algorithm on an FPGA-based camera. The compression algorithm reduces colour and spatial redundancy and employs Golomb-Rice entropy coding. A rule limiting the maximum code length is introduced for the edge cases. The proposed algorithm is based on integer operators for efficient hardware implementation. The algorithm is first verified as a C++ model and then implemented on an AMD-Xilinx Zynq UltraScale+ device using VHDL. An effective tree-like pipeline structure is proposed to concatenate the codes of compressed pixel data into a bitstream representing 16 parallel pixels. The proposed parallel compression achieves up to a 56% reduction in image size for high-resolution images. The pipelined implementation, free of any state machine, supports operating frequencies up to 320 MHz. Parallelised operation on 16 pixels effectively increases data throughput to 40 Gbit/s while keeping total memory requirements low thanks to real-time processing.
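A software illustration of Golomb-Rice coding with a maximum-code-length escape is given below; the limit value and escape format are assumptions that mirror the described rule in spirit, not the exact hardware behavior.

```python
# Golomb-Rice coding sketch: quotient in unary, remainder in k bits, with an
# escape path when the unary part would exceed a maximum code length.
def golomb_rice_encode(n, k, max_len=32, raw_bits=16):
    """Encode non-negative n with Rice parameter k; escape if code too long."""
    q, r = n >> k, n & ((1 << k) - 1)
    code = "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"
    if len(code) <= max_len:
        return code
    # Escape: emit max_len '1' bits as a marker, then the value in raw binary.
    return "1" * max_len + format(n, f"0{raw_bits}b")

for value in (0, 5, 200, 40000):
    print(value, "->", golomb_rice_encode(value, k=4))
```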
ABSTRACT
Predicting anomalies in manufacturing assembly lines is crucial for reducing time and labor costs and improving processes. For instance, in rocket assembly, premature part failures can lead to significant financial losses and labor inefficiencies. With the abundance of sensor data in the Industry 4.0 era, machine learning (ML) offers potential for early anomaly detection. However, current ML methods for anomaly prediction have limitations, with F1 scores of only 50% and 66% for prediction and detection, respectively. This is due to challenges such as the rarity of anomalous events, the scarcity of high-fidelity simulation data (actual data are expensive to obtain), and the complex relationships between anomalies, which are not easily captured by traditional ML approaches. These challenges relate to two dimensions of anomaly prediction: predicting when anomalies will occur and understanding the dependencies between them. This paper introduces a new method called Robust and Interpretable 2D Anomaly Prediction (RI2AP), designed to address both dimensions effectively. RI2AP is demonstrated on a rocket assembly simulation, showing up to a 30-point improvement in F1 measure over current ML methods. This highlights its potential to enhance automated anomaly prediction in manufacturing. Additionally, RI2AP includes a novel interpretation mechanism inspired by a causal-influence framework, providing domain experts with valuable insights into sensor readings and their impact on predictions. Finally, the RI2AP model was deployed in a real manufacturing setting for assembling rocket parts. Results and insights from this deployment demonstrate the promise of RI2AP for anomaly prediction in manufacturing assembly pipelines.
ABSTRACT
With the increasing number of households owning pets, the importance of sensor data for recognizing pet behavior has grown significantly. However, challenges arise from the costs and reliability issues associated with data collection. This paper proposes a method for classifying pet behavior using cleaned meta pseudo labels to overcome these issues. The data for this study were collected using wearable devices equipped with accelerometers, gyroscopes, and magnetometers, and pet behaviors were classified into five categories. Using these data, we analyzed the impact of the quantity of labeled data on accuracy and further enhanced the learning process by integrating an additional Distance Loss, which effectively improves learning by removing noise from the unlabeled data. Experimental results showed that while conventional supervised learning achieved an accuracy of 82.9% and the existing meta pseudo labels method reached 86.2%, the cleaned meta pseudo labels method proposed in this study surpassed both with an accuracy of 88.3%. These results hold significant implications for the development of pet monitoring systems, and the approach provides an effective solution for recognizing and classifying pet behavior in label-scarce environments.
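One way to picture the cleaning step is the distance-based filter sketched below, which keeps only pseudo-labeled samples close to their class centroid; the thresholding strategy and data are assumptions, not the paper's exact Distance Loss formulation.

```python
# A loose sketch of pseudo-label cleaning: drop unlabeled samples whose
# features lie far from the centroid of their pseudo-labeled class.
import numpy as np

def clean_pseudo_labels(X_unlabeled, pseudo_y, X_labeled, y_labeled, quantile=0.8):
    centroids = {c: X_labeled[y_labeled == c].mean(axis=0)
                 for c in np.unique(y_labeled)}
    dists = np.array([np.linalg.norm(x - centroids[c])
                      for x, c in zip(X_unlabeled, pseudo_y)])
    keep = dists <= np.quantile(dists, quantile)   # drop the farthest 20%
    return X_unlabeled[keep], pseudo_y[keep]

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(100, 6)); y_lab = rng.integers(0, 5, 100)  # 5 behaviors
X_unl = rng.normal(size=(400, 6)); y_pseudo = rng.integers(0, 5, 400)
X_clean, y_clean = clean_pseudo_labels(X_unl, y_pseudo, X_lab, y_lab)
print(len(X_clean), "of", len(X_unl), "pseudo-labeled samples kept")
```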
ABSTRACT
Effective security surveillance is crucial in the railway sector to prevent security incidents such as vandalism, trespassing, and sabotage. This paper discusses the challenges of maintaining seamless surveillance over extensive railway infrastructure, considering both technological advances and the growing risks posed by terrorist attacks. Building on previous research, it discusses the limitations of current surveillance methods, particularly in managing the information overload and false alarms that result from integrating multiple sensor technologies. To address these issues, we propose a new fusion model that utilises Probabilistic Occupancy Maps (POMs) and Bayesian fusion techniques. The fusion model is evaluated on a comprehensive dataset comprising three use cases with a total of eight real-life critical scenarios. We show that, with this model, detection accuracy can be increased while simultaneously reducing false alarms in railway security surveillance systems. In this way, our approach enhances situational awareness and reduces false alarms, improving the effectiveness of railway security measures.
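A compact sketch of Bayesian occupancy fusion follows: each sensor's detection probability updates per-cell log-odds, so independent observations accumulate stably. The grid size and sensor models are invented for illustration.

```python
# Bayesian fusion on an occupancy grid via log-odds updates: repeated,
# independent sensor observations of the same cell strengthen (or weaken)
# the fused occupancy belief.
import numpy as np

def to_log_odds(p):   return np.log(p / (1.0 - p))
def from_log_odds(l): return 1.0 / (1.0 + np.exp(-l))

grid = np.zeros((4, 4))              # log-odds of occupancy, prior p = 0.5

def fuse(grid, cell, p_detection):
    """Update one cell with a sensor's occupancy probability."""
    grid[cell] += to_log_odds(p_detection)

fuse(grid, (2, 3), 0.7)              # camera: likely intruder at (2, 3)
fuse(grid, (2, 3), 0.8)              # radar agrees -> belief strengthens
fuse(grid, (0, 0), 0.3)              # lidar: (0, 0) probably empty

print(np.round(from_log_odds(grid), 2))   # fused occupancy probabilities
```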
ABSTRACT
In the domain of mobile robot navigation, conventional path-planning algorithms typically rely on predefined rules and prior map information, which exhibit significant limitations when confronting unknown, intricate environments. With the rapid evolution of artificial intelligence technology, deep reinforcement learning (DRL) algorithms have demonstrated considerable effectiveness across various application scenarios. In this investigation, we introduce a self-exploration and navigation approach based on a deep reinforcement learning framework, aimed at resolving the navigation challenges of mobile robots in unfamiliar environments. First, we fuse data from the robot's onboard lidar sensors and camera and integrate odometer readings with target coordinates to establish the instantaneous state of the decision environment. A deep neural network then processes these composite inputs to generate motion control strategies, which are integrated into the local planning component of the robot's navigation stack. Finally, we employ an innovative heuristic function that synthesizes map information and global objectives to select optimal local navigation points, progressively guiding the robot toward its global target. In practical experiments, our methodology demonstrates superior performance compared with similar navigation methods in complex, unknown environments devoid of predefined map information.
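For illustration, the block below sketches the decision step as a policy network mapping the fused state (lidar ranges plus odometry and goal offset) to velocity commands; the dimensions and layer sizes are assumptions, not the trained agent.

```python
# A generic sketch of the decision step, not the authors' trained agent: a
# policy network maps the fused state to linear and angular velocity commands.
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    def __init__(self, n_lidar=360):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_lidar + 4, 256), nn.ReLU(),   # + odometry/goal features
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Tanh(),             # v and omega in [-1, 1]
        )

    def forward(self, lidar, odom_goal):
        return self.net(torch.cat([lidar, odom_goal], dim=-1))

policy = NavPolicy()
lidar = torch.rand(1, 360)            # normalized range scan
odom_goal = torch.rand(1, 4)          # odometry + relative goal coordinates
v, omega = policy(lidar, odom_goal)[0]
print(float(v), float(omega))         # scaled velocity commands for the stack
```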
ABSTRACT
The Internet of Things generates vast data volumes via diverse sensors, yet its potential for innovative data-driven products and services remains largely unexploited. Limitations arise from sensor-dependent data handling by manufacturers and user companies, hindering third-party access and comprehension. Initiatives such as the European Data Act aim to enable high-quality access to sensor-generated data by regulating accuracy, completeness, and relevance while respecting intellectual property rights. Even when data are available, interoperability challenges impede sensor data reusability: sensor data shared in HTML formats, for instance, require intricate, time-consuming processing to attain reusable formats such as JSON or XML. This study introduces a methodology for converting raw sensor data extracted from web portals into structured formats, thereby enhancing data reusability. The approach utilises large language models (LLMs) to derive structured formats from sensor data initially presented in non-interoperable formats. The effectiveness of these language models was assessed through quantitative and qualitative evaluations in a use case involving meteorological data. In the proposed experiments, GPT-4, the best-performing LLM tested, demonstrated the feasibility of this methodology, achieving a precision of 93.51% and a recall of 85.33% in converting HTML to JSON/XML, confirming its potential for obtaining reusable sensor data.
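A minimal sketch of the conversion step using the OpenAI Python SDK is shown below; the prompt wording, model name, and sample HTML are illustrative rather than the study's exact setup, and the call requires an OPENAI_API_KEY in the environment.

```python
# Sketch of HTML-to-JSON conversion via an LLM; prompt and sample data are
# illustrative assumptions, not the study's exact experimental setup.
from openai import OpenAI

html_snippet = """
<tr><td>Station</td><td>Temp (C)</td><td>Humidity (%)</td></tr>
<tr><td>A-12</td><td>17.3</td><td>64</td></tr>
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Convert the given HTML sensor table to a JSON array of "
                    "objects. Output only valid JSON, no commentary."},
        {"role": "user", "content": html_snippet},
    ],
)
print(response.choices[0].message.content)   # e.g. [{"Station": "A-12", ...}]
```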
ABSTRACT
In an aging society, the need for efficient emergency detection systems in smart homes is becoming increasingly important. For elderly people living alone, technical solutions for detecting emergencies are essential so they receive help quickly when needed. Numerous solutions already exist based on wearable or ambient sensors. However, existing methods for emergency detection typically assume that sensor data are error-free and contain no false positives, which cannot always be guaranteed in practice. We therefore present a novel method for detecting emergencies in private households that detects unusually long inactivity periods and can process erroneous or uncertain activity information. We introduce the Inactivity Score, which provides a probabilistic weighting of inactivity periods based on the reliability of sensor measurements. By analyzing historical Inactivity Scores, anomalies that potentially represent an emergency can be identified. The proposed method is compared with four related approaches on seven different datasets. Our method surpasses the existing approaches in the number of false positives and the mean time to detect emergencies, achieving an average detection time of approximately 05:23:28 h with only 0.09 false alarms per day under noise-free conditions. Moreover, unlike related approaches, the proposed method remains effective with noisy data.
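A loose sketch of the Inactivity Score idea, under stated assumptions, follows: each inactivity period is weighted by the estimated reliability of its bounding sensor events, and the current score is compared against a high quantile of the personal history. This is not the paper's exact formulation.

```python
# Sketch: probabilistically weighted inactivity periods compared against a
# personal historical threshold. All distributions and values are invented.
import numpy as np

def inactivity_score(duration_h, p_start_reliable, p_end_reliable):
    """Inactivity duration weighted by the reliability of its sensor events."""
    return duration_h * p_start_reliable * p_end_reliable

rng = np.random.default_rng(0)
# 60 days of historical scores for one resident (synthetic).
history = [inactivity_score(d, r1, r2)
           for d, r1, r2 in zip(rng.gamma(2.0, 1.5, 60),
                                rng.uniform(0.7, 1.0, 60),
                                rng.uniform(0.7, 1.0, 60))]
threshold = np.quantile(history, 0.99)        # personal anomaly threshold

current = inactivity_score(9.5, 0.95, 0.9)    # unusually long, reliable gap
print("alarm" if current > threshold else "ok",
      round(current, 2), round(threshold, 2))
```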
Subjects
Emergencies, Humans, Algorithms, Wearable Electronic Devices, Aged
ABSTRACT
With the ubiquitous deployment of mobile and sensor technologies in transportation, taxis have become a significant component of public transportation. However, vacant taxis represent a considerable waste of transportation resources, and forecasting taxi demand over short horizons helps achieve a supply-demand balance and reduces emissions. Although earlier studies have proposed sophisticated machine learning and deep learning models to forecast taxicab demand, these models often incur significant computational expense and cannot effectively exploit large-scale trajectory sensor data. To address these challenges, this paper proposes a hybrid deep learning model for taxi demand prediction. In particular, the Variational Mode Decomposition (VMD) algorithm is integrated with a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the prediction. The VMD algorithm decomposes time series traffic features into multiple sub-modes of different frequencies, after which the BiLSTM method predicts the time series fed with the relevant demand features. To overcome the limitation of high computational expense, the model runs on the Spark distributed platform. The performance of the proposed model is tested on a real-world dataset, and it surpasses existing state-of-the-art predictive models in accuracy, efficiency, and distributed performance. These findings provide insights for enhancing the efficiency of passenger search and increasing taxicab profit.
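The decompose-then-predict pipeline might be sketched as follows, assuming the vmdpy package for VMD and PyTorch for the BiLSTM; K, the network sizes, and the synthetic demand series are all illustrative.

```python
# Condensed sketch of VMD + BiLSTM: decompose a demand series into K sub-modes,
# then predict the next step from a window of mode values. Assumes vmdpy.
import numpy as np
import torch
import torch.nn as nn
from vmdpy import VMD

demand = (np.sin(np.linspace(0, 20, 512))
          + np.random.default_rng(0).normal(0, 0.1, 512))
K = 4                                            # number of sub-modes
modes, _, _ = VMD(demand, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)

class BiLSTM(nn.Module):
    def __init__(self, n_modes):
        super().__init__()
        self.lstm = nn.LSTM(n_modes, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, 1)            # next-step demand

    def forward(self, x):                        # x: (batch, seq_len, n_modes)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # prediction from last step

model = BiLSTM(K)
window = torch.tensor(modes[:, :48].T, dtype=torch.float32).unsqueeze(0)
print(model(window).item())                      # predicted demand for step 49
```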
ABSTRACT
Synthetic data generation addresses the challenges of obtaining extensive empirical datasets, offering benefits such as cost-effectiveness, time efficiency, and robust model development. Nonetheless, synthetic data-generation methodologies still encounter significant difficulties, including a lack of standardized metrics for modeling different data types and comparing generated results. This study introduces PVS-GEN, an automated, general-purpose process for synthetic data generation and verification. The PVS-GEN method parameterizes time-series data with minimal human intervention and verifies model construction using a specific metric derived from the extracted parameters. For complex data, the process iteratively segments the empirical dataset until the extracted parameters can reproduce synthetic data that reflect the empirical characteristics, irrespective of the sensor data type. Moreover, we introduce the PoR metric to quantify the quality of the generated data by evaluating its time-series characteristics. Consequently, the proposed method can automatically generate diverse time-series data covering a wide range of sensor types. We compared PVS-GEN with existing synthetic data-generation methodologies, and it demonstrated superior performance, improving similarity by up to 37.1% across multiple data types and by 19.6% on average under the proposed metric, irrespective of the data type.
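PVS-GEN itself is more elaborate, but the toy sketch below illustrates the parameterize-then-regenerate idea: fit an autoregressive model to an empirical series, synthesize a new series from the extracted parameters, and compare a simple time-series statistic (a PoR-like check; the metric here is an assumption).

```python
# Not PVS-GEN itself: a toy parameterize-then-regenerate loop using an AR(3)
# model as the extracted parameterization of a stand-in sensor series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
empirical = np.cumsum(rng.normal(0, 1, 500))      # stand-in sensor series

fit = AutoReg(empirical, lags=3).fit()            # extracted parameters
const, coefs = fit.params[0], fit.params[1:]
sigma = np.std(fit.resid)

synthetic = list(empirical[:3])                   # seed with real history
for _ in range(497):
    nxt = const + coefs @ np.array(synthetic[-3:][::-1]) + rng.normal(0, sigma)
    synthetic.append(nxt)
synthetic = np.array(synthetic)

# Crude similarity check on first-difference autocorrelation (PoR-like idea).
ac = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(ac(np.diff(empirical)), 3), round(ac(np.diff(synthetic)), 3))
```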
ABSTRACT
With the application of robotics in security monitoring, medical care, image analysis, and other high-privacy fields, vision sensor data in robot operating systems (ROS) face the challenge of ensuring secure storage and transmission. It has recently been proposed to exploit the distributed advantages of blockchain to improve the security of data in ROS, but such approaches still have limitations, including high latency and large resource consumption. To address these issues, this paper introduces PrivShieldROS, an extended robot operating system built on the InterPlanetary File System (IPFS), blockchain, and HybridABEnc to enhance the confidentiality and security of vision sensor data in ROS. The system takes advantage of the decentralized nature of IPFS to enhance data availability and robustness while combining it with HybridABEnc for fine-grained access control. In addition, it secures the data distribution mechanism by using blockchain technology to persistently store data content identifiers (CIDs). Finally, the effectiveness of the system is verified in three experiments. Compared with the state-of-the-art blockchain-extended ROS, PrivShieldROS shows improvements in key metrics. Part of this paper has been submitted to IROS 2024.
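A simplified sketch of the storage path is given below, with symmetric Fernet encryption standing in for HybridABEnc and a plain list standing in for the on-chain ledger; it assumes a local IPFS daemon and the ipfshttpclient package.

```python
# Simplified storage-path sketch: encrypt a sensor frame, pin the ciphertext
# to IPFS, and record the returned CID. Fernet replaces ABE for illustration;
# the ledger list replaces actual on-chain CID storage.
import ipfshttpclient
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # would come from ABE key wrapping
frame = b"\x89PNG...raw vision sensor frame"  # placeholder image bytes

ciphertext = Fernet(key).encrypt(frame)       # encrypt before leaving the robot

with ipfshttpclient.connect() as ipfs:        # decentralized, robust storage
    cid = ipfs.add_bytes(ciphertext)          # content identifier (CID)

ledger = []                                   # stand-in for on-chain storage
ledger.append({"cid": cid, "topic": "/camera/image_raw"})
print("persisted CID:", cid)
```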
ABSTRACT
Human trajectories can be tracked by the internal processing of a camera acting as an edge device. This work aims to match people's trajectories obtained from cameras to sensor data, such as acceleration and angular velocity, obtained from wearable devices. Since human trajectories and sensor data differ in modality, the matching is not straightforward; moreover, complete trajectory information is unavailable, making it difficult to determine which fragments belong to whom. To solve this problem, we propose the SyncScore model, which estimates the similarity between a unit-period trajectory and the corresponding sensor data. We also propose a Likelihood Fusion algorithm that systematically updates the similarity data and integrates it over time while keeping the other trajectories in mind. We confirmed that the proposed method can match human trajectories and sensor data with an accuracy, sensitivity, and F1 score of 0.725. Our models achieved decent results on the UEA dataset.
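A schematic sketch of the fusion step follows: per-period similarity scores for each candidate trajectory are accumulated as log-likelihoods and renormalized over time. The scores here are invented, whereas SyncScore itself is a learned model.

```python
# Schematic Likelihood Fusion sketch: accumulate per-period similarity scores
# for each candidate trajectory as log-likelihoods and renormalize over time.
import numpy as np

def fuse(log_belief, sync_scores, eps=1e-9):
    """Update belief over candidate trajectories with new similarity scores."""
    log_belief = log_belief + np.log(np.asarray(sync_scores) + eps)
    log_belief -= log_belief.max()            # numerical stabilization
    belief = np.exp(log_belief)
    return np.log(belief / belief.sum())

candidates = ["traj_A", "traj_B", "traj_C"]
log_belief = np.log(np.full(3, 1 / 3))        # uniform prior over fragments

for scores in ([0.6, 0.3, 0.1], [0.7, 0.2, 0.1], [0.5, 0.4, 0.1]):
    log_belief = fuse(log_belief, scores)     # one unit period at a time

best = candidates[int(np.argmax(log_belief))]
print(best, np.round(np.exp(log_belief), 3))  # traj_A dominates the belief
```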