Results 1 - 20 of 23
1.
Sensors (Basel) ; 24(2)2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38257526

ABSTRACT

Cloud computing technology is rapidly becoming ubiquitous and indispensable. However, its widespread adoption also exposes organizations and individuals to a broad spectrum of potential threats. Despite the multiple advantages the cloud offers, organizations remain cautious about migrating their data and applications to it because of fears of data breaches and security compromises. In light of these concerns, this study examines a variety of articles to improve understanding of the challenges involved in safeguarding and fortifying data in the cloud environment. It also analyzes several well-documented data breaches and the financial consequences they inflicted, and contrasts conventional digital forensics with the forensic procedures specific to cloud computing. The study concludes by proposing opportunities for further research in this critical domain, contributing to our collective understanding of the complex landscape of cloud data protection and security while acknowledging the evolving nature of the technology and the need for ongoing exploration and innovation in this field. The study also examines the market for cloud digital forensics, whose compound annual growth rate (CAGR) is projected to be high, at ≈16.53% from 2023 to 2031; the market is expected to grow from ≈USD 11.21 billion at present to ≈USD 36.9 billion by 2031, which indicates substantial opportunities for investment in this area. Finally, the study strategically addresses emerging challenges in cloud digital forensics, providing a comprehensive approach to navigating and overcoming the complexities of the evolving cloud computing landscape.
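
As a quick sanity check of the market figures above, the implied CAGR can be recomputed from the quoted start and end values. The sketch below assumes an eight-year horizon (2023-2031) and the ≈USD 11.21 billion starting point from the abstract; the published figure may use a slightly different base year, so the result only needs to land in the same ballpark as ≈16.53%.

```python
# Minimal sketch: back-of-the-envelope check of the CAGR figures quoted above.
# The 8-year horizon (2023-2031) and the ≈USD 11.21B starting value are taken
# from the abstract; the exact published rate may use a different base year.

def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

market_2023 = 11.21   # USD billion (approximate, from the abstract)
market_2031 = 36.90   # USD billion (projected)
rate = cagr(market_2023, market_2031, years=8)
print(f"Implied CAGR: {rate:.2%}")   # ≈16%, in line with the reported ≈16.53%
```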

2.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38474993

ABSTRACT

The Internet of Things (IoT) is playing a pivotal role in transforming various industries, and Wireless Sensor Networks (WSNs) are emerging as the key drivers of this innovation. This research explores the utilization of a heterogeneous network model to optimize the deployment of sensors in agricultural settings. The primary objective is to strategically position sensor nodes for efficient energy consumption, prolonged network lifetime, and dependable data transmission. The proposed strategy incorporates an offline model for placing sensor nodes within the target region, taking into account coverage requirements and network connectivity. We propose a two-stage centralized control model that ensures cohesive decision making by grouping sensor nodes into protective boxes. This grouping facilitates shared resource utilization, including batteries and bandwidth, while minimizing the number of boxes for cost-effectiveness. Noteworthy contributions of this research include addressing connectivity and coverage challenges through an offline deployment model in the first stage, and resolving real-time adaptability concerns with an online energy optimization model in the second stage. Emphasis is placed on energy efficiency, achieved through sensor consolidation within boxes, minimizing data transmission hops, and accounting for energy expenditure in sensing, transmission, and active/sleep modes. Our simulations on agricultural farmland highlight the model's practicality, focusing in particular on sensor placement for measuring soil temperature and humidity. Hardware tests validate the proposed model, incorporating parameters from the real-world implementation to improve calculation accuracy. This study not only provides theoretical insights but also extends to smart farming practices, illustrating the potential of WSNs to revolutionize sustainable agriculture.
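
The abstract does not spell out the grouping algorithm, so the sketch below only illustrates the box-consolidation idea: sensor positions are greedily assigned to a nearby shared box when one exists, and a new box is opened otherwise. The radius and capacity parameters are invented for the example and are not the paper's values.

```python
# Illustrative sketch only: greedy grouping of sensor positions into shared
# "boxes" so that co-located sensors can share a battery and radio. The radius
# and capacity limits are made-up parameters, not values from the paper.
import math

def group_sensors(sensors, box_radius=15.0, box_capacity=4):
    """Assign each (x, y) sensor position to a nearby box, or open a new one."""
    boxes = []  # each box: {"center": (x, y), "members": [indices]}
    for idx, (x, y) in enumerate(sensors):
        placed = False
        for box in boxes:
            cx, cy = box["center"]
            if (math.hypot(x - cx, y - cy) <= box_radius
                    and len(box["members"]) < box_capacity):
                box["members"].append(idx)
                placed = True
                break
        if not placed:
            boxes.append({"center": (x, y), "members": [idx]})
    return boxes

# Example: sensors scattered over a 100 m x 100 m field
field = [(5, 5), (8, 9), (12, 4), (60, 62), (63, 58), (95, 10)]
for i, box in enumerate(group_sensors(field)):
    print(f"box {i}: sensors {box['members']}")
```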

3.
Sensors (Basel) ; 23(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36679762

ABSTRACT

Data redundancy and data loss are relevant issues in condition monitoring. Sampling strategies for segment intervals can address these at the source, but they do not receive the attention they deserve. Currently, the sampling methods in related research lack sufficient adaptability to the monitored condition. In this paper, an adaptive sampling framework for segment intervals is proposed, based on a summary of existing problems and improvements to them. The framework is implemented to monitor mechanical degradation, and experiments are conducted on simulated data and real datasets. Subsequently, the distributions of the samples collected by different sampling strategies are presented visually through a color map, and five metrics are designed to assess the sampling results. The visual and numerical results show the superiority of the proposed method over existing methods, and the results are closely related to the data status and the degradation indicators: the smaller the data fluctuation and the more stable the degradation trend, the better the result. Furthermore, the results obtained with objective physical indicators are clearly better than those obtained with feature indicators. By addressing these problems, the proposed framework opens up a new idea of predictive sampling, which significantly improves degradation monitoring.
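
The abstract leaves the adaptation rule itself unspecified, so the following is only a loose sketch of condition-adaptive segment intervals: sample more often when the recent degradation indicator fluctuates strongly, and back off when the trend is stable. The thresholds and scaling factors are invented for the illustration.

```python
# Illustrative sketch: adapt the next segment interval to the recent variability
# of a degradation indicator. Thresholds and limits are invented for the example
# and are not the parameters used in the paper.
import statistics

def next_interval(recent_values, current_interval,
                  min_interval=1.0, max_interval=60.0, threshold=0.05):
    """Shorten the interval when the indicator fluctuates, lengthen it when stable."""
    if len(recent_values) < 2:
        return current_interval
    mean = statistics.fmean(recent_values)
    spread = statistics.pstdev(recent_values) / (abs(mean) + 1e-12)  # relative fluctuation
    if spread > threshold:          # condition changing quickly -> sample more often
        interval = current_interval / 2
    else:                           # stable degradation trend -> back off
        interval = current_interval * 1.5
    return max(min_interval, min(max_interval, interval))

print(next_interval([1.00, 1.01, 1.00, 1.02], current_interval=10))  # stable -> 15.0
print(next_interval([1.0, 1.4, 0.8, 1.6], current_interval=10))      # noisy  -> 5.0
```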


Subjects
Computer Simulation
4.
Sensors (Basel) ; 22(3)2022 Jan 23.
Article in English | MEDLINE | ID: mdl-35161604

ABSTRACT

In the application of a bridge weigh-in-motion (WIM) system, the collected data may be temporarily or permanently lost due to sensor failure or system transmission failure. A high data loss rate weakens the distribution characteristics of the collected data and the ability of the monitoring system to assess bridge condition. A deep learning-based model, a generative adversarial network (GAN), is proposed to reconstruct the missing data in bridge WIM systems. The proposed GAN can model the collected dataset and predict the missing data. First, data from stable measurements taken before the data loss are provided, and the generator is trained to extract the retained features of the dataset and reconstruct the lost data using only the responses of the remaining functional sensors. The discriminator feeds its recognition results back to the generator in order to improve the reconstruction accuracy. In model training, two loss functions, a generation loss and an adversarial (confrontation) loss, are used, and the general outline and underlying distribution characteristics of the signal are captured well by the model. Finally, by applying engineering data from the Hangzhou Jiangdong Bridge to the GAN model, this paper verifies the effectiveness of the proposed method. The results show that the final reconstructed dataset is in good agreement with the actual dataset in terms of total vehicle weight and axle weight, and that the approximate contour and underlying distribution characteristics of the original dataset are reproduced. This suggests that the proposed method can be used in real-life applications and provides a promising approach to data reconstruction for bridge monitoring systems.
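
To make the training loop concrete, here is a minimal masked-reconstruction GAN in PyTorch: the generator sees the surviving channels plus a mask and fills the gaps, while the discriminator provides the adversarial feedback. The network sizes, the synthetic "sensor" data, and the loss weighting are assumptions made for the sketch, not the paper's configuration.

```python
# Minimal sketch of GAN-based reconstruction of masked sensor channels, in the
# spirit of the approach described above. The network sizes, synthetic data,
# and loss weighting are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_channels = 8          # sensor channels in a WIM record (assumed)
hidden = 64

generator = nn.Sequential(             # sees observed channels + mask, fills the gaps
    nn.Linear(2 * n_channels, hidden), nn.ReLU(),
    nn.Linear(hidden, n_channels),
)
discriminator = nn.Sequential(         # distinguishes real records from reconstructed ones
    nn.Linear(n_channels, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def synthetic_batch(batch=128):
    """Correlated toy 'sensor' records with one randomly lost channel per record."""
    base = torch.randn(batch, 1)
    real = base + 0.1 * torch.randn(batch, n_channels)
    mask = torch.ones(batch, n_channels)
    mask[torch.arange(batch), torch.randint(0, n_channels, (batch,))] = 0.0  # lost channel
    return real, mask

for step in range(2000):
    real, mask = synthetic_batch()
    observed = real * mask
    fake = generator(torch.cat([observed, mask], dim=1))
    filled = observed + (1 - mask) * fake        # keep measured values, fill the gaps

    # Discriminator: real vs reconstructed records (adversarial feedback)
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(filled.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: reconstruction ("generation") loss + adversarial loss
    g_rec = ((1 - mask) * (fake - real)).pow(2).sum() / (1 - mask).sum()
    g_adv = bce(discriminator(filled), torch.ones(real.size(0), 1))
    g_loss = 10.0 * g_rec + g_adv
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"final reconstruction MSE on missing channels: {g_rec.item():.4f}")
```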


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Motion (Physics)
5.
Dev Psychobiol ; 63(8): e22208, 2021 12.
Article in English | MEDLINE | ID: mdl-34813097

ABSTRACT

The P300 is an event-related potential component that reflects attention to motivationally salient stimuli and may be a promising tool for examining individual differences in cognitive-affective processing very early in development. However, the psychometric properties of the P300 in infancy are unknown, a fact that limits the component's utility as an individual difference measure in developmental research. To address this gap, 38 infants completed an auditory three-stimulus oddball task that included frequent standard, infrequent deviant, and novel stimuli. We quantified the P300 at a single electrode site and at a region of interest (ROI) and examined the internal consistency reliability of the component, both via split-half reliability and as a function of trial number. Results indicated that the P300 to standard, deviant, and novel stimuli fell within moderate to high internal consistency reliability thresholds, and that scoring the component at an ROI led to slightly higher estimates of reliability. However, the percentage of data loss due to artifacts increased over the course of the task, suggesting that including more trials will not necessarily improve the reliability of the P300. Together, these results suggest that robust and reliable measurement of the P300 will require designing tasks that minimize trial number and maximize infant tolerability.
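
For readers unfamiliar with the reliability estimate used here, the sketch below shows one common split-half approach on trial-level amplitudes: correlate the odd-trial and even-trial means across participants and apply the Spearman-Brown correction. The data are random and the odd/even split is an assumption; the paper's exact scoring pipeline is not reproduced.

```python
# Illustrative sketch of split-half internal consistency for trial-level ERP
# amplitudes (participants x trials), using an odd/even split and the
# Spearman-Brown correction. The data here are random, for demonstration only.
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_trials = 38, 40
true_amplitude = rng.normal(5.0, 2.0, size=(n_subjects, 1))          # stable individual P300
trials = true_amplitude + rng.normal(0.0, 4.0, size=(n_subjects, n_trials))  # trial-level noise

odd_mean = trials[:, 0::2].mean(axis=1)
even_mean = trials[:, 1::2].mean(axis=1)

r_half = np.corrcoef(odd_mean, even_mean)[0, 1]        # split-half correlation
r_sb = 2 * r_half / (1 + r_half)                       # Spearman-Brown corrected reliability
print(f"split-half r = {r_half:.2f}, Spearman-Brown reliability = {r_sb:.2f}")
```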


Subjects
Artifacts , Evoked Potentials, P300 , Acoustic Stimulation/methods , Electroencephalography , Evoked Potentials , Humans , Reproducibility of Results
6.
Sensors (Basel) ; 21(20)2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34696114

ABSTRACT

Small and medium-sized enterprises represent the majority of enterprises globally, yet they have difficulty understanding the impact that cybersecurity threats could have on their businesses and the damage those threats could do to their assets. This study aims to measure the effectiveness of security practices at small enterprises in Saudi Arabia in the event of a cybersecurity attack; it is among the first to measure both the effectiveness of cybersecurity practices and the threat posed by breaches at small enterprises. A total of 282 respondents participated, all representing small enterprises in Saudi Arabia. The study applies multiple regression tests to analyze the effectiveness of 12 cybersecurity practices with respect to three outcomes: financial damage, loss of sensitive data, and restoration time. The findings indicate that having an inspection team and a recovery plan may limit the financial damage caused by cybersecurity attacks on small enterprises. The results also show that cybersecurity awareness, knowledge of cybersecurity damage, and professionals' salaries were related to the loss of sensitive data. Furthermore, contact with cybersecurity authorities and having an inspection team have statistically significant effects on restoration time.
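
As an illustration of the kind of multiple regression the study applies, the snippet below regresses a simulated financial-damage outcome on a few binary practice indicators with statsmodels. The variable names and data are invented stand-ins for the survey items, not the study's dataset.

```python
# Sketch of a multiple regression of one outcome (e.g., financial damage) on
# several cybersecurity-practice indicators. Data and column names are invented
# stand-ins for the survey variables, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 282  # respondents, as in the study
df = pd.DataFrame({
    "inspection_team": rng.integers(0, 2, n),
    "recovery_plan": rng.integers(0, 2, n),
    "awareness_training": rng.integers(0, 2, n),
})
# Simulated outcome: practices reduce damage, plus noise
df["financial_damage"] = (5 - 1.5 * df["inspection_team"]
                            - 1.0 * df["recovery_plan"]
                            + rng.normal(0, 1, n))

X = sm.add_constant(df[["inspection_team", "recovery_plan", "awareness_training"]])
model = sm.OLS(df["financial_damage"], X).fit()
print(model.summary().tables[1])   # coefficients and p-values
```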


Subjects
Computer Security , Saudi Arabia
7.
Entropy (Basel) ; 21(8)2019 Aug 15.
Article in English | MEDLINE | ID: mdl-33267510

ABSTRACT

Model construction is a fundamental and important issue in the field of complex dynamical networks. Since the state-coupling complex dynamical network model was proposed, many kinds of complex dynamical network models have been introduced to account for various practical situations. In this paper, addressing the data loss that may occur in the communication between any pair of directly connected nodes in a complex dynamical network, we propose a new discrete-time complex dynamical network model that constructs an auxiliary observer and uses the observer states to compensate for the lost states in the coupling term. By employing Lyapunov stability theory and stochastic analysis, a sufficient condition is derived to guarantee that the compensation values eventually converge to the lost values; that is, the influence of data loss is ultimately eliminated in the proposed model. Moreover, we generalize the modeling method to output-coupling complex dynamical networks. Finally, two numerical examples demonstrate the effectiveness of the proposed model.
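
The sketch below is only a toy analogue of the compensation idea, in a far simpler setting than the paper's: when a neighbour's state packet is lost (a Bernoulli event), the receiver propagates its last estimate through the known node dynamics and uses that value in the coupling term instead of the missing measurement. The dynamics, coupling gain, and loss probability are invented for the illustration.

```python
# Toy analogue of compensating lost coupling data with an observer/predictor:
# when the neighbour's state packet is lost (Bernoulli), propagate the last
# estimate through the known node dynamics instead of holding a stale value.
# Dynamics, coupling gain, and loss probability are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
a, c = 0.9, 0.05          # node dynamics and coupling strength (assumed)
loss_prob = 0.3

x1, x2 = 1.0, -1.0        # true states of two coupled nodes
x2_est = x2               # node 1's estimate of node 2's state
x2_last = x2              # "hold last received value" baseline
err_with_comp, err_naive = [], []

for k in range(200):
    # evolve the true network one step (state coupling)
    x1, x2 = a * x1 + c * (x2 - x1), a * x2 + c * (x1 - x2)

    if rng.random() < loss_prob:     # packet from node 2 lost
        x2_est = a * x2_est          # compensate: propagate the estimate forward
    else:
        x2_est = x2                  # packet arrived: use the real state
        x2_last = x2
    err_with_comp.append(abs(x2_est - x2))
    err_naive.append(abs(x2_last - x2))

print(f"mean estimation error, observer-style compensation: {np.mean(err_with_comp):.4f}")
print(f"mean estimation error, hold-last-value baseline:    {np.mean(err_naive):.4f}")
```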

8.
Sensors (Basel) ; 17(7)2017 Jul 22.
Article in English | MEDLINE | ID: mdl-28737677

ABSTRACT

Accurate information acquisition is of vital importance for wireless sensor array network (WSAN) direction of arrival (DOA) estimation. However, due to the lossy nature of low-power wireless links, data loss, especially block data loss induced by adopting a large packet size, has a catastrophic effect on DOA estimation performance in WSANs. In this paper, we propose a double-layer compressive sensing (CS) framework to eliminate the hazards of block data loss and to achieve accurate and efficient DOA estimation. In addition to modeling the random packet loss during transmission as a passive CS process, an active CS procedure is introduced at each array sensor to further enhance the robustness of transmission. Furthermore, to avoid the propagation of errors from signal recovery to DOA estimation in conventional methods, we propose a direct DOA estimation technique under the double-layer CS framework. Leveraging a joint frequency- and spatial-domain sparse representation of the sensor array data, the fusion center (FC) can obtain the DOA estimates directly from the received data packets, skipping the signal recovery phase. Extensive simulations demonstrate that the double-layer CS framework eliminates the adverse effects of block data loss and yields superior DOA estimation performance in WSANs.
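
As background for the passive-CS view of packet loss, the sketch below treats the packets that survive transmission as random samples of a frequency-sparse signal and recovers the full signal with a plain orthogonal matching pursuit. This is a generic compressive-sensing illustration under those assumptions, not the paper's double-layer framework or its DOA estimator.

```python
# Generic compressive-sensing illustration: surviving "packets" are random
# samples of a frequency-sparse signal; orthogonal matching pursuit recovers
# the sparse coefficients and hence the full signal. Not the paper's method.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(3)
n = 128
x_true = np.zeros(n)                           # a few nonzero DCT coefficients
x_true[[3, 17, 40]] = [1.0, -0.7, 0.5]
Psi = idct(np.eye(n), norm="ortho", axis=0)    # columns = DCT basis vectors
s = Psi @ x_true                               # time-domain signal

keep = rng.choice(n, size=60, replace=False)   # samples that survived transmission
A = Psi[keep, :]                               # effective sensing matrix
y = s[keep]

def omp(A, y, k):
    """Orthogonal matching pursuit: find a k-sparse x with A @ x ≈ y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k=3)
print("recovered support:", np.nonzero(x_hat)[0])            # typically [3 17 40]
print("max reconstruction error:", np.abs(Psi @ x_hat - s).max())
```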

9.
Sensors (Basel) ; 17(5)2017 May 12.
Article in English | MEDLINE | ID: mdl-28498312

ABSTRACT

There are wireless networks in which communications are typically unreliable; most terrestrial wireless sensor networks belong to this category. Another example of such a network is an underwater acoustic sensor network (UWASN). In UWASNs in particular, communication failures occur frequently, and failure durations can range from seconds up to a few hours, days, or even weeks. These communication failures can cause data losses significant enough to seriously damage human life or property, depending on the application area. In this paper, we propose a framework to reduce sensor data loss during communication failures and present a formal approach to the Selection by Minimum Error and Pattern (SMEP) method, which plays the most important role in reducing sensor data loss under the proposed framework. The SMEP method is compared with other methods to validate its effectiveness through experiments on real-field sensor data sets. Based on our experimental results and performance comparisons, the SMEP method outperforms the others in terms of the average sensor data value error rate caused by sensor data loss.

10.
Behav Res Methods ; 49(5): 1802-1823, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27800582

ABSTRACT

Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm-identification by two-means clustering (I2MC)-built specifically for data across a wide range of noise levels and when periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
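
To make the two-means idea concrete, the sketch below clusters the gaze samples in a short window into two groups and uses the separation between the cluster centroids as a cue that the window straddles a saccade between two fixations. It is a simplified toy version of that core step, not the published I2MC implementation.

```python
# Simplified illustration of the two-means idea behind I2MC: cluster the gaze
# samples in a window into two groups; a large distance between the two cluster
# centroids suggests the window straddles a saccade between two fixations.
# This is not the published I2MC code, just a toy version of the core step.
import numpy as np

def two_means_split(window_xy, n_iter=20):
    """Run 2-means on (n, 2) gaze coordinates; return centroid distance and labels."""
    centroids = window_xy[[0, -1]].astype(float)          # init: first and last sample
    for _ in range(n_iter):
        d = np.linalg.norm(window_xy[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centroids[c] = window_xy[labels == c].mean(axis=0)
    return np.linalg.norm(centroids[0] - centroids[1]), labels

rng = np.random.default_rng(7)
fix1 = rng.normal([100, 100], 5, size=(30, 2))   # fixation at (100, 100) with noise
fix2 = rng.normal([300, 150], 5, size=(30, 2))   # next fixation after a saccade
window = np.vstack([fix1, fix2])

dist, labels = two_means_split(window)
print(f"centroid separation: {dist:.1f} px")     # large -> candidate fixation boundary
print("suggested boundary index:", int(np.argmax(labels != labels[0])))
```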


Subjects
Algorithms , Electronic Data Processing/methods , Eye Movement Measurements/statistics & numerical data , Humans
11.
Neural Netw ; 179: 106504, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38996690

ABSTRACT

This study discusses the robust stability problem of Boolean networks (BNs) with data loss and disturbances, where data loss is described by random Bernoulli distribution sequences. First, a BN with data loss and disturbances is converted into an algebraic form via the semi-tensor product (STP) technique. Accordingly, the original system is constructed as a probabilistic augmented system, based on which the problem of stability with probability one for the original system becomes a set stability problem with probability one for the augmented system. Subsequently, criteria are proposed for the robust stability of the systems. Moreover, an algorithm is developed to verify the robust set stability of the augmented system based on truth matrices. Finally, the validity of the obtained results is demonstrated with an illustrative example.
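
For readers unfamiliar with the semi-tensor product, a minimal sketch of the operation is given below: A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), where n is the number of columns of A and p the number of rows of B. For logical column vectors the STP reduces to the Kronecker product, which is how joint Boolean states are encoded; the example is generic and not tied to the paper's specific system.

```python
# Minimal sketch of the left semi-tensor product (STP) used to put Boolean
# networks in algebraic form: A ⋉ B = (A ⊗ I_{t/n}) (B ⊗ I_{t/p}), t = lcm(n, p).
import math
import numpy as np

def stp(A, B):
    """Left semi-tensor product of two matrices."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = math.lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

true_v = np.array([[1], [0]])    # logical True  as the vector delta_2^1
false_v = np.array([[0], [1]])   # logical False as the vector delta_2^2

print(stp(true_v, false_v).ravel())    # joint state (True, False) -> [0 1 0 0]
M_neg = np.array([[0, 1], [1, 0]])     # structure matrix of logical negation
print(stp(M_neg, false_v).ravel())     # NOT False -> [1 0], i.e. True
```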


Subjects
Algorithms , Neural Networks, Computer , Probability
12.
Interact J Med Res ; 13: e50849, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39083801

ABSTRACT

BACKGROUND: The impact of missing data on individual continuous glucose monitoring (CGM) data is unknown but can influence clinical decision-making for patients. OBJECTIVE: We aimed to investigate the consequences of data loss on glucose metrics in individual patient recordings from continuous glucose monitors and to assess the implications for clinical decision-making. METHODS: CGM data were collected from patients with type 1 and type 2 diabetes using the FreeStyle Libre sensor (Abbott Diabetes Care). We selected 7-28 days of continuous 24-hour data without any missing values from each individual patient. To mimic real-world data loss, missing data ranging from 5% to 50% were introduced into the data set. From this modified data set, clinical metrics including time below range (TBR), TBR level 2 (TBR2), and other common glucose metrics were calculated and compared between the data sets with and without data loss. Recordings in which glucose metrics deviated to a clinically relevant degree due to data loss, as determined by clinical experts, were defined as expert panel boundary errors (εEPB). These errors were expressed as a percentage of the total number of recordings. Errors for recordings with a glucose management indicator <53 mmol/mol were investigated separately. RESULTS: A total of 84 patients contributed 798 recordings over 28 days. With 5%-50% data loss for 7-28 day recordings, the εEPB varied from 0 out of 798 (0.0%) to 147 out of 736 (20.0%) recordings for TBR and from 0 out of 612 (0.0%) to 22 out of 408 (5.4%) recordings for TBR2. In the case of 14-day recordings, TBR and TBR2 episodes disappeared completely due to 30% data loss in 2 out of 786 (0.3%) and 32 out of 522 (6.1%) of the cases, respectively; however, the initial values of the disappeared TBR and TBR2 were relatively small (<0.1%). In the recordings with a glucose management indicator <53 mmol/mol, the εEPB was 9.6% for 14 days with 30% data loss. CONCLUSIONS: With a maximum of 30% data loss in 14-day CGM recordings, missing data have minimal impact on the clinical interpretation of various glucose metrics. TRIAL REGISTRATION: ClinicalTrials.gov NCT05584293; https://clinicaltrials.gov/study/NCT05584293.
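
To make the metrics concrete, the sketch below computes TBR (readings below 3.9 mmol/L) and TBR2 (below 3.0 mmol/L) on a synthetic 14-day CGM trace, then again after randomly discarding 30% of the readings. The thresholds follow the usual consensus cut-offs; the trace and the loss pattern are simulated purely for illustration.

```python
# Illustration of how TBR / TBR2 are computed and how random data loss can shift
# them. The glucose trace and the 30% loss pattern are simulated; thresholds
# follow the usual consensus cut-offs (3.9 and 3.0 mmol/L).
import numpy as np

rng = np.random.default_rng(5)
n = 14 * 24 * 4                                     # 14 days of 15-minute readings
glucose = 7.5 + 2.0 * np.sin(np.linspace(0, 28 * np.pi, n)) + rng.normal(0, 1.0, n)

def tbr_metrics(values):
    values = values[~np.isnan(values)]
    return {
        "TBR  (<3.9 mmol/L)": np.mean(values < 3.9) * 100,
        "TBR2 (<3.0 mmol/L)": np.mean(values < 3.0) * 100,
    }

lossy = glucose.copy()
lossy[rng.random(n) < 0.30] = np.nan                # 30% missing at random

print("complete recording:", tbr_metrics(glucose))
print("30% data loss:     ", tbr_metrics(lossy))
```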

13.
Big Data ; 2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37906117

ABSTRACT

Organizations have been investing in analytics that rely on internal and external data to gain a competitive advantage. However, the legal and regulatory acts imposed nationally and internationally have become a challenge, especially for highly regulated sectors such as health or finance/banking. Data handlers such as Facebook and Amazon have already sustained considerable fines or are under investigation for violations of data governance. The era of big data has further intensified the challenge of minimizing the risk of data loss by introducing the dimensions of Volume, Velocity, and Variety into confidentiality. Although Volume and Velocity have been extensively researched, Variety, "the ugly duckling" of big data, is often neglected and difficult to address, increasing the risk of data exposure and data loss. To mitigate these risks, this article proposes a framework that uses algorithmic classification and workflow capabilities to provide a consistent approach to data evaluation across organizations. A rule-based system implementing the corporate data classification policy minimizes the risk of exposure by helping users identify the approved guidelines and enforce them quickly. The framework includes an exception-handling process with appropriate approval for extenuating circumstances. The system was implemented as a proof-of-concept working prototype to showcase its capabilities and provide hands-on experience. The information system was evaluated and accredited by a diverse audience of academics and senior business executives in the fields of security and data management. The audience had an average of ∼25 years of experience and amassed a total of almost three centuries (294 years) of experience. The results confirmed that the 3Vs are of concern and that Variety, cited by 90% of the commentators, is the most troubling. In addition, with an approximate average of 60%, respondents confirmed that appropriate policies, procedures, and prerequisites for classification are in place, while implementation tools are lagging.
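
The classification step at the heart of such a framework can be pictured in a few lines: each data field is matched against an ordered corporate policy and receives the most restrictive label that applies. The rules, labels, and field names below are invented examples, not the authors' policy.

```python
# Toy sketch of rule-based data classification against a corporate policy.
# Rules, labels, and field names are invented examples for illustration only.
import re

POLICY = [  # (regex on field name, classification label), most restrictive first
    (re.compile(r"ssn|passport|credit_card", re.I), "restricted"),
    (re.compile(r"email|phone|address", re.I),      "confidential"),
    (re.compile(r"department|job_title", re.I),     "internal"),
]

def classify_field(field_name: str, default: str = "public") -> str:
    """Return the first (most restrictive) policy label matching the field name."""
    for pattern, label in POLICY:
        if pattern.search(field_name):
            return label
    return default

dataset_fields = ["customer_email", "credit_card_number", "department", "product_id"]
for field in dataset_fields:
    print(f"{field:20s} -> {classify_field(field)}")
```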

14.
Diabetes Technol Ther ; 24(10): 749-753, 2022 10.
Article in English | MEDLINE | ID: mdl-35653736

ABSTRACT

Aims: To determine whether a longer duration of continuous glucose monitoring (CGM) sampling is needed to correctly assess the quality of glycemic control given different types of data loss. Materials and Methods: Data loss was generated using two different methods until the desired percentage of data loss (10-50%) was achieved: (1) eliminating random individual CGM values and (2) removing contiguous gaps of a predefined length (1-5 h). For each CGM metric, the number of days required to cross predetermined targets for the median absolute percentage error (MdAPE) was calculated for the different data-loss strategies and compared with the current international consensus recommendation of >70% of optimal data sampling. Results: Up to 90 days of CGM data from 291 adults with type 1 diabetes were analyzed. The MdAPE threshold crossing remained virtually constant for random CGM data loss of up to 50% for all CGM metrics. However, the MdAPE crossing threshold increased when data were lost in longer gaps. For all CGM metrics assessed in our study (%T70-180, %T < 70, %T < 54, %T > 180, and %T > 250), up to 50% data loss in a random manner did not cause any significant change in optimal sampling duration; however, >30% data loss in gaps of up to 5 h required a longer optimal sampling duration. Conclusions: The optimal sampling duration for CGM metrics depends on the percentage of data loss as well as its duration. The international consensus recommendation of 70% CGM data adequacy is sufficient to report %T70-180 with 2 weeks of data without large data gaps.
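
The random-loss versus gap-loss comparison can be reproduced in miniature: compute a CGM metric (here %T70-180, time in 3.9-10.0 mmol/L) on a synthetic trace, then the absolute percentage error after removing 30% of the data either at random or as contiguous 5-hour gaps. The data and loss levels are simulated; the study's MdAPE is the median of such errors across many recordings.

```python
# Miniature version of the random-loss vs gap-loss comparison: compute a CGM
# metric on a synthetic trace, then the absolute percentage error after removing
# 30% of the data either at random or as contiguous gaps. Purely illustrative.
import numpy as np

rng = np.random.default_rng(11)
readings_per_day, days = 96, 14                     # 15-minute CGM readings
n = readings_per_day * days
glucose = 7.0 + 2.5 * np.sin(np.linspace(0, 2 * np.pi * days, n)) + rng.normal(0, 0.8, n)

def time_in_range(values):
    values = values[~np.isnan(values)]
    return np.mean((values >= 3.9) & (values <= 10.0)) * 100   # %T70-180

def ape(reference, estimate):
    return abs(estimate - reference) / reference * 100

full = time_in_range(glucose)

random_loss = glucose.copy()
random_loss[rng.random(n) < 0.30] = np.nan          # 30% missing at random

gap_loss = glucose.copy()
gap_len = 5 * 4                                      # 5-hour gaps
while np.isnan(gap_loss).mean() < 0.30:              # keep adding gaps until ~30% missing
    start = rng.integers(0, n - gap_len)
    gap_loss[start:start + gap_len] = np.nan

print(f"APE, 30% random loss: {ape(full, time_in_range(random_loss)):.2f}%")
print(f"APE, 30% gap loss:    {ape(full, time_in_range(gap_loss)):.2f}%")
```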


Subjects
Diabetes Mellitus, Type 1 , Hypoglycemia , Adult , Blood Glucose , Blood Glucose Self-Monitoring/methods , Diabetes Mellitus, Type 1/drug therapy , Glycated Hemoglobin/analysis , Humans
15.
Biosensors (Basel) ; 11(10)2021 Sep 23.
Article in English | MEDLINE | ID: mdl-34677306

ABSTRACT

Bluetooth Low Energy (BLE) plays a critical role in wireless data transmission in wearable technologies. Previous work in this field has mostly focused on optimizing transmission throughput and power consumption. However, little work has been reported on a systematic evaluation of BLE data packet loss in the wearable healthcare ecosystem, which is essential for reliable and secure data transmission. Considering that diverse wearable devices are used as peripherals, and off-the-shelf smartphones (Android, iPhone) or Raspberry Pi devices with various chipsets and operating systems (OS) serve as hubs in the wearable ecosystem, there is an urgent need to understand the factors that influence data loss in BLE and to develop a mitigation solution. In this work, we systematically evaluated packet losses in Android- and iOS-based wearable ecosystems and propose a reduced transmission frequency and data bundling strategy, along with a queue-based packet transmission protocol, to mitigate data packet loss in BLE. The proposed protocol gives the peripheral device the flexibility to work with the host either in real-time mode for timely data transmission or in offline mode for accumulated data transmission when requested by the host. The test results show that lowered transmission frequency and data bundling reduce packet losses to less than 1%, and the queue-based packet transmission protocol eliminates any remaining packet loss by using re-request routines. The data loss mitigation protocol developed in this research can be widely applied to BLE-based wearable ecosystems for applications such as body sensor networks (BSN), the Internet of Things (IoT), and smart homes.
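
The queue-based re-request idea can be sketched independently of any BLE stack: the peripheral keeps every sent packet in a queue keyed by sequence number, and the hub re-requests whatever sequence numbers it has not received. The classes below are a protocol-level illustration under those assumptions, not the authors' firmware or a real BLE API.

```python
# Protocol-level sketch of queue-based transmission with re-requests: packets
# are kept by the sender until acknowledged, and the receiver asks again for any
# sequence numbers it missed. No real BLE stack is involved; this only
# illustrates the bookkeeping that removes residual packet loss.
import random

class Peripheral:
    def __init__(self):
        self.seq = 0
        self.unacked = {}                   # seq -> payload, kept until acknowledged

    def send(self, payload):
        packet = (self.seq, payload)
        self.unacked[self.seq] = payload
        self.seq += 1
        return packet

    def resend(self, missing_seqs):
        return [(s, self.unacked[s]) for s in missing_seqs if s in self.unacked]

    def ack(self, seqs):
        for s in seqs:
            self.unacked.pop(s, None)

class Hub:
    def __init__(self):
        self.received = {}

    def receive(self, packets, drop_prob=0.05):
        for seq, payload in packets:
            if random.random() > drop_prob:          # simulate radio loss
                self.received[seq] = payload

    def missing(self, expected_count):
        return [s for s in range(expected_count) if s not in self.received]

random.seed(1)
peripheral, hub = Peripheral(), Hub()
hub.receive([peripheral.send(f"sample-{i}") for i in range(100)])
while hub.missing(100):                              # re-request until complete
    hub.receive(peripheral.resend(hub.missing(100)))
peripheral.ack(hub.received.keys())
print("packets delivered:", len(hub.received), "(residual loss: 0)")
```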


Subjects
Delivery of Health Care , Wireless Technology , Algorithms , Ecosystem , Smartphone , Software , Wearable Electronic Devices
16.
Appl Physiol Nutr Metab ; 46(2): 148-154, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32813987

ABSTRACT

Like many wearables, flash glucose monitoring relies on user compliance and is subject to missing data. As recent research is beginning to utilise glucose technologies as behaviour change tools, it is important to understand whether missing data are tolerable. Complete FreeStyle Libre data files were amputated to remove 1-6 h of data, both at random and over mealtimes (breakfast, lunch, and dinner). Mean absolute percentage errors (MAPE) and intraclass correlation coefficients (ICC) were calculated to evaluate agreement and reliability. Thirty-two (91%) participants provided at least 1 complete day (24 h) of data (age: 44.8 ± 8.6 years, female: 18 (56%); mean fasting glucose: 5.0 ± 0.6 mmol/L). Mean and continuous overall net glycaemic action (CONGA) (60 min) were robust to data loss (MAPE ≤3%). Larger errors were calculated for standard deviation, coefficient of variation (CV), and mean amplitude of glycaemic excursions (MAGE) at increasing missingness (MAPE: 2%-10%, 2%-9%, and 4%-18%, respectively). ICC decreased as missing data increased, with most values indicating excellent reliability (>0.9), apart from certain MAGE ICCs, which indicated good reliability (0.84-0.9). Researchers and clinicians should be aware of the potential for larger errors when reporting standard deviation, CV, and MAGE at higher rates of data loss in nondiabetic populations; where mean and CONGA are of interest, however, data loss is less of a concern. Novelty: As research now utilises flash glucose monitoring as a behavioural change tool in nondiabetic populations, it is important to consider the influence of missing data. The glycaemic variability indices mean and CONGA are robust to data loss, but standard deviation, CV, and MAGE are affected at higher rates of missingness.


Subjects
Blood Glucose Self-Monitoring/instrumentation , Blood Glucose Self-Monitoring/statistics & numerical data , Fitness Trackers/statistics & numerical data , Adult , Blood Glucose Self-Monitoring/standards , Data Interpretation, Statistical , Female , Fitness Trackers/standards , Humans , Male , Middle Aged
17.
Dev Cogn Neurosci ; 45: 100809, 2020 10.
Article in English | MEDLINE | ID: mdl-32658760

ABSTRACT

EEG is a widely used tool to study the infant brain and its relationship with behavior. As infants usually have short attention spans, move at will, and do not respond to task instructions, attrition rates are usually high. Increasing our understanding of what influences data loss is therefore vital. The current paper examines external factors contributing to data loss in a large-scale ongoing longitudinal study (the YOUth project; 1279 five-month-olds, 1024 ten-month-olds, and 109 three-year-olds). Data loss is measured for both continuous EEG and ERP tasks as the percentage of data lost after artifact removal. Our results point to a wide array of external factors that contribute to data loss, some related to the child (e.g., gender, age, head shape) and some related to the experimental setting (e.g., choice of research assistant, time of day, season, and course of the experiment). Data loss was also more pronounced in the ERP experiment than in the EEG experiment. Finally, evidence was found for within-subject stability of data loss characteristics over multiple sessions. We end with recommendations to limit data loss in future studies.


Subjects
Electroencephalography/methods , Child, Preschool , Data Analysis , Female , Humans , Infant , Longitudinal Studies , Male
18.
Math Biosci Eng ; 16(5): 4526-4545, 2019 05 22.
Article in English | MEDLINE | ID: mdl-31499675

ABSTRACT

Wireless sensor networks (WSNs) are widely used to help basic scientific work gather and observe environmental data, whose completeness and accuracy are key to ensuring the success of that work. However, due to noise, collisions, and unreliable data links, data loss and damage in WSNs are common. Although some existing approaches, e.g., interpolation or prediction methods, can recover the original data to some extent, they may provide unsatisfactory accuracy when the amount of missing data becomes large. To address this problem, this paper proposes a new reliable data transmission scheme for WSNs based on data decomposition and an ensemble recovery mechanism. First, the original data are collected by sensor nodes and then expanded and split into multiple data shares using a multi-ary Vandermonde matrix. These data shares are transmitted separately to the source node via the sensor network, which is made up of a large number of sensor nodes. Since each share contains data redundancy, the source node can reconstruct the original data even if some data shares are damaged or lost during delivery. Finally, extensive simulation experiments show that the proposed scheme significantly outperforms existing solutions in terms of recovery accuracy and robustness.
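
The decomposition step maps naturally onto classic erasure coding: multiply a block of k values by an n × k Vandermonde matrix to get n shares, and any k surviving shares determine the original block. The sketch below does this over floating point with numpy as an illustration of the redundancy principle; the paper's multi-ary construction differs in its details.

```python
# Illustration of Vandermonde-based data shares: expand k original values into
# n shares so that any k surviving shares recover the block. Floating-point
# linear algebra is used here for simplicity; the paper's multi-ary construction
# differs in detail, so this only illustrates the redundancy principle.
import numpy as np

k, n = 4, 7                                      # 4 data values -> 7 shares
data = np.array([12.0, 7.0, 3.0, 42.0])

nodes = np.arange(1, n + 1, dtype=float)
V = np.vander(nodes, k, increasing=True)         # n x k Vandermonde matrix
shares = V @ data                                # one share per transmission path

# Suppose shares 1, 4 and 6 are lost in transit; any k survivors will do.
survivors = [0, 2, 3, 5]
recovered = np.linalg.solve(V[survivors, :], shares[survivors])

print("recovered block:", np.round(recovered, 6))   # matches the original data
```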

19.
PeerJ Comput Sci ; 5: e210, 2019.
Article in English | MEDLINE | ID: mdl-33816863

ABSTRACT

In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of prediction and classification, very few aim to solve the intermediate problems of data pre-processing, cleaning, and restoration. Long Short-Term Memory (LSTM) networks have previously been proposed as a solution for data restoration, but they suffer from a major bottleneck: a large number of sequential operations. We propose using attention mechanisms to entirely replace the recurrent components of these data-restoration networks. We demonstrate that such an approach leads to model sizes reduced by as much as two orders of magnitude, a 2-fold to 4-fold reduction in training times, and 95% accuracy for automotive data restoration. We also show in a case study that this approach improves the performance of downstream algorithms that rely on clean data.
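
A minimal version of replacing recurrence with attention for restoration looks like the sketch below: mask some timesteps of a 1-D signal, let a single self-attention layer attend over the observed context, and regress the missing values. The layer sizes, synthetic data, and training setup are assumptions for the illustration, not the architecture evaluated in the paper.

```python
# Minimal sketch of attention-based restoration of masked timesteps in a 1-D
# signal: a single self-attention layer plus a linear head regresses the missing
# values from the observed context. Sizes and training setup are illustrative
# assumptions, not the architecture evaluated in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, d_model = 64, 32

class AttentionRestorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(2, d_model)                  # (value, observed-flag) per step
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, values, mask):
        x = self.embed(torch.stack([values * mask, mask], dim=-1))
        attended, _ = self.attn(x, x, x)                    # every step attends to the context
        return self.head(attended).squeeze(-1)

def batch(batch_size=64):
    t = torch.linspace(0, 6.28, seq_len)
    signal = torch.sin(t + torch.rand(batch_size, 1) * 6.28)     # random-phase sine "sensor" data
    mask = (torch.rand(batch_size, seq_len) > 0.3).float()       # 30% of timesteps lost
    return signal, mask

model = AttentionRestorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1500):
    signal, mask = batch()
    pred = model(signal, mask)
    loss = ((pred - signal) ** 2 * (1 - mask)).sum() / (1 - mask).sum()  # error on missing steps
    opt.zero_grad(); loss.backward(); opt.step()

print(f"restoration MSE on masked timesteps: {loss.item():.4f}")
```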

20.
J Meas Phys Behav ; 1(1): 26-31, 2018 Mar.
Article in English | MEDLINE | ID: mdl-30159548

ABSTRACT

The Seniors USP study measured sedentary behaviour (activPAL3, 9-day wear) in older adults. The measurement protocol had three key characteristics: enabling 24-hour wear (monitor location, waterproofing); minimising data loss (reducing monitor failure, staff training, communication); and quality assurance (removal by the researcher, confidence about wear). Two monitors were not returned; 91% (n=700) of the returned monitors had 7 valid days of data. Sources of data loss included monitor failure (n=11), exclusion after quality assurance (n=5), early removal due to skin irritation (n=8), and procedural errors (n=10). Objective measurement of physical activity and sedentary behaviour in large studies requires decisional trade-offs between data quantity (collecting representative data) and utility (derived outcomes that reflect actual behaviour).
