Results 1 - 20 of 124
1.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793899

ABSTRACT

Metabolic syndrome poses a significant health challenge worldwide, prompting the need for comprehensive strategies integrating physical activity monitoring and energy expenditure estimation. Wearable sensor devices have been used for both energy intake and energy expenditure (EE) estimation. Traditionally, sensors are attached to the hip or wrist. The primary aim of this research is to investigate the use of an eyeglass-mounted wearable energy intake sensor (Automatic Ingestion Monitor v2, AIM-2) for simultaneous physical activity recognition (PAR) and estimation of steady-state EE, as compared to a traditional hip-worn device. Study data were collected from six participants performing six structured activities, with the reference EE measured using indirect calorimetry (COSMED K5) and reported as metabolic equivalents of tasks (METs). Next, a novel deep convolutional neural network-based multitasking model (Multitasking-CNN) was developed for PAR and EE estimation. The Multitasking-CNN was trained with a two-step progressive training approach for higher accuracy: in the first step, the model was trained for PAR, and in the second step, it was fine-tuned for EE estimation. Finally, the performance of Multitasking-CNN on the AIM-2 attached to eyeglasses was compared to the ActiGraph GT9X (AG) attached to the right hip. On the AIM-2 data, Multitasking-CNN achieved a maximum of 95% testing accuracy in PAR, a minimum of 0.59 METs mean square error (MSE), and 11% mean absolute percentage error (MAPE) in EE estimation. Conversely, on AG data, the Multitasking-CNN model achieved a maximum of 82% testing accuracy in PAR, a minimum of 0.73 METs MSE, and 13% MAPE in EE estimation. These results suggest the feasibility of using an eyeglass-mounted sensor for both PAR and EE estimation.
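The abstract reports EE error as mean square error in METs and as mean absolute percentage error. As an illustration only (the MET values below are hypothetical, not the study's data), these two metrics are computed as:

```python
def mse(y_true, y_pred):
    """Mean square error, here in squared METs."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical reference METs (indirect calorimetry) vs. model estimates.
ref_mets = [1.0, 2.5, 4.0, 6.0]
est_mets = [1.2, 2.0, 4.5, 5.5]
ee_mse = mse(ref_mets, est_mets)
ee_mape = mape(ref_mets, est_mets)
```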


Subjects
Energy Metabolism , Exercise , Eyeglasses , Neural Networks, Computer , Wearable Electronic Devices , Humans , Energy Metabolism/physiology , Exercise/physiology , Adult , Male , Calorimetry, Indirect/instrumentation , Calorimetry, Indirect/methods , Female , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods
2.
Nicotine Tob Res ; 25(7): 1391-1399, 2023 Jun 09.
Article in English | MEDLINE | ID: mdl-36905322

ABSTRACT

INTRODUCTION: There has been little research objectively examining use-patterns among individuals who use electronic cigarettes (e-cigarettes). The primary aim of this study was to identify patterns of e-cigarette use and categorize distinct use-groups by analyzing patterns of puff topography variables over time. The secondary aim was to identify the extent to which self-report questions about use accurately assess e-cigarette use-behavior. AIMS AND METHODS: Fifty-seven adult e-cigarette-only users completed a 4-hour ad libitum puffing session. Self-reports of use were collected both before and after this session. RESULTS: Three distinct use-groups emerged from exploratory and confirmatory cluster analyses. The first was labeled the "Graze" use-group (29.8% of participants), in which the majority of puffs were unclustered (i.e., puffs were greater than 60 seconds apart), with a small minority in short clusters (2-5 puffs). The second was labeled the "Clumped" use-group (12.3%), in which the majority of puffs were within clusters (short, medium [6-10 puffs], and/or long [>10 puffs]) and a small minority of puffs were unclustered. The third was labeled the "Hybrid" use-group (57.9%), in which most puffs were either within short clusters or were unclustered. Significant differences emerged between observed and self-reported use-behaviors, with a general tendency for participants to overreport use. Furthermore, commonly utilized assessments demonstrated limited accuracy in capturing the use-behaviors observed in this sample. CONCLUSIONS: This research addressed several limitations previously identified in the e-cigarette literature and collected novel data that provide substantial information about e-cigarette puff topography and its relationship with self-report measures and use-type categorization. IMPLICATIONS: This is the first study to identify and distinguish three empirically based e-cigarette use-groups. These use-groups, as well as the specific topography data discussed, can provide a foundation for future research assessing the impact of use across different use types. Furthermore, as participants tended to overreport use and assessments did not capture use accurately, this study can serve as a foundation for future work developing more appropriate assessments for use in research studies as well as in clinical practice.
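The cluster definitions above (puffs more than 60 seconds apart are unclustered; short clusters hold 2-5 puffs, medium 6-10, long >10) lend themselves to a simple grouping rule. A minimal sketch with hypothetical timestamps; the study itself used exploratory and confirmatory cluster analyses, not this thresholding:

```python
def label_puff_clusters(puff_times, gap_s=60.0):
    """Group puffs whose inter-puff interval is <= gap_s into clusters and
    label each cluster: unclustered (1 puff), short (2-5), medium (6-10),
    long (>10), per the abstract's definitions."""
    clusters, current = [], [puff_times[0]]
    for t in puff_times[1:]:
        if t - current[-1] <= gap_s:
            current.append(t)
        else:
            clusters.append(current)
            current = [t]
    clusters.append(current)

    def label(n):
        if n == 1:
            return "unclustered"
        if n <= 5:
            return "short"
        return "medium" if n <= 10 else "long"

    return [(label(len(c)), len(c)) for c in clusters]

# Hypothetical puff timestamps (seconds) from an ad libitum session.
labels = label_puff_clusters([0, 30, 50, 200, 500, 510, 520, 530, 540, 550])
```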


Subjects
Electronic Nicotine Delivery Systems , Tobacco Products , Vaping , Adult , Humans , Self Report , Data Collection
3.
IEEE Sens J ; 23(5): 5391-5400, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-37799776

ABSTRACT

Automatic food portion size estimation (FPSE) with minimal user burden is a challenging task. Most existing FPSE methods use fiducial markers and/or virtual models as dimensional references. An alternative approach is to estimate the dimensions of the eating containers prior to estimating the portion size. In this article, we propose a wearable sensor system (the automatic ingestion monitor integrated with a ranging sensor) and a related method for estimating the dimensions of plates and bowls. The contributions of this study are: 1) the model eliminates the need for fiducial markers; 2) the camera system [automatic ingestion monitor version 2 (AIM-2)] is not restricted in terms of positioning relative to the food item; 3) our model accounts for radial distortion caused by lens aberrations; 4) a ranging sensor directly gives the distance between the sensor and the eating surface; 5) the model is not restricted to circular plates; and 6) the proposed system implements a passive method that can be used to assess container dimensions with minimal user interaction. The error rates (mean ± std. dev.) for dimension estimation were 2.01% ± 4.10% for plate widths/diameters, 2.75% ± 38.11% for bowl heights, and 4.58% ± 6.78% for bowl diameters.
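Because the ranging sensor supplies the sensor-to-surface distance directly, a pinhole back-projection recovers physical size from image size. A minimal sketch under simplifying assumptions (no lens distortion, surface roughly perpendicular to the optical axis, hypothetical numbers); the paper's actual model additionally corrects radial distortion:

```python
def container_width_mm(pixel_width, range_mm, focal_length_px):
    """Back-project an image-plane width to a physical width with the
    pinhole model: W = w_px * Z / f. Lens distortion is ignored here."""
    return pixel_width * range_mm / focal_length_px

# Hypothetical values: a plate spanning 400 px, seen at 500 mm range
# by a camera with a 1000 px focal length, is about 200 mm wide.
width = container_width_mm(400, 500, 1000)
```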

4.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679357

ABSTRACT

Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem still being tackled in this field is the segmentation of food regions when multiple food items are present, particularly when the foods look similar in color and/or texture. Food image segmentation remains a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue) and thermal. Furthermore, we propose a multi-modal, four-dimensional (RGB-T) image segmentation using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (at room temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of the within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results than the RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for RGB and thermal data, respectively.
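The core idea is that each pixel becomes a 4-D feature vector (R, G, B, T) and k-means separates foods that look alike in color but differ in temperature. A self-contained sketch of Lloyd's k-means with the WSS criterion, using a deterministic initialization and a handful of hypothetical pixels (the paper additionally bootstraps WSS to pick k):

```python
def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Lloyd's k-means on fixed-length feature vectors (here 4-D RGB-T).
    Deterministic init from the first k points, for illustration only."""
    centroids = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        new = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            new.append(tuple(sum(c) / len(members) for c in zip(*members))
                       if members else centroids[j])
        if new == centroids:
            break
        centroids = new
    # Within-cluster sum of squares for the final assignment.
    wss = sum(dist2(p, centroids[l]) for p, l in zip(points, labels))
    return labels, wss

# Two visually similar foods separated mainly by the thermal channel (4th value).
pixels = [(120, 80, 60, 20), (122, 78, 61, 21), (121, 79, 58, 80), (119, 81, 62, 82)]
labels1, wss1 = kmeans(pixels, 1)
labels2, wss2 = kmeans(pixels, 2)
```

Adding the thermal channel lets k = 2 split the pixels cleanly, and WSS drops accordingly, which is the signal the elbow/WSS optimization looks for.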


Subjects
Algorithms , Cold Temperature , Cluster Analysis , Recognition, Psychology , Multimodal Imaging , Color
5.
Sensors (Basel) ; 23(11)2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37300082

ABSTRACT

Walking in real-world environments involves constant decision-making; e.g., when approaching a staircase, an individual decides whether to engage (climb the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging the egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and gradient boost (GB) classifier was developed to recognize the individual's intention of engaging or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least two steps before the potential mode transition, which is expected to provide ample time for controller mode transition in an assistive robot in real-world use.


Subjects
Intention , Robotics , Humans , Walking
6.
Int J Obes (Lond) ; 46(11): 2050-2057, 2022 11.
Article in English | MEDLINE | ID: mdl-36192533

ABSTRACT

OBJECTIVES: Dietary assessment methods not relying on self-report are needed. The Automatic Ingestion Monitor 2 (AIM-2) combines a wearable camera that captures food images with sensors that detect food intake. We compared energy intake (EI) estimates of meals derived from AIM-2 chewing-sensor signals, AIM-2 images, and an internet-based diet diary, with researcher-conducted weighed food records (WFR) as the gold standard. SUBJECTS/METHODS: Thirty adults wore the AIM-2 for meals self-selected from a university food court on one day in mixed laboratory and free-living conditions. Daily EI was determined from a sensor regression model, manual image analysis, and a diet diary, and compared with that from WFR. A posteriori analysis identified sources of error for image analysis and WFR differences. RESULTS: Sensor-derived EI from regression modeling (R2 = 0.331) showed the closest agreement with EI from WFR, followed by diet diary estimates. EI from image analysis differed significantly from that by WFR. Bland-Altman analysis showed wide limits of agreement for all three test methods with WFR, with the sensor method overestimating at lower and underestimating at higher EI. Nutritionist error in portion size estimation and irreconcilable differences in portion size between the food and nutrient databases used for WFR and image analyses were the greatest contributors to image analysis and WFR differences (44.4% and 44.8% of WFR EI, respectively). CONCLUSIONS: Estimation of daily EI from meals using sensor-derived features offers a promising alternative to overcome limitations of self-report. Image analysis may benefit from computerized analytical procedures to reduce the identified sources of error.
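The agreement analysis above is a standard Bland-Altman comparison: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 × SD of the differences. A minimal sketch with hypothetical daily EI values (not the study's data):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired EI estimates (kcal)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Hypothetical daily EI: sensor-derived vs. weighed food record.
sensor_ei = [1800, 2200, 2600, 3000]
wfr_ei = [1810, 2190, 2610, 2990]
bias, lo, hi = bland_altman(sensor_ei, wfr_ei)
```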


Subjects
Energy Intake , Wearable Electronic Devices , Humans , Adult , Diet Records , Meals , Diet
7.
Public Health Nutr ; : 1-11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35616087

ABSTRACT

OBJECTIVE: Passive, wearable sensors can be used to obtain objective information on infant feeding, but their use has not been tested. Our objective was to compare assessment of infant feeding (frequency, duration and cues) by self-report with that of the Automatic Ingestion Monitor-2 (AIM-2). DESIGN: A cross-sectional pilot study was conducted in Ghana. Mothers wore the AIM-2 on eyeglasses for 1 d during waking hours to assess infant feeding using images automatically captured by the device every 15 s. Feasibility was assessed using compliance with wearing the device. Infant feeding practices captured in the AIM-2 images were annotated by a trained evaluator and compared with maternal self-report via an interviewer-administered questionnaire. SETTING: Rural and urban communities in Ghana. PARTICIPANTS: Participants were thirty-eight (eighteen rural and twenty urban) breast-feeding mothers of infants (child age ≤7 months). RESULTS: Twenty-five mothers reported exclusive breast-feeding, which was common among those < 30 years of age (n 15, 60 %) and those residing in urban communities (n 14, 70 %). Compliance with wearing the AIM-2 was high (83 % of wake-time), suggesting low user burden. Maternal report differed from the AIM-2 data, such that mothers reported higher mean breast-feeding frequency (eleven v. eight times, P = 0·041) and duration (18·5 v. 10 min, P = 0·007) during waking hours. CONCLUSION: The AIM-2 was a feasible tool for the assessment of infant feeding among mothers in Ghana as a passive, objective method, and identified overestimation of self-reported breast-feeding frequency and duration. Future studies using the AIM-2 are warranted to determine validity on a larger scale.

8.
Sensors (Basel) ; 22(9)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35590990

ABSTRACT

Imaging-based methods of food portion size estimation (FPSE) promise higher accuracies compared to traditional methods. Many FPSE methods require dimensional cues (fiducial markers, finger-references, object-references) in the scene of interest and/or manual human input (wireframes, virtual models). This paper proposes a novel passive, standalone, multispectral, motion-activated, structured light-supplemented, stereo camera for food intake monitoring (FOODCAM) and an associated methodology for FPSE that does not need a dimensional reference given a fixed setup. The proposed device integrated a switchable band (visible/infrared) stereo camera with a structured light emitter. The volume estimation methodology focused on the 3-D reconstruction of food items based on the stereo image pairs captured by the device. The FOODCAM device and the methodology were validated using five food models with complex shapes (banana, brownie, chickpeas, French fries, and popcorn). Results showed that the FOODCAM was able to estimate food portion sizes with an average accuracy of 94.4%, which suggests that the FOODCAM can potentially be used as an instrument in diet and eating behavior studies.
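The FOODCAM's volume estimation rests on 3-D reconstruction from stereo image pairs. The core geometric relation for a rectified stereo rig is Z = f·B/d (depth from focal length, baseline, and pixel disparity); a one-line sketch with hypothetical rig parameters, since the device's calibration values are not given in the abstract:

```python
def depth_mm(focal_px, baseline_mm, disparity_px):
    """Stereo triangulation for a rectified pair: Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: f = 700 px, baseline = 60 mm, matched point disparity = 21 px.
z = depth_mm(700, 60, 21)
```

Repeating this per matched point (aided here by the structured-light pattern, which supplies texture for matching) yields the 3-D surface from which volume is integrated.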


Subjects
Photography , Portion Size , Diet , Feeding Behavior , Food , Humans , Photography/methods
9.
Sensors (Basel) ; 22(7)2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35408358

ABSTRACT

This paper presents a plantar pressure sensor system (P2S2) integrated into the insoles of shoes to detect thirteen commonly used human movements, including walking, stooping left and right, pulling a cart backward, squatting, descending and ascending stairs, running, and falling (front, back, right, left). Six force-sensitive resistor (FSR) sensors were positioned on critical pressure points on the insoles to capture the electrical signature of pressure change in the various movements. A total of 34 adult participants were tested with the P2S2. The pressure data were collected and processed using Principal Component Analysis (PCA) for input to multiple machine learning (ML) algorithms, including k-NN, neural network, and Support Vector Machine (SVM) algorithms. The ML models were trained using four-fold cross-validation, with each fold keeping subject data independent of the other folds. The model proved effective with an accuracy of 86%, showing a promising result in predicting human movements using the P2S2 integrated in shoes.
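The detail that each fold kept subject data independent matters: samples from one subject must never appear in both training and test sets, or accuracy is inflated. A minimal sketch of such a subject-grouped split (hypothetical IDs; equivalent in spirit to scikit-learn's GroupKFold):

```python
def subject_folds(subject_ids, n_folds=4):
    """Split sample indices into folds so that no subject appears in more
    than one fold (subject-independent cross-validation)."""
    subjects = sorted(set(subject_ids))
    assignment = {s: i % n_folds for i, s in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for idx, s in enumerate(subject_ids):
        folds[assignment[s]].append(idx)
    return folds

# Hypothetical dataset: 8 subjects, 3 samples each.
ids = [s for s in range(8) for _ in range(3)]
folds = subject_folds(ids, 4)
```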


Subjects
Shoes , Walking , Adult , Humans , Machine Learning , Movement , Pressure
10.
Sensors (Basel) ; 23(1)2022 Dec 26.
Article in English | MEDLINE | ID: mdl-36616825

ABSTRACT

Extreme angles in lower-body joints may adversely increase the risk of joint injury. These injuries are common in the workplace and cause persistent pain and significant financial losses to people and companies. The purpose of this study was to predict lower-body joint angles from the ankle to the lumbosacral joint (L5S1) by measuring plantar pressures in shoes. Joint angle prediction was aided by a designed footwear sensor consisting of six force-sensing resistors (FSRs) and a microcontroller fitted with Bluetooth LE sensors. An Xsens motion capture system was utilized as ground-truth validation, measuring 3D joint angles. Thirty-seven human subjects were tested squatting in an IRB-approved study. A Gaussian Process Regression (GPR) algorithm was used to create a progressive model that predicted the angles of the ankle, knee, hip, and L5S1. The footwear sensor showed a promising root mean square error (RMSE) for each joint; the L5S1 angle was predicted with an RMSE of 0.21° for the X-axis and 0.22° for the Y-axis. This result confirmed that the proposed plantar sensor system has the capability to predict and monitor lower-body joint angles for potential injury prevention and training of occupational workers.


Subjects
Knee Joint , Lower Extremity , Humans , Ankle Joint , Pressure , Biomechanical Phenomena , Machine Learning , Gait
11.
Sensors (Basel) ; 22(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214399

ABSTRACT

Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed from the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior using the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.


Subjects
Diet , Energy Intake , Algorithms , Food , Humans , Smartphone
12.
IEEE Sens J ; 21(24): 27728-27735, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-35813985

ABSTRACT

Objective detection of periods of wear and non-wear is critical for human studies that rely on information from wearable sensors, such as food intake sensors. In this paper, we present a novel method of compliance detection, using the example of the Automatic Ingestion Monitor v2 (AIM-2) sensor, which contains a tri-axial accelerometer, a still camera, and a chewing sensor. The method was developed and validated using data from a study of 30 participants aged 18-39, each wearing the AIM-2 for two days (a day in pseudo-free-living and a day in free-living). Four types of wear compliance were analyzed: 'normal-wear', 'non-compliant-wear', 'non-wear-carried', and 'non-wear-stationary'. The ground truth for those four types of compliance was obtained by reviewing the images from the egocentric camera. The features for compliance detection were the standard deviation of acceleration, average pitch and roll angles, and the mean square error of two consecutive images. These were used to train three random forest classifiers: (1) accelerometer-based, (2) image-based, and (3) combined accelerometer- and image-based. Satisfactory wear-compliance measurement accuracy was obtained using the combined classifier (89.24%) in leave-one-subject-out cross-validation. The average duration of compliant wear in the study was 9 h with a standard deviation of 2 h, or 70.96% of total on-time. This method can be used to calculate the wear and non-wear time of the AIM-2, and can potentially be extended to other devices. The study also included assessments of sensor burden and privacy concerns. The survey results suggest recommendations that may be used to increase wear compliance.
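The three named features (acceleration variability, average orientation, and frame-to-frame image MSE) are straightforward to extract per epoch. A minimal sketch with hypothetical one-epoch inputs; the classifier itself (a random forest) is not reproduced here:

```python
def std_dev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def compliance_features(accel_mag, pitch, roll, frame_a, frame_b):
    """Per-epoch features named in the abstract: standard deviation of
    acceleration, average pitch/roll, and the mean square error between
    two consecutive camera frames (flattened pixel lists)."""
    return {
        "accel_std": std_dev(accel_mag),
        "mean_pitch": sum(pitch) / len(pitch),
        "mean_roll": sum(roll) / len(roll),
        "image_mse": sum((a - b) ** 2 for a, b in zip(frame_a, frame_b)) / len(frame_a),
    }

# Hypothetical epoch: constant acceleration and near-identical frames,
# the kind of pattern a 'non-wear-stationary' epoch would produce.
feats = compliance_features([1.0, 1.0, 1.0], [0.0, 0.0], [90.0, 90.0],
                            [10, 10, 10, 10], [10, 10, 11, 10])
```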

13.
Sensors (Basel) ; 21(11)2021 May 30.
Article in English | MEDLINE | ID: mdl-34070843

ABSTRACT

Ankle injuries may adversely increase the risk of injury to the joints of the lower extremity and can lead to various impairments in workplaces. The purpose of this study was to predict the ankle angles by developing a footwear pressure sensor and utilizing a machine learning technique. The footwear sensor was composed of six FSRs (force sensing resistors), a microcontroller and a Bluetooth LE chipset in a flexible substrate. Twenty-six subjects were tested in squat and stoop motions, which are common positions utilized when lifting objects from the floor and pose distinct risks to the lifter. The kNN (k-nearest neighbor) machine learning algorithm was used to create a representative model to predict the ankle angles. For the validation, a commercial IMU (inertial measurement unit) sensor system was used. The results showed that the proposed footwear pressure sensor could predict the ankle angles at more than 93% accuracy for squat and 87% accuracy for stoop motions. This study confirmed that the proposed plantar sensor system is a promising tool for the prediction of ankle angles and thus may be used to prevent potential injuries while lifting objects in workplaces.


Subjects
Ankle Joint , Ankle , Biomechanical Phenomena , Humans , Lower Extremity , Machine Learning
14.
Sensors (Basel) ; 21(3)2021 Jan 24.
Article in English | MEDLINE | ID: mdl-33498956

ABSTRACT

For the controller of wearable lower-limb assistive devices, quantitative understanding of human locomotion serves as the basis for human motion intent recognition and joint-level motion control. Traditionally, the required gait data are obtained in gait research laboratories, utilizing marker-based optical motion capture systems. Despite the high accuracy of measurement, marker-based systems are largely limited to laboratory environments, making it nearly impossible to collect the desired gait data in real-world daily-living scenarios. To address this problem, the authors propose a novel exoskeleton-based gait data collection system, which provides the capability of conducting independent measurement of lower limb movement without the need for stationary instrumentation. The basis of the system is a lightweight exoskeleton with articulated knee and ankle joints. To minimize the interference to a wearer's natural lower-limb movement, a unique two-degrees-of-freedom joint design is incorporated, integrating a primary degree of freedom for joint motion measurement with a passive degree of freedom to allow natural joint movement and improve the comfort of use. In addition to the joint-embedded goniometers, the exoskeleton also features multiple positions for the mounting of inertia measurement units (IMUs) as well as foot-plate-embedded force sensing resistors to measure the foot plantar pressure. All sensor signals are routed to a microcontroller for data logging and storage. To validate the exoskeleton-provided joint angle measurement, a comparison study on three healthy participants was conducted, which involves locomotion experiments in various modes, including overground walking, treadmill walking, and sit-to-stand and stand-to-sit transitions. Joint angle trajectories measured with an eight-camera motion capture system served as the benchmark for comparison. 
Experimental results indicate that the exoskeleton-measured joint angle trajectories closely match those obtained through the optical motion capture system in all modes of locomotion (correlation coefficients of 0.97 and 0.96 for knee and ankle measurements, respectively), clearly demonstrating the accuracy and reliability of the proposed gait measurement system.
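The reported agreement (correlation coefficients of 0.97 and 0.96 between exoskeleton and motion-capture trajectories) is the Pearson correlation. A minimal sketch with hypothetical angle trajectories, not the study's recordings:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two joint-angle trajectories."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical exoskeleton vs. motion-capture knee angles (degrees).
exo = [5.0, 20.0, 45.0, 60.0, 30.0]
mocap = [6.0, 21.0, 44.0, 61.0, 29.0]
r = pearson_r(exo, mocap)
```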


Subjects
Exoskeleton Device , Gait , Biomechanical Phenomena , Data Collection , Female , Humans , Male , Reproducibility of Results , Walking
15.
Nicotine Tob Res ; 22(10): 1883-1890, 2020 10 08.
Article in English | MEDLINE | ID: mdl-31693162

ABSTRACT

INTRODUCTION: Wearable sensors may be used for the assessment of behavioral manifestations of cigarette smoking under natural conditions. This paper introduces a new camera-based sensor system to monitor smoking behavior. The goals of this study were (1) identification of the best position of sensor placement on the body and (2) feasibility evaluation of the sensor as a free-living smoking-monitoring tool. METHODS: A sensor system was developed with a 5MP camera that captured images every second, continuously, for up to 26 hours. Five on-body locations were tested for the selection of sensor placement. A feasibility study was then performed on 10 smokers to monitor full-day smoking under free-living conditions. Captured images were manually annotated to obtain behavioral metrics of smoking, including smoking frequency, smoking environment, and puffs per cigarette. The smoking environment and puff counts captured by the camera were compared with self-reported smoking. RESULTS: A camera located on the eyeglass temple produced the maximum number of images of smoking and the minimum number of blurry or overexposed images (53.9%, 4.19%, and 0.93% of total captured, respectively). During free-living conditions, 286,245 images were captured with a mean (±standard deviation) duration of sensor wear of 647 (±74) minutes/participant. Image annotation identified consumption of 5 (±2.3) cigarettes/participant, 3.1 (±1.1) cigarettes/participant indoors, 1.9 (±0.9) cigarettes/participant outdoors, and 9.02 (±2.5) puffs/cigarette. Statistical tests found significant differences between manual annotations and self-reported smoking environment or puff counts. CONCLUSIONS: A wearable camera-based sensor may facilitate objective monitoring of cigarette smoking, categorization of smoking environments, and identification of behavioral metrics of smoking in free-living conditions.
IMPLICATIONS: The proposed camera-based sensor system can be employed to examine cigarette smoking under free-living conditions. Smokers may accept this unobtrusive sensor for extended wear, as the sensor would not restrict the natural pattern of smoking or daily activities, nor would it require any active participation from a person except wearing it. Critical metrics of smoking behavior, such as the smoking environment and puff counts obtained from this sensor, may generate important information for smoking interventions.


Subjects
Cigarette Smoking , Monitoring, Ambulatory/instrumentation , Wearable Electronic Devices , Feasibility Studies , Humans
16.
IEEE Sens J ; 20(10): 5379-5388, 2020 May 15.
Article in English | MEDLINE | ID: mdl-33746621

ABSTRACT

This paper presents wearable sensors for detecting differences in chewing strength while eating foods of different hardness (carrot as hard, apple as moderate, and banana as soft food). Four wearable sensor systems were evaluated: (1) a gas pressure sensor measuring changes in ear pressure proportional to ear canal deformation during chewing, (2) a flexible, curved bend sensor attached to the right temple of eyeglasses measuring the contraction of the temporalis muscle, (3) a piezoelectric strain sensor placed on the temporalis muscle, and (4) an electromyography sensor with electrodes placed on the temporalis muscle. Data from 15 participants, each wearing all four sensors at once, were collected. Each participant took and consumed 10 bites of carrot, apple, and banana. The hardness of the foods was measured by a food penetrometer. Single-factor ANOVA found a significant effect of food hardness on the standard deviation of the signals for all four sensors (p-value < .001). Tukey's multiple comparison test at a 5% significance level confirmed that the means of the standard deviations were significantly different across the provided test foods for all four sensors. The results of this study indicate that the wearable sensors may potentially be used for measuring chewing strength and assessing food hardness.
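The single-factor ANOVA above compares between-group variance (across the three foods) with within-group variance. A minimal sketch of the F statistic with hypothetical per-food signal standard deviations (the real analysis would then apply Tukey's test for pairwise comparisons):

```python
def anova_f(groups):
    """One-way ANOVA F statistic across groups of measurements."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical chewing-signal standard deviations for banana, apple, carrot.
f_stat = anova_f([[1, 2, 3], [2, 3, 4], [9, 10, 11]])
```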

17.
Sensors (Basel) ; 19(3)2019 Jan 29.
Article in English | MEDLINE | ID: mdl-30700056

ABSTRACT

In recent years, a number of wearable approaches have been introduced for objective monitoring of cigarette smoking based on monitoring of hand gestures, breathing or cigarette lighting events. However, non-reactive, objective and accurate measurement of everyday cigarette consumption in the wild remains a challenge. This study utilizes a wearable sensor system (Personal Automatic Cigarette Tracker 2.0, PACT2.0) and proposes a method that integrates information from an instrumented lighter and a 6-axis Inertial Measurement Unit (IMU) on the wrist for accurate detection of smoking events. The PACT2.0 was utilized in a study of 35 moderate to heavy smokers in both controlled (1.5-2 h) and unconstrained free-living conditions (~24 h). The collected dataset contained approximately 871 h of IMU data, 463 lighting events, and 443 cigarettes. The proposed method identified smoking events from the cigarette lighter data and estimated puff counts by detecting hand-to-mouth gestures (HMG) in the IMU data by a Support Vector Machine (SVM) classifier. The leave-one-subject-out (LOSO) cross-validation on the data from the controlled portion of the study achieved high accuracy and F1-score of smoking event detection and estimation of puff counts (97%/98% and 93%/86%, respectively). The results of validation in free-living demonstrate 84.9% agreement with self-reported cigarettes. These results suggest that an IMU and instrumented lighter may potentially be used in studies of smoking behavior under natural conditions.

18.
Sensors (Basel) ; 19(21)2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31661856

ABSTRACT

Globally, cigarette smoking is widespread among all ages, and smokers struggle to quit. The design of effective cessation interventions requires an accurate and objective assessment of smoking frequency and smoke exposure metrics. Recently, wearable devices have emerged as a means of assessing cigarette use. However, wearable technologies have inherent limitations, and their sensor responses are often influenced by wearers' behavior, motion and environmental factors. This paper presents a systematic review of current and forthcoming wearable technologies, with a focus on sensing elements, body placement, detection accuracy, underlying algorithms and applications. Full-texts of 86 scientific articles were reviewed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines to address three research questions oriented to cigarette smoking, in order to: (1) Investigate the behavioral and physiological manifestations of cigarette smoking targeted by wearable sensors for smoking detection; (2) explore sensor modalities employed for detecting these manifestations; (3) evaluate underlying signal processing and pattern recognition methodologies and key performance metrics. The review identified five specific smoking manifestations targeted by sensors. The results suggested that no system reached 100% accuracy in the detection or evaluation of smoking-related features. Also, the testing of these sensors was mostly limited to laboratory settings. For a realistic evaluation of accuracy metrics, wearable devices require thorough testing under free-living conditions.


Subjects
Cigarette Smoking, Wearable Electronic Devices, Electrocardiography, Hand/physiology, Humans, Mouth/physiology, Signal Processing, Computer-Assisted
19.
IEEE Sens J ; 18(9): 3752-3758, 2018 May 01.
Article in English | MEDLINE | ID: mdl-30364677

ABSTRACT

The goal of this pilot study is to evaluate the feasibility of using a 3-axis accelerometer attached to the frame of eyeglasses for automatic detection of food intake. A 3D acceleration sensor was attached to the temple of regular eyeglasses. Ten participants wore the device in two visits (first, laboratory; second, free-living) on different days, reporting food intake episodes using a pushbutton. A leave-one-out procedure was used to test the algorithm for food intake detection. The accelerometer signal was split into epochs of varying durations (3 s, 5 s, 10 s, 15 s, 20 s, 25 s, and 30 s); 152 time- and frequency-domain features were computed for each epoch. A two-stage procedure was used to find the feature set best suited for classification. The first stage used minimum Redundancy Maximum Relevance (mRMR) to obtain the 30 top-ranked features, and the second stage used forward feature selection with a kNN classifier to obtain the optimal feature set for each leave-one-out fold. The best average F1-score over the combined laboratory and free-living experiments was 87.9 ± 13.8% (mean ± standard deviation) for 20 s epochs, and 84.7 ± 7.95% for the shortest epoch of 3 s. The results suggest that an accelerometer may provide a compelling alternative to other sensor modalities, as the proposed sensor does not require direct attachment to the body and therefore significantly improves the comfort and social acceptability of the food intake monitoring system.
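The two-stage selection described above (a relevance ranking followed by greedy forward selection evaluated with a simple classifier) can be sketched as follows. This is an illustration on synthetic data, not the paper's pipeline: absolute correlation with the class label stands in for mRMR, a 1-NN leave-one-out score stands in for the kNN evaluation, and the feature counts are reduced from 152/30 to 20/5 for the example.

```python
import numpy as np

# Synthetic classification data: 20 candidate features, only features
# 3 and 7 carry signal (assumed setup for illustration).
rng = np.random.default_rng(0)
n, p = 120, 20
X = rng.normal(size=(n, p))
y = (X[:, 3] + 0.5 * X[:, 7] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Stage 1: rank features by absolute correlation with the class label
# (a simple relevance score standing in for mRMR) and keep the top 5.
rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
ranked = np.argsort(rel)[::-1][:5]

def loo_1nn_acc(cols):
    """Leave-one-out accuracy of a 1-NN classifier on the given columns."""
    Z = X[:, cols]
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # never match a sample to itself
    return float(np.mean(y[np.argmin(d, axis=1)] == y))

# Stage 2: greedy forward selection over the ranked candidates,
# keeping a feature only if it improves the wrapper score.
selected, best = [], 0.0
for j in ranked:
    acc = loo_1nn_acc(selected + [int(j)])
    if acc > best:
        selected.append(int(j))
        best = acc

print(selected, round(best, 3))
```

The wrapper stage is what keeps the final set small: a candidate survives only if it raises the cross-validated score beyond the set chosen so far.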

20.
Sensors (Basel) ; 17(12)2017 Nov 23.
Article in English | MEDLINE | ID: mdl-29168769

ABSTRACT

Assistance during sit-to-stand (SiSt) transitions for frail elderly people may be provided by powered orthotic devices. Control of a powered orthosis may be performed by means of electromyography (EMG), which requires direct contact of the measurement electrodes with the skin. The purpose of this study was to determine whether a non-EMG-based method, using inertial sensors placed at different positions on the orthosis together with a lightweight pattern recognition algorithm, can accurately identify SiSt transitions without false positives. A novel method is proposed to eliminate false positives based on a two-stage design: stage one detects the sitting posture; stage two recognizes the initiation of a SiSt transition from the sitting position. The method was validated using data from 10 participants who performed 34 different activities and posture transitions. Features were computed from the sensor signals and combined into lagged epochs. A reduced feature set was selected using a minimum-redundancy-maximum-relevance (mRMR) algorithm followed by forward feature selection. To obtain a recognition model with low computational complexity, we compared an extreme learning machine (ELM) and a multilayer perceptron (MLP) for both stages of the recognition algorithm. Both classifiers accurately identified all posture transitions with no false positives. The average detection time was 0.19 ± 0.33 s for the ELM and 0.13 ± 0.32 s for the MLP. The MLP classifier exhibited lower time complexity in the recognition phase, whereas the ELM presented lower computational demands in the training phase. The results demonstrate that the proposed algorithm could potentially be adopted to control a powered orthosis.
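The two-stage gating described above can be illustrated with a rule-based stand-in: stage one must confirm a sitting posture before stage two is allowed to declare a SiSt onset, which is what suppresses false positives from other movements. The thresholds, sampling rate, sign convention, and synthetic pitch signal below are all assumptions for the example; the paper's stages are trained ELM/MLP classifiers, not fixed thresholds.

```python
import numpy as np

# Assumed illustrative constants (not from the paper).
FS = 50                 # sampling rate, Hz
SIT_PITCH = 70.0        # deg: trunk pitch typical of sitting
ONSET_RATE = 30.0       # deg/s: forward-lean rate marking SiSt onset

def detect_sist(pitch):
    """Return the sample index of the first SiSt onset, or None.

    Stage 1 gates on posture (high pitch = sitting in this convention);
    stage 2 fires only when a fast pitch drop occurs while stage 1 holds.
    """
    rate = np.diff(pitch) * FS           # deg/s between successive samples
    for i in range(1, len(pitch)):
        sitting = pitch[i - 1] > SIT_PITCH        # stage 1: posture check
        if sitting and rate[i - 1] < -ONSET_RATE:  # stage 2: onset check
            return i
    return None

# Synthetic trial: 2 s of sitting (pitch ~80 deg), then a 1 s
# transition toward standing (pitch falling to 10 deg).
pitch = np.concatenate([np.full(2 * FS, 80.0),
                        np.linspace(80.0, 10.0, FS)])
print(detect_sist(pitch))
# -> 101
```

Because stage two is only evaluated while stage one reports sitting, a fast pitch change during walking or reaching (when the posture gate is closed) cannot trigger a detection, mirroring the false-positive elimination strategy of the two-stage design.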


Subjects
Orthotic Devices, Algorithms, Electromyography, Humans, Movement, Posture