Results 1 - 17 of 17
1.
Sensors (Basel) ; 24(13)2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39001173

ABSTRACT

Microplastics (MPs, size ≤ 5 mm) have emerged as a significant worldwide concern, threatening marine and freshwater ecosystems, and the lack of MP detection technologies is notable. The main goal of this research is the development of a camera sensor that detects MPs and measures their size and velocity while in motion. This study introduces a novel methodology involving computer vision and artificial intelligence (AI) for the detection of MPs. Three different camera systems, including fixed-focus 2D and autofocus (2D and 3D), were implemented and compared. A YOLOv5-based object detection model was used to detect MPs in the captured images. DeepSORT was then implemented for tracking MPs through consecutive images. In real-time testing in a laboratory flume setting, the precision in MP counting was found to be 97%, and during field testing in a local river, the precision was 96%. This study provides foundational insights into utilizing AI for detecting MPs in different environmental settings, contributing to more effective strategies for managing and mitigating MP pollution.
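
The abstract does not include an implementation; the sketch below illustrates, under stated assumptions, how the size and velocity of one tracked particle could be estimated from bounding boxes produced by a detector-plus-tracker pipeline such as YOLOv5 + DeepSORT. The calibration constants (`MM_PER_PIXEL`, `FPS`) and the `particle_metrics` helper are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical calibration constants (assumed, not from the paper)
MM_PER_PIXEL = 0.05      # spatial scale of the camera setup
FPS = 30.0               # camera frame rate

def particle_metrics(track):
    """Estimate size (mm) and mean speed (mm/s) of one tracked particle.

    `track` is a list of (frame_index, x1, y1, x2, y2) bounding boxes
    produced by a detector/tracker pipeline (e.g., YOLOv5 + DeepSORT).
    """
    track = np.asarray(track, dtype=float)
    frames, boxes = track[:, 0], track[:, 1:]

    # Size: mean of the larger box dimension, converted to millimetres
    widths = (boxes[:, 2] - boxes[:, 0]) * MM_PER_PIXEL
    heights = (boxes[:, 3] - boxes[:, 1]) * MM_PER_PIXEL
    size_mm = float(np.mean(np.maximum(widths, heights)))

    # Velocity: displacement of the box centre between consecutive frames
    centres = np.column_stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                               (boxes[:, 1] + boxes[:, 3]) / 2]) * MM_PER_PIXEL
    dt = np.diff(frames) / FPS
    speeds = np.linalg.norm(np.diff(centres, axis=0), axis=1) / dt
    return size_mm, float(np.mean(speeds))

# Example: a particle drifting to the right over five frames
demo_track = [(i, 100 + 3 * i, 50, 110 + 3 * i, 58) for i in range(5)]
print(particle_metrics(demo_track))
```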

2.
Sensors (Basel) ; 23(5)2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904615

ABSTRACT

Maternal health includes health during pregnancy and childbirth. Each stage of pregnancy should be a positive experience, ensuring that women and their babies reach their full potential in health and well-being. However, this cannot always be achieved. According to UNFPA (United Nations Population Fund), approximately 800 women die every day from avoidable causes related to pregnancy and childbirth, so it is important to monitor mother and fetal health throughout the pregnancy. Many wearable sensors and devices have been developed to monitor both fetal and maternal health and physical activity and to reduce risk during pregnancy. Some wearables monitor fetal ECG or heart rate and movement, while others focus on the mother's health and physical activities. This study presents a systematic review of such wearable systems. Twelve scientific articles were reviewed to address three research questions oriented to (1) sensors and methods of data acquisition; (2) processing methods for the acquired data; and (3) detection of the activities or movements of the fetus or the mother. Based on these findings, we discuss how sensors can help effectively monitor maternal and fetal health during pregnancy. We observed that most of the wearable sensors were used in a controlled environment. These sensors need further testing in free-living conditions and for continuous monitoring before they can be recommended for mass implementation.


Subject(s)
Maternal Health , Wearable Electronic Devices , Pregnancy , Female , Humans , Fetus/physiology , Heart Rate , Movement
3.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679357

ABSTRACT

Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem still being tackled in this field is the segmentation of food regions when multiple food items are present, especially similar-looking foods (similar in color and/or texture). Food image segmentation is a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue) and thermal. Furthermore, we propose multi-modal four-dimensional (RGB-T) image segmentation using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (at room temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of the within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results than the RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for the RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for the RGB and thermal data, respectively.
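
As a rough illustration of the described pipeline, the sketch below stacks the RGB and thermal channels into four-dimensional pixel features, clusters them with k-means, and reports the within-cluster sum of squares (WSS) over a range of cluster counts. Image registration and the bootstrapping step are omitted, and the function names and scaling choices are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_rgbt(rgb, thermal, k):
    """Cluster pixels of an aligned RGB + thermal image pair.

    rgb: (H, W, 3) array; thermal: (H, W) array, already registered to rgb.
    Returns a (H, W) label map and the within-cluster sum of squares.
    """
    h, w, _ = rgb.shape
    feats = np.concatenate([rgb.reshape(-1, 3),
                            thermal.reshape(-1, 1)], axis=1).astype(float)
    # Scale each channel to [0, 1] so colour and temperature are comparable
    feats = (feats - feats.min(0)) / (np.ptp(feats, axis=0) + 1e-9)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    return km.labels_.reshape(h, w), km.inertia_  # inertia_ = WSS

def wss_curve(rgb, thermal, k_max=6):
    """WSS for k = 1..k_max, for elbow inspection."""
    return [segment_rgbt(rgb, thermal, k)[1] for k in range(1, k_max + 1)]

# Synthetic example: hot item on the left half, cold item on the right half
rgb = np.random.rand(64, 64, 3)
thermal = np.hstack([np.full((64, 32), 60.0), np.full((64, 32), 5.0)])
labels, _ = segment_rgbt(rgb, thermal, k=2)
print(wss_curve(rgb, thermal, k_max=4))
```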


Subject(s)
Algorithms , Cold , Cluster Analysis , Recognition, Psychology , Multimodal Imaging , Color
4.
Sensors (Basel) ; 23(11)2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37300082

ABSTRACT

Walking in real-world environments involves constant decision-making, e.g., when approaching a staircase, an individual decides whether to engage (climb the stairs) or avoid it. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual's motion intent when approaching a staircase, before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging the egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, AdaBoost and gradient boosting (GB) classifiers were developed to recognize the individual's intention of engaging with or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least two steps before the potential mode transition, which is expected to provide ample time for the controller mode transition in an assistive robot in real-world use.
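
A minimal sketch of the intent-classification stage is given below, assuming a handful of features derived from the staircase detections over a short window (relative bounding-box area, its growth rate, and horizontal offset from the image centre); these features, the synthetic data, and the classifier settings are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-window features from staircase detections:
# [relative bbox area, bbox area growth rate, horizontal offset from centre]
rng = np.random.default_rng(0)
X_engage = rng.normal([0.30, 0.05, 0.0], [0.05, 0.02, 0.1], size=(100, 3))
X_avoid = rng.normal([0.10, 0.00, 0.4], [0.05, 0.02, 0.1], size=(100, 3))
X = np.vstack([X_engage, X_avoid])
y = np.array([1] * 100 + [0] * 100)  # 1 = engage (climb), 0 = avoid

# Compare the two boosted classifiers with 5-fold cross-validation
for clf in (AdaBoostClassifier(n_estimators=100, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean().round(3))
```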


Subject(s)
Intention , Robotics , Humans , Walking
5.
Sensors (Basel) ; 22(23)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36501858

ABSTRACT

Commercial use of biometric authentication is becoming increasingly popular, which has sparked the development of EEG-based authentication. To stimulate the brain and capture characteristic brain signals, these systems generally require the user to perform specific activities such as deeply concentrating on an image, mental activity, visual counting, etc. This study investigates whether effective authentication would be feasible for users tasked with a minimal daily activity such as lifting a tiny object. With this novel protocol, the minimum number of EEG electrodes (channels) with the highest performance (ranked) was identified to improve user comfort and acceptance over traditional 32-64 electrode-based EEG systems while also reducing the load of real-time data processing. For this proof of concept, a public dataset was employed, which contains 32 channels of EEG data from 12 participants performing a motor task without intent for authentication. The data were filtered into five frequency bands, and 12 different features were extracted to train a random forest-based machine learning model. All channels were ranked according to Gini impurity. It was found that, when the EEG data are filtered into the gamma sub-band, only 14 channels are required to perform authentication within 1% of the accuracy obtained with all 32 channels. This analysis will allow (a) the design of a custom headset with 14 electrodes clustered over the frontal and occipital lobes of the brain, (b) a reduction in data collection difficulty while performing authentication, (c) a smaller dataset size to allow real-time authentication while maintaining reasonable performance, and (d) an API for ranking authentication performance across different headsets and tasks.
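
The channel-ranking step can be sketched as follows: per-channel features are concatenated into one feature vector per trial, a random forest is fitted, and the Gini importances are summed per channel. The data here are synthetic and the channel-major feature ordering is an assumption; only the ranking mechanism reflects the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS, N_FEATURES = 32, 12          # counts taken from the abstract
rng = np.random.default_rng(1)

# Hypothetical feature matrix: one row per trial, columns ordered
# channel-major (channel 0 features first, then channel 1, ...)
X = rng.normal(size=(240, N_CHANNELS * N_FEATURES))
y = rng.integers(0, 12, size=240)        # 12 participants as classes

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Gini importances summed per channel give a channel ranking
imp = forest.feature_importances_.reshape(N_CHANNELS, N_FEATURES).sum(axis=1)
ranking = np.argsort(imp)[::-1]
print("Top 14 channels by Gini importance:", ranking[:14])
```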


Subject(s)
Biometric Identification , Electroencephalography , Humans , Electroencephalography/methods , Biometric Identification/methods , Brain , Electrodes
6.
Sensors (Basel) ; 21(3)2021 Jan 24.
Article in English | MEDLINE | ID: mdl-33498956

ABSTRACT

For the controller of wearable lower-limb assistive devices, quantitative understanding of human locomotion serves as the basis for human motion intent recognition and joint-level motion control. Traditionally, the required gait data are obtained in gait research laboratories, utilizing marker-based optical motion capture systems. Despite the high accuracy of measurement, marker-based systems are largely limited to laboratory environments, making it nearly impossible to collect the desired gait data in real-world daily-living scenarios. To address this problem, the authors propose a novel exoskeleton-based gait data collection system, which provides the capability of conducting independent measurement of lower limb movement without the need for stationary instrumentation. The basis of the system is a lightweight exoskeleton with articulated knee and ankle joints. To minimize the interference to a wearer's natural lower-limb movement, a unique two-degrees-of-freedom joint design is incorporated, integrating a primary degree of freedom for joint motion measurement with a passive degree of freedom to allow natural joint movement and improve the comfort of use. In addition to the joint-embedded goniometers, the exoskeleton also features multiple positions for the mounting of inertial measurement units (IMUs) as well as foot-plate-embedded force sensing resistors to measure the foot plantar pressure. All sensor signals are routed to a microcontroller for data logging and storage. To validate the exoskeleton-provided joint angle measurement, a comparison study on three healthy participants was conducted, which involved locomotion experiments in various modes, including overground walking, treadmill walking, and sit-to-stand and stand-to-sit transitions. Joint angle trajectories measured with an eight-camera motion capture system served as the benchmark for comparison. Experimental results indicate that the exoskeleton-measured joint angle trajectories closely match those obtained through the optical motion capture system in all modes of locomotion (correlation coefficients of 0.97 and 0.96 for knee and ankle measurements, respectively), clearly demonstrating the accuracy and reliability of the proposed gait measurement system.
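
The validation step (agreement between goniometer and motion-capture joint angles) amounts to correlating two trajectories sampled at different rates; a minimal sketch is shown below with synthetic knee-angle data. The sampling rates and the interpolation choice are assumptions for illustration only.

```python
import numpy as np

def trajectory_agreement(t_exo, angle_exo, t_mocap, angle_mocap):
    """Pearson correlation between two joint-angle trajectories.

    The two streams are sampled at different rates, so the mocap trajectory
    is interpolated onto the exoskeleton timestamps before correlating.
    """
    mocap_on_exo = np.interp(t_exo, t_mocap, angle_mocap)
    return np.corrcoef(angle_exo, mocap_on_exo)[0, 1]

# Synthetic knee-angle example: 100 Hz goniometer vs 120 Hz motion capture
t_exo = np.arange(0, 5, 1 / 100)
t_mocap = np.arange(0, 5, 1 / 120)
true_angle = lambda t: 30 + 25 * np.sin(2 * np.pi * 1.0 * t)   # ~1 Hz gait cycle
r = trajectory_agreement(t_exo,
                         true_angle(t_exo) + np.random.normal(0, 1, t_exo.size),
                         t_mocap, true_angle(t_mocap))
print(round(r, 3))
```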


Subject(s)
Exoskeleton Device , Gait , Biomechanical Phenomena , Data Collection , Female , Humans , Male , Reproducibility of Results , Walking
7.
Nicotine Tob Res ; 22(10): 1883-1890, 2020 10 08.
Article in English | MEDLINE | ID: mdl-31693162

ABSTRACT

INTRODUCTION: Wearable sensors may be used for the assessment of behavioral manifestations of cigarette smoking under natural conditions. This paper introduces a new camera-based sensor system to monitor smoking behavior. The goals of this study were (1) identification of the best position of sensor placement on the body and (2) feasibility evaluation of the sensor as a free-living smoking-monitoring tool. METHODS: A sensor system was developed with a 5 MP camera that captured images every second, continuously, for up to 26 hours. Five on-body locations were tested for the selection of sensor placement. A feasibility study was then performed on 10 smokers to monitor full-day smoking under free-living conditions. Captured images were manually annotated to obtain behavioral metrics of smoking including smoking frequency, smoking environment, and puffs per cigarette. The smoking environment and puff counts captured by the camera were compared with self-reported smoking. RESULTS: A camera located on the eyeglass temple produced the maximum number of images of smoking and the minimum number of blurry or overexposed images (53.9%, 4.19%, and 0.93% of the total captured, respectively). During free-living conditions, 286,245 images were captured with a mean (±standard deviation) duration of sensor wear of 647 (±74) minutes/participant. Image annotation identified consumption of 5 (±2.3) cigarettes/participant, 3.1 (±1.1) cigarettes/participant indoors, 1.9 (±0.9) cigarettes/participant outdoors, and 9.02 (±2.5) puffs/cigarette. Statistical tests found significant differences between manual annotations and self-reported smoking environment or puff counts. CONCLUSIONS: A wearable camera-based sensor may facilitate objective monitoring of cigarette smoking, categorization of smoking environments, and identification of behavioral metrics of smoking in free-living conditions. IMPLICATIONS: The proposed camera-based sensor system can be employed to examine cigarette smoking under free-living conditions. Smokers may accept this unobtrusive sensor for extended wear, as the sensor would not restrict the natural pattern of smoking or daily activities, nor would it require any active participation from a person except wearing it. Critical metrics of smoking behavior, such as the smoking environment and puff counts obtained from this sensor, may generate important information for smoking interventions.


Subject(s)
Cigarette Smoking , Ambulatory Monitoring/instrumentation , Wearable Electronic Devices , Feasibility Studies , Humans
8.
Sensors (Basel) ; 19(21)2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31661856

ABSTRACT

Globally, cigarette smoking is widespread among all ages, and smokers struggle to quit. The design of effective cessation interventions requires an accurate and objective assessment of smoking frequency and smoke exposure metrics. Recently, wearable devices have emerged as a means of assessing cigarette use. However, wearable technologies have inherent limitations, and their sensor responses are often influenced by wearers' behavior, motion and environmental factors. This paper presents a systematic review of current and forthcoming wearable technologies, with a focus on sensing elements, body placement, detection accuracy, underlying algorithms and applications. Full-texts of 86 scientific articles were reviewed in accordance with the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines to address three research questions oriented to cigarette smoking, in order to: (1) Investigate the behavioral and physiological manifestations of cigarette smoking targeted by wearable sensors for smoking detection; (2) explore sensor modalities employed for detecting these manifestations; (3) evaluate underlying signal processing and pattern recognition methodologies and key performance metrics. The review identified five specific smoking manifestations targeted by sensors. The results suggested that no system reached 100% accuracy in the detection or evaluation of smoking-related features. Also, the testing of these sensors was mostly limited to laboratory settings. For a realistic evaluation of accuracy metrics, wearable devices require thorough testing under free-living conditions.


Subject(s)
Cigarette Smoking , Wearable Electronic Devices , Electrocardiography , Hand/physiology , Humans , Mouth/physiology , Signal Processing, Computer-Assisted
9.
Article in English | MEDLINE | ID: mdl-38083112

ABSTRACT

A comprehensive assessment of cigarette smoking behavior and its effect on health requires a detailed examination of smoke exposure. We propose a CNN-LSTM-based deep learning architecture named DeepPuff to quantify Respiratory Smoke Exposure Metrics (RSEM). Smoke inhalations were detected from the breathing and hand gesture sensors of the Personal Automatic Cigarette Tracker v2 (PACT 2.0). The DeepPuff model for smoke inhalation detection was developed using data collected from 190 cigarette smoking events from 38 medium to heavy smokers and optimized for precision (avoidance of false positives). An independent dataset of 459 smoking events from 45 participants (90 smoking events in the lab and 369 smoking events in free-living conditions) was used for testing the model. The proposed model achieved a precision of 82.39% on the training and 93.80% on the testing dataset (95.88% in the lab and 93.78% in free-living). RSEM metrics were then computed from the breathing signal of each detected smoke inhalation. Results from the RSEM algorithm were compared with respiratory metrics obtained from video annotation. Smoke exposure metrics of puff duration, inhale-exhale duration, and inhalation duration were not statistically different from the ground truth generated through video annotation. The results suggest that DeepPuff may be used as a reliable means to measure respiratory smoke exposure metrics collected under free-living conditions.
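
The DeepPuff architecture is not specified in detail in this abstract; the sketch below shows one plausible CNN-LSTM arrangement in PyTorch for classifying fixed-length windows of breathing and hand-gesture signals as containing a smoke inhalation or not. The channel count, window length, and layer sizes are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class CnnLstmPuffDetector(nn.Module):
    """Minimal CNN-LSTM binary classifier for sensor windows.

    Input: (batch, channels, time) windows of breathing / hand-gesture signals.
    Output: one logit per window for "contains a smoke inhalation".
    Layer sizes are illustrative, not taken from the DeepPuff paper.
    """
    def __init__(self, n_channels=2, conv_filters=32, lstm_hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(conv_filters, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)

    def forward(self, x):
        feats = self.conv(x)                 # (batch, filters, time/4)
        feats = feats.transpose(1, 2)        # (batch, time/4, filters)
        _, (h_n, _) = self.lstm(feats)       # last hidden state summarises the window
        return self.head(h_n[-1]).squeeze(-1)

model = CnnLstmPuffDetector()
window = torch.randn(8, 2, 500)              # e.g., 8 windows of 5 s at 100 Hz
print(model(window).shape)                   # torch.Size([8])
```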


Subject(s)
Cigarette Smoking , Deep Learning , Tobacco Products , Humans , Respiration
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1787-1791, 2022 07.
Article in English | MEDLINE | ID: mdl-36086477

ABSTRACT

Detailed assessment of smoking topography (puffing and post-puffing metrics) can lead to a better understanding of factors that influence tobacco use. Research suggests that portable mouthpiece-based devices used for puff topography measurement may alter natural smoking behavior. This paper evaluated the impact of a portable puff topography device (CReSS Pocket) on puffing and post-puffing topography, using a wearable system, the Personal Automatic Cigarette Tracker v2 (PACT 2.0), as the reference measurement. Data from 45 smokers who smoked one cigarette in the lab and an unrestricted number of cigarettes under free-living conditions over 4 consecutive days were used for analysis. PACT 2.0 was worn on all four days. A puff topography instrument (CReSS Pocket) was used for cigarette smoking on two random days during the four days of study in the laboratory and free-living conditions. Smoke inhalations were automatically detected using PACT 2.0 signals. Respiratory smoke exposure metrics (i.e., puff count, duration of cigarette, puff duration, inhale-exhale duration, inhale-exhale volume, volume over time, smoke hold duration, inter-puff interval) were computed for each puff/smoke inhalation. Analysis comparing respiratory smoke exposure metrics on days with and without the CReSS device revealed significant differences in puff duration, inhale-exhale duration and volume, smoke hold duration, inter-puff interval, and volume over time. However, the number of cigarettes per day and the number of puffs per cigarette were statistically the same irrespective of the use of the CReSS device. The results suggested that the use of mouthpiece-based puff topography devices may influence measures of smoking topography, with corresponding changes in smoking behavior and smoke exposure.
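
The abstract does not state which statistical test was used; a common choice for this kind of within-participant comparison is a paired test over per-participant metric means, sketched below with synthetic data (the effect size and distributions are assumed, not the study's).

```python
import numpy as np
from scipy import stats

# Hypothetical paired per-participant means of one metric (e.g., puff duration, s)
# on days with and without the mouthpiece-based device.
rng = np.random.default_rng(4)
without_cress = rng.normal(1.6, 0.3, size=45)
with_cress = without_cress + rng.normal(0.2, 0.2, size=45)   # assumed shift

# Parametric and non-parametric paired comparisons
t_stat, p_t = stats.ttest_rel(with_cress, without_cress)
w_stat, p_w = stats.wilcoxon(with_cress, without_cress)
print(f"paired t-test p={p_t:.4f}, Wilcoxon p={p_w:.4f}")
```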


Subject(s)
Tobacco Products , Tobacco Use Disorder , Wearable Electronic Devices , Humans , Nicotine , Smoking
11.
IEEE J Biomed Health Inform ; 25(2): 568-576, 2021 02.
Article in English | MEDLINE | ID: mdl-32750904

ABSTRACT

Use of food image capture and/or wearable sensors for dietary assessment has grown in popularity. "Active" methods rely on the user to take an image of each eating episode. "Passive" methods use wearable cameras that continuously capture images. Most "passively" captured images are not related to food consumption and may present privacy concerns. In this paper, we propose a novel wearable sensor (Automatic Ingestion Monitor, AIM-2) designed to capture images only during automatically detected eating episodes. The capture method was validated on a dataset collected from 30 volunteers in the community wearing the AIM-2 for 24 h in a pseudo-free-living environment and 24 h in a free-living environment. The AIM-2 was able to detect food intake over 10-second epochs with a mean ± standard deviation F1-score of 81.8 ± 10.1%. The accuracy of eating episode detection was 82.7%. Out of a total of 180,570 images captured, 8,929 (4.9%) images belonged to detected eating episodes. Privacy concerns were assessed by a questionnaire on a scale of 1-7. Continuous capture had a concern value of 5.0 ± 1.6 (concerned), while image capture only during food intake had a concern value of 1.9 ± 1.7 (not concerned). Results suggest that AIM-2 can provide accurate detection of food intake, reduce the number of images for analysis, and alleviate the privacy concerns of the users.


Subject(s)
Wearable Electronic Devices , Data Collection , Eating , Food , Humans , Physiological Monitoring
12.
Biomed Eng Lett ; 10(2): 195-203, 2020 May.
Article in English | MEDLINE | ID: mdl-32431952

ABSTRACT

A detailed assessment of smoking behavior under free-living conditions is a key challenge for health behavior research. A number of methods using wearable sensors and puff topography devices have been developed for smoking and individual puff detection. In this paper, we propose a novel algorithm for automatic detection of puffs in smoking episodes by using a combination of Respiratory Inductance Plethysmography and Inertial Measurement Unit sensors. The detection of puffs was performed by using a deep network containing convolutional and recurrent neural networks. Convolutional neural networks (CNN) were utilized to automate feature learning from raw sensor streams. Long Short-Term Memory (LSTM) network layers were utilized to obtain the temporal dynamics of sensor signals and classify sequences of time-segmented sensor streams. An evaluation was performed by using a large, challenging dataset containing 467 smoking events from 40 participants under free-living conditions. The proposed approach achieved an F1-score of 78% in leave-one-subject-out cross-validation. The results suggest that a CNN-LSTM-based neural network architecture can adequately detect puffing episodes in free-living conditions. The proposed model may be used as a detection tool for smoking cessation programs and scientific research.
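
Leave-one-subject-out cross-validation keeps all windows from one participant out of training in each fold; a generic sketch of this evaluation loop is shown below with synthetic data and a simple stand-in classifier in place of the CNN-LSTM described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical stand-in data: feature windows, puff/no-puff labels,
# and a participant id per window (the "group" held out in each fold).
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 2, size=2000)
participants = rng.integers(0, 40, size=2000)

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=participants):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print("LOSO mean F1:", np.mean(scores).round(3))
```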

13.
IEEE Trans Biomed Eng ; 67(8): 2309-2316, 2020 08.
Article in English | MEDLINE | ID: mdl-31831405

ABSTRACT

Traditional metrics of smoke exposure in cigarette smokers are derived either from self-report, biomarkers, or puff topography. Methods involving biomarkers measure concentrations of nicotine, nicotine metabolites, or carbon monoxide. Puff-topography methods employ portable instruments to measure puff count, puff volume, puff duration, and inter-puff interval. In this article, we propose smoke exposure metrics calculated from the breathing signal and describe a novel algorithm for the computation of these metrics. The Personal Automatic Cigarette Tracker v2 (PACT-2) sensors, puff topography devices (CReSS), and video observation were used in a study of 38 moderate to heavy smokers in a controlled environment. Parameters of smoke inhalation including the start and end of each puff, inhale and exhale cycle, and smoke holding were computed from the breathing signal. From these, the traditional metrics (puff duration, inhale-exhale cycle duration, smoke holding duration, and inter-puff interval) and novel Respiratory Smoke Exposure Metrics (RSEMs), such as inhale-exhale cycle volume and inhale-exhale volume over time, were calculated. The proposed RSEM algorithm generated intraclass correlation coefficients (ICCs) of 0.85 and 0.87 and Pearson's correlations of 0.97 and 0.77 with video observation and CReSS, respectively, for puff duration. Similarly, for the inhale-exhale duration, an ICC of 0.84 and a Pearson's correlation of 0.81 were obtained with video observation. The RSEMs provided measures previously unavailable in research that are proportional to the depth and duration of smoke inhalation. The results suggest that the breathing signal may be used to compute smoke exposure metrics.


Subject(s)
Benchmarking , Tobacco Products , Humans , Nicotine , Respiration , Smoking
14.
Early Hum Dev ; 146: 105044, 2020 07.
Article in English | MEDLINE | ID: mdl-32361560

ABSTRACT

OBJECTIVE: To assess patterns of nutritive sucking in very preterm infants ≤32 weeks of gestation. STUDY DESIGN: Very preterm infants who attained independent oral feeding were prospectively assessed with an instrumented feeding bottle that measures nutritive sucking. The primary outcome measure was nutritive sucking performance at independent oral feeding. RESULT: We assessed nutritive sucking patterns in 33 very preterm infants. We recorded 63 feeding sessions. The median number of sucks was 784 (IQR: 550-1053), the median sucking rate was 36/min (IQR: 27-55), and the median number of sucking bursts during the first 5 min of oral feeding was 14 (IQR: 12-16). Maximum sucking strength correlated with the number of sucks (r = 0.62; p < 0.01). No safety concerns were identified during the study. CONCLUSION: The quantitative analysis of nutritive sucking patterns with a newly developed instrumented bottle in stable, very preterm infants is safe and feasible. More research is needed to develop and refine the instrument further.


Subject(s)
Bottle Feeding/instrumentation , Infant, Premature , Sucking Behavior/physiology , Equipment Design , Female , Gestational Age , Humans , Infant, Extremely Premature , Infant, Newborn , Male
15.
Biomed Signal Process Control ; 51: 106-112, 2019 May.
Article in English | MEDLINE | ID: mdl-30854022

ABSTRACT

A number of studies have been introduced for the detection of smoking via a variety of features extracted from wrist IMU data. However, none of the previous studies investigated gesture regularity as a way to detect smoking events. This study describes a novel method to detect smoking events by monitoring the regularity of hand gestures. Here, the regularity of hand gestures was estimated from a single-axis accelerometer worn on the wrist of the dominant hand. To quantify the regularity score, this paper applied a novel approach of unbiased autocorrelation to process the temporal sequence of hand gestures. The comparison of the regularity scores of smoking events with those of other activities substantiated that hand-to-mouth gestures are highly regular during smoking events and have the potential to detect smoking from among a plethora of daily activities. This hypothesis was validated on a dataset of 140 cigarette smoking events generated by 35 regular smokers in a controlled setting. The regularity of gestures detected smoking events with an F1-score of 0.81. However, the accuracy dropped to 0.49 in the free-living study of the same 35 smokers smoking 295 cigarettes. Nevertheless, the regularity of gestures may be useful as a supportive tool for other detection methods. To validate that proposition, this paper further incorporated the regularity of gestures into an instrumented-lighter-based smoking detection algorithm and achieved an improvement in F1-score from 0.89 (lighter only) to 0.91 (lighter and regularity of hand gestures).
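
The exact regularity score is not defined in this abstract; the sketch below shows one way a bias-corrected (unbiased) autocorrelation could be computed from a wrist-acceleration trace and summarised as the strongest non-zero-lag peak. The lag limits and synthetic signals are assumptions for illustration.

```python
import numpy as np

def unbiased_autocorrelation(x):
    """Autocorrelation with the 1/(N - lag) bias correction applied."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    full = np.correlate(x, x, mode="full")[n - 1:]      # lags 0 .. n-1
    acf = full / (n - np.arange(n))                      # unbiased normalisation
    return acf / acf[0]                                  # scale so lag 0 == 1

def regularity_score(signal, min_lag, max_lag=None):
    """Largest autocorrelation peak between min_lag and max_lag samples."""
    acf = unbiased_autocorrelation(signal)
    max_lag = max_lag or len(acf) // 2                   # avoid the noisy long-lag tail
    return float(np.max(acf[min_lag:max_lag]))

# Synthetic wrist-acceleration traces: periodic hand-to-mouth gestures vs noise
fs = 20                                                   # Hz
t = np.arange(0, 120, 1 / fs)
smoking_like = np.sin(2 * np.pi * t / 30) + 0.3 * np.random.randn(t.size)  # ~30 s puff cycle
random_like = np.random.randn(t.size)
print(regularity_score(smoking_like, min_lag=10 * fs),
      regularity_score(random_like, min_lag=10 * fs))
```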

16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3262-3265, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946581

ABSTRACT

Wearable sensors have successfully been used in recent studies to monitor cigarette smoking events and analyze people's smoking behavior. Respiratory inductive plethysmography (RIP) has been employed to track breathing and to identify characteristic breathing patterns specific to smoking. Pattern recognition algorithms such as Support Vector Machines (SVM), Hidden Markov Models, decision trees, or ensemble approaches have been used to identify smoke inhalations. However, deep learning approaches, which have proven effective on many time-series datasets, have not yet been tested. Hence, a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) based approach is presented in this paper to detect smoke inhalations in the breathing signal. To illustrate the effectiveness of this deep learning approach, a traditional machine learning (SVM) based approach was used for comparison. On the validation dataset of 120 smoking sessions performed in a laboratory setting by 30 moderate-to-heavy smokers, the CNN-LSTM approach achieved an F1-score of 72% in leave-one-subject-out (LOSO) cross-validation, whereas the classical SVM approach scored 63%. These results suggest that deep learning-based approaches might provide a better analytical method for the detection of smoke inhalations than more conventional machine learning approaches.


Subject(s)
Machine Learning , Neural Networks, Computer , Plethysmography , Smoking , Support Vector Machine , Algorithms , Humans , Plethysmography/methods , Smoke , Wearable Electronic Devices
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3563-3566, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946648

ABSTRACT

Cigarette smoking has severe health impacts on those who smoke and the people around them. Several wearable sensing modalities have recently been investigated to collect objective data on daily smoking, including detection of smoking episodes from breathing patterns, hand-to-mouth behavior, and characteristic hand gestures or cigarette lighting events. In order to provide new insight into ongoing research on the objective collection of smoking-related events, this paper proposes a novel method to identify smoking events from the associated changes in heart rate parameters specific to smoking. The proposed method also accounts for the breathing rate and body motion of the person who is smoking to better distinguish these changes from intense physical activities. In this research, a human study was first performed on 20 daily cigarette smokers to record heart rate, breathing rate, and body acceleration collected from a wearable chest sensor consisting of an ECG and bioimpedance measurement sensor and a 3D inertial sensor. Each participant spent ~2 hours in a laboratory environment (mimicking daily activities that included smoking 4 cigarettes) and ~24 hours under unconstrained free-living conditions. A support vector machine-based classifier was developed to automatically detect smoking episodes from the captured sensor signals using fifteen features selected by a forward feature selection method. In leave-one-subject-out cross-validation, the proposed approach detected smoking events (187 out of a total of 232) with a sensitivity and F-score of 0.87 and 0.79, respectively, in the laboratory setting (known activities) and 0.77 and 0.61, respectively, under free-living conditions. These results validate the proof-of-concept that, although further research is necessary for performance improvement, characteristic changes in heart rate parameters could be a useful indicator of cigarette smoking even under free-living conditions.
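
Forward feature selection wrapped around an SVM can be sketched with scikit-learn's SequentialFeatureSelector, as below; the feature matrix here is synthetic and stands in for the heart rate, breathing rate, and acceleration features described in the abstract, and the pipeline settings are illustrative rather than the authors'.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-window features (means, variances, slopes of heart rate,
# breathing rate, and acceleration); labels: smoking vs non-smoking window.
rng = np.random.default_rng(3)
X = rng.normal(size=(600, 30))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)

# Forward selection of 15 features, scored by cross-validated SVM accuracy
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
selector = SequentialFeatureSelector(svm, n_features_to_select=15,
                                     direction="forward", cv=3)
selector.fit(X, y)
print("Selected feature indices:", np.flatnonzero(selector.get_support()))
```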


Subject(s)
Cigarette Smoking , Heart Rate , Tobacco Use Disorder , Electrocardiography , Humans , Smoke , Smoking , Support Vector Machine , Nicotiana , Tobacco Use Disorder/diagnosis