Results 1 - 20 of 67
1.
Methods ; 202: 164-172, 2022 06.
Article in English | MEDLINE | ID: mdl-33636312

ABSTRACT

Analysis of the electroencephalogram (EEG) is a crucial diagnostic criterion for many sleep disorders, of which sleep staging is an important component. Manual stage classification is a labor-intensive process and usually suffers from subjective factors. Recently, more and more computer-aided techniques have been applied to this task, among which deep convolutional neural networks have performed well as effective automatic classification models. Although some comprehensive models have been developed to improve classification results, the accuracy required for clinical applications has not been reached, owing to the lack of sufficient labeled data and limitations in extracting latent discriminative EEG features. We therefore propose a novel hybrid manifold-deep convolutional neural network with hyperbolic attention. To overcome the shortage of labeled data, we adopt a semi-supervised training scheme. To extract latent feature representations, we introduce a manifold learning module and a hyperbolic module that capture more discriminative information. Eight subjects from a public dataset were used to evaluate our pipeline; the model achieved 89% accuracy, 70% precision, 80% sensitivity, a 72% F1-score, and a kappa coefficient of 78%. The proposed model demonstrates a powerful ability to extract feature representations and achieves promising results using the semi-supervised training scheme. Our approach therefore shows strong potential for future clinical development.


Subjects
Neural Networks, Computer; Sleep Stages; Electroencephalography/methods; Humans; Sleep
2.
Methods ; 204: 84-91, 2022 08.
Article in English | MEDLINE | ID: mdl-35364278

ABSTRACT

Despite recent progress toward automatic sleep staging for adults, children have complicated sleep structures that demand dedicated pediatric sleep staging. Semi-supervised learning, i.e., training networks with both labeled and unlabeled data, greatly reduces the burden of epoch-by-epoch annotation for physicians. However, the inherent class-imbalance problem in sleep staging undermines the effectiveness of semi-supervised methods such as pseudo-labeling. In this paper, we propose a Bi-Stream Adversarial Learning network (BiSALnet) to generate pseudo-labels with higher confidence for network optimization. An adversarial learning strategy is adopted in the Student and Teacher branches of the two-stream network. The similarity measurement function minimizes the divergence between the outputs of the Student and Teacher branches, and the discriminator continuously enhances its discriminative ability. In addition, we employ a powerful symmetric positive definite (SPD) manifold structure in the Student branch to capture the desired feature distribution properties. The joint discriminative power of convolutional features and the nonlinear complex information aggregated by SPD matrices is combined by an attention feature fusion module to improve sleep stage classification performance. BiSALnet was tested on a pediatric dataset collected from a local hospital. Experimental results show that our method yields an overall classification accuracy of 0.80, kappa of 0.73, and F1-score of 0.76. We also examined the generality of our method on the well-known public Sleep-EDF dataset, where BiSALnet exhibits notable performance: accuracy of 0.91, kappa of 0.85, and F1-score of 0.77. Remarkably, we obtain performance comparable to state-of-the-art supervised approaches with fairly limited labeled data.


Subjects
Electroencephalography; Sleep Stages; Adult; Child; Humans; Sleep; Supervised Machine Learning
3.
Public Health Nutr ; : 1-11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35616087

ABSTRACT

OBJECTIVE: Passive, wearable sensors can be used to obtain objective information on infant feeding, but their use has not been tested. Our objective was to compare assessment of infant feeding (frequency, duration and cues) by self-report with that of the Automatic Ingestion Monitor-2 (AIM-2). DESIGN: A cross-sectional pilot study was conducted in Ghana. Mothers wore the AIM-2 on eyeglasses for 1 d during waking hours to assess infant feeding using images automatically captured by the device every 15 s. Feasibility was assessed using compliance with wearing the device. Infant feeding practices captured in the AIM-2 images were annotated by a trained evaluator and compared with maternal self-report via an interviewer-administered questionnaire. SETTING: Rural and urban communities in Ghana. PARTICIPANTS: Participants were thirty-eight (eighteen rural and twenty urban) breast-feeding mothers of infants (child age ≤7 months). RESULTS: Twenty-five mothers reported exclusive breast-feeding, which was common among those < 30 years of age (n 15, 60 %) and those residing in urban communities (n 14, 70 %). Compliance with wearing the AIM-2 was high (83 % of wake-time), suggesting low user burden. Maternal report differed from the AIM-2 data, such that mothers reported higher mean breast-feeding frequency (eleven v. eight times, P = 0·041) and duration (18·5 v. 10 min, P = 0·007) during waking hours. CONCLUSION: The AIM-2 was a feasible tool for passive, objective assessment of infant feeding among mothers in Ghana and identified overestimation of self-reported breast-feeding frequency and duration. Future studies using the AIM-2 are warranted to determine validity on a larger scale.

4.
Sensors (Basel) ; 22(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214399

ABSTRACT

Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is computationally measuring the volume of food in a bowl from an image. This problem has not been studied systematically, despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed in the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior from the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate the high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.
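The reconstruction described above ultimately yields the bowl's interior geometry, from which food volume follows by integrating cross-sectional areas. A minimal sketch of that last step, treating the bowl as a solid of revolution, with a hypothetical radius profile standing in for the ruler-derived one (the profile and sample bowl are illustrative assumptions, not the paper's algorithm):

```python
import math

def bowl_volume_ml(radii_cm, depth_cm):
    """Approximate the interior volume of a round bowl treated as a
    solid of revolution: radii are sampled at evenly spaced heights
    from the bottom (index 0) to the rim (last index), and the
    cross-sectional areas pi * r^2 are integrated with the trapezoidal
    rule. 1 cm^3 equals 1 ml."""
    dh = depth_cm / (len(radii_cm) - 1)   # spacing between height samples
    areas = [math.pi * r * r for r in radii_cm]
    return sum((a + b) / 2.0 * dh for a, b in zip(areas, areas[1:]))

# Hypothetical hemispherical bowl of radius 8 cm: r(h) = sqrt(R^2 - (R - h)^2).
R = 8.0
heights = [R * i / 100.0 for i in range(101)]
profile = [math.sqrt(R * R - (R - h) ** 2) for h in heights]
volume = bowl_volume_ml(profile, R)  # close to the exact (2/3) * pi * R^3
```

With 100 height samples the trapezoidal estimate lands within about 1 ml of the closed-form hemisphere volume, which is ample precision for portion-size estimation.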


Subjects
Diet; Energy Intake; Algorithms; Food; Humans; Smartphone
5.
Entropy (Basel) ; 22(5)2020 May 02.
Article in English | MEDLINE | ID: mdl-33286292

ABSTRACT

Due to wide inter- and intra-individual variability, short-term heart rate variability (HRV) analysis (usually 5 min) might lead to inaccuracy in detecting heart failure. Therefore, RR interval segmentation, which can reflect the individual heart condition, has been a key research challenge for accurate detection of heart failure. Previous studies mainly focus on analyzing the entire 24-h ECG recordings from all individuals in the database, which often leads to a poor detection rate. In this study, we propose a set of data refinement procedures that automatically extract heart failure segments and yield better detection of heart failure. The procedures roughly contain three steps: (1) select fast heart rate sequences, (2) apply the dynamic time warping (DTW) measure to filter out dissimilar segments, and (3) pick out individuals with large numbers of segments preserved. A physical threshold-based Sample Entropy (SampEn) was applied to distinguish congestive heart failure (CHF) subjects from normal sinus rhythm (NSR) ones, and results using the traditional threshold were also discussed. Experiments on the PhysioNet/MIT RR Interval Databases showed that in SampEn analysis (embedding dimension m = 1, tolerance threshold r = 12 ms and time series length N = 300), the accuracy after data refinement increased from 75.07% to 90.46%. Meanwhile, for the proposed procedures, the area under the receiver operating characteristic curve (AUC) reached 95.73%, outperforming the original method (i.e., without the proposed data refinement procedures), whose AUC was 76.83%. The results show that our proposed data refinement procedures can significantly improve the accuracy of heart failure detection.
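The SampEn classifier above relies on the quoted parameters (m = 1, r = 12 ms, N = 300). A plain-Python sketch of Sample Entropy from its textbook definition, run on hypothetical RR-interval segments (an illustration only, not the authors' physical-threshold variant):

```python
import math

def sample_entropy(series, m=1, r=12.0):
    """SampEn(m, r) = -ln(A / B): B counts pairs of length-m templates
    whose Chebyshev distance is within tolerance r, and A counts the
    same for templates of length m + 1. Self-matches are excluded."""
    n = len(series)

    def count_matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length + 1):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    count += 1
        return count

    a, b = count_matches(m + 1), count_matches(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Hypothetical RR-interval segments (ms), N = 300: a near-constant rhythm
# is highly predictable (low SampEn); an erratic one is not.
regular = [800.0 + (i % 2) for i in range(300)]
erratic = [800.0 + 40.0 * math.sin(i * i) for i in range(300)]
se_regular, se_erratic = sample_entropy(regular), sample_entropy(erratic)
```

The near-constant sequence scores close to zero while the erratic one scores much higher, which is the separation the CHF-vs-NSR threshold exploits.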

6.
IEEE J Biomed Health Inform ; 28(2): 765-776, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38010934

ABSTRACT

Motor Imagery (MI) Electroencephalography (EEG) is one of the most common Brain-Computer Interface (BCI) paradigms and has been widely used in neural rehabilitation and gaming. Although considerable research efforts have been dedicated to developing MI EEG classification algorithms, they are mostly limited in handling scenarios where the training and testing data are not from the same subject or session. Such poor generalization capability significantly limits the realization of BCI in real-world applications. In this paper, we propose a novel framework to disentangle the representation of raw EEG data into three components (subject/session-specific, MI-task-specific, and random noise), so that the subject/session-specific feature extends the generalization capability of the system. This is realized by a joint discriminative and generative framework, supported by a series of fundamental training losses and training strategies. We evaluated our framework on three public MI EEG datasets, and detailed experimental results show that our method outperforms current state-of-the-art benchmark algorithms by a large margin.


Subjects
Brain-Computer Interfaces; Humans; Electroencephalography/methods; Algorithms; Benchmarking; Imagination
7.
IEEE Rev Biomed Eng ; 17: 42-62, 2024.
Article in English | MEDLINE | ID: mdl-37471188

ABSTRACT

The integration of machine/deep learning and sensing technologies is transforming healthcare and medical practice. However, inherent limitations in healthcare data, namely scarcity, quality, and heterogeneity, hinder the effectiveness of supervised learning techniques, which are mainly based on pure statistical fitting between data and labels. In this article, we first identify the challenges present in machine learning for pervasive healthcare, and then review the current trends beyond fully supervised learning that have been developed to address these three issues. Rooted in the inherent drawbacks of the empirical risk minimization that underpins purely supervised learning, this survey summarizes seven key lines of learning strategies that promote generalization performance for real-world deployment. In addition, we point out several emerging and promising directions in this area for developing data-efficient, scalable, and trustworthy computational models and for leveraging multi-modality and multi-source sensing informatics for pervasive healthcare.


Subjects
Machine Learning; Technology; Humans; Supervised Machine Learning
8.
Ther Adv Gastrointest Endosc ; 17: 26317745241246899, 2024.
Article in English | MEDLINE | ID: mdl-38712011

ABSTRACT

Background: Acute upper gastrointestinal bleeding (AUGIB) is a major cause of morbidity and mortality. This presentation, however, is not universally high risk, as only 20-30% of bleeds require urgent haemostatic therapy. Nevertheless, the current standard of care is for all patients admitted to an inpatient bed to undergo endoscopy within 24 h for risk stratification, which is invasive, costly and difficult to achieve in routine clinical practice. Objectives: To develop novel non-endoscopic machine learning models for AUGIB to predict the need for haemostatic therapy by endoscopic, radiological or surgical intervention. Design: A retrospective cohort study. Method: We analysed data from patients admitted with AUGIB to hospitals from 2015 to 2020 (n = 970). Machine learning models were internally validated to predict the need for haemostatic therapy. The performance of the models was compared to the Glasgow-Blatchford score (GBS) using area under receiver operating characteristic (AUROC) curves. Results: The random forest classifier [AUROC 0.84 (0.80-0.87)] had the best performance and was superior to the GBS [AUROC 0.75 (0.72-0.78), p < 0.001] in predicting the need for haemostatic therapy in patients with AUGIB. A GBS cut-off of ⩾12 was associated with an accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of 0.74, 0.49, 0.81, 0.41 and 0.85, respectively. The random forest model performed better, with an accuracy, sensitivity, specificity, PPV and NPV of 0.82, 0.54, 0.90, 0.60 and 0.88, respectively. Conclusion: We developed and validated a machine learning algorithm with high accuracy and specificity in predicting the need for haemostatic therapy in AUGIB. This could be used to risk-stratify high-risk patients for urgent endoscopy.
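The cutoff-based GBS baseline reported above reduces to reading standard metrics off a 2x2 confusion matrix. A small sketch with hypothetical counts (the tp/fp/tn/fn values are invented for illustration; only the cohort size of 970 comes from the abstract):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2
    confusion matrix ('needs haemostatic therapy' vs. does not)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Invented counts for a score cutoff on a 970-admission cohort.
metrics = binary_metrics(tp=120, fp=170, tn=610, fn=70)
```

Because therapy-requiring bleeds are the minority class, a cutoff can show high NPV alongside a mediocre PPV, which is exactly the pattern in the reported GBS figures.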

9.
Article in English | MEDLINE | ID: mdl-38900623

ABSTRACT

Conventional approaches to dietary assessment are primarily grounded in self-reporting methods or structured interviews conducted under the supervision of dietitians. These methods, however, are often subjective, potentially inaccurate, and time-intensive. Although artificial intelligence (AI)-based solutions have been devised to automate the dietary assessment process, prior AI methodologies tackle dietary assessment in a fragmented landscape (e.g., merely recognizing food types or estimating portion size) and encounter challenges in generalizing across a diverse range of food categories, dietary behaviors, and cultural contexts. Recently, the emergence of multimodal foundation models, such as GPT-4V, has exhibited transformative potential across a wide range of tasks (e.g., scene understanding and image captioning) in various research domains. These models have demonstrated remarkable generalist intelligence and accuracy, owing to their large-scale pre-training on broad datasets and substantially scaled model size. In this study, we explore the application of GPT-4V, which powers multimodal ChatGPT, to dietary assessment, along with prompt engineering and passive monitoring techniques. We evaluated the proposed pipeline using a self-collected, semi-free-living dietary intake dataset comprising 16 real-life eating episodes, captured through wearable cameras. Our findings reveal that GPT-4V excels in food detection under challenging conditions without any fine-tuning or adaptation using food-specific datasets. By guiding the model with specific language prompts (e.g., African cuisine), it shifts from recognizing common staples like rice and bread to accurately identifying regional dishes like banku and ugali. Another standout feature of GPT-4V is its contextual awareness: it can leverage surrounding objects as scale references to deduce the portion sizes of food items, further facilitating the process of dietary assessment.

10.
IEEE Trans Cybern ; 54(2): 679-692, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37028043

ABSTRACT

Camera-based passive dietary intake monitoring is able to continuously capture the eating episodes of a subject, recording rich visual information, such as the type and volume of food being consumed, as well as the eating behaviors of the subject. However, there is currently no method able to incorporate these visual clues and provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). On the other hand, privacy is a major concern when egocentric wearable cameras are used for capturing. In this article, we propose a privacy-preserved secure solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, which consists of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and to justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.


Subjects
Eating; Privacy; Diet; Nutrition Assessment; Feeding Behavior
11.
Surg Innov ; 20(1): 86-94, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22641465

ABSTRACT

Surgery to the trunk often results in a change of gait, most pronounced during walking. This change is usually transient, often a result of wound pain, and returns to normal as the patient recovers. Quantifying and monitoring gait impairment therefore represents a novel means of functional postoperative follow-up of home recovery. Until now, this type of assessment could only be made in a gait lab, which is both expensive and labor-intensive to administer on a large scale. The objective of this work is to validate the use of an ear-worn activity recognition (e-AR) sensor for quantifying gait impairment after abdominal wall and perianal surgery. The e-AR sensor was used on two comparative simulated data sets (N = 32) of truncal impairment to observe walking patterns. The sensor was also used to observe the walking patterns of preoperative and postoperative surgical patients who had undergone abdominal wall (n = 5) and perianal surgery (n = 5). Methods for multiresolution feature extraction, selection, and classification are investigated using the raw ear-sensor data. Results show good separation between impaired and non-impaired classes for both the simulated and real patient data sets. This indicates that the e-AR sensor may be used as a tool for the pervasive assessment of postoperative gait impairment, as part of functional recovery monitoring, in patients at their own homes.


Subjects
Ear; Gait/physiology; Monitoring, Ambulatory/instrumentation; Recovery of Function/physiology; Walking/physiology; Wireless Technology/instrumentation; Abdominal Wall/surgery; Algorithms; Anal Canal/surgery; Computer Simulation; Humans; Mobility Limitation; Models, Theoretical; Monitoring, Ambulatory/methods; Neural Networks, Computer; Postoperative Period; Signal Processing, Computer-Assisted; Walking/classification
12.
Article in English | MEDLINE | ID: mdl-38082849

ABSTRACT

IoT devices are sorely underutilized in the medical field, especially within machine learning for medicine, yet they offer unrivaled benefits: they are low-cost, energy-efficient, small, and intelligent [1]. In this paper, we propose a distributed federated learning framework for IoT devices, more specifically for the IoMT (Internet of Medical Things), using blockchain to allow for a decentralized scheme that improves privacy and efficiency over a centralized system; this allows us to move from the prevalent cloud-based architectures to the edge. The system is designed for three paradigms: 1) Training neural networks on IoT devices to allow collaborative training of a shared model while decoupling the learning from the dataset [2] to ensure privacy [3]. Training is performed online and simultaneously amongst all participants, allowing training on real data that may not be present in a dataset collected in the traditional way, and letting the system adapt dynamically while it is being trained. 2) Training an IoMT system in a fully private manner, to mitigate the confidentiality issues of medical data and to build robust, and potentially bespoke [4], models where little, if any, data exists. 3) Distributing the actual network training (something federated learning itself does not do), to allow hospitals, for example, to utilize their spare computing resources to train network models.


Subjects
Blockchain; Medicine; Humans; Hospitals; Intelligence; Machine Learning
13.
IEEE J Biomed Health Inform ; 27(6): 2647-2655, 2023 06.
Article in English | MEDLINE | ID: mdl-36215345

ABSTRACT

The continuing increase in the incidence and recognition of children's sleep disorders has heightened the demand for automatic pediatric sleep staging. Supervised sleep stage recognition algorithms, however, often face challenges such as the limited availability of pediatric sleep physicians and data heterogeneity. Drawing upon two quickly advancing fields, i.e., semi-supervised learning and self-supervised contrastive learning, we propose a multi-task contrastive learning strategy for semi-supervised pediatric sleep stage recognition, abbreviated as MtCLSS. Specifically, signal-adapted transformations are applied to electroencephalogram (EEG) recordings of the full-night polysomnogram, which helps the network improve its representation ability by identifying the transformations. We also introduce an extension of the contrastive loss function, thus adapting contrastive learning to the semi-supervised setting. In this way, the proposed framework learns not only task-specific features from a small amount of supervised data, but also general features from the signal transformations, improving model robustness. MtCLSS is evaluated on a real-world pediatric sleep dataset with promising performance (0.80 accuracy, 0.78 F1-score and 0.74 kappa). We also examine its generality on a well-known public dataset. The experimental results demonstrate the effectiveness of the MtCLSS framework for EEG-based automatic pediatric sleep staging in very limited labeled data scenarios.
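The contrastive component described above is conventionally an NT-Xent-style loss, where each transformed view's positive is its partner and the remaining views in the batch serve as negatives. A toy plain-Python sketch of that standard loss (the embeddings are invented; this is not MtCLSS's extended semi-supervised loss):

```python
import math

def nt_xent(pairs, temperature=0.5):
    """NT-Xent loss over a batch of (view_a, view_b) embedding pairs:
    each view's positive is its partner, and every other view in the
    batch acts as a negative in the softmax denominator."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / (math.sqrt(sum(x * x for x in u)) *
                      math.sqrt(sum(x * x for x in v)))

    views = [v for pair in pairs for v in pair]    # flatten to 2N views
    losses = []
    for i, anchor in enumerate(views):
        partner = i + 1 if i % 2 == 0 else i - 1   # index of the positive
        denom = sum(math.exp(cos(anchor, other) / temperature)
                    for j, other in enumerate(views) if j != i)
        pos = math.exp(cos(anchor, views[partner]) / temperature)
        losses.append(-math.log(pos / denom))
    return sum(losses) / len(losses)

# Toy epoch embeddings: matched views nearly parallel vs. mismatched.
aligned = [([1.0, 0.0], [0.9, 0.1]), ([0.0, 1.0], [0.1, 0.9])]
shuffled = [([1.0, 0.0], [0.1, 0.9]), ([0.0, 1.0], [0.9, 0.1])]
low, high = nt_xent(aligned), nt_xent(shuffled)
```

The loss is low when views of the same epoch agree and high when they are scrambled, which is the pressure that drives transformation-invariant representations.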


Subjects
Sleep Stages; Sleep; Child; Humans; Polysomnography; Algorithms; Electroencephalography
14.
IEEE J Biomed Health Inform ; 27(2): 1118-1128, 2023 02.
Article in English | MEDLINE | ID: mdl-36350856

ABSTRACT

With the development of modern cameras, more physiological signals can be obtained from portable devices such as smartphones. Some hemodynamically based, non-invasive video processing applications have been applied to blood pressure classification and blood glucose prediction for unobtrusive physiological monitoring at home. However, this approach is still under development, with very few publications. In this paper, we propose an end-to-end framework, entitled cocktail causal container, to fuse multiple physiological representations and to reconstruct the correlation between frequency and temporal information during multi-task learning. The cocktail causal container processes hematologic reflex information to classify blood pressure and blood glucose. Since learning discriminative features from video physiological representations is quite challenging, we propose a token feature fusion block to fuse the multi-view fine-grained representations into a unified discrete frequency space. A causal net is used to analyze the fused higher-order information, so that the framework can be enforced to disentangle the latent factors into the related endogenous association that corresponds to the downstream fusion information, improving semantic interpretation. Moreover, a pair-wise temporal frequency map is developed to provide valuable insights into the extraction of salient photoplethysmograph (PPG) information from fingertip videos obtained by a standard smartphone camera. Extensive comparisons have been implemented to validate the cocktail causal container on a clinical dataset and the PPG-BP benchmark. A root mean square error of 1.329±0.167 for blood glucose prediction and a precision of 0.89±0.03 for blood pressure classification are achieved on the clinical dataset.


Subjects
Algorithms; Blood Glucose; Humans; Blood Pressure; Blood Pressure Determination; Monitoring, Physiological
15.
IEEE J Biomed Health Inform ; 27(12): 6074-6087, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738186

ABSTRACT

Large AI models, or foundation models, have recently emerged at massive scales, both parameter-wise and data-wise, with magnitudes reaching beyond billions. Once pretrained, large AI models demonstrate impressive performance in various downstream tasks. A prime example is ChatGPT, whose capability has captured people's imagination about the far-reaching influence that large AI models can have and their potential to transform different domains of our lives. In health informatics, the advent of large AI models has brought new paradigms for the design of methodologies. The scale of multi-modal data in the biomedical and health domain has been ever-expanding, especially since the community embraced the era of deep learning, which provides the ground to develop, validate, and advance large AI models for breakthroughs in health-related areas. This article presents a comprehensive review of large AI models, from background to their applications. We identify seven key sectors in which large AI models are applicable and might have substantial influence, including: 1) bioinformatics; 2) medical diagnosis; 3) medical imaging; 4) medical informatics; 5) medical education; 6) public health; and 7) medical robotics. We examine their challenges, followed by a critical discussion about potential future directions and pitfalls of large AI models in transforming the field of health informatics.


Subjects
Medical Informatics; Robotics; Humans; Computational Biology; Imagination; Public Health
16.
Front Nutr ; 10: 1191962, 2023.
Article in English | MEDLINE | ID: mdl-37575335

ABSTRACT

Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to errors and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed, a desktop application called Image to Nutrients (I2N), to process sensor-detected eating events and images captured during these eating events by a wearable sensor. I2N offers multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items, and estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data, and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making nutritional analysis of food images efficient. The tool is flexible, allowing for nutritional analysis of images even when sensor signals are not available.

17.
Nutrients ; 15(18)2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37764857

ABSTRACT

BACKGROUND: Accurate estimation of dietary intake is challenging. However, whilst some progress has been made in high-income countries, low- and middle-income countries (LMICs) remain behind, contributing to critical nutritional data gaps. This study aimed to validate an objective, passive, image-based dietary intake assessment method against weighed food records in London, UK, for onward deployment to LMICs. METHODS: Wearable camera devices were used to capture food intake on eating occasions in 18 adults and 17 children of Ghanaian and Kenyan origin living in London. Participants were provided with pre-weighed meals of Ghanaian and Kenyan cuisine and camera devices to automatically capture images of the eating occasions. Food images were assessed for portion size, energy, and nutrient intake, and for the relative validity of the method compared to the weighed food records. RESULTS: The Pearson and intraclass correlation coefficients of estimated intakes of food, energy, and 19 nutrients ranged from 0.60 to 0.95 and 0.67 to 0.90, respectively. Bland-Altman analysis showed good agreement between the image-based method and the weighed food record. Under-estimation of dietary intake by the image-based method ranged from 4 to 23%. CONCLUSIONS: Passive food image capture and analysis provides an objective assessment of dietary intake comparable to weighed food records.
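The Bland-Altman analysis mentioned in the results computes the mean difference (bias) between paired methods and 95% limits of agreement at bias ± 1.96 SD. A sketch with hypothetical paired energy intakes (the numbers are illustrative, not study data):

```python
import math

def bland_altman(method_a, method_b):
    """Bias (mean difference a - b) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired energy intakes (kcal); the image-based method
# under-estimates relative to the weighed record, as in the study.
image_based = [450.0, 620.0, 380.0, 710.0, 540.0, 480.0]
weighed = [480.0, 650.0, 400.0, 760.0, 560.0, 500.0]
bias, (loa_low, loa_high) = bland_altman(image_based, weighed)
```

A negative bias with both limits of agreement below zero would indicate systematic under-estimation, the pattern the study reports (4 to 23%).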


Subjects
Eating; Food; Humans; Adult; Child; London; Ghana; Kenya
18.
Mater Today Bio ; 15: 100298, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35634169

ABSTRACT

Totally implanted access ports (TIAP) are widely used with oncology patients requiring long-term central venous access for the delivery of chemotherapeutic agents, infusions, transfusions, blood sample collection and parenteral nutrition. Such devices offer a significant improvement in quality of life for patients and reduced complication rates, particularly infection, in contrast to classical central venous catheters. Nevertheless, infections do occur, with biofilm formation complicating the treatment of infection-related complications, which can ultimately lead to explantation of the device. A smart, sensor-enabled TIAP that detects infection prior to extensive biofilm formation would reduce the number of potential device explantations, while detection of biomarkers within body fluids, such as pH or lactate, would provide vital information on metabolic processes occurring inside the body. In this paper, we propose a novel batteryless and wireless device suitable for the interrogation of such markers in an embodiment model of a TIAP, with miniature biochemical sensing needles. Device readings can be carried out with a smartphone equipped with a Near Field Communication (NFC) interface at relatively short distances off-body, while providing radiofrequency energy harvesting capability to the TIAP, useful for assessing the patient's health and potential port infection on demand.

19.
IEEE J Biomed Health Inform ; 26(3): 1034-1044, 2022 03.
Article in English | MEDLINE | ID: mdl-34449400

ABSTRACT

Accurate lower-limb pose estimation is a prerequisite of skeleton-based pathological gait analysis. To achieve this goal in free-living environments for long-term monitoring, a single depth sensor has been proposed in research. However, the depth map acquired from a single viewpoint encodes only partial geometric information of the lower limbs and exhibits large variations across different viewpoints. Existing off-the-shelf 3D pose tracking algorithms and public datasets for depth-based human pose estimation are mainly targeted at activity recognition applications. They are relatively insensitive to skeleton estimation accuracy, especially at the foot segments. Furthermore, acquiring ground-truth skeleton data for detailed biomechanics analysis requires considerable effort. To address these issues, we propose a novel cross-domain self-supervised complete geometric representation learning framework, with knowledge transfer from unlabelled synthetic point clouds of full lower-limb surfaces. The proposed method significantly reduces the number of ground-truth skeletons needed in the training phase (to only 1%), while ensuring accurate and precise pose estimation and capturing discriminative features across different pathological gait patterns compared to other methods.


Subjects
Cloud Computing; Gait Analysis; Algorithms; Gait; Humans
20.
IEEE Rev Biomed Eng ; 15: 85-102, 2022.
Article in English | MEDLINE | ID: mdl-33961564

ABSTRACT

Hands are vital in a wide range of fundamental daily activities, and neurological diseases that impede hand function can significantly affect quality of life. Wearable hand gesture interfaces hold promise to restore and assist hand function and to enhance human-human and human-computer communication. The purpose of this review is to synthesize current novel sensing interfaces and algorithms for hand gesture recognition, and the scope of applications covers rehabilitation, prosthesis control, exoskeletons for augmentation, sign language recognition, human-computer interaction, and user authentication. Results showed that electrical, mechanical, acoustical/vibratory, and optical sensing were the primary input modalities in gesture recognition interfaces. Two categories of algorithms were identified: 1) classification algorithms for predefined, fixed hand poses and 2) regression algorithms for continuous finger and wrist joint angles. Conventional machine learning algorithms, including linear discriminant analysis, support vector machines, random forests, and non-negative matrix factorization, have been widely used for a variety of gesture recognition applications, and deep learning algorithms have more recently been applied to further facilitate the complex relationship between sensor signals and multi-articulated hand postures. Future research should focus on increasing recognition accuracy with larger hand gesture datasets, improving reliability and robustness for daily use outside of the laboratory, and developing softer, less obtrusive interfaces.


Subjects
Gestures; Wearable Electronic Devices; Algorithms; Humans; Quality of Life; Reproducibility of Results