Results 1 - 20 of 33
1.
Sci Rep ; 14(1): 1665, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238423

ABSTRACT

The first step in any dietary monitoring system is the automatic detection of eating episodes. To detect eating episodes, either sensor data or images can be used, and either method can result in false-positive detection. This study aims to reduce the number of false positives in the detection of eating episodes by a wearable sensor, Automatic Ingestion Monitor v2 (AIM-2). Thirty participants wore the AIM-2 for two days each (pseudo-free-living and free-living). The eating episodes were detected by three methods: (1) recognition of solid foods and beverages in images captured by AIM-2; (2) recognition of chewing from the AIM-2 accelerometer sensor; and (3) hierarchical classification to combine confidence scores from image and accelerometer classifiers. The integration of image- and sensor-based methods achieved 94.59% sensitivity, 70.47% precision, and 80.77% F1-score in the free-living environment, which is significantly better than either of the original methods (8% higher sensitivity). The proposed method successfully reduces the number of false positives in the detection of eating episodes.
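The reported F1-score is the harmonic mean of precision and sensitivity (recall); a minimal check in Python, using the values quoted in the abstract for the combined method:

```python
def f1_score(precision: float, sensitivity: float) -> float:
    """Harmonic mean of precision and sensitivity (recall), in percent."""
    return 2 * precision * sensitivity / (precision + sensitivity)

# Values reported for the combined image + sensor method (free-living):
f1 = f1_score(70.47, 94.59)
print(round(f1, 2))  # 80.77, matching the reported F1-score
```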


Subjects
Diet , Mastication , Humans , Physiologic Monitoring , Recognition (Psychology) , Mental Processes
2.
Sensors (Basel) ; 23(16)2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631707

ABSTRACT

Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities like bleeding. However, CE captures a huge number of image frames, constituting a time-consuming and tedious task for medical experts to manually inspect. To address this issue, researchers have focused on computer-aided bleeding detection systems to automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five different repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art computer-aided bleeding detection algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper is concluded by providing future direction for computer-aided bleeding detection research.


Subjects
Capsule Endoscopy , Humans , Computers , Computer Systems , Algorithms , Hemorrhage
3.
Front Nutr ; 10: 1191962, 2023.
Article in English | MEDLINE | ID: mdl-37575335

ABSTRACT

Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to errors and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed - a desktop application called Image to Nutrients (I2N) - to process sensor-detected eating events and images captured during these eating events by a wearable sensor. I2N has the capacity to offer multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items. I2N estimates energy intake, nutritional content, and the amount consumed. The components of I2N are three-fold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: In both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making nutritional analysis of food images efficient. The tool is flexible, allowing for nutritional analysis of images even if sensor signals are not available.

4.
Front Nutr ; 10: 1119542, 2023.
Article in English | MEDLINE | ID: mdl-37252243

ABSTRACT

Introduction: The aim of this feasibility and proof-of-concept study was to examine the use of a novel wearable device for automatic food intake detection to capture the full range of free-living eating environments of adults with overweight and obesity. In this paper, we document eating environments of individuals that have not been thoroughly described previously in nutrition software, as current practices rely on participant self-report and methods with limited eating environment options. Methods: Data from 25 participants and 116 total days (7 men, 18 women, mean age 44 ± 12 years, BMI 34.3 ± 5.2 kg/m2), who wore the passive capture device for at least 7 consecutive days (≥12 h waking hours/d), were analyzed. Data were analyzed at the participant level and stratified by meal type into breakfast, lunch, dinner, and snack categories. Out of 116 days, 68.1% included breakfast, 71.5% included lunch, 82.8% included dinner, and 86.2% included at least one snack. Results: The most prevalent eating environment among all eating occasions was at home and with one or more screens in use (breakfast: 48.1%, lunch: 42.2%, dinner: 50%, and snacks: 55%), eating alone (breakfast: 75.9%, lunch: 89.2%, dinner: 74.3%, snacks: 74.3%), in the dining room (breakfast: 36.7%, lunch: 30.1%, dinner: 45.8%) or living room (snacks: 28.0%), and in multiple locations (breakfast: 44.3%, lunch: 28.8%, dinner: 44.8%, snacks: 41.3%). Discussion: Results suggest a passive capture device can provide accurate detection of food intake in multiple eating environments. To our knowledge, this is the first study to classify eating occasions in multiple eating environments, and the device may be a useful tool for future behavioral research studies to accurately codify eating environments.

5.
Heliyon ; 9(4): e14637, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37025788

ABSTRACT

Despite possessing attractive features such as autotrophic growth on minimal media, industrial applications of cyanobacteria are hindered by a lack of genetic manipulation tools. Two features are essential for effective manipulation: a vector that can carry the gene, and an induction system activated through external stimuli, giving us control over expression. In this study, we describe the construction of an improved RSF1010-based vector as well as a temperature-inducible RNA thermometer. RSF1010 is a well-studied incompatibility group Q (IncQ) vector, capable of replication in most Gram-negative, and some Gram-positive, bacteria. Our designed vector, named pSM201v, can be used as an expression vector in some Gram-positive and a wide range of Gram-negative bacteria, including cyanobacteria. An induction system activated via a physical external stimulus such as temperature allows precise control of overexpression. pSM201v addresses several drawbacks of the RSF1010 plasmid: it has a reduced backbone size of 5189 bp compared to 8684 bp of the original plasmid, which provides more space for cloning and transfer of cargo DNA into the host organism. The mobilization function, required for plasmid transfer into several cyanobacterial strains, is reduced to a 99 bp region, with the result that mobilization of this plasmid is no longer linked to plasmid replication. The RNA thermometer, named DTT1, is based on an RNA hairpin strategy that prevents expression of downstream genes at temperatures below 30 °C. Such RNA elements are expected to find applications in biotechnology to economically control gene expression in a scalable manner.

6.
Int J Obes (Lond) ; 46(11): 2050-2057, 2022 11.
Article in English | MEDLINE | ID: mdl-36192533

ABSTRACT

OBJECTIVES: Dietary assessment methods not relying on self-report are needed. The Automatic Ingestion Monitor 2 (AIM-2) combines a wearable camera that captures food images with sensors that detect food intake. We compared energy intake (EI) estimates of meals derived from AIM-2 chewing sensor signals, AIM-2 images, and an internet-based diet diary, with researcher-conducted weighed food records (WFR) as the gold standard. SUBJECTS/METHODS: Thirty adults wore the AIM-2 for meals self-selected from a university food court on one day in mixed laboratory and free-living conditions. Daily EI was determined from a sensor regression model, manual image analysis, and a diet diary, and compared with that from WFR. A posteriori analysis identified sources of error for image analysis and WFR differences. RESULTS: Sensor-derived EI from regression modeling (R2 = 0.331) showed the closest agreement with EI from WFR, followed by diet diary estimates. EI from image analysis differed significantly from that by WFR. Bland-Altman analysis showed wide limits of agreement for all three test methods with WFR, with the sensor method overestimating at lower and underestimating at higher EI. Nutritionist error in portion size estimation and irreconcilable differences in portion size between the food and nutrient databases used for WFR and image analyses were the greatest contributors to image analysis and WFR differences (44.4% and 44.8% of WFR EI, respectively). CONCLUSIONS: Estimation of daily EI from meals using sensor-derived features offers a promising alternative to overcome limitations of self-report. Image analysis may benefit from computerized analytical procedures to reduce identified sources of error.
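Bland-Altman analysis, as used here to compare each test method against the WFR, reduces to the mean difference (bias) plus or minus 1.96 standard deviations; a generic stdlib-only sketch (the energy-intake numbers are illustrative, not the study's data):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Return (bias, lower LOA, upper LOA) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)  # 95% limits of agreement
    return bias, bias - spread, bias + spread

# Hypothetical per-meal energy-intake estimates (kcal) vs. weighed food records:
sensor_ei = [520, 710, 640, 880, 450]
wfr_ei = [500, 750, 600, 900, 430]
bias, lo, hi = bland_altman(sensor_ei, wfr_ei)
```

Wide limits of agreement (a large `hi - lo` interval) indicate poor individual-level agreement even when the bias is small.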


Subjects
Energy Intake , Wearable Electronic Devices , Humans , Adult , Diet Records , Meals , Diet
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2993-2996, 2022 07.
Article in English | MEDLINE | ID: mdl-36085821

ABSTRACT

The choice of appropriate machine learning algorithms is crucial for classification problems. This study compares the performance of state-of-the-art time-series deep learning algorithms for classifying food intake using sensor signals. The sensor signals were collected with the help of a wearable sensor system (the Automatic Ingestion Monitor v2, or AIM-2). AIM-2 used an optical sensor and a 3-axis accelerometer to capture temporalis muscle activation. Raw signals from those sensors were used to train five classifiers (multilayer perceptron (MLP), time Convolutional Neural Network (time-CNN), Fully Convolutional Neural Network (FCN), Residual Neural Network (ResNet), and Inception network) to differentiate food intake (eating and drinking) from other activities. Data were collected from 17 pilot subjects over the course of 23 days in free-living conditions. A leave-one-subject-out cross-validation scheme was used for training and testing. Time-CNN, FCN, ResNet, and Inception achieved average balanced classification accuracies of 88.84%, 90.18%, 93.47%, and 92.15%, respectively. The results indicate that ResNet outperforms the other state-of-the-art deep learning algorithms for this specific problem.
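Leave-one-subject-out cross-validation holds out all samples from one subject per fold, so no subject appears in both training and test data; a minimal pure-Python sketch (the subject IDs are hypothetical):

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs, one fold per unique subject."""
    for held_out in sorted(set(subject_ids)):
        test_idx = [i for i, s in enumerate(subject_ids) if s == held_out]
        train_idx = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train_idx, test_idx

# Example: 6 samples from 3 subjects -> 3 folds
folds = list(leave_one_subject_out(["s1", "s1", "s2", "s2", "s3", "s3"]))
```

In practice a library routine such as scikit-learn's `LeaveOneGroupOut` does the same grouping.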


Subjects
Deep Learning , Algorithms , Disease Progression , Eating , Humans , Machine Learning , Neural Networks (Computer)
8.
Front Nutr ; 9: 941001, 2022.
Article in English | MEDLINE | ID: mdl-35958246

ABSTRACT

Background: A fast rate of eating is associated with a higher risk for obesity, but existing studies are limited by reliance on self-report, and the consistency of eating rate has not been examined across all meals in a day. The goal of the current analysis was to examine associations between meal duration, rate of eating, and body mass index (BMI), and to assess the variance of meal duration and eating rate across different meals during the day. Methods: Using an observational cross-sectional study design, non-smoking participants aged 18-45 years (N = 29) consumed all meals (breakfast, lunch, and dinner) on a single day in a pseudo-free-living environment. Participants were allowed to choose any food and beverages from a University food court and consume their desired amount with no time restrictions. Weighed food records and a log of meal start and end times, to calculate duration, were obtained by a trained research assistant. Spearman's correlations and multiple linear regressions examined associations between BMI and meal duration and rate of eating. Results: Participants were 65% male and 48% white. A shorter meal duration was associated with a higher BMI at breakfast, but not lunch or dinner, after adjusting for age and sex (p = 0.03). A faster rate of eating was associated with higher BMI across all meals (p = 0.04) and higher energy intake for all meals (p < 0.001). Intra-individual rates of eating were not significantly different across breakfast, lunch, and dinner (p = 0.96). Conclusion: A shorter breakfast duration and a faster rate of eating across all meals were associated with higher BMI in a pseudo-free-living environment. An individual's rate of eating is constant over all meals in a day. These data support weight reduction interventions focusing on the rate of eating at all meals throughout the day and provide evidence for specifically directing attention to breakfast eating behaviors.
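Spearman's correlation, used above to relate eating rate and meal duration to BMI, is simply Pearson's correlation applied to ranks; a stdlib-only sketch with toy data (not the study's data; tie handling is omitted for brevity):

```python
from statistics import mean

def ranks(xs):
    """Rank values 1..n (average ranks for ties omitted for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A perfectly monotonic relationship gives rho = 1.0:
rho = spearman([1.2, 2.5, 3.1, 4.8], [10, 20, 30, 40])
```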

9.
Front Nutr ; 9: 877775, 2022.
Article in English | MEDLINE | ID: mdl-35811954

ABSTRACT

Objective: To describe best practices for manual nutritional analyses of data from passive capture wearable devices in free-living conditions. Method: 18 participants (10 female) with a mean age of 45 ± 10 years and mean BMI of 34.2 ± 4.6 kg/m2 consumed their usual diet for 3 days in a free-living environment while wearing an automated passive capture device. This wearable device facilitates capture of images without manual input from the user. Data from the first nine participants were used by two trained nutritionists to identify sources contributing to inter-nutritionist variance in nutritional analyses. The nutritionists implemented best practices to mitigate these sources of variance in the next nine participants. The three best practices to reduce variance in analysis of energy intake (EI) estimation were: (1) a priori standardized food selection, (2) standardized nutrient database selection, and (3) an increased number of images captured around eating episodes. Results: Inter-rater repeatability for EI, using the intraclass correlation coefficient (ICC), improved by 0.39 from pre-best practices to post-best practices (0.14 vs. 0.85, 95% CI, respectively). Bland-Altman analysis indicated strongly improved agreement between nutritionists for limits of agreement (LOA) post-best practices. Conclusion: Significant improvement of ICC and LOA for estimation of EI following implementation of best practices demonstrates that these practices improve the reproducibility of dietary analysis from passive capture device images in free-living environments.

10.
Int J Biol Macromol ; 185: 644-653, 2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34217741

ABSTRACT

Highly specific graphene-DNA interactions have been at the forefront of graphene-based sensor design for various analytes, including DNA itself. However, in addition to its detection, DNA also needs to be characterized according to its size and concentration in a sample, which is an additional analytical step. Designing a highly sensitive and selective DNA sensing and characterization platform is, thus, of great interest. The present study demonstrates that a bio-derived, naturally fluorescent protein C-phycoerythrin (CPE) - graphene oxide (GO) bio-composite can be used to detect dsDNA in nanomolar quantities efficiently via a fluorescent "turn off/on" mechanism. Interaction with GO temporarily quenches CPE fluorescence in a dose-dependent manner. Analytical characterization indicates an indirect charge transfer with a corresponding loss of crystalline GO structure. The fluorescence is regained with the addition of DNA, while other biomolecules do not pose any hindrance in the detection process. The extent of regain is DNA length dependent, and the corresponding calibration curve successfully quantifies the size of an unknown DNA. The incubation time for detection is ~3-5 min. The bio-composite platform also works successfully in a complex biomolecule matrix and cell lysate. However, the presence of serum albumin poses a hindrance in the serum sample. Particle size analysis proves that CPE displacement from the GO surface by the incoming DNA is the reason for the 'turn on' response, and that the sensing process is exclusive to dsDNA. This new platform could be an exciting and rapid DNA sensing and characterization tool.


Subjects
DNA/analysis , Graphite/chemistry , Phycoerythrin/chemistry , Protein C/chemistry , Biosensing Techniques , Dynamic Light Scattering , Fluorescence , Particle Size , X-Ray Diffraction
11.
J Digit Imaging ; 34(2): 404-417, 2021 04.
Article in English | MEDLINE | ID: mdl-33728563

ABSTRACT

PURPOSE: The objective of this paper was to develop a computer-aided diagnostic (CAD) tool for automated analysis of capsule endoscopic (CE) images; more precisely, to detect small intestinal abnormalities like bleeding. METHODS: In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, where a pre-trained AlexNet neural network is used to train a transfer learning CNN that carries out the identification. Moreover, bleeding zones in a bleeding-identified image are also delineated using deep learning-based semantic segmentation that leverages a SegNet deep neural network. RESULTS: To evaluate the performance of the proposed framework, we carry out experiments on two publicly available clinical datasets and achieve 98.49% and 88.39% F1 scores, respectively, on the capsule endoscopy.org and KID datasets. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved. CONCLUSION: Finally, our performance results are compared to other recently developed state-of-the-art methods, and consistent performance advances are demonstrated in terms of performance measures for bleeding image and bleeding zone detection. Relative to the present and established practice of manual inspection and annotation of CE images by a physician, our framework enables considerable annotation time and human labor savings in bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will also be much lower due to the reduction of manual labor, which can make CE affordable for a larger population.


Subjects
Capsule Endoscopy , Deep Learning , Gastrointestinal Hemorrhage/diagnostic imaging , Humans , Computer-Assisted Image Processing , Small Intestine , Neural Networks (Computer)
12.
IEEE Sens J ; 21(24): 27728-27735, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-35813985

ABSTRACT

Objective detection of periods of wear and non-wear is critical for human studies that rely on information from wearable sensors, such as food intake sensors. In this paper, we present a novel method of compliance detection on the example of the Automatic Ingestion Monitor v2 (AIM-2) sensor, containing a tri-axial accelerometer, a still camera, and a chewing sensor. The method was developed and validated using data from a study of 30 participants aged 18-39, each wearing the AIM-2 for two days (a day in pseudo-free-living and a day in free-living). Four types of wear compliance were analyzed: 'normal-wear', 'non-compliant-wear', 'non-wear-carried', and 'non-wear-stationary'. The ground truth of those four types of compliance was obtained by reviewing the images of the egocentric camera. The features for compliance detection were the standard deviation of acceleration, average pitch and roll angles, and the mean square error of two consecutive images. These were used to train three random forest classifiers: (1) accelerometer-based, (2) image-based, and (3) combined accelerometer- and image-based. Satisfactory wear compliance measurement accuracy was obtained using the combined classifier (89.24%) in leave-one-subject-out cross-validation. The average duration of compliant wear in the study was 9 h with a standard deviation of 2 h, or 70.96% of total on-time. This method can be used to calculate the wear and non-wear time of AIM-2, and can potentially be extended to other devices. The study also included assessments of sensor burden and privacy concerns. The survey results suggest recommendations that may be used to increase wear compliance.
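The pitch and roll features come from the gravity component of the tri-axial accelerometer. One common formulation is sketched below; the paper does not give its exact equations, so the axis conventions here are assumptions:

```python
from math import atan2, sqrt, degrees

def pitch_roll(ax, ay, az):
    """Estimate pitch and roll (degrees) from one static acceleration
    sample in g units. Assumes the common convention
    pitch = atan2(-ax, sqrt(ay^2 + az^2)), roll = atan2(ay, az);
    actual axis conventions vary by device."""
    pitch = degrees(atan2(-ax, sqrt(ay * ay + az * az)))
    roll = degrees(atan2(ay, az))
    return pitch, roll

# Device lying flat, gravity along +z:
p, r = pitch_roll(0.0, 0.0, 1.0)  # pitch = 0, roll = 0
```

Averaging these angles over a window, together with the standard deviation of acceleration, distinguishes a worn, moving device from one left stationary.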

13.
IEEE J Biomed Health Inform ; 25(2): 568-576, 2021 02.
Article in English | MEDLINE | ID: mdl-32750904

ABSTRACT

Use of food image capture and/or wearable sensors for dietary assessment has grown in popularity. "Active" methods rely on the user to take an image of each eating episode. "Passive" methods use wearable cameras that continuously capture images. Most "passively" captured images are not related to food consumption and may present privacy concerns. In this paper, we propose a novel wearable sensor (Automatic Ingestion Monitor, AIM-2) designed to capture images only during automatically detected eating episodes. The capture method was validated on a dataset collected from 30 volunteers in the community wearing the AIM-2 for 24 h in pseudo-free-living and 24 h in a free-living environment. The AIM-2 was able to detect food intake over 10-second epochs with a (mean and standard deviation) F1-score of 81.8 ± 10.1%. The accuracy of eating episode detection was 82.7%. Out of a total of 180,570 images captured, 8,929 (4.9%) images belonged to detected eating episodes. Privacy concerns were assessed by a questionnaire on a scale of 1-7. Continuous capture had a concern value of 5.0 ± 1.6 (concerned), while image capture only during food intake had a concern value of 1.9 ± 1.7 (not concerned). Results suggest that AIM-2 can provide accurate detection of food intake, reduce the number of images for analysis, and alleviate the privacy concerns of the users.


Subjects
Wearable Electronic Devices , Data Collection , Eating , Food , Humans , Physiologic Monitoring
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 4191-4195, 2020 07.
Article in English | MEDLINE | ID: mdl-33018921

ABSTRACT

With technological advancement, wearable egocentric camera systems have been extensively studied to develop food intake monitoring devices for the assessment of eating behavior. This paper provides a detailed description of the implementation of a CNN-based image classifier on a Cortex-M7 microcontroller. The proposed network classifies the images captured by the wearable egocentric camera as food and no-food images in real time. This real-time food image detection can potentially allow monitoring devices to consume less power, require less storage, and be more user-friendly in terms of privacy by saving only images that are detected as food images. A derivative of a pre-trained MobileNet is trained to detect food images among camera-captured images. The proposed network requires 761.99 KB of flash and 501.76 KB of RAM to implement, and is built for an optimal trade-off between accuracy, computational cost, and memory footprint considering implementation on a Cortex-M7 microcontroller. The image classifier achieved an average precision of 82% ± 3% and an average F-score of 74% ± 2% when tested on 15,343 images (2,127 food images and 13,216 no-food images) of five full days collected from five participants.


Subjects
Feeding Behavior , Wearable Electronic Devices , Data Collection , Eating , Food , Humans
15.
Int J Biol Macromol ; 163: 977-984, 2020 Nov 15.
Article in English | MEDLINE | ID: mdl-32629054

ABSTRACT

A naturally fluorescent protein, C-phycocyanin (CPC), was used as a fluorophore to study the effect of graphene oxide (GO) as a quencher. The protein was purified using established procedures and titrated with increasing GO concentrations. UV-visible titration showed a minor effect on the phycocyanobilin absorbance but significant interactions with the amino acid backbone. Fluorescence titration showed notable CPC quenching upon increasing GO concentration to 30 µg ml-1; the corresponding fluorescence dropped by ~97%. A non-linear Stern Volmer curve showed that the fluorophores did not interact directly with the quencher. Powder X-ray diffraction studies showed that the bio-composite lost the crystalline arrangement of GO and became amorphous, akin to CPC. SEM analysis showed GO sheets enfolding a protein nucleus with an increase in oxygen after the interaction compared to CPC. A 20 min incubation of the bio-composite with various biomolecules including amino acids, sugars, polydispersed exopolysaccharides (EPS), other proteins and DNA showed that only DNA could recover the CPC fluorescence. The 'turn on' effect of DNA was distinguishable even when all the other molecules were in the same sample matrix. These results showed that CPC GO could be a fluorescence 'turn off/on' DNA probe.


Subjects
Biocompatible Materials/chemistry , Synthetic Chemistry Techniques , DNA Probes/chemical synthesis , Fluorescent Dyes/chemical synthesis , Graphite/chemistry , Phycocyanin/chemistry , DNA Probes/chemistry , Fluorescent Dyes/chemistry , Fluorescence Spectrometry , X-Ray Diffraction
16.
Spectrochim Acta A Mol Biomol Spectrosc ; 239: 118469, 2020 Oct 05.
Article in English | MEDLINE | ID: mdl-32450537

ABSTRACT

A naturally fluorescent cyanobacterial protein, C-phycoerythrin (CPE), was investigated as a fluorescent probe for the biologically and environmentally important hydrosulphide (HS-) ion. It was selective for HS- amongst a large anion screen, and the optical response was rapid. Sequential UV-visible titration showed considerable peak shift and attenuation with increasing [HS-], while fluorescence titration proved that HS- quenched CPE fluorescence in a concentration-dependent manner. The linear response range was 0-2 mM HS-, the Stern-Volmer curve was non-linear, and the limit of detection was 185.12 µM. Except for bicarbonate and glycine, no anion or biomolecule interfered with the detection, even at 10 times the concentration of HS-. It was also free of influences from other sulphur forms like sulphite, sulphate and thiosulphate. CPE reliably detected HS- in freshwater and effluent samples, though some under- and over-estimation was evident. The % recovery ranged from ~96 to 105% (RSD ~ 0.035-0.188%). FTIR analysis showed significant changes in the amide I and II regions of CPE, along with minor modifications in the amide III region as well, showing that HS- was able to influence the protein secondary structure at higher concentrations.


Subjects
Phycoerythrin , Protein C , Fluorescence , Fluorescent Dyes , Water
17.
IEEE Access ; 8: 101934-101945, 2020.
Article in English | MEDLINE | ID: mdl-33747674

ABSTRACT

Methods for measuring eating behavior (known as meal microstructure) often rely on manual annotation of bites, chews, and swallows on meal videos or wearable sensor signals. The manual annotation may be time-consuming and error-prone, while wearable sensors may not capture every aspect of eating (e.g., chews only). The aim of this study is to develop a method to detect and count bites and chews automatically from meal videos. The method was developed on a dataset of 28 volunteers consuming unrestricted meals in the laboratory under video observation. First, the faces in the video (regions of interest, ROI) were detected using Faster R-CNN. Second, a pre-trained AlexNet was trained on the detected faces to classify images as bite/no-bite images. Third, affine optical flow was applied to consecutively detected faces to find the rotational movement of the pixels in the ROIs. The number of chews in a meal video was counted by converting the 2-D images to a 1-D optical flow parameter and finding peaks. The developed bite and chew count algorithm was applied to 84 meal videos collected from 28 volunteers. A mean accuracy (±STD) of 85.4% (±6.3%) with respect to manual annotation was obtained for the number of bites and 88.9% (±7.4%) for the number of chews. The proposed method for automatic bite and chew counting shows promising results and can be used as an alternative to manual annotation.
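The chew-counting step — peak finding on the 1-D optical-flow parameter — can be sketched as simple local-maximum detection above a threshold (the trace and threshold below are hypothetical; the paper's signal processing is more elaborate):

```python
def count_peaks(signal, threshold=0.0):
    """Count strict local maxima above a threshold in a 1-D signal."""
    count = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] > signal[i + 1]:
            count += 1
    return count

# A toy rotational-flow trace with three oscillations (three chews):
trace = [0.0, 0.9, 0.1, 1.1, 0.2, 0.8, 0.1]
chews = count_peaks(trace, threshold=0.5)  # 3
```

The threshold suppresses small jitter between chews; in practice it would be tuned against manually annotated videos.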

18.
J Fluoresc ; 28(2): 671-680, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29667001

ABSTRACT

C-phycoerythrin (CPE) was investigated as a colorimetric and fluorometric quantitative sensor for Cu2+ ions in an aqueous medium. UV-visible studies with 50 µM concentrations of different metals were carried out, with only Cu and Ag showing changes in the absorption spectra. Fluorescence emission studies showed similar results. UV-visible titration of CPE with different [Cu] resulted in a linear relationship within 10 µM Cu and a 'naked eye' visible difference in colour, most likely due to the formation of a CPE-Cu complex. Fluorescence emission of CPE was quenched rapidly within 5 min of mixing. Fluorescence emission titration studies revealed gradually decreasing CPE emission with increasing [Cu], with a Stern-Volmer constant of 2.5 × 104 M-1 and a detection limit of 5 µM. CPE was selective for Cu even in the presence of different metals at 5 times the concentration of Cu; it was also effective in aqueous samples spiked with Cu. FTIR studies showed considerable changes in the amide III region, indicating side chain interactions, even as the protein backbone remained largely unaffected.
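The Stern-Volmer relation underlying such quenching data is F0/F = 1 + K_SV[Q]; it can be checked numerically with the reported constant (K_SV = 2.5 × 10^4 M^-1 is from the abstract; the example concentration is hypothetical):

```python
def stern_volmer_ratio(ksv, quencher_conc):
    """F0/F for a quencher concentration [Q] (M) and
    Stern-Volmer constant K_SV (M^-1): F0/F = 1 + K_SV * [Q]."""
    return 1.0 + ksv * quencher_conc

# With the reported K_SV = 2.5e4 M^-1, 10 uM Cu2+ gives F0/F = 1.25,
# i.e. fluorescence drops to 80% of its unquenched value:
ratio = stern_volmer_ratio(2.5e4, 10e-6)
remaining_fraction = 1.0 / ratio  # 0.8
```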


Subjects
Colorimetry/methods , Copper/analysis , Copper/chemistry , Fluorometry/methods , Limit of Detection , Phycoerythrin/chemistry , Water/chemistry
19.
IEEE J Transl Eng Health Med ; 6: 1800112, 2018.
Article in English | MEDLINE | ID: mdl-29468094

ABSTRACT

Wireless capsule endoscopy (WCE) is the most advanced technology to visualize the whole gastrointestinal (GI) tract in a non-invasive way. However, its major disadvantage is the long reviewing time, which is very laborious as continuous manual intervention is necessary. In order to reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of the three color planes of the RGB color space, an index value is defined. A color histogram, extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which incorporates no additional computational burden for feature extraction. From extensive experimentation on several WCE videos and 2300 images collected from a publicly available database, very satisfactory bleeding frame and zone detection performance is achieved in comparison to that obtained by some existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, 95.75% precision is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data.

20.
Comput Biol Med ; 94: 41-54, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29407997

ABSTRACT

Wireless capsule endoscopy (WCE) is capable of demonstrating the entire gastrointestinal tract at the expense of an exhaustive reviewing process for detecting bleeding disorders. The main objective is to develop an automatic method for identifying bleeding frames and zones from WCE video. Different statistical features are extracted from the overlapping spatial blocks of the preprocessed WCE image in a transformed color plane containing the green-to-red pixel ratio. The unique idea of the proposed method is to first perform unsupervised clustering of the different blocks to obtain two clusters and then extract cluster-based features (CBFs). Finally, a global feature consisting of the CBFs and a differential CBF is used to detect bleeding frames via supervised classification. In order to handle continuous WCE video, a post-processing scheme is introduced utilizing the feature trends in neighboring frames. The CBF, along with some morphological operations, is employed to identify bleeding zones. Based on extensive experimentation on several WCE videos, it is found that the proposed method offers significantly better performance than some existing methods in terms of bleeding detection accuracy, sensitivity, specificity, and precision in bleeding zone detection. The bleeding detection performance obtained using the proposed CBF-based global feature is better than that of the feature extracted from the non-clustered image. The proposed method can reduce the burden on physicians investigating WCE video to detect bleeding frames and zones with a high level of accuracy.
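The color transform described — a plane of green-to-red pixel ratios summarized per spatial block — can be sketched as follows. The block size and per-block statistics here are illustrative only (the paper uses overlapping blocks and a richer feature set):

```python
from statistics import mean, stdev

def green_red_ratio_plane(rgb_image):
    """Convert an RGB image (list of rows of (r, g, b) tuples)
    into a plane of green-to-red pixel ratios."""
    eps = 1e-6  # avoid division by zero on dark pixels
    return [[g / (r + eps) for (r, g, b) in row] for row in rgb_image]

def block_stats(plane, size=2):
    """Mean and std of each non-overlapping size x size block
    (simplified: the paper's blocks overlap)."""
    stats = []
    for i in range(0, len(plane) - size + 1, size):
        for j in range(0, len(plane[0]) - size + 1, size):
            vals = [plane[i + di][j + dj]
                    for di in range(size) for dj in range(size)]
            stats.append((mean(vals), stdev(vals)))
    return stats

# Bleeding regions are redder, so their g/r ratio is low; toy 2x2 "red" patch:
red_block = [[(200, 40, 30), (210, 50, 25)],
             [(190, 45, 35), (205, 42, 28)]]
stats = block_stats(green_red_ratio_plane(red_block))
```

A classifier would then separate low-ratio (bleeding-like) blocks from high-ratio (mucosa-like) ones.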


Subjects
Capsule Endoscopy/methods , Computer-Aided Diagnosis/methods , Gastrointestinal Hemorrhage , Computer-Assisted Image Processing/methods , Female , Gastrointestinal Hemorrhage/diagnosis , Gastrointestinal Hemorrhage/diagnostic imaging , Humans , Male