Results 1 - 13 of 13
1.
JMIR Mhealth Uhealth; 12: e54509, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39233588

ABSTRACT

Background: Controlling saturated fat and cholesterol intake is important for the prevention of cardiovascular diseases. Although the use of mobile diet-tracking apps has been increasing, the reliability of nutrition apps in tracking saturated fats and cholesterol across different nations remains underexplored. Objective: This study aimed to examine the reliability and consistency of nutrition apps focusing on saturated fat and cholesterol intake across different national contexts. The study focused on 3 key concerns: data omission, inconsistency (variability) of saturated fat and cholesterol values within an app, and the reliability of commercial apps across different national contexts. Methods: Nutrient data from 4 consumer-grade apps (COFIT, MyFitnessPal-Chinese, MyFitnessPal-English, and LoseIt!) and an academic app (Formosa FoodApp) were compared against 2 national reference databases (US Department of Agriculture [USDA]-Food and Nutrient Database for Dietary Studies [FNDDS] and Taiwan Food Composition Database [FCD]). Percentages of missing nutrients were recorded, and coefficients of variation were used to compute data inconsistencies. One-way ANOVAs were used to examine differences among apps, and paired 2-tailed t tests were used to compare the apps to national reference data. Reliability across different national contexts was investigated by comparing the Chinese and English versions of MyFitnessPal with the USDA-FNDDS and Taiwan FCD. Results: Across the 5 apps, 836 food codes from 42 items were analyzed. Four apps (COFIT, MyFitnessPal-Chinese, MyFitnessPal-English, and LoseIt!) significantly underestimated saturated fats, with errors ranging from -13.8% to -40.3% (all P<.05). All apps underestimated cholesterol, with errors ranging from -26.3% to -60.3% (all P<.05). COFIT omitted 47% of saturated fat data, and MyFitnessPal-Chinese missed 62% of cholesterol data. The coefficients of variation for beef, chicken, and seafood ranged from 78% to 145%, from 74% to 112%, and from 97% to 124%, respectively, across MyFitnessPal-Chinese, MyFitnessPal-English, and LoseIt!, indicating high variability in saturated fats across different food groups. Similarly, cholesterol variability was consistently high in dairy (71%-118%) and prepackaged foods (84%-118%) across all selected apps. When examining the reliability of MyFitnessPal across different national contexts, errors in MyFitnessPal were consistent across the different national FCDs (USDA-FNDDS and Taiwan FCD). Regardless of the FCD used as a reference, these errors remained statistically significant, indicating that the app's core database, rather than mismatches or variances in external FCDs, is the source of the problems. Conclusions: The findings reveal substantial inaccuracies and inconsistencies in diet-tracking apps' reporting of saturated fats and cholesterol. These issues raise concerns about the effectiveness of consumer-grade nutrition apps for cardiovascular disease prevention, both across different national contexts and within the apps themselves.


Subject(s)
Cardiovascular Diseases, Mobile Applications, Humans, Mobile Applications/standards, Mobile Applications/statistics & numerical data, Reproducibility of Results, Cardiovascular Diseases/prevention & control, Taiwan
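As a side note to the abstract above, the two error measures it relies on (percentage error against a reference database and the coefficient of variation across an app's duplicate entries for the same food) could be computed along the following lines. This is a minimal illustrative sketch, not the authors' code; all variable names and numbers are hypothetical.

```python
# Illustrative sketch only: per-food error against a reference database and the
# coefficient of variation (CV) across an app's duplicate entries for one food.
import numpy as np

def percent_error(app_value: float, reference_value: float) -> float:
    """Signed error of an app's nutrient value relative to the reference (%)."""
    return (app_value - reference_value) / reference_value * 100.0

def coefficient_of_variation(app_values: list[float]) -> float:
    """CV (%) across the nutrient values an app returns for one food item."""
    values = np.asarray(app_values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Hypothetical example: saturated fat (g/100 g) for one beef item.
reference = 7.3                      # e.g., a reference-database entry
app_entries = [2.1, 6.8, 9.5, 3.0]   # duplicate entries found in one app
print(percent_error(np.mean(app_entries), reference))  # negative = underestimation
print(coefficient_of_variation(app_entries))           # high CV = inconsistent database
```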
2.
Nutrition; 116: 112212, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37776838

ABSTRACT

OBJECTIVE: Mobile nutrition applications (apps) provide a simple way for individuals to record their diet, but their validity and inherent errors need to be carefully evaluated. The aim of this study was to assess the validity and clarify the sources of measurement errors of image-assisted mobile nutrition apps. METHODS: This was a cross-sectional study of 98 students recruited from the School of Nutrition and Health Sciences, Taipei Medical University. A 3-d nutrient intake record obtained with the Formosa Food and Nutrient Recording App (FoodApp) was compared with a 24-h dietary recall (24-HDR). A two-stage data modification process (manual data cleaning and reanalysis of prepackaged foods) was employed to address inherent errors. Nutrient intake levels obtained by the two methods were compared with the Taiwanese dietary reference intake (DRI). Paired t tests, Spearman's correlation coefficients, and Bland-Altman plots were used to assess agreement between the FoodApp and the 24-HDR. RESULTS: Manual data cleaning identified 166 food coding errors (12%; stage 1), and 426 food codes with missing micronutrients (32%) were reanalyzed (stage 2). After the two stages of data modification, positive linear trends were observed for total energy and micronutrient intake (all Ptrend < 0.05), but not for dietary fat, carbohydrates, or vitamin D. There were no statistical differences in mean energy and macronutrient intake between the FoodApp and the 24-HDR, and this agreement was confirmed by Bland-Altman plots. Spearman's correlation analyses showed moderate to strong correlations (r = 0.386 to 0.834) between the two methods. Participants' nutrient intake tended to be lower than the DRI, but no differences in the proportions of adequacy/inadequacy relative to DRI values were observed between the two methods. CONCLUSIONS: Mitigating errors significantly improved the accuracy of the Formosa FoodApp, supporting its validity and reliability as a self-reported, mobile-based dietary assessment tool. Dietitians and health professionals should be mindful of the potential errors associated with self-reported nutrition apps, and manual data cleaning is vital for obtaining reliable nutrient intake data.


Subject(s)
Mobile Applications, Humans, Reproducibility of Results, Cross-Sectional Studies, Nutrition Assessment, Diet, Energy Intake, Dietary Fats, Diet Records
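The agreement analysis described in this abstract (paired t test, Spearman correlation, Bland-Altman limits of agreement between app-recorded and 24-h-recall energy intake) might look like the sketch below. The data are invented and this is not the study's analysis code.

```python
# Illustrative sketch (invented data): agreement between app-recorded and
# 24-h-recall energy intakes via paired t test, Spearman correlation, and
# Bland-Altman limits of agreement.
import numpy as np
from scipy import stats

app_kcal    = np.array([1850, 2100, 1620, 1990, 2300, 1750], dtype=float)
recall_kcal = np.array([1900, 2050, 1700, 1950, 2280, 1800], dtype=float)

t_stat, p_value = stats.ttest_rel(app_kcal, recall_kcal)   # paired t test
rho, p_rho = stats.spearmanr(app_kcal, recall_kcal)        # rank correlation

diff = app_kcal - recall_kcal
mean_pair = (app_kcal + recall_kcal) / 2.0
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                              # limits of agreement
print(f"paired t: p={p_value:.3f}, Spearman rho={rho:.2f}")
print(f"Bland-Altman bias={bias:.1f} kcal, LoA=({bias-loa:.1f}, {bias+loa:.1f})")
# A Bland-Altman plot is simply mean_pair on x vs. diff on y with these lines.
```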
3.
Nutrients; 14(16), 2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36014819

ABSTRACT

Background and aims: Digital food viewing is a vital skill for connecting dieticians to e-health. The aim of this study was to integrate a novel pedagogical framework that combines interactive three-dimensional (3-D) and two-dimensional (2-D) food models into a formal dietetic training course. The level of agreement between the digital food models (first semester) and the effectiveness of integrating digital food models into education during the school closure due to coronavirus disease 2019 (COVID-19) (second semester) were evaluated. Methods: In total, 65 second-year undergraduate dietetic students were enrolled in a nutritional practicum course at the School of Nutrition and Health Sciences, Taipei Medical University (Taipei, Taiwan). A 3-D food model was created using Agisoft Metashape. Students' digital food viewing skills and their receptiveness towards integrating digital food models were evaluated. Results: In the first semester, no statistical differences were observed between 2-D and 3-D food viewing skills in food identification (2-D: 89% vs. 3-D: 85%) or quantification (within a ±10% difference in total calories) (2-D: 19.4% vs. 3-D: 19.3%). A Spearman correlation analysis showed moderate to strong correlations for estimated total calories (0.69~0.93; all p values < 0.05) between the 3-D and 2-D models. Further analysis showed that students who struggled to master both 2-D and 3-D food viewing skills had lower estimation accuracies than those who did not (equal performers: 28% vs. unequal performers: 16%, p = 0.041), and interactive 3-D models may help them perform better than 2-D models. In the second semester, digital food viewing skills improved significantly (food identification: 91.5%; quantification: 42.9%), even for those students who had struggled to perform the digital food viewing skills equally well in the first semester (equal performers: 44% vs. unequal performers: 40%). Conclusion: Although repeated training greatly enhanced students' digital food viewing skills, a tailored training program may be needed to master both 2-D and 3-D digital food viewing skills. Future studies are needed to evaluate the effectiveness of digital food models for future "eHealth" care.


Subject(s)
COVID-19, Simulation Training, COVID-19/epidemiology, Humans, Nutritional Status, Pilot Projects, Portion Size
4.
Nutrients; 13(1), 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33430147

ABSTRACT

The use of image-based dietary assessments (IBDAs) has rapidly increased; however, there is no formalized training program to enhance the digital viewing skills of dieticians. An IBDA was integrated into a nutritional practicum course in the School of Nutrition and Health Sciences, Taipei Medical University, Taiwan. An online IBDA platform was created as an off-campus remedial teaching tool to reinforce the conceptualization of food portion sizes. Dietetic students' receptiveness and response to the IBDA, and their performance in food identification and quantification, were compared between the IBDA and real food visual estimations (RFVEs). No differences were found between the IBDA and RFVE in terms of food identification (67% vs. 71%) or quantification (±10% of estimated calories: 23% vs. 24%). A Spearman correlation analysis showed a moderate to high correlation for calorie estimates between the IBDA and RFVE (r = 0.33~0.75, all p < 0.0001). Repeated IBDA training significantly improved students' image-viewing skills (food identification: first semester: 67%; pretest: 77%; second semester: 84%) and quantification (±10%: first semester: 23%; pretest: 28%; second semester: 32%; ±20%: first semester: 38%; pretest: 48%; second semester: 59%) and reduced the absolute estimation error from 27% (first semester) to 16% (second semester). Training also greatly improved the identification of omitted foods (e.g., condiments, sugar, cooking oil, and batter coatings) and the accuracy of food portion size estimates. The integration of an IBDA into dietetic courses has the potential to help students develop knowledge and skills related to "e-dietetics".


Subject(s)
Dietetics/education, Nutrition Assessment, Nutritionists/education, Photography, Portion Size, Curriculum, Humans, Internet
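The accuracy metrics quoted in this abstract (share of estimates within ±10% or ±20% of the true calories, and the mean absolute estimation error) might be computed as in the short sketch below. The student estimates shown are hypothetical, not study data.

```python
# Illustrative sketch (hypothetical data): share of calorie estimates within
# +/-10% and +/-20% of the true value, and the mean absolute error (%).
import numpy as np

true_kcal      = np.array([520, 310, 450, 680, 240], dtype=float)
estimated_kcal = np.array([470, 350, 430, 600, 300], dtype=float)

pct_error = (estimated_kcal - true_kcal) / true_kcal * 100.0
within_10 = np.mean(np.abs(pct_error) <= 10) * 100   # % of estimates within +/-10%
within_20 = np.mean(np.abs(pct_error) <= 20) * 100   # % of estimates within +/-20%
mean_abs_error = np.mean(np.abs(pct_error))          # absolute estimation error (%)
print(within_10, within_20, mean_abs_error)
```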
5.
IEEE Trans Vis Comput Graph; 26(9): 2834-2847, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30716038

ABSTRACT

This paper presents a novel algorithm for generating micrography QR codes, a machine-readable graphic produced by embedding a QR code within a micrography image. The unique structure of micrography makes it incompatible with existing methods used to combine QR codes with natural or halftone images. We exploited the high-frequency nature of micrography in the design of a novel deformation model that skillfully warps individual letters and adjusts font weights to embed a QR code within a micrography image. The entire process is supervised by a set of visual quality metrics tailored specifically to micrography, in conjunction with a novel QR code quality measure aimed at striking a balance between visual fidelity and decoding robustness. The proposed QR code quality measure is based on probabilistic models learned from decoding experiments that used popular decoders with synthetic QR codes to capture the various forms of distortion that result from image embedding. Experimental results demonstrate the efficacy of the proposed method in generating high-quality micrography QR codes from a wide variety of inputs. The ability to embed QR codes at multiple scales makes it possible to produce a wide range of diverse designs. Experiments and user studies were conducted to evaluate the proposed method from both qualitative and quantitative perspectives.

6.
Sensors (Basel); 19(24), 2019 Dec 12.
Article in English | MEDLINE | ID: mdl-31842494

ABSTRACT

Periodontal diagnosis requires discovering the relations among the teeth, gingiva (i.e., gums), and alveolar bones, but the alveolar bones lie inside the gingiva and are not visible for inspection. Traditional probe examination causes pain, and X-ray-based examination is not suited to frequent inspection. This work develops an automatic, non-invasive periodontal inspection framework based on gum-penetrating Optical Coherence Tomography (OCT), which can be applied frequently without high radiation. We sum the interference responses over all penetration depths for each shooting direction to form a shooting amplitude projection. Because interference strength decays exponentially with penetration depth into tissue, this projection mainly reveals the responses of the topmost gingiva or teeth. Since gingiva and teeth have different air-tissue responses, the gumline, which reveals itself as an obvious boundary between teeth and gingiva, serves as the baseline for periodontal inspection. Our system can also automatically identify regions of gingiva, teeth, and alveolar bones from slices of the cross-sectional volume. Although deep networks could potentially segment these noisy maps, reducing the number of manually labeled maps needed for training is critical for our framework. To enhance the effectiveness and efficiency of training and classification, we adapt Snake segmentation to consider neighboring slices in order to locate the regions that possibly contain gingiva-tooth and gingiva-alveolar boundaries. Additionally, we apply a truncated direct logarithm based on the Snake-segmented region for intensity quantization, emphasizing these boundaries for easier identification. The alveolar-gingiva boundary point directly under the gumline is then the desired alveolar sample, and we can measure the distance between the gumline and the alveolar line for visualization and direct periodontal inspection. Finally, we experimentally verify our choices of intensity quantization and boundary identification against several other algorithms and successfully apply the framework to locate the gumline and alveolar line in in vivo data.


Subject(s)
Gingiva/diagnostic imaging, Periodontal Diseases/diagnosis, Optical Coherence Tomography, Tooth/diagnostic imaging, Alveolar Bone Loss/diagnosis, Alveolar Bone Loss/diagnostic imaging, Humans, Periodontal Diseases/pathology
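The "shooting amplitude projection" described in this abstract (summing interference responses over all penetration depths for each shooting direction) can be sketched as a simple depth-axis reduction. This is an assumption-laden illustration; the (nx, ny, depth) array layout and the placeholder data are not from the paper.

```python
# Illustrative sketch: forming an amplitude projection from an OCT volume by
# summing interference responses over depth for every shooting direction.
# The (nx, ny, n_depth) array layout is an assumption, not the paper's format.
import numpy as np

def amplitude_projection(oct_volume: np.ndarray) -> np.ndarray:
    """oct_volume: |interference response|, shape (nx, ny, n_depth)."""
    return oct_volume.sum(axis=-1)  # collapse depth -> one value per A-scan

# Because response strength decays roughly exponentially with depth, this
# projection is dominated by the topmost tissue (gingiva or tooth surface),
# which is why the gumline shows up as a clear boundary in the projection.
volume = np.abs(np.random.randn(64, 64, 300))   # placeholder data only
projection = amplitude_projection(volume)
print(projection.shape)                          # (64, 64)
```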
7.
Sensors (Basel); 19(19), 2019 Sep 29.
Article in English | MEDLINE | ID: mdl-31569554

ABSTRACT

Digital dental reconstruction can be a more efficient and effective mechanism for artificial crown construction and periodic inspection. However, optical methods cannot reconstruct the portions under the gums, and X-ray-based methods involve radiation doses that limit how frequently they can be applied. Optical coherence tomography (OCT) can harmlessly penetrate gums using low-coherence infrared rays, and thus this work designs an OCT-based framework for dental reconstruction that uses optical rectification, the fast Fourier transform, volumetric boundary detection, and Poisson surface reconstruction to overcome noisy imaging. Additionally, in order to operate in a patient's mouth, the caliber of the injector is small, and its penetration depth and effective operation range are short; reconstruction therefore requires multiple scans from various directions along with proper alignment. However, flat regions, such as the mesial side of the front teeth, may not have enough features for alignment. As a result, we design a scanning order for different types of teeth that starts from an area with abundant features for easier alignment, while using gyros to track scanning postures for better initial orientations. It is important to provide immediate feedback for each scan, and thus we accelerate the entire signal processing, boundary detection, and point-cloud alignment pipeline using Graphics Processing Units (GPUs) while streamlining the data transfers and GPU computations. Finally, our framework successfully reconstructs three isolated teeth and one side of a living tooth with precision comparable to the state-of-the-art method. Moreover, a user study verifies the effectiveness of our interactive feedback for efficient and fast clinical scanning.


Subject(s)
Image Processing, Computer-Assisted/methods, Optical Coherence Tomography/methods, Tooth/diagnostic imaging, Calibration, Equipment Design, Fourier Analysis, Gingiva/diagnostic imaging, Humans, Optical Coherence Tomography/instrumentation
8.
Sensors (Basel); 19(7), 2019 Apr 10.
Article in English | MEDLINE | ID: mdl-30974774

ABSTRACT

Depth is a valuable piece of information for perception tasks such as robot grasping, obstacle avoidance, and navigation, which are essential for developing smart homes and smart cities. However, not all applications have the luxury of using depth sensors or multiple cameras to obtain depth information. In this paper, we tackle the problem of estimating per-pixel depth from a single image. Inspired by recent work on generative neural network models, we formulate depth estimation as a generative task in which we synthesize an image of the depth map from a single Red, Green, and Blue (RGB) input image. We propose a novel generative adversarial network that has an encoder-decoder type generator with residual transposed convolution blocks trained with an adversarial loss. Quantitative and qualitative experimental results demonstrate the effectiveness of our approach compared with several existing depth estimation methods.
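A minimal PyTorch-style sketch in the spirit of the generator this abstract describes (encoder-decoder mapping an RGB image to a one-channel depth map, with residual transposed-convolution blocks) is shown below. Layer counts and channel sizes are invented; this is not the authors' network.

```python
# Minimal sketch, assuming a simple encoder-decoder with residual
# transposed-conv upsampling blocks; not the authors' architecture.
import torch
import torch.nn as nn

class ResidualUpBlock(nn.Module):
    """Transposed-conv upsampling followed by a residual conv branch."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.up(x)
        return self.act(x + self.conv(x))   # residual connection around the convs

class DepthGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # RGB -> latent features
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                      # latent -> depth map
            ResidualUpBlock(128, 64),
            ResidualUpBlock(64, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel depth in [0, 1]
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

depth = DepthGenerator()(torch.randn(1, 3, 128, 128))
print(depth.shape)   # torch.Size([1, 1, 128, 128])
# Adversarial training would pair this generator with a discriminator judging
# (RGB, depth) pairs, adding an adversarial term to the depth regression loss.
```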

9.
IEEE Trans Vis Comput Graph; 23(2): 1070-1084, 2017 Feb.
Article in English | MEDLINE | ID: mdl-26863665

ABSTRACT

Manga are a popular artistic form around the world, and artists use simple line drawings and screentone to create all kinds of interesting productions. Vectorization helps to reproduce these elements digitally for proper content and intention delivery on electronic devices. This study therefore aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes each patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify the detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later rendering resolution-independently with our procedural shaders. Our system treats the remaining areas as complex screentone and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with a curve-based Gaussian refiner. A user can lay down simple scribbles to cluster Manga elements intuitively into semantic components, and our system vectorizes these components into shading meshes with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and the addition of lighting. Our system can render the shading regions in real time and resolution-independently with our procedural shaders and draw borders with a curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also be deformed easily to generate interesting results.

10.
IEEE Trans Vis Comput Graph; 23(12): 2535-2549, 2017 Dec.
Article in English | MEDLINE | ID: mdl-27831882

ABSTRACT

Introducing motion into existing static paintings is a field that is gaining momentum. This effort helps keep artworks current and translates them into different forms for diverse audiences. Chinese ink paintings and Japanese Sumi-e are well recognized in Western cultures, yet not easily practiced because of the years of training required. We are motivated to develop an interactive system that lets artists, non-artists, Asians, and non-Asians enjoy the unique style of Chinese paintings. In this paper, our focus is on replacing static water flow scenes with animations. We include flow patterns, surface ripples, and water wakes, which are challenging not only artistically but also algorithmically. We develop a data-driven system that procedurally computes a flow field based on stroke properties extracted from the painting and animates water flows artistically and stylishly. Technically, our system first extracts the water-flow-portraying strokes using their locations, oscillation frequencies, brush patterns, and ink densities. We construct an initial flow pattern by analyzing stroke structures, ink dispersion densities, and placement densities. We cluster the extracted strokes into stroke pattern groups to further convey the spirit of the original painting. The system then automatically computes a flow field according to the initial flow patterns, water boundaries, and flow obstacles. Finally, our system dynamically generates and animates the extracted stroke pattern groups with the constructed field for controllable smoothness and temporal coherence. Users can interactively place the extracted stroke patterns onto other paintings through our adapted Poisson-based composition for water flow animation. In conclusion, our system can visually transform a static Chinese painting into an interactive walk-through with seamless, vivid, stroke-based flow animations that preserve its original dynamic spirit without flickering artifacts.

11.
J Lab Autom; 19(5): 492-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25006038

ABSTRACT

The low efficiency of the diffusion mechanism poses a significant barrier to enhancing micromixing efficiency in microfluidics. Actuating artificial cilia to increase the contact area between two flow streams during micromixing provides a promising alternative for enhancing mixing performance. Real-time adjustment of the beating behavior of artificial cilia is necessary to accommodate the various biological/chemical reagents with different hydrodynamic properties that are processed on a single microfluidic platform during micromixing. Equipping the microfluidic device with a self-troubleshooting feature for the end user, such as a bubble removal function during the injection of multiple chemical solutions, is also essential for robust micromixing. To meet these requirements, we introduce a new beating control concept in which the beating behavior of the artificial cilia is controlled through remote and simultaneous actuation by human fingertip drawing. A series of micromixing test cases under extreme flow conditions (Re < 10^-3) was conducted in the designed micromixer with high mixing performance. Satisfactory micromixing efficiency was achieved even with a rapid beating trajectory of the artificial cilia actuated through the fingertip motion of end users. The analytical paradigm and results allow end users to troubleshoot technical difficulties encountered during micromixing operations.


Subject(s)
Microfluidics/instrumentation, Microfluidics/methods, Rheology/instrumentation, Rheology/methods, Robotics/methods, Humans, Magnetism, Time Factors
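For context on the creeping-flow regime quoted above (Re < 10^-3), a quick Reynolds-number check is sketched below. The channel dimensions and water-like fluid properties are hypothetical and not taken from the paper.

```python
# Quick check of the quoted creeping-flow regime (Re < 1e-3). Channel size and
# fluid properties are hypothetical, water-like values, not from the paper.
rho = 1000.0   # density, kg/m^3
mu = 1.0e-3    # dynamic viscosity, Pa*s
d_h = 100e-6   # hydraulic diameter of the microchannel, m
v = 5e-6       # mean flow velocity, m/s

reynolds = rho * v * d_h / mu
print(f"Re = {reynolds:.1e}")   # 5.0e-04 -> viscous forces dominate; mixing
                                # relies on diffusion unless cilia stir the flow
```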
12.
IEEE Trans Vis Comput Graph; 18(6): 902-13, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21690652

ABSTRACT

Field design has wide applications in graphics and visualization. One of the main challenges in field design has been how to give users intuitive control over the directions in the field on the one hand and robust management of its topology on the other. In this paper, we present a design paradigm for line fields that addresses this challenge. Rather than asking users to input all singularities, as in most methods that offer topology control, we let the user provide a partitioning of the domain and specify simple flow patterns within the partitions. Represented by a selected set of harmonic functions, the elementary fields within the partitions are then combined to form continuous fields with rich appearances and well-determined topology. Our method allows a user to conveniently design flow patterns while retaining precise and robust control over the topological structure. Based on this method, we developed an interactive tool for designing line fields from images and demonstrated the utility of such fields in image stylization.
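A toy sketch of the core idea (representing an elementary field inside a partition by a harmonic function and taking its gradient direction as the line field) follows. The particular harmonic function and grid are arbitrary choices for illustration, not the paper's construction.

```python
# Toy sketch: an elementary line field from the gradient of a harmonic function.
# The choice h = x^2 - y^2 (harmonic, since its Laplacian is 2 - 2 = 0) is arbitrary.
import numpy as np

y, x = np.mgrid[-1:1:64j, -1:1:64j]
h = x**2 - y**2                     # harmonic scalar field on the partition
gy, gx = np.gradient(h)             # numerical gradient along y and x
norm = np.hypot(gx, gy) + 1e-12
field = np.stack([gx / norm, gy / norm], axis=-1)   # unit direction per pixel
# Singularities arise only where the gradient vanishes (here, the saddle at the
# origin), which is what keeps the topology of the combined field well determined.
print(field.shape)   # (64, 64, 2)
```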

13.
IEEE Trans Vis Comput Graph; 14(4): 948-60, 2008.
Article in English | MEDLINE | ID: mdl-18467767

ABSTRACT

We present a novel post-processing utility called the adaptive geometry image (AGIM) for global parameterization techniques that can embed a 3D surface onto a rectangular domain. This utility first converts a single rectangular parameterization into many different tessellations of square geometry images (GIMs) and then efficiently packs these GIMs into an image called the AGIM. Undersampled regions of the input parameterization can therefore be up-sampled until the local reconstruction error bound is met. The connectivity of the AGIM can be quickly computed and dynamically changed at rendering time. The AGIM has no T-vertices, and therefore no cracks are generated between two neighboring GIMs at different tessellations. Experimental results show that the AGIM achieves a significant PSNR gain over the input parameterization; it retains the advantages of the original GIM while reducing the reconstruction error present in the original GIM technique. The AGIM is also applicable to global parameterization techniques based on quadrilateral complexes. Using the approximate sampling rates, PolyCube-based quadrilateral complexes with the AGIM can outperform the state-of-the-art multichart GIM technique in terms of PSNR.


Subject(s)
Algorithms, Computer Graphics, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Models, Theoretical, Numerical Analysis, Computer-Assisted, User-Computer Interface, Computer Simulation
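The PSNR comparison mentioned in this abstract could be computed along the lines below, treating geometry images as arrays of 3-D positions. Taking the bounding-box diagonal of the reference as the peak value is a common convention but an assumption here, not necessarily the paper's definition.

```python
# Illustrative PSNR sketch for comparing a reconstructed geometry image against
# a reference one; the peak definition (bounding-box diagonal) is an assumption.
import numpy as np

def geometry_psnr(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """reference/reconstructed: (H, W, 3) arrays of 3-D surface positions."""
    mse = np.mean((reference - reconstructed) ** 2)
    peak = np.linalg.norm(reference.max(axis=(0, 1)) - reference.min(axis=(0, 1)))
    return 10.0 * np.log10(peak**2 / mse)

ref = np.random.rand(128, 128, 3)                 # placeholder geometry image
rec = ref + np.random.normal(0, 0.01, ref.shape)  # reconstruction with small error
print(f"{geometry_psnr(ref, rec):.1f} dB")
```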