1.
Nutrition; 116: 112212, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37776838

ABSTRACT

OBJECTIVE: Mobile nutrition applications (apps) provide a simple way for individuals to record their diet, but their validity and inherent errors need to be carefully evaluated. The aim of this study was to assess the validity of an image-assisted mobile nutrition app and to clarify its sources of measurement error.

METHODS: This was a cross-sectional study of 98 students recruited from the School of Nutrition and Health Sciences, Taipei Medical University. A 3-day nutrient intake record from the Formosa Food and Nutrient Recording App (FoodApp) was compared with a 24-h dietary recall (24-HDR). A two-stage data modification process, consisting of manual data cleaning and reanalysis of prepackaged foods, was employed to address inherent errors. Nutrient intake levels obtained by the two methods were compared with Taiwan's recommended daily intake (DRI) values. Paired t tests, Spearman's correlation coefficients, and Bland-Altman plots were used to assess agreement between the FoodApp and the 24-HDR.

RESULTS: Manual data cleaning identified 166 food coding errors (12%; stage 1), and 426 food codes with missing micronutrients (32%) were reanalyzed (stage 2). After the two stages of data modification, positive linear trends were observed for total energy and micronutrient intake (all Ptrend < 0.05), but not for dietary fat, carbohydrates, or vitamin D. There were no statistical differences in mean energy and macronutrient intake between the FoodApp and the 24-HDR, and this agreement was confirmed by Bland-Altman plots. Spearman's correlation analyses showed strong to moderate correlations (r = 0.834 to 0.386) between the two methods. Participants' nutrient intake tended to be lower than the DRI, but no differences in the proportions of adequacy/inadequacy relative to DRI values were observed between the two methods.

CONCLUSIONS: Mitigating errors significantly improved the accuracy of the Formosa FoodApp, indicating its validity and reliability as a self-reported, mobile-based dietary assessment tool. Dietitians and health professionals should be mindful of potential errors associated with self-reported nutrition apps, and manual data cleaning is vital to obtaining reliable nutrient intake data.


Subjects
Mobile Applications, Humans, Reproducibility of Results, Cross-Sectional Studies, Nutrition Assessment, Diet, Energy Intake, Dietary Fats, Diet Records
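
The agreement statistics named in this abstract (paired t test, Spearman correlation, Bland-Altman limits of agreement) can be illustrated with a minimal sketch. The data below are simulated, and all names are illustrative; this is not the study's analysis code.

```python
import numpy as np
from scipy.stats import spearmanr, ttest_rel
import matplotlib.pyplot as plt

def bland_altman(method_a, method_b):
    """Return per-subject means, differences, mean bias, and 95% limits of agreement."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    mean = (a + b) / 2.0
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean, diff, bias, (bias - half_width, bias + half_width)

# Hypothetical simulated data: daily energy intake (kcal) for 98 participants
rng = np.random.default_rng(42)
recall_24h = rng.normal(1800, 350, size=98)           # 24-h dietary recall
food_app = recall_24h + rng.normal(0, 180, size=98)   # app record with random error

t_stat, p_paired = ttest_rel(food_app, recall_24h)    # paired t test of mean difference
rho, p_rho = spearmanr(food_app, recall_24h)          # rank correlation between methods
mean, diff, bias, (lo, hi) = bland_altman(food_app, recall_24h)

plt.scatter(mean, diff, s=12)
plt.axhline(bias, color="k")
plt.axhline(lo, color="k", linestyle="--")
plt.axhline(hi, color="k", linestyle="--")
plt.xlabel("Mean of FoodApp and 24-HDR (kcal)")
plt.ylabel("FoodApp - 24-HDR (kcal)")
plt.title(f"Bland-Altman plot (bias = {bias:.0f} kcal, Spearman rho = {rho:.2f})")
plt.show()
```
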
2.
Nutrients; 14(16), 2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36014819

ABSTRACT

Background and aims: Digital food viewing is a vital skill for connecting dieticians to e-health. The aim of this study was to integrate a novel pedagogical framework combining interactive three-dimensional (3-D) and two-dimensional (2-D) food models into a formal dietetic training course. The level of agreement between the digital food models (first semester) and the effectiveness of integrating digital food models during the school closure due to coronavirus disease 2019 (COVID-19) (second semester) were evaluated.

Method: In total, 65 second-year undergraduate dietetic students were enrolled in a nutritional practicum course at the School of Nutrition and Health Sciences, Taipei Medical University (Taipei, Taiwan). A 3-D food model was created using Agisoft Metashape. Students' digital food viewing skills and their receptiveness toward integrating digital food models were evaluated.

Results: In the first semester, no statistical differences were observed between 2-D and 3-D food viewing skills in food identification (2-D: 89% vs. 3-D: 85%) or quantification (within ±10% difference in total calories) (2-D: 19.4% vs. 3-D: 19.3%). A Spearman correlation analysis showed moderate to strong correlations of estimated total calories (0.69~0.93; all p values < 0.05) between the 3-D and 2-D models. Further analysis showed that students who struggled to master both 2-D and 3-D food viewing skills had lower estimation accuracies than those who did not (equal performers: 28% vs. unequal performers: 16%, p = 0.041), and interactive 3-D models may help them perform better than 2-D models. In the second semester, digital food viewing skills improved significantly (food identification: 91.5%; quantification: 42.9%), even for students who had struggled to perform digital food viewing skills equally in the first semester (equal performers: 44% vs. unequal performers: 40%).

Conclusion: Although repeated training greatly enhanced students' digital food viewing skills, a tailored training program may be needed to master both 2-D and 3-D digital food viewing skills. Further study is needed to evaluate the effectiveness of digital food models for future "eHealth" care.


Subjects
COVID-19, Simulation Training, COVID-19/epidemiology, Humans, Nutritional Status, Pilot Projects, Portion Size
3.
Nutrients; 13(1), 2021 Jan 08.
Article in English | MEDLINE | ID: mdl-33430147

ABSTRACT

The use of image-based dietary assessments (IBDAs) has rapidly increased; however, there is no formalized training program to enhance the digital viewing skills of dieticians. An IBDA was integrated into a nutritional practicum course in the School of Nutrition and Health Sciences, Taipei Medical University, Taiwan. An online IBDA platform was created as an off-campus remedial teaching tool to reinforce the conceptualization of food portion sizes. Dietetic students' receptiveness and response to the IBDA, and their performance in food identification and quantification, were compared between the IBDA and real-food visual estimations (RFVEs). No differences were found between the IBDA and RFVE in terms of food identification (67% vs. 71%) or quantification (±10% of estimated calories: 23% vs. 24%). A Spearman correlation analysis showed a moderate to high correlation for calorie estimates between the IBDA and RFVE (r = 0.33~0.75, all p < 0.0001). Repeated IBDA training significantly improved students' image-viewing skills [food identification: first semester: 67%; pretest: 77%; second semester: 84%] and quantification [±10%: first semester: 23%; pretest: 28%; second semester: 32%; ±20%: first semester: 38%; pretest: 48%; second semester: 59%], and reduced absolute estimation errors from 27% (first semester) to 16% (second semester). Training also greatly improved the identification of omitted foods (e.g., condiments, sugar, cooking oil, and batter coatings) and the accuracy of food portion size estimates. The integration of an IBDA into dietetic courses has the potential to help students develop knowledge and skills related to "e-dietetics".


Assuntos
Dietética/educação , Avaliação Nutricional , Nutricionistas/educação , Fotografação , Tamanho da Porção , Currículo , Humanos , Internet
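
The accuracy metrics reported above (share of estimates within ±10% and ±20% of true calories, and absolute estimation error) can be computed with a short sketch. The data are simulated and the helper name is hypothetical, not taken from the study.

```python
import numpy as np

def estimation_metrics(true_kcal, estimated_kcal):
    """Share of estimates within +/-10% and +/-20% of truth, and mean absolute % error."""
    true_kcal = np.asarray(true_kcal, dtype=float)
    estimated_kcal = np.asarray(estimated_kcal, dtype=float)
    pct_error = (estimated_kcal - true_kcal) / true_kcal * 100.0
    return {
        "within_10pct": np.mean(np.abs(pct_error) <= 10) * 100,
        "within_20pct": np.mean(np.abs(pct_error) <= 20) * 100,
        "mean_abs_pct_error": np.mean(np.abs(pct_error)),
    }

# Hypothetical example: reference calories of 20 food items vs. one student's estimates
rng = np.random.default_rng(7)
reference = rng.uniform(80, 600, size=20)
estimates = reference * rng.normal(1.0, 0.18, size=20)
print(estimation_metrics(reference, estimates))
```
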
4.
IEEE Trans Vis Comput Graph; 26(9): 2834-2847, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30716038

ABSTRACT

This paper presents a novel algorithm to generate micrography QR codes, a machine-readable graphic produced by embedding a QR code within a micrography image. The unique structure of micrography makes it incompatible with existing methods used to combine QR codes with natural or halftone images. We exploited the high-frequency nature of micrography in the design of a novel deformation model that enables the skillful warping of individual letters and adjustment of font weights, allowing a QR code to be embedded within a micrography image. The entire process is supervised by a set of visual quality metrics tailored specifically for micrography, in conjunction with a novel QR code quality measure aimed at striking a balance between visual fidelity and decoding robustness. The proposed QR code quality measure is based on probabilistic models learned from decoding experiments that apply popular decoders to synthetic QR codes, capturing the various forms of distortion that result from image embedding. Experimental results demonstrate the efficacy of the proposed method in generating high-quality micrography QR codes from a wide variety of inputs. The ability to embed QR codes at multiple scales makes it possible to produce a wide range of diverse designs. Experiments and user studies were conducted to evaluate the proposed method from both qualitative and quantitative perspectives.
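
A rough sketch of the kind of decoding experiment the abstract mentions: measuring how often a popular decoder still reads a synthetic QR code under increasing distortion. It uses the `qrcode` library and OpenCV's QRCodeDetector as stand-ins; the Gaussian-noise distortion model and all parameters are assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np
import qrcode

def synthetic_qr(payload, module_px=8):
    """Render a QR code for `payload` as a grayscale numpy image (dark modules = 0)."""
    qr = qrcode.QRCode(border=4)
    qr.add_data(payload)
    qr.make(fit=True)
    modules = np.array(qr.get_matrix(), dtype=bool)        # includes the quiet zone
    img = np.where(modules, 0, 255).astype(np.float32)
    return np.kron(img, np.ones((module_px, module_px), dtype=np.float32))

def decode_rate(payload, noise_sigma, trials=30):
    """Fraction of noisy copies that OpenCV's QRCodeDetector still decodes correctly."""
    clean = synthetic_qr(payload)
    detector = cv2.QRCodeDetector()
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        noisy = np.clip(clean + rng.normal(0, noise_sigma, clean.shape), 0, 255)
        text, _, _ = detector.detectAndDecode(noisy.astype(np.uint8))
        hits += (text == payload)
    return hits / trials

for sigma in (10, 40, 80):
    print(f"noise sigma={sigma:>3}: decode rate = {decode_rate('https://example.org', sigma):.2f}")
```
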

5.
Sensors (Basel); 19(24), 2019 Dec 12.
Article in English | MEDLINE | ID: mdl-31842494

ABSTRACT

Periodontal diagnosis requires discovering the relations among teeth, gingiva (i.e., gums), and alveolar bones, but alveolar bones lie inside the gingiva and are not visible for inspection. Traditional probe examination causes pain, and X-ray-based examination is not suited for frequent inspection. This work develops an automatic, non-invasive periodontal inspection framework based on gum-penetrative optical coherence tomography (OCT), which can be applied frequently without high radiation. We sum the interference responses over all penetration depths for each shooting direction to form the shooting amplitude projection. Because the interference strength decays exponentially with penetration depth into tissue, this projection mainly reveals the responses of the topmost gingiva or teeth. Since gingiva and teeth have different air-tissue responses, the gumline, which appears as an obvious boundary between teeth and gingiva, serves as the reference line for periodontal inspection. Our system can also automatically identify regions of gingiva, teeth, and alveolar bones from slices of the cross-sectional volume. Although deep networks could potentially segment such noisy maps, reducing the number of manually labeled maps needed for training is critical for our framework. To enhance the effectiveness and efficiency of training and classification, we adjust Snake segmentation to consider neighboring slices when locating regions that may contain gingiva-tooth and gingiva-alveolar boundaries. We also adapt a truncated direct logarithm based on the Snake-segmented region for intensity quantization, emphasizing these boundaries for easier identification. The alveolar-gingiva boundary point directly under the gumline is then the desired alveolar sample, and we can measure the distance between the gumline and the alveolar line for visualization and direct periodontal inspection. Finally, we experimentally verify our choices of intensity quantization and boundary identification against several other algorithms and successfully apply the framework to locate the gumline and alveolar line in in vivo data.


Subjects
Gingiva/diagnostic imaging, Periodontal Diseases/diagnosis, Optical Coherence Tomography, Tooth/diagnostic imaging, Alveolar Bone Loss/diagnosis, Alveolar Bone Loss/diagnostic imaging, Humans, Periodontal Diseases/pathology
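
Two of the signal-processing ideas above, summing interference amplitude over depth to form a shooting amplitude projection and a truncated logarithmic intensity quantization, can be sketched minimally as follows. The array shapes, percentile window, and level count are assumptions for illustration, not values from the paper.

```python
import numpy as np

def amplitude_projection(volume):
    """Sum interference amplitude over depth for every A-scan.

    volume: (depth, height, width) array of OCT interference responses.
    Because signal strength decays roughly exponentially with depth,
    the sum is dominated by the topmost tissue (gingiva or tooth surface).
    """
    return np.abs(volume).sum(axis=0)

def truncated_log_quantize(img, low_pct=5, high_pct=99, levels=256):
    """Clip intensities to a percentile window, then log-compress and quantize."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    clipped = np.clip(img, lo, hi)
    logged = np.log1p(clipped - lo)
    return np.round((levels - 1) * logged / logged.max()).astype(np.uint8)

# Hypothetical volume: 512 depth samples over a 200 x 200 scan grid
vol = np.random.rand(512, 200, 200)
proj = amplitude_projection(vol)
quant = truncated_log_quantize(proj)
```
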
6.
Sensors (Basel); 19(19), 2019 Sep 29.
Article in English | MEDLINE | ID: mdl-31569554

ABSTRACT

Digital dental reconstruction can be a more efficient and effective mechanism for artificial crown construction and periodic inspection. However, optical methods cannot reconstruct the portions under the gums, and X-ray-based methods involve high radiation, which limits how frequently they can be applied. Optical coherence tomography (OCT) can harmlessly penetrate gums using low-coherence infrared rays, and thus this work designs an OCT-based framework for dental reconstruction that uses optical rectification, the fast Fourier transform, volumetric boundary detection, and Poisson surface reconstruction to overcome noisy imaging. Additionally, to operate inside a patient's mouth, the injector must have a small caliber, along with a short penetration depth and a limited effective operating range; reconstruction therefore requires multiple scans from various directions and proper alignment. However, flat regions, such as the mesial sides of front teeth, may not have enough features for alignment. As a result, we design a scanning order for different types of teeth that starts from an area with abundant features for easier alignment, while using gyros to track the scanning postures for better initial orientations. It is important to provide immediate feedback for each scan, so we accelerate the entire signal-processing, boundary-detection, and point-cloud-alignment pipeline using graphics processing units (GPUs) while streamlining data transfers and GPU computations. Finally, our framework successfully reconstructs three isolated teeth and one side of a living tooth with precision comparable to the state-of-the-art method. Moreover, a user study verifies the effectiveness of our interactive feedback for efficient and fast clinical scanning.


Subjects
Computer-Assisted Image Processing/methods, Optical Coherence Tomography/methods, Tooth/diagnostic imaging, Calibration, Equipment Design, Fourier Analysis, Gingiva/diagnostic imaging, Humans, Optical Coherence Tomography/instrumentation
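
A small sketch of the standard spectral-domain OCT step implied by "fast Fourier transform" and "volumetric boundary detection": converting an interference spectrum to a depth profile with an FFT, then naively picking the surface boundary per A-scan. The windowing choice, threshold, and data sizes are assumptions, not the authors' implementation.

```python
import numpy as np

def a_scan_depth_profile(spectral_fringe):
    """Convert one spectral-domain OCT fringe to a depth-resolved intensity profile.

    spectral_fringe: 1-D interference spectrum sampled uniformly in wavenumber.
    """
    fringe = spectral_fringe - spectral_fringe.mean()   # remove the DC term
    fringe *= np.hanning(fringe.size)                   # taper to reduce sidelobes
    return np.abs(np.fft.rfft(fringe))                  # magnitude = reflectivity vs. depth

def surface_boundary(depth_profiles, threshold_ratio=0.3):
    """First depth index whose intensity exceeds a fraction of each A-scan's maximum."""
    return np.array([np.argmax(p > threshold_ratio * p.max()) for p in depth_profiles])

# Hypothetical B-scan: 400 A-scans, 1024 spectral samples each
bscan = np.random.rand(400, 1024)
profiles = np.array([a_scan_depth_profile(a) for a in bscan])
surface = surface_boundary(profiles)   # one surface depth index per A-scan
```
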
7.
Sensors (Basel); 19(7), 2019 Apr 10.
Article in English | MEDLINE | ID: mdl-30974774

ABSTRACT

Depth is a valuable piece of information for perception tasks such as robot grasping, obstacle avoidance, and navigation, which are essential for developing smart homes and smart cities. However, not all applications have the luxury of using depth sensors or multiple cameras to obtain depth information. In this paper, we tackle the problem of estimating per-pixel depth from a single image. Inspired by recent work on generative neural network models, we formulate depth estimation as a generative task in which we synthesize an image of the depth map from a single red, green, and blue (RGB) input image. We propose a novel generative adversarial network with an encoder-decoder generator built from residual transposed-convolution blocks and trained with an adversarial loss. Quantitative and qualitative experimental results demonstrate the effectiveness of our approach over several existing depth estimation methods.
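
One plausible form of a residual transposed-convolution decoder block for such a generator is sketched below in PyTorch. The channel counts, normalization, and skip-path design are assumptions, since the paper's exact architecture is not given in this abstract.

```python
import torch
import torch.nn as nn

class ResidualUpBlock(nn.Module):
    """Residual block built around a transposed convolution that doubles resolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # Shortcut: upsample + 1x1 conv so the skip path matches the main path's shape
        self.skip = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.main(x) + self.skip(x))

# One decoder stage of a hypothetical generator: 256-channel features -> 128 channels, 2x size
block = ResidualUpBlock(256, 128)
y = block(torch.randn(1, 256, 16, 16))   # -> shape (1, 128, 32, 32)
```
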

8.
IEEE Trans Vis Comput Graph; 23(2): 1070-1084, 2017 Feb.
Article in English | MEDLINE | ID: mdl-26863665

ABSTRACT

Manga are a popular artistic form around the world, and artists use simple line drawings and screentones to create all kinds of interesting works. Vectorization helps to digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolutions. Our system first decomposes the patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify the detected screentones into simple and complex patterns: our system extracts simple-screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later rendering resolution-independently with our procedural shaders. Our system treats the others as complex screentone areas and vectorizes them with our proposed line tracer, which aims at locating the boundaries of all shading regions and polishing all shading borders with the curve-based Gaussian refiner. A user can lay down simple scribbles to intuitively cluster Manga elements into semantic components, and our system vectorizes these components into shading meshes along with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system can render the shading regions in real time and resolution-independently with our procedural shaders, and draw borders with the curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also be deformed easily to generate interesting results.
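
A minimal sketch of the first decomposition step named above, adaptive binarization, plus a crude connected-component cue for screentone dots. The synthetic input, thresholds, and dot-size heuristic are illustrative assumptions, not the paper's detector.

```python
import cv2
import numpy as np

# Synthetic stand-in for a scanned manga patch: grey paper, a dot screentone, one stroke
rng = np.random.default_rng(3)
page = np.full((256, 256), 220, np.uint8)
page[::8, ::8] = 40                                    # screentone-like dot lattice
cv2.line(page, (20, 20), (230, 200), 0, 3)             # one dark outline stroke
page = cv2.add(page, rng.integers(0, 20, page.shape, dtype=np.uint8))  # scanner noise

# Adaptive binarization: threshold each pixel against a local Gaussian-weighted mean,
# which separates line art and screentone dots from paper under uneven illumination.
binary = cv2.adaptiveThreshold(page, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 10)

# Crude screentone cue: regular dot patterns yield many tiny connected components,
# while outline strokes form fewer, larger ones.
n, labels, stats, _ = cv2.connectedComponentsWithStats(255 - binary)
tiny = stats[1:, cv2.CC_STAT_AREA] < 20
print(f"{int(tiny.sum())} of {n - 1} dark components look like screentone dots")
```
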

9.
IEEE Trans Vis Comput Graph; 23(12): 2535-2549, 2017 Dec.
Article in English | MEDLINE | ID: mdl-27831882

ABSTRACT

Introducing motion into existing static paintings is a field that is gaining momentum. This effort helps keep artworks current and translate them into different forms for diverse audiences. Chinese ink paintings and Japanese Sumi-e are well recognized in Western cultures, yet not easily practiced due to the years of training required. We are motivated to develop an interactive system that lets artists and non-artists, Asians and non-Asians, enjoy the unique style of Chinese paintings. In this paper, our focus is on replacing static water flow scenes with animations. We include flow patterns, surface ripples, and water wakes, which are challenging not only artistically but also algorithmically. We develop a data-driven system that procedurally computes a flow field based on stroke properties extracted from the painting and animates water flows artistically and stylishly. Technically, our system first extracts water-flow-portraying strokes using their locations, oscillation frequencies, brush patterns, and ink densities. We construct an initial flow pattern by analyzing stroke structures, ink dispersion densities, and placement densities. We cluster the extracted strokes into stroke-pattern groups to further convey the spirit of the original painting. Then, the system automatically computes a flow field according to the initial flow patterns, water boundaries, and flow obstacles. Finally, our system dynamically generates and animates the extracted stroke-pattern groups with the constructed field for controllable smoothness and temporal coherence. Users can interactively place the extracted stroke patterns onto other paintings for water flow animation through our adapted Poisson-based composition. In conclusion, our system can visually transform a static Chinese painting into an interactive walk-through with seamless and vivid stroke-based flow animations in its original dynamic spirit, without flickering artifacts.
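
A toy sketch of one idea behind computing a flow field from strokes: propagating sparse stroke directions into a dense, smooth 2-D field by simple relaxation. The paper's actual field computation also respects water boundaries and obstacles, which this illustration omits; all names and numbers here are assumptions.

```python
import numpy as np

def propagate_flow(shape, seeds, iterations=500):
    """Fill a dense 2-D flow field from sparse direction constraints by Jacobi relaxation.

    seeds: dict mapping (row, col) -> (vx, vy) unit direction extracted from a stroke.
    Constrained cells stay fixed; free cells relax toward the average of their neighbours,
    giving a smooth field that follows the stroke directions.
    """
    field = np.zeros(shape + (2,))
    mask = np.zeros(shape, bool)
    for (r, c), v in seeds.items():
        field[r, c] = v
        mask[r, c] = True
    for _ in range(iterations):
        avg = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
        field = np.where(mask[..., None], field, avg)   # keep constrained cells fixed
    norm = np.linalg.norm(field, axis=-1, keepdims=True)
    return field / np.maximum(norm, 1e-8)

# Hypothetical constraints: two strokes pushing the flow rightward and diagonally
seeds = {(10, 10): (1.0, 0.0), (40, 30): (0.7, 0.7)}
flow = propagate_flow((64, 64), seeds)
```
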

10.
J Lab Autom; 19(5): 492-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25006038

ABSTRACT

The low efficiency of the diffusion mechanism poses a significant barrier to enhancing micromixing efficiency in microfluidics. Actuating artificial cilia to increase the contact area of two flow streams during micromixing provides a promising alternative for enhancing mixing performance. Real-time adjustment of the beating behavior of artificial cilia is necessary to accommodate the various biological/chemical reagents with different hydrodynamic properties that are processed in a single microfluidic platform during micromixing. Equipping the microfluidic device with a self-troubleshooting feature for the end user, such as a bubble removal function during the injection of multiple chemical solutions, is also essential for robust micromixing. To meet these requirements, we introduce a new beating-control concept in which the beating behavior of the artificial cilia is controlled through remote and simultaneous actuation by human fingertip drawing. A series of micromixing test cases under extreme flow conditions (Re < 10⁻³) was conducted in the designed micromixer with high mixing performance. Satisfactory micromixing efficiency was achieved even with a rapid beating trajectory of the artificial cilia actuated through the fingertip motion of end users. The analytical paradigm and results allow end users to troubleshoot technical difficulties encountered during micromixing operations.


Subjects
Microfluidics/instrumentation, Microfluidics/methods, Rheology/instrumentation, Rheology/methods, Robotics/methods, Humans, Magnetism, Time Factors
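
The Re < 10⁻³ regime quoted above comes from the channel Reynolds number; a short sketch of that estimate is given below with hypothetical microchannel parameters (not the device's actual dimensions).

```python
def reynolds_number(density, velocity, hydraulic_diameter, viscosity):
    """Re = rho * v * D_h / mu for channel flow."""
    return density * velocity * hydraulic_diameter / viscosity

# Hypothetical microchannel: water at ~20 C, 100 um hydraulic diameter, 1 mm/s mean velocity
re = reynolds_number(density=998.0,               # kg/m^3
                     velocity=1e-3,               # m/s
                     hydraulic_diameter=100e-6,   # m
                     viscosity=1.0e-3)            # Pa*s
print(f"Re = {re:.1e}")   # ~1e-1 here; slower flow in a narrower channel reaches Re < 1e-3
```
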
11.
IEEE Trans Vis Comput Graph; 18(6): 902-13, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21690652

ABSTRACT

Field design has wide applications in graphics and visualization. One of the main challenges in field design has been how to provide users with both intuitive control over the directions in the field and robust management of its topology. In this paper, we present a design paradigm for line fields that addresses this challenge. Rather than asking users to input all singularities, as in most methods that offer topology control, we let the user provide a partitioning of the domain and specify simple flow patterns within the partitions. Represented by a selected set of harmonic functions, the elementary fields within the partitions are then combined to form continuous fields with rich appearances and well-determined topology. Our method allows a user to conveniently design the flow patterns while retaining precise and robust control over the topological structure. Based on this method, we developed an interactive tool for designing line fields from images and demonstrated the utility of the resulting fields in image stylization.
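
A minimal sketch of building one elementary line field inside a partition from a harmonic function, in the spirit of the abstract; the particular function and grid are illustrative choices, not taken from the paper.

```python
import numpy as np

# The real part of the analytic function (x + iy)^2 is harmonic: h(x, y) = x^2 - y^2,
# whose Laplacian is identically zero. Its gradient yields a saddle-like flow pattern.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
h = x**2 - y**2
gx, gy = 2 * x, -2 * y                 # analytic gradient of h

# A line field is direction without orientation, so normalise and treat vectors up to sign.
norm = np.maximum(np.hypot(gx, gy), 1e-8)
field = np.stack([gx / norm, gy / norm], axis=-1)   # shape (64, 64, 2)
```
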

12.
IEEE Trans Vis Comput Graph; 14(4): 948-60, 2008.
Article in English | MEDLINE | ID: mdl-18467767

ABSTRACT

We present a novel post-processing utility called the adaptive geometry image (AGIM) for global parameterization techniques that can embed a 3D surface onto a rectangular domain. This utility first converts a single rectangular parameterization into many different tessellations of square geometry images (GIMs) and then efficiently packs these GIMs into an image called the AGIM. Therefore, undersampled regions of the input parameterization can be up-sampled accordingly until the local reconstruction error bound is met. The connectivity of the AGIM can be quickly computed and dynamically changed at rendering time. The AGIM does not have T-vertices, so no cracks are generated between two neighboring GIMs at different tessellations. Experimental results show that the AGIM achieves a significant PSNR gain over the input parameterization. The AGIM retains the advantages of the original GIM and reduces the reconstruction error present in the original GIM technique. The AGIM is also applicable to global parameterization techniques based on quadrilateral complexes. Using the approximate sampling rates, PolyCube-based quadrilateral complexes with the AGIM can outperform the state-of-the-art multichart GIM technique in terms of PSNR.


Subjects
Algorithms, Computer Graphics, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Theoretical Models, Computer-Assisted Numerical Analysis, User-Computer Interface, Computer Simulation
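
The PSNR measure used above to compare reconstructions can be sketched briefly. The peak definition and the synthetic geometry-image data are assumptions for illustration only.

```python
import numpy as np

def psnr(reference, reconstructed, peak=None):
    """Peak signal-to-noise ratio between a reference signal and its reconstruction."""
    reference = np.asarray(reference, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    mse = np.mean((reference - reconstructed) ** 2)
    if peak is None:
        peak = reference.max() - reference.min()   # use the data range as the peak value
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

# Hypothetical geometry images: per-pixel 3-D positions on a 64 x 64 grid
ref = np.random.rand(64, 64, 3)
rec = ref + np.random.normal(0, 0.01, ref.shape)   # slightly perturbed reconstruction
print(f"PSNR = {psnr(ref, rec):.1f} dB")
```
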