1 - 20 of 26,162
1.
Skin Res Technol ; 30(5): e13690, 2024 May.
Article En | MEDLINE | ID: mdl-38716749

BACKGROUND: The response of AI in situations that mimic real-life scenarios is poorly explored in highly diverse populations. OBJECTIVE: To assess the accuracy and validate the relevance of an automated, algorithm-based analysis of facial attributes devoted to the adornment routines of women. METHODS: In a cross-sectional study, two diversified groups with similar distributions of age, ancestry, skin phototype, and geographical location were created from the selfie images of 1041 women in a US population. 521 images were analyzed as part of a new training dataset aimed at improving the original algorithm, and 520 were used to validate the performance of the AI. All images were analyzed for a total of 23 facial attributes (16 continuous and 7 categorical) by 24 make-up experts and by the automated descriptor tool. RESULTS: For all facial attributes, both the new and the original automated tools surpassed the grading of the experts on a diverse population of women. For the 16 continuous attributes, the gradings obtained by the new system correlated strongly with the assessments made by make-up experts (r ≥ 0.80; p < 0.0001) and were supported by a low error rate. For the seven categorical attributes, the overall accuracy of the AI facial descriptor was improved by enriching the training dataset, although weaker performance in spotting some specific facial attributes was noted. CONCLUSION: The AI-based automatic facial descriptor tool was deemed accurate for the analysis of facial attributes in diverse women, although some skin complexion, eye color, and hair features require further fine-tuning.
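A minimal illustration of the validation metrics reported above (Pearson r for continuous attributes, accuracy for categorical ones), using synthetic placeholder gradings rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Continuous attribute (e.g., a 0-100 grading scale) for 520 validation images.
expert_cont = rng.uniform(0, 100, size=520)
ai_cont = expert_cont + rng.normal(0, 5, size=520)  # AI grading with some noise

r, p = pearsonr(expert_cont, ai_cont)
print(f"continuous attribute: r = {r:.2f}, p = {p:.1e}")

# Categorical attribute (e.g., an eye-color class), graded by experts and the AI.
expert_cat = rng.integers(0, 4, size=520)
ai_cat = np.where(rng.random(520) < 0.9, expert_cat, rng.integers(0, 4, size=520))

accuracy = np.mean(ai_cat == expert_cat)
print(f"categorical attribute: accuracy = {accuracy:.1%}")
```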


Algorithms , Face , Humans , Female , Cross-Sectional Studies , Adult , Face/anatomy & histology , Face/diagnostic imaging , United States , Middle Aged , Young Adult , Photography , Reproducibility of Results , Artificial Intelligence , Adolescent , Aged , Skin Pigmentation/physiology
2.
Nutrients ; 16(9)2024 Apr 26.
Article En | MEDLINE | ID: mdl-38732541

Nuts are nutrient-dense foods and can be incorporated into a healthy diet. Artificial intelligence-powered diet-tracking apps may promote nut consumption by providing real-time, accurate nutrition information, but they depend on the availability of data and models. Our team developed a dataset of 1380 photographs, each in RGB color format with a resolution of 4032 × 3024 pixels, featuring 11 types of commonly consumed nuts. Each photo includes three nut types, with 2-4 nuts of each type, so 6-9 nuts appear in each image. Rectangular bounding boxes were drawn using the visual geometry group (VGG) image annotator to delineate the location of each nut within the images. This makes the dataset an excellent resource for training multi-label classification and object-detection models, and it was divided into training, validation, and test subsets. Using transfer learning in Python with the IceVision framework, deep neural network models were trained to recognize and localize the nuts depicted in the photographs. The final model achieved a mean average precision of 0.7596 in identifying the nut types in the validation subset and a 97.9% accuracy rate in determining the number and kinds of nuts present in the test subset. By integrating specific nutritional data for each type of nut, the model can calculate, with error margins of 0.8 to 2.6%, the combined nutritional content of the nuts shown in a photograph, covering total energy, proteins, carbohydrates, fats (total and saturated), fiber, vitamin E, and essential minerals such as magnesium, phosphorus, copper, manganese, and selenium. Both the dataset and the model have been made publicly available to foster data exchange and the spread of knowledge. Our research underscores the potential of leveraging photographs for automated estimation of nut calorie and nutritional content, paving the way for dietary tracking applications that offer real-time, precise nutritional insights to encourage nut consumption.
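A minimal sketch of the final step described above, in which per-nut nutrient values are summed over the detected nuts; the nutrient table and labels are illustrative placeholders, not the study's data or code:

```python
from collections import Counter

# Hypothetical per-nut nutritional values (kcal, protein g, fat g).
NUTRIENTS_PER_NUT = {
    "almond": {"energy_kcal": 7.0, "protein_g": 0.26, "fat_g": 0.61},
    "cashew": {"energy_kcal": 8.7, "protein_g": 0.28, "fat_g": 0.69},
    "walnut": {"energy_kcal": 26.0, "protein_g": 0.61, "fat_g": 2.61},
}

def total_nutrition(detections):
    """Sum nutrient content over a list of detected nut labels."""
    counts = Counter(detections)
    totals = {"energy_kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0}
    for label, n in counts.items():
        for key, value in NUTRIENTS_PER_NUT[label].items():
            totals[key] += n * value
    return totals

# Example: labels as they might come out of the detector for one photo.
print(total_nutrition(["almond", "almond", "cashew", "walnut", "walnut", "walnut"]))
```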


Neural Networks, Computer , Nutritive Value , Nuts , Photography , Humans , Deep Learning , Nutrients/analysis
3.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article En | MEDLINE | ID: mdl-38732872

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we exploit the dynamics of the controlled, exercise-induced motion to restrict the tracking process to a smaller region of the image. We demonstrate the feasibility of detecting the transmitting source within the frames and of confining the search to this smaller region, achieving a reduction of 87.3% for mild exercise and 79.0% for intense exercise.
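A minimal sketch of the ROI-restricted template search described above, using OpenCV normalized cross-correlation on synthetic arrays; the ROI coordinates and template are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
import cv2

# Synthetic frame with an LED-like bright blob, and a matching template.
frame = np.zeros((1080, 1920), dtype=np.uint8)
frame[400:410, 900:910] = 255
template = np.zeros((20, 20), dtype=np.uint8)
template[5:15, 5:15] = 255

# Hypothetical ROI derived from the expected range of exercise-induced motion.
x0, y0, x1, y1 = 700, 300, 1200, 600
roi = frame[y0:y1, x0:x1]

# Normalized cross-correlation between the ROI and the template.
result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
led_x, led_y = max_loc[0] + x0, max_loc[1] + y0  # back to full-frame coordinates
print(f"LED detected near ({led_x}, {led_y}) with score {max_val:.2f}")

reduction = 1 - ((x1 - x0) * (y1 - y0)) / (frame.shape[1] * frame.shape[0])
print(f"search area reduced by {reduction:.1%}")
```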


Algorithms , Exercise , Wearable Electronic Devices , Humans , Exercise/physiology , Image Processing, Computer-Assisted/methods , Photography/instrumentation , Photography/methods , Delivery of Health Care
4.
J Vis ; 24(5): 1, 2024 May 01.
Article En | MEDLINE | ID: mdl-38691088

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects exhibit a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the colors of familiar objects are perceived as more saturated, warm colors would be relatively more saturated than cool colors in still life paintings compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and in 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relative chromatic contrast of warm colors was greater in paintings than in photographs, consistent with the hypothesis.
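A minimal sketch of one way to compare the saturation of warm versus cool hues in an image, loosely following the reasoning above; the synthetic image and hue split are illustrative assumptions, not the study's analysis:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(200, 200, 3), dtype=np.uint8)  # placeholder image

hsv = np.asarray(Image.fromarray(rgb, "RGB").convert("HSV"), dtype=float)
hue, sat = hsv[..., 0], hsv[..., 1]

# Treat hues near red/orange/yellow as "warm", the rest as "cool"
# (PIL hue runs 0-255, so roughly 0-60 and 220-255 cover the warm range).
warm_mask = (hue <= 60) | (hue >= 220)

warm_sat = sat[warm_mask].mean()
cool_sat = sat[~warm_mask].mean()
print(f"mean saturation, warm hues: {warm_sat:.1f}, cool hues: {cool_sat:.1f}")
```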


Color Perception , Fruit , Paintings , Photography , Humans , Color Perception/physiology , Photography/methods , Color , Contrast Sensitivity/physiology
5.
Ann Med ; 56(1): 2352018, 2024 Dec.
Article En | MEDLINE | ID: mdl-38738798

BACKGROUND: Diabetic retinopathy (DR) is a common complication of diabetes and may lead to irreversible visual loss. Efficient screening and improved treatment of both diabetes and DR have improved the visual prognosis of DR. The number of patients with diabetes is increasing, and telemedicine, mobile handheld devices and automated solutions may alleviate the burden on healthcare. We compared the performance of 21 artificial intelligence (AI) algorithms for referable DR screening on datasets captured with the handheld Optomed Aurora fundus camera in a real-world setting. PATIENTS AND METHODS: Prospective study of 156 patients (312 eyes) attending DR screening and follow-up. Both papilla- and macula-centred 50° fundus images were taken of each eye. DR was graded by experienced ophthalmologists and by the 21 AI algorithms. RESULTS: Most eyes, 183 out of 312 (58.7%), had no DR, and mild NPDR was noted in 21 (6.7%) of the eyes. Moderate NPDR was detected in 66 (21.2%) of the eyes, severe NPDR in 1 (0.3%), and PDR in 41 (13.1%), giving a group of 34.6% of eyes with referable DR. The AI algorithms achieved a mean agreement of 79.4% for referable DR, but the results varied from 49.4% to 92.3%. The mean sensitivity for referable DR was 77.5% (95% CI 69.1-85.8) and specificity 80.6% (95% CI 72.1-89.2). The rate of images ungradable by AI varied from 0% to 28.2% (mean 1.9%). Nineteen of the 21 (90.5%) AI algorithms provided a DR grading for at least 98% of the images. CONCLUSIONS: Fundus images captured with the Optomed Aurora were suitable for DR screening. The performance of the AI algorithms varied considerably, emphasizing the need for external validation of screening algorithms in real-world settings before their clinical application.
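A minimal sketch of the per-algorithm evaluation reported above (agreement, sensitivity and specificity for referable DR against the ophthalmologists' grading), using synthetic placeholder labels rather than the study data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1 = referable DR, 0 = non-referable, for 312 eyes.
truth = rng.random(312) < 0.346
ai = np.where(rng.random(312) < 0.85, truth, ~truth)  # an imperfect AI grader

tp = np.sum(ai & truth)
tn = np.sum(~ai & ~truth)
fp = np.sum(ai & ~truth)
fn = np.sum(~ai & truth)

agreement = (tp + tn) / len(truth)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"agreement {agreement:.1%}, sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```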


What is already known on this topic? Diabetic retinopathy (DR) is a common complication of diabetes. Efficient screening and timely treatment are important to avoid the development of sight-threatening DR. The increasing number of patients with diabetes and DR poses a challenge for healthcare. What does this study add? Telemedicine, mobile handheld devices and artificial intelligence (AI)-based automated algorithms are likely to alleviate this burden by improving the efficacy of DR screening programs. Reliable, high-quality algorithms exist despite the variability between solutions. How might this study affect research, practice or policy? AI algorithms improve the efficacy of screening and might be implemented in clinical use after thorough validation in a real-life setting.


Algorithms , Artificial Intelligence , Diabetic Retinopathy , Fundus Oculi , Humans , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Female , Prospective Studies , Middle Aged , Male , Aged , Adult , Photography/instrumentation , Mass Screening/methods , Mass Screening/instrumentation , Sensitivity and Specificity
6.
Soc Sci Med ; 350: 116921, 2024 Jun.
Article En | MEDLINE | ID: mdl-38723586

Poor mental health among U.S. adolescents has reached epidemic proportions, with those from the Middle East and North Africa region exhibiting increased risk of distress and suicide ideation. This mixed-methods study analyzes quantitative data from first- and second-generation Arab adolescents (n = 171) and qualitative data from a participatory study conducted with 11 adolescents of the same population to understand the role of cultural resources in coping. Drawing on the Intersectional Theory of Cultural Repertoires in Health, we show that: 1) cultural resources underlie meaning-making throughout coping; 2) coping strategies are inseparable from the influence of peer and familial relationships, as dictated by social norms and other cultural resources; 3) collectively held repertoires of coping can promote belonging, affirm identity, and protect against discrimination; and 4) the outcomes of coping strategies, and the culturally informed meanings individuals make of these outcomes, influence their future coping behaviors.


Adaptation, Psychological , Arabs , Social Stigma , Humans , Adolescent , Female , Male , Arabs/psychology , Arabs/statistics & numerical data , Qualitative Research , Photography
7.
PLoS One ; 19(5): e0303168, 2024.
Article En | MEDLINE | ID: mdl-38758960

INTRODUCTION: Globally, a shift is occurring to recognize the importance of young people's health and well-being, their unique health challenges, and the potential they hold as key drivers of change in their communities. In Haiti, one of the four leading causes of death for those 20-24 years old is pregnancy, childbirth, and the weeks after birth or at the end of a pregnancy. Important gaps remain in existing knowledge about youth perspectives of maternal health and well-being within their communities. Youth with lived experiences of maternal near-misses are well-positioned to contribute to the understanding of maternal health in their communities and their potential role in bringing about change. OBJECTIVES: To explore and understand youth perspectives of maternal near-miss experiences that occurred in a local healthcare facility or at home in rural Haiti. METHODS: We will conduct a qualitative, community-based participatory research study regarding maternal near-miss experiences to understand current challenges and identify solutions to improve community maternal health, specifically focused on youth maternal health. We will use Photovoice to seek an understanding of the lived experiences of youth maternal near-miss survivors. Participants will be from La Pointe, a Haitian community served by their local healthcare facility. We will undertake purposeful sampling to recruit approximately 20 female youth, aged 15-24 years. Data will be generated through photos, individual interviews and small group discussions (grouped by setting of near-miss experience). Data generation and analysis are expected to occur over a three-month period. ETHICS AND DISSEMINATION: Ethics approval will be sought from Centre Médical Béraca in La Pointe, Haiti, and from the Hamilton Integrated Research Ethics Board in Hamilton, ON, Canada. We will involve community stakeholders, especially youth, in developing dissemination and knowledge mobilisation strategies. Our findings will be disseminated as an open-access publication, presented publicly and at conferences, and defended as part of a doctoral thesis.


Maternal Health , Humans , Female , Haiti , Pregnancy , Adolescent , Young Adult , Near Miss, Healthcare , Community-Based Participatory Research , Rural Population , Photography , Qualitative Research , Adult
8.
J Drugs Dermatol ; 23(5): e132-e133, 2024 05 01.
Article En | MEDLINE | ID: mdl-38709690

Skin self-examinations play a vital role in skin cancer detection and are often aided by online resources. Available reference photos must display the full spectrum of skin tones so patients may visualize how skin lesions can appear. This study investigated the portrayal of skin tones in skin cancer-related Google Images, discovering a significant underrepresentation of darker skin tones. J Drugs Dermatol. 2024;23(5):e132-e133. doi:10.36849/JDD.7886e.


Skin Neoplasms , Skin Pigmentation , Humans , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Photography , Self-Examination/methods , Skin/pathology , Internet , Search Engine
9.
J Drugs Dermatol ; 23(5): e137-e138, 2024 05 01.
Article En | MEDLINE | ID: mdl-38709691

When patients self-detect suspicious skin lesions, they often reference online photos prior to seeking medical evaluation. Online images must be available in the full spectrum of skin tones to provide accurate visualizations of disease, especially given the increased morbidity and mortality from skin cancer in patients with darker skin tones. The purpose of this study was to evaluate the representation of skin tones in photos of skin cancer on patient-facing websites. Six federally based and organizational websites were evaluated, and of the 372 total representations identified, only 49 (13.2%) depicted darker skin tones. This highlights the need to improve skin tone representation on patient-facing online resources. J Drugs Dermatol. 2024;23(5):e137-e138. doi:10.36849/JDD.7905e.


Internet , Patient Education as Topic , Skin Neoplasms , Skin Pigmentation , Humans , Skin Neoplasms/diagnosis , Patient Education as Topic/methods , Photography , Skin
10.
Hist Cienc Saude Manguinhos ; 31: e2024020, 2024.
Article Es | MEDLINE | ID: mdl-38775521

To study and reflect on disease is to highlight the ways of seeing and speaking about what a body can do and its capacity to be affected by the marks and traces that degrade it. This article presents the epistemological underpinnings of a research project on the social representations of disease (within which medical knowledge is inscribed), based on the record of clinical dermatology in the second half of the nineteenth century. To this end, it draws on an analysis of medical photographs preserved in archives in Colombia and Spain, taking as its discursive horizon the ways of seeing and speaking about diseases that have disfiguring effects on the body.


Photography , Photography/history , Humans , History, 19th Century , Spain , Colombia , Dermatology/history , Skin Diseases/history , History, 20th Century
11.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article En | MEDLINE | ID: mdl-38564203

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images from the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image dataset was split into a training set and an independent test set following an 80:20 ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained on the training set and evaluated on the test set. Results: A total of 2489 UWF images were included in the dataset, resulting in a training set of 2008 UWF images and a test set of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the test set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment, and an AUC of 0.913 for retinal breaks. Conclusions: A deep learning system to detect retinal breaks and retinal detachment on UWF images is feasible and has good specificity. This is relevant for clinical routine, as there can be a high rate of missed breaks in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to the diagnosis of peripheral retinal breaks on UWF fundus images in clinical routine.
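A minimal sketch of the classification setup described above: an EfficientNet backbone with a replaced head, trained with cross-entropy and evaluated by AUC. Random tensors stand in for the annotated UWF images; this shows the general pattern rather than the authors' pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 4  # breaks only / breaks + detachment / detachment only / combined

# weights=None keeps the sketch self-contained; pretrained weights would
# normally be used for transfer learning.
model = models.efficientnet_b0(weights=None)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 8 preprocessed images and their annotated group labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 2, 3, 0, 1, 2])

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# AUC for one lesion group (here group 1) on the same placeholder batch.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images), dim=1)[:, 1]
auc = roc_auc_score((labels == 1).numpy(), probs.numpy())
print(f"AUC on placeholder data: {auc:.3f}")
```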


Deep Learning , Retinal Detachment , Retinal Perforations , Humans , Retinal Detachment/diagnosis , Artificial Intelligence , Photography
14.
Technol Cult ; 65(1): 1-5, 2024.
Article En | MEDLINE | ID: mdl-38661791

The cover of this issue of Technology and Culture illustrates how China implemented, and promoted, on-the-job training in Africa. The image shows a Tanzanian dentist practicing dentistry under the supervision of a Chinese doctor in rural Tanzania, probably in the 1970s. Despite the ineffectiveness of the on-the-job training model, the photograph attempts to project the success of the dental surgery techniques exchanged between China and Tanzania, using simple medical equipment rather than sophisticated medical knowledge. The rural setting reflects the ideological struggle of the Cold War era, when Chinese doctors and rural mobile clinics sought to save lives in the countryside, while doctors from other countries engaged in Cold War competition worked primarily in cities. This essay argues that images were essential propaganda tools during the Cold War and urges historians of technology to use images critically by considering the contexts that influenced their creation.


Inservice Training , China , History, 20th Century , Humans , Inservice Training/history , Tanzania , Rural Health Services/history , Photography/history
15.
Cutis ; 113(3): 141-142, 2024 Mar.
Article En | MEDLINE | ID: mdl-38648596

Precise wound approximation during cutaneous suturing is of vital importance for optimal closure and long-term scar outcomes. Utilizing smartphone camera technology as a quality-control checkpoint for objective evaluation allows the dermatologic surgeon to scrutinize the wound edges and refine their surgical technique to improve scar outcomes.


Cicatrix , Smartphone , Suture Techniques , Humans , Suture Techniques/instrumentation , Photography , Dermatologic Surgical Procedures/instrumentation , Dermatologic Surgical Procedures/methods , Epidermis
16.
BMC Psychol ; 12(1): 233, 2024 Apr 25.
Article En | MEDLINE | ID: mdl-38664723

BACKGROUND: Organizational accounts on social networking sites (SNSs) resemble individual accounts in their online behaviors. Thus, they can be investigated from the perspective of personality, as individual accounts have been in the literature. Focusing on startups' Instagram accounts, this study aimed to investigate the characteristics of Big Five personality traits and the relationships between the traits and the characteristics of photos in organizational SNS accounts. METHODS: The personality traits of 108 startups' accounts were assessed with an online artificial intelligence service, and a correspondence analysis was performed to identify the key dimensions along which the accounts were distributed by their personality. Photo features were extracted at the content and pixel levels, and correlational analyses between personality traits and photo features were conducted. Moreover, predictive analyses were performed using random forest regression models. RESULTS: The accounts showed high openness, agreeableness, and conscientiousness and moderate extraversion and neuroticism. In addition, two dimensions, high vs. low neuroticism and extraversion/openness vs. conscientiousness/agreeableness, were identified in the accounts' distribution by personality traits. Conscientiousness was the trait most associated with photo features, in particular with content category, pixel-color, and visual features, while agreeableness was the trait least associated with photo features. Neuroticism was correlated mainly with pixel-level features, openness mainly with pixel-color features, and extraversion mainly with facial features. All personality traits except neuroticism could be predicted from the photo features. CONCLUSIONS: This study applied the theoretical lens of personality, which has mainly been used to examine individuals' behaviors, to investigate the SNS communication of startups. Moreover, it focused on the visual communication of organizational accounts, which has not been actively studied in the literature. This study has implications for expanding the realm of personality research to organizational SNS accounts.
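A minimal sketch of the predictive analysis described above, fitting a random forest regressor from photo features to a trait score; the feature matrix and scores are synthetic placeholders, not the study data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_accounts, n_photo_features = 108, 30
X = rng.normal(size=(n_accounts, n_photo_features))          # e.g., color, content, face features
y = X[:, :5].mean(axis=1) + rng.normal(0, 0.3, n_accounts)   # a synthetic trait score

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} ± {scores.std():.2f}")
```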


Personality , Photography , Social Media , Humans , Adult , Male , Female , Artificial Intelligence , Neuroticism
17.
Meat Sci ; 213: 109503, 2024 Jul.
Article En | MEDLINE | ID: mdl-38579510

This study aims to describe the meat quality of young Holstein (HOL) beef-on-dairy heifers and bulls sired by Angus (ANG, n = 109), Charolais (CHA, n = 101) and Danish Blue (DBL, n = 127), and to investigate the performance of the handheld vision-based Q-FOM™ Beef camera in predicting the intramuscular fat concentration (IMF%) in M. longissimus thoracis from carcasses quartered at the 5th-6th thoracic vertebra. The results showed significant differences between crossbreeds and sexes in carcass characteristics and meat quality. DBL × HOL had the highest EUROP conformation scores, whereas ANG × HOL had darker meat with higher IMF% (3.52%) compared to CHA × HOL (2.99%) and DBL × HOL (2.51%). Bulls had higher EUROP conformation scores than heifers, and heifers had higher IMF% (3.70%) than bulls (2.31%). These findings indicate the potential for producing high-quality meat from beef-on-dairy heifers and ANG bulls. The IMF% prediction model for the Q-FOM performed well, with R2 = 0.91 and a root mean squared error of cross-validation (RMSE_CV) of 1.33%. On the beef-on-dairy veal subsample, ranging from 0.9 to 7.4% IMF, the prediction model had lower accuracy (R2 = 0.48) and a prediction error (RMSE_veal) of 1.00%. When grouping beef-on-dairy veal carcasses into three IMF% classes (2.5% IMF bins), 62.6% of the carcasses were accurately predicted. Furthermore, Q-FOM IMF% predictions and chemically determined IMF% were similar for each combination of sex and crossbreed, revealing the potential for Q-FOM IMF% predictions to be used in breeding when aiming for higher meat quality.
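A minimal sketch of the evaluation described above: RMSE of predicted versus chemically determined IMF%, plus class accuracy after binning into 2.5% IMF classes; the prediction arrays are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)

imf_chemical = rng.uniform(0.9, 7.4, size=120)               # reference IMF%
imf_predicted = imf_chemical + rng.normal(0, 1.0, size=120)  # camera predictions

rmse = np.sqrt(np.mean((imf_predicted - imf_chemical) ** 2))

# Three classes with 2.5% IMF bins: <2.5, 2.5-5.0, >=5.0
bins = [2.5, 5.0]
class_true = np.digitize(imf_chemical, bins)
class_pred = np.digitize(imf_predicted, bins)
class_accuracy = np.mean(class_true == class_pred)

print(f"RMSE = {rmse:.2f}% IMF, class accuracy = {class_accuracy:.1%}")
```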


Adipose Tissue , Muscle, Skeletal , Red Meat , Thoracic Vertebrae , Animals , Cattle , Male , Red Meat/analysis , Female , Adipose Tissue/chemistry , Muscle, Skeletal/chemistry , Photography , Color , Breeding
18.
Meat Sci ; 213: 109500, 2024 Jul.
Article En | MEDLINE | ID: mdl-38582006

The objective of this study was to develop calibration models for rib eye traits and to independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. The study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm², slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, as equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
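A minimal sketch of the validation statistics quoted above (R2, RMSEP, slope, bias) for a continuous trait such as eye muscle area, computed on synthetic placeholder data:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)

reference = rng.uniform(40, 110, size=200)                   # e.g., grader EMA in cm^2
predicted = 0.96 * reference + 3.0 + rng.normal(0, 4, 200)   # camera predictions

rmsep = np.sqrt(np.mean((predicted - reference) ** 2))       # root mean squared error of prediction
bias = np.mean(predicted - reference)
fit = linregress(reference, predicted)                       # slope and correlation

print(f"R^2 = {fit.rvalue**2:.2f}, RMSEP = {rmsep:.1f}, "
      f"slope = {fit.slope:.2f}, bias = {bias:.1f}")
```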


Adipose Tissue , Color , Muscle, Skeletal , Photography , Red Meat , Animals , Australia , Cattle , Red Meat/analysis , Red Meat/standards , Photography/methods , Calibration , Phenotype , Reproducibility of Results , Ribs
19.
Vet Rec ; 194(9): e4088, 2024 05 04.
Article En | MEDLINE | ID: mdl-38637964

BACKGROUND: Ophthalmoscopy is a valuable tool in clinical practice. We report the use of a novel smartphone-based handheld device for visualisation and photo-documentation of the ocular fundus in veterinary medicine. METHODS: Selected veterinary patients of a referral ophthalmology service were included if one or both eyes had clear ocular media, allowing for examination of the fundus. Following pharmacological mydriasis, fundic images were obtained with a handheld fundus camera (Volk VistaView). For comparison, the fundus of a subset of animals was also imaged with a veterinary-specific fundus camera (Optomed Smartscope VET2). RESULTS: The large field of view achieved by the Volk VistaView allowed for rapid and thorough observation of the ocular fundus in animals, providing a tool to visualise and record common pathologies of the posterior segment. Captured fundic images were sometimes overexposed, with the tapetal fundus artificially appearing hyperreflective when using the Volk VistaView camera, a finding that was less frequent when activating a 'veterinary mode' that reduced the sensitivity of the camera's sensor. The Volk VistaView compared well with the Optomed Smartscope VET2. LIMITATION: The main study limitation was the small sample size. CONCLUSIONS: The Volk VistaView camera was easy to use and provided good-quality fundic images in veterinary patients with healthy or diseased eyes, offering a wide field of view that was ideal for screening purposes.


Retinal Diseases , Smartphone , Veterinary Medicine , Animals , Retinal Diseases/veterinary , Retinal Diseases/diagnosis , Veterinary Medicine/instrumentation , Ophthalmoscopy/veterinary , Ophthalmoscopy/methods , Fundus Oculi , Photography/veterinary , Photography/instrumentation , Dogs , Dog Diseases/diagnosis , Cats
20.
Appetite ; 198: 107377, 2024 Jul 01.
Article En | MEDLINE | ID: mdl-38679064

Most instruments measuring nutrition literacy evaluate theoretical knowledge and do not necessarily reflect skills relevant to food choices. We aimed to develop and validate a photograph-based instrument to assess nutrition literacy (NUTLY) among adults in Portugal. NUTLY assesses the ability to distinguish foods with different nutritional profiles: from each of several combinations of three photographs (two foods with similar content and one with higher content), participants are asked to identify the food with the highest energy/sodium content. The NUTLY version with 79 combinations, obtained after evaluation by experts and lay people, was applied to a sample representing different age, gender and education groups (n = 329). Dimensionality was evaluated through latent trait models. Combinations with negative or small positive factor loadings were excluded after critical assessment. Internal consistency was measured using Cronbach's alpha, and construct validity by comparing NUTLY scores with those obtained in the Medical Term Recognition Test and the Newest Vital Sign (NVS), and across groups defined by education and training in nutrition/health. The cut-off to distinguish adequate from inadequate nutrition literacy was defined through ROC analysis using the Youden index criterion, after a latent class analysis identified a two-class model as having the best goodness of fit. Test-retest reliability was assessed after one month (n = 158). The final NUTLY scale was unidimensional and included 48 combinations (energy: 33; sodium: 15; α = 0.74). Mean scores (±standard deviation) were highest among nutritionists (39.9 ± 4.4), followed by health professionals (38.5 ± 4.1), and declined with decreasing education (p < 0.001). Those with adequate nutrition literacy according to the NVS showed higher NUTLY scores (37.9 ± 4.3 vs. 33.9 ± 6.9, p < 0.001). Adequate nutrition literacy was defined as a NUTLY score ≥ 35 (sensitivity: 89.3%; specificity: 93.7%). Test-retest reliability was high (ICC = 0.77). NUTLY is a valid and reliable nutrition literacy measurement tool.
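A minimal sketch of the cut-off selection described above: choosing the score threshold that maximizes the Youden index (sensitivity + specificity - 1) on a ROC curve; the scores and reference labels are synthetic placeholders, not the NUTLY data:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)

# NUTLY scores and a binary reference classification (e.g., from the latent classes).
adequate = rng.random(329) < 0.6
scores = np.where(adequate, rng.normal(38, 4, 329), rng.normal(31, 5, 329))

fpr, tpr, thresholds = roc_curve(adequate, scores)
youden = tpr - fpr                    # sensitivity + specificity - 1
best = np.argmax(youden)

print(f"cut-off = {thresholds[best]:.0f}, "
      f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%}")
```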


Health Literacy , Photography , Humans , Female , Male , Adult , Reproducibility of Results , Portugal , Middle Aged , Young Adult , Health Knowledge, Attitudes, Practice , Aged , Surveys and Questionnaires/standards , Adolescent
...