Results 1 - 20 of 26,162
1.
Hist Cienc Saude Manguinhos ; 31: e2024020, 2024.
Article in Spanish | MEDLINE | ID: mdl-38775521

ABSTRACT

To study and reflect on disease is to foreground the ways of seeing and speaking about what a body can do and its capacity to be affected by the marks or traces that degrade it. This article presents the epistemological underpinnings of a study of the social representations of disease (within which medical knowledge is inscribed) from the register of clinical dermatology in the second half of the 19th century. To this end, it draws on an analysis of medical photographs preserved in archives in Colombia and Spain, taking as its discursive horizon the ways of seeing and speaking about disease that has disfiguring effects on the body.




Subject(s)
Photography , Photography/history , Humans , History, 19th Century , Spain , Colombia , Dermatology/history , Skin Diseases/history , History, 20th Century
2.
Skin Res Technol ; 30(5): e13690, 2024 May.
Article in English | MEDLINE | ID: mdl-38716749

ABSTRACT

BACKGROUND: The response of AI in situations that mimic real-life scenarios is poorly explored in highly diverse populations. OBJECTIVE: To assess the accuracy and validate the relevance of an automated, algorithm-based analysis of facial attributes relevant to the adornment routines of women. METHODS: In a cross-sectional study, two diversified groups with similar distributions of age, ancestry, skin phototype, and geographical location were created from the selfie images of 1041 women in a US population. 521 images were analyzed as part of a new training dataset aimed at improving the original algorithm, and 520 were used to validate the performance of the AI. For a total of 23 facial attributes (16 continuous and 7 categorical), all images were analyzed by 24 make-up experts and by the automated descriptor tool. RESULTS: For all facial attributes, the new and the original automated tools both surpassed the grading of the experts on a diverse population of women. For the 16 continuous attributes, the gradings obtained by the new system correlated strongly with the assessments made by the make-up experts (r ≥ 0.80; p < 0.0001), supported by a low error rate. For the seven categorical attributes, the overall accuracy of the AI facial descriptor was improved by enriching the training dataset; however, weaker performance in spotting some specific facial attributes was noted. CONCLUSION: The AI-automatic facial descriptor tool was deemed accurate for the analysis of facial attributes in diverse women, although some skin complexion, eye color, and hair features require further fine-tuning.
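For the continuous attributes, agreement of this kind is typically summarized with a Pearson correlation between the automated gradings and the expert panel's consensus. The following minimal Python sketch uses hypothetical placeholder scores, not the study's data, to show how the reported r and p values and a simple error measure can be computed.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical gradings for one continuous facial attribute (not the study's data).
ai_scores = np.array([2.1, 3.4, 4.0, 1.8, 3.1])             # automated descriptor tool
expert_scores = np.array([[2, 3, 4, 2, 3],                   # expert 1
                          [2, 4, 4, 2, 3],                   # expert 2
                          [3, 3, 5, 1, 3]])                  # expert 3

panel_mean = expert_scores.mean(axis=0)                      # consensus grading per image
r, p = pearsonr(ai_scores, panel_mean)                       # correlation reported as r and p
mae = np.mean(np.abs(ai_scores - panel_mean))                # a simple error-rate proxy
print(f"r = {r:.2f}, p = {p:.4f}, mean absolute error = {mae:.2f}")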


Subject(s)
Algorithms , Face , Humans , Female , Cross-Sectional Studies , Adult , Face/anatomy & histology , Face/diagnostic imaging , United States , Middle Aged , Young Adult , Photography , Reproducibility of Results , Artificial Intelligence , Adolescent , Aged , Skin Pigmentation/physiology
3.
Nutrients ; 16(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732541

ABSTRACT

Nuts are nutrient-dense foods and can be incorporated into a healthy diet. Artificial intelligence-powered diet-tracking apps may promote nut consumption by providing real-time, accurate nutrition information but depend on data and model availability. Our team developed a dataset comprising 1380 photographs, each in RGB color format and with a resolution of 4032 × 3024 pixels. These images feature 11 types of nuts that are commonly consumed. Each photo includes three nut types; each type consists of 2-4 nuts, so 6-9 nuts are in each image. Rectangular bounding boxes were drawn using a visual geometry group (VGG) image annotator to facilitate the identification of each nut, delineating their locations within the images. This approach renders the dataset an excellent resource for training models capable of multi-label classification and object detection, as it was meticulously divided into training, validation, and test subsets. Utilizing transfer learning in Python with the IceVision framework, deep neural network models were adeptly trained to recognize and pinpoint the nuts depicted in the photographs. The ultimate model exhibited a mean average precision of 0.7596 in identifying various nut types within the validation subset and demonstrated a 97.9% accuracy rate in determining the number and kinds of nuts present in the test subset. By integrating specific nutritional data for each type of nut, the model can precisely (with error margins ranging from 0.8 to 2.6%) calculate the combined nutritional content (encompassing total energy, proteins, carbohydrates, fats (total and saturated), fiber, vitamin E, and essential minerals like magnesium, phosphorus, copper, manganese, and selenium) of the nuts shown in a photograph. Both the dataset and the model have been made publicly available to foster data exchange and the spread of knowledge. Our research underscores the potential of leveraging photographs for automated nut calorie and nutritional content estimation, paving the way for the creation of dietary tracking applications that offer real-time, precise nutritional insights to encourage nut consumption.
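Once a detector has identified and counted the nuts of each type in a photo, the combined nutritional content follows from a count-weighted sum of per-nut reference values. The sketch below illustrates that aggregation step in Python; the nut types and per-nut figures are hypothetical placeholders, not values from the paper or a nutrient database.

# Per-nut reference values (illustrative placeholders only).
NUTRIENTS_PER_NUT = {
    "almond": {"energy_kcal": 7.0, "protein_g": 0.26, "fat_g": 0.61},
    "cashew": {"energy_kcal": 9.0, "protein_g": 0.30, "fat_g": 0.72},
    "walnut": {"energy_kcal": 26.0, "protein_g": 0.61, "fat_g": 2.61},
}

def photo_nutrition(detected_counts):
    """Sum nutrients over all nuts detected in one photo, e.g. {"almond": 3, "walnut": 2}."""
    totals = {"energy_kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0}
    for nut, count in detected_counts.items():
        for nutrient, per_nut in NUTRIENTS_PER_NUT[nut].items():
            totals[nutrient] += count * per_nut
    return totals

print(photo_nutrition({"almond": 3, "cashew": 2, "walnut": 2}))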


Subject(s)
Neural Networks, Computer , Nutritive Value , Nuts , Photography , Humans , Deep Learning , Nutrients/analysis
4.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames and of confining the tracking process to this smaller region, achieving a reduction of 87.3% for mild exercise and 79.0% for intense exercise.


Subject(s)
Algorithms , Exercise , Wearable Electronic Devices , Humans , Exercise/physiology , Image Processing, Computer-Assisted/methods , Photography/instrumentation , Photography/methods , Delivery of Health Care
5.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the color of familiar objects is perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relative chromatic contrast of warm colors was greater for paintings than for photographs, consistent with the hypothesis.


Subject(s)
Color Perception , Fruit , Paintings , Photography , Humans , Color Perception/physiology , Photography/methods , Color , Contrast Sensitivity/physiology
6.
J Drugs Dermatol ; 23(5): e132-e133, 2024 05 01.
Article in English | MEDLINE | ID: mdl-38709690

ABSTRACT

Skin self-examinations play a vital role in skin cancer detection and are often aided by online resources. Available reference photos must display the full spectrum of skin tones so patients may visualize how skin lesions can appear. This study investigated the portrayal of skin tones in skin cancer-related Google Images, discovering a significant underrepresentation of darker skin tones. J Drugs Dermatol. 2024;23(5):e132-e133.     doi:10.36849/JDD.7886e.


Subject(s)
Skin Neoplasms , Skin Pigmentation , Humans , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Photography , Self-Examination/methods , Skin/pathology , Internet , Search Engine
7.
J Drugs Dermatol ; 23(5): e137-e138, 2024 05 01.
Article in English | MEDLINE | ID: mdl-38709691

ABSTRACT

When patients self-detect suspicious skin lesions, they often reference online photos prior to seeking medical evaluation. Online images must be available across the full spectrum of skin tones to provide accurate visualizations of disease, especially given the increased morbidity and mortality from skin cancer in patients with darker skin tones. The purpose of this study was to evaluate the representation of skin tones in photos of skin cancer on patient-facing websites. Six federally based and organization-run websites were evaluated, and of the 372 total representations identified, only 49 (13.2%) depicted darker skin tones. This highlights the need to improve skin tone representation in patient-facing online resources. J Drugs Dermatol. 2024;23(5):e137-e138. doi:10.36849/JDD.7905e.


Subject(s)
Internet , Patient Education as Topic , Skin Neoplasms , Skin Pigmentation , Humans , Skin Neoplasms/diagnosis , Patient Education as Topic/methods , Photography , Skin
8.
Ann Med ; 56(1): 2352018, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38738798

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a common complication of diabetes and may lead to irreversible visual loss. Efficient screening and improved treatment of both diabetes and DR have improved the visual prognosis of DR. The number of patients with diabetes is increasing, and telemedicine, mobile handheld devices, and automated solutions may alleviate the burden on healthcare. We compared the performance of 21 artificial intelligence (AI) algorithms for referable DR screening on datasets captured with the handheld Optomed Aurora fundus camera in a real-world setting. PATIENTS AND METHODS: Prospective study of 156 patients (312 eyes) attending DR screening and follow-up. Both papilla- and macula-centred 50° fundus images were taken of each eye. DR was graded by experienced ophthalmologists and by the 21 AI algorithms. RESULTS: Most eyes, 183 out of 312 (58.7%), had no DR, and mild NPDR was noted in 21 (6.7%) of the eyes. Moderate NPDR was detected in 66 (21.2%) of the eyes, severe NPDR in 1 (0.3%), and PDR in 41 (13.1%), yielding a total of 34.6% of eyes with referable DR. The AI algorithms achieved a mean agreement of 79.4% for referable DR, but results varied from 49.4% to 92.3%. The mean sensitivity for referable DR was 77.5% (95% CI 69.1-85.8) and specificity 80.6% (95% CI 72.1-89.2). The rate of images ungradable by AI varied from 0% to 28.2% (mean 1.9%). Nineteen of the 21 (90.5%) AI algorithms produced a DR grading for at least 98% of the images. CONCLUSIONS: Fundus images captured with the Optomed Aurora were suitable for DR screening. The performance of the AI algorithms varied considerably, emphasizing the need for external validation of screening algorithms in real-world settings before their clinical application.
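The per-algorithm agreement, sensitivity, and specificity figures quoted above can be derived from a 2x2 comparison of each algorithm's referable/non-referable call against the ophthalmologists' reference grading. A minimal Python sketch with hypothetical labels, not the study data, is given below.

def screening_metrics(reference, predicted):
    """Agreement, sensitivity and specificity for binary referable-DR calls."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    tn = sum((not r) and (not p) for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    fn = sum(r and (not p) for r, p in zip(reference, predicted))
    return {
        "agreement": (tp + tn) / len(reference),
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
    }

# Hypothetical gradings: True = referable DR, False = non-referable.
reference = [True, True, False, False, False, True]   # ophthalmologists
predicted = [True, False, False, True, False, True]   # one AI algorithm
print(screening_metrics(reference, predicted))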


What is already known on this topic? Diabetic retinopathy (DR) is a common complication of diabetes. Efficient screening and timely treatment are important to avoid the development of sight-threatening DR. The increasing number of patients with diabetes and DR poses a challenge for healthcare.

What this study adds? Telemedicine, mobile handheld devices and artificial intelligence (AI)-based automated algorithms are likely to alleviate the burden by improving efficacy of DR screening programs. Reliable algorithms of high quality exist despite the variability between the solutions.

How this study might affect research, practice or policy? AI algorithms improve the efficacy of screening and might be implemented to clinical use after thorough validation in a real-life setting.


Subject(s)
Algorithms , Artificial Intelligence , Diabetic Retinopathy , Fundus Oculi , Humans , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Female , Prospective Studies , Middle Aged , Male , Aged , Adult , Photography/instrumentation , Mass Screening/methods , Mass Screening/instrumentation , Sensitivity and Specificity
9.
Soc Sci Med ; 350: 116921, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38723586

ABSTRACT

Poor mental health among U.S. adolescents has reached epidemic proportions, with those from the Middle East and North African region exhibiting increased risk for distress and suicide ideation. This mixed-methods study analyzes quantitative data from first- and second-generation Arab adolescents (n = 171) and qualitative data from a participatory study conducted with 11 adolescents of the same population to understand the role of cultural resources in coping. Drawing on the Intersectional Theory of Cultural Repertoires in Health, we show that: 1) cultural resources underlie meaning-making throughout coping; 2) coping strategies are inseparable from the influence of peer and familial relationships, as shaped by social norms and other cultural resources; 3) collectively held repertoires of coping can promote belonging, affirm identity, and protect against discrimination; and 4) the outcomes of coping strategies, and the culturally informed meaning individuals make of these outcomes, influence their future coping behaviors.


Subject(s)
Adaptation, Psychological , Arabs , Social Stigma , Humans , Adolescent , Female , Male , Arabs/psychology , Arabs/statistics & numerical data , Qualitative Research , Photography
10.
PLoS One ; 19(5): e0303168, 2024.
Article in English | MEDLINE | ID: mdl-38758960

ABSTRACT

INTRODUCTION: Globally, a shift is occurring to recognize the importance of young people's health and well-being, their unique health challenges, and the potential they hold as key drivers of change in their communities. In Haiti, one of the four leading causes of death for those 20-24 years old is pregnancy, childbirth, and the weeks after birth or at the end of a pregnancy. Important gaps remain in existing knowledge about youth perspectives of maternal health and well-being within their communities. Youth with lived experiences of maternal near-misses are well-positioned to contribute to the understanding of maternal health in their communities and their potential role in bringing about change. OBJECTIVES: To explore and understand youth perspectives of maternal near-miss experiences that occurred in a local healthcare facility or at home in rural Haiti. METHODS: We will conduct a qualitative, community-based participatory research study regarding maternal near-miss experiences to understand current challenges and identify solutions to improve community maternal health, specifically focused on youth maternal health. We will use Photovoice to seek an understanding of the lived experiences of youth maternal near-miss survivors. Participants will be from La Pointe, a Haitian community served by their local healthcare facility. We will undertake purposeful sampling to recruit approximately 20 female youth, aged 15-24 years. Data will be generated through photos, individual interviews and small group discussions (grouped by setting of near-miss experience). Data generation and analysis are expected to occur over a three-month period. ETHICS AND DISSEMINATION: Ethics approval will be sought from Centre Médical Béraca in La Pointe, Haiti, and from the Hamilton Integrated Research Ethics Board in Hamilton ON, Canada. We will involve community stakeholders, especially youth, in developing dissemination and knowledge mobilisation strategies. Our findings will be disseminated as an open access publication, presented publicly and at conferences, and defended as part of a doctoral thesis.


Subject(s)
Maternal Health , Humans , Female , Haiti , Pregnancy , Adolescent , Young Adult , Near Miss, Healthcare , Community-Based Participatory Research , Rural Population , Photography , Qualitative Research , Adult
11.
PLoS One ; 19(4): e0298285, 2024.
Article in English | MEDLINE | ID: mdl-38573887

ABSTRACT

For many species, population sizes are unknown despite their importance for conservation. For population size estimation, capture-mark-recapture (CMR) studies are often used, which require identifying each individual, mostly through individual markings or genetic characters. Invasive marking techniques, however, can negatively affect individual fitness. Alternatives are low-impact techniques such as the use of photos for individual identification in species with stable, distinctive phenotypic traits. For the individual identification of photos, a variety of software with different requirements is available. The European fire salamander (Salamandra salamandra) is a species in which individuals, both at the larval stage and as adults, have specific patterns that allow for individual identification. In this study, we compared the performance of five software programs for photographic identification of the European fire salamander: Amphibian & Reptile Wildbook (ARW), AmphIdent, I3S pattern+, ManderMatcher and Wild-ID. While adults can be identified with all five programs, European fire salamander larvae can currently only be identified with two of the five (ARW and Wild-ID). We used one dataset of European fire salamander larval pictures taken in the laboratory and tested it in these two programs (ARW and Wild-ID). We used another dataset of European fire salamander adult pictures taken in the field and tested it using all five programs. We compared the programs' requirements for the pictures used and calculated the False Rejection Rate (FRR) and the Recognition Rate (RR), as sketched below. For the larval dataset (421 pictures), we found that the ARW and Wild-ID performed equally well for individual identification (99.6% and 100% Recognition Rate, respectively). For the adult dataset (377 pictures), we found the best False Rejection Rate with ManderMatcher and the highest Recognition Rate with the ARW. Additionally, the ARW is the only program that requires no image pre-processing. In times of amphibian declines, non-invasive photo identification software that enables capture-mark-recapture studies helps to gain knowledge of the population sizes, distribution, movement and demography of a population and can thus support species conservation.
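As a rough illustration only (the study defines FRR and RR from each program's candidate matches, which is more involved), the two rates can be thought of as complementary summaries of how often a recapture query returns the correct individual. The Python sketch below uses hypothetical matching outcomes, not the study data.

def photo_id_rates(query_results):
    """query_results: list of True (correct individual returned) / False (missed)."""
    recognition_rate = sum(query_results) / len(query_results)
    false_rejection_rate = 1.0 - recognition_rate   # simplification: treated as complements here
    return recognition_rate, false_rejection_rate

# Hypothetical outcomes for 100 recapture queries.
results = [True] * 97 + [False] * 3
rr, frr = photo_id_rates(results)
print(f"Recognition Rate = {rr:.1%}, False Rejection Rate = {frr:.1%}")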


Subject(s)
Salamandra , Humans , Animals , Larva , Phenotype , Photography , Software
12.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564203

ABSTRACT

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images of the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image data set was split into a training set and an independent test set in an 80:20 ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained with the training set and evaluated with the test set. Results: A total of 2489 UWF images were included in the dataset, resulting in a training set of 2008 UWF images and a test set of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the testing set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment, and an AUC of 0.913 for retinal breaks. Conclusions: A deep learning system to detect retinal breaks and retinal detachment using UWF images is feasible and has good specificity. This is relevant for clinical routine, as there can be a high rate of missed breaks in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to the diagnosis of peripheral retinal breaks in UWF fundus images in clinical routine.
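The evaluation protocol described (an 80:20 split and AUCs on the held-out set) can be sketched as follows. This is a minimal Python example with placeholder features and a simple classifier standing in for the EfficientNet model; it is not the authors' pipeline.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression      # stand-in for the EfficientNet classifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2489, 64))        # placeholder image features (one row per UWF image)
y = rng.integers(0, 2, size=2489)      # placeholder binary labels, e.g. lesion present / absent

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC = {auc:.3f}")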


Subject(s)
Deep Learning , Retinal Detachment , Retinal Perforations , Humans , Retinal Detachment/diagnosis , Artificial Intelligence , Photography
13.
Nature ; 628(8009): 922, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637710
14.
Technol Cult ; 65(1): 1-5, 2024.
Article in English | MEDLINE | ID: mdl-38661791

ABSTRACT

The cover of this issue of Technology and Culture illustrates how China implemented, and promoted, on-the-job training in Africa. The image shows a Tanzanian dentist practicing dentistry under the supervision of a Chinese doctor in rural Tanzania, probably in the 1970s. Despite the ineffectiveness of the on-the-job training model, the photograph attempts to project the success of the dental surgery techniques exchanged between China and Tanzania, using simple medical equipment rather than sophisticated medical knowledge. The rural setting reflects the ideological struggle of the Cold War era, when Chinese doctors and rural mobile clinics sought to save lives in the countryside, while doctors from other countries engaged in Cold War competition worked primarily in cities. This essay argues that images were essential propaganda tools during the Cold War and urges historians of technology to use images critically by considering the contexts that influenced their creation.


Subject(s)
Inservice Training , China , History, 20th Century , Humans , Inservice Training/history , Tanzania , Rural Health Services/history , Photography/history
15.
BMC Psychol ; 12(1): 233, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664723

ABSTRACT

BACKGROUND: Organizational accounts on social networking sites (SNSs) are similar to individual accounts in terms of their online behaviors. Thus, they can be investigated from the perspective of personality, as individual accounts have been in the literature. Focusing on startups' Instagram accounts, this study aimed to investigate the characteristics of Big Five personality traits and the relationships between the traits and the characteristics of photos in organizational SNS accounts. METHODS: The personality traits of 108 startups' accounts were assessed with an online artificial intelligence service, and a correspondence analysis was performed to identify the key dimensions along which the accounts were distributed by their personality. Photo features were extracted at the content and pixel levels, and correlational analyses between personality traits and photo features were conducted. Moreover, predictive analyses were performed using random forest regression models. RESULTS: The accounts scored high in openness, agreeableness, and conscientiousness and moderate in extraversion and neuroticism. In addition, two dimensions, high versus low neuroticism and extraversion/openness versus conscientiousness/agreeableness, characterized the accounts' distribution by personality traits. Conscientiousness was the trait most associated with photo features, in particular with content category, pixel-color, and visual features, while agreeableness was the trait least associated with photo features. Neuroticism was correlated mainly with pixel-level features, openness mainly with pixel-color features, and extraversion mainly with facial features. All personality traits except neuroticism could be predicted from the photo features. CONCLUSIONS: This study applied the theoretical lens of personality, which has mainly been used to examine individuals' behaviors, to investigate the SNS communication of startups. Moreover, it focused on the visual communication of organizational accounts, which has not been actively studied in the literature. This study has implications for expanding the realm of personality research to organizational SNS accounts.
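The predictive analyses mentioned (random forest regressions from photo features to trait scores) might look roughly like the following Python sketch. The feature matrix and trait scores here are synthetic placeholders, not the study's data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
photo_features = rng.normal(size=(108, 20))         # e.g. content, pixel-color, visual, facial features
trait_scores = rng.uniform(0, 1, size=108)          # one Big Five trait score per account (placeholder)

model = RandomForestRegressor(n_estimators=200, random_state=1)
r2_scores = cross_val_score(model, photo_features, trait_scores, cv=5, scoring="r2")
print(f"cross-validated R2 = {r2_scores.mean():.2f}")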


Subject(s)
Personality , Photography , Social Media , Humans , Adult , Male , Female , Artificial Intelligence , Neuroticism
16.
Cutis ; 113(3): 141-142, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38648596

ABSTRACT

Precise wound approximation during cutaneous suturing is of vital importance for optimal closure and long-term scar outcomes. Utilizing smartphone camera technology as a quality-control checkpoint for objective evaluation allows the dermatologic surgeon to scrutinize the wound edges and refine their surgical technique to improve scar outcomes.


Subject(s)
Cicatrix , Smartphone , Suture Techniques , Humans , Suture Techniques/instrumentation , Photography , Dermatologic Surgical Procedures/instrumentation , Dermatologic Surgical Procedures/methods , Epidermis
18.
Meat Sci ; 213: 109503, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38579510

ABSTRACT

This study aims to describe the meat quality of young Holstein (HOL) beef-on-dairy heifers and bulls sired by Angus (ANG, n = 109), Charolais (CHA, n = 101) and Danish Blue (DBL, n = 127), and to investigate the performance of the handheld vision-based Q-FOM™ Beef camera in predicting the intramuscular fat concentration (IMF%) in M. longissimus thoracis from carcasses quartered at the 5th-6th thoracic vertebra. The results showed significant differences between crossbreeds and sexes in carcass characteristics and meat quality. DBL × HOL had the highest EUROP conformation scores, whereas ANG × HOL had darker meat with higher IMF% (3.52%) compared with CHA × HOL (2.99%) and DBL × HOL (2.51%). Bulls had higher EUROP conformation scores than heifers, and heifers had higher IMF% (3.70%) than bulls (2.31%). These findings indicate the potential for producing high-quality meat from beef-on-dairy heifers and ANG bulls. The IMF% prediction model for Q-FOM performed well, with R2 = 0.91 and a root mean squared error of cross-validation, RMSECV, of 1.33%. On the beef-on-dairy veal subsample, ranging from 0.9 to 7.4% IMF, the prediction model had lower accuracy (R2 = 0.48) and a prediction error (RMSEveal) of 1.00%. When grouping beef-on-dairy veal carcasses into three IMF% classes (2.5% IMF bins), 62.6% of the carcasses were accurately predicted. Furthermore, Q-FOM IMF% predictions and chemically determined IMF% were similar for each combination of sex and crossbreed, revealing the potential for Q-FOM IMF% predictions to be used in breeding when aiming for higher meat quality.
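The two model summaries quoted, R2 and root mean squared error, can be computed from paired chemical and camera-predicted IMF% values as in this minimal Python sketch; the numbers are hypothetical, not the study's measurements.

import numpy as np
from sklearn.metrics import r2_score, mean_squared_error

chemical_imf = np.array([2.1, 3.5, 4.8, 1.2, 6.3, 2.9])    # reference IMF% (hypothetical)
predicted_imf = np.array([2.4, 3.1, 5.2, 1.5, 5.8, 3.3])   # camera-style predictions (hypothetical)

r2 = r2_score(chemical_imf, predicted_imf)
rmse = mean_squared_error(chemical_imf, predicted_imf) ** 0.5
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} percentage points")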


Subject(s)
Adipose Tissue , Muscle, Skeletal , Red Meat , Thoracic Vertebrae , Animals , Cattle , Male , Red Meat/analysis , Female , Adipose Tissue/chemistry , Muscle, Skeletal/chemistry , Photography , Color , Breeding
19.
Vet Rec ; 194(9): e4088, 2024 05 04.
Article in English | MEDLINE | ID: mdl-38637964

ABSTRACT

BACKGROUND: Ophthalmoscopy is a valuable tool in clinical practice. We report the use of a novel smartphone-based handheld device for visualisation and photo-documentation of the ocular fundus in veterinary medicine. METHODS: Selected veterinary patients of a referral ophthalmology service were included if one or both eyes had clear ocular media, allowing for examination of the fundus. Following pharmacological mydriasis, fundic images were obtained with a handheld fundus camera (Volk VistaView). For comparison, the fundus of a subset of animals was also imaged with a veterinary-specific fundus camera (Optomed Smartscope VET2). RESULTS: The large field of view achieved by the Volk VistaView allowed for rapid and thorough observation of the ocular fundus in animals, providing a tool to visualise and record common pathologies of the posterior segment. Captured fundic images were sometimes overexposed, with the tapetal fundus artificially appearing hyperreflective when using the Volk VistaView camera, a finding that was less frequent when activating a 'veterinary mode' that reduced the sensitivity of the camera's sensor. The Volk VistaView compared well with the Optomed Smartscope VET2. LIMITATION: The main study limitation was the small sample size. CONCLUSIONS: The Volk VistaView camera was easy to use and provided good-quality fundic images in veterinary patients with healthy or diseased eyes, offering a wide field of view that was ideal for screening purposes.


Subject(s)
Retinal Diseases , Smartphone , Veterinary Medicine , Animals , Retinal Diseases/veterinary , Retinal Diseases/diagnosis , Veterinary Medicine/instrumentation , Ophthalmoscopy/veterinary , Ophthalmoscopy/methods , Fundus Oculi , Photography/veterinary , Photography/instrumentation , Dogs , Dog Diseases/diagnosis , Cats
20.
J Plast Reconstr Aesthet Surg ; 92: 264-275, 2024 May.
Article in English | MEDLINE | ID: mdl-38582052

ABSTRACT

BACKGROUND: The increasing number of esthetic procedures emphasizes the need for effective evaluation methods of outcomes. Current practices include the individual practitioners' judgment in conjunction with standardized scales, often relying on the comparison of before and after photographs. This study investigates whether comparative evaluations influence the perception of beauty and aims to enhance the accuracy of esthetic assessments in clinical and research settings. OBJECTIVE: To compare the evaluation of attractiveness and gender characteristics of faces in group-based versus individual ratings. METHODS: A sample of 727 volunteers (average age of 29.5 years) assessed 40 facial photographs (20 male, 20 female) for attractiveness, masculinity, and femininity using a 5-point Likert scale. Each face was digitally edited to display varying ratios in four lip-related proportions: vertical lip position, lip width, upper lip esthetics, and lower lip esthetics. Participants rated these images both in an image series (group-based) and individually. RESULTS: Differences in the perception of the most attractive/masculine/feminine ratios for each lip proportion were found in both the group-based and individual ratings. Group ratings exhibited a significant central tendency bias, with a preference for more average outcomes compared with individual ratings, with an average difference of 0.50 versus 1.00 (p = 0.033). CONCLUSION: A central tendency bias was noted in evaluations of attractiveness, masculinity, and femininity in group-based image presentation, indicating a bias toward more "average" features. Conversely, individual assessments displayed a preference for more pronounced, "non-average" appearances, thereby possibly pointing toward a malleable "intrinsic esthetic blueprint" shaped by comparative context.


Subject(s)
Beauty , Esthetics , Face , Humans , Female , Male , Adult , Face/anatomy & histology , Photography , Masculinity , Femininity , Young Adult , Adolescent , Middle Aged , Lip/anatomy & histology , Surveys and Questionnaires