Results 1 - 20 of 5,249
1.
Braz J Biol ; 84: e279855, 2024.
Article in English | MEDLINE | ID: mdl-38985068

ABSTRACT

Leaf Area Index (LAI) is the ratio of total leaf area to ground surface area. LAI plays a significant role in the structural characteristics of forest ecosystems, so accurate estimation methods are needed. One method for estimating LAI is Digital Cover Photography. However, most applications that derive LAI from digital photos do not account for the brown color of plant parts. Previous research that included brown color in the calculation potentially produced biased results, because the added brown pixels inflate the vegetation pixel count relative to the original photograph. This study aims to enhance the accuracy of LAI estimation with methods that account for brown plant parts while minimizing errors. Image processing is carried out in two stages to separate leaf and non-leaf pixels: the RGB color model is used in the first stage, and the CIELAB color model is applied in the second stage. The proposed methods and existing applications are evaluated against actual LAI values obtained using Terrestrial Laser Scanning (TLS) as the ground truth. The results demonstrate that the proposed methods effectively identify non-leaf parts and exhibit the lowest error rates among the methods compared. In conclusion, this study provides alternative techniques to enhance the accuracy of LAI estimation in forest ecosystems.
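
A minimal sketch of such a two-stage leaf/non-leaf pixel classification, followed by the standard gap-fraction conversion used in digital cover photography; all thresholds and the extinction coefficient are illustrative, not the paper's values:

```python
# Hypothetical two-stage leaf/non-leaf classification (thresholds illustrative).
import numpy as np
from skimage import io, color

def leaf_mask(path):
    rgb = io.imread(path)[..., :3].astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Stage 1 (RGB): keep pixels where green dominates; sky/soil are rejected.
    stage1 = (g > r) & (g > b)
    # Stage 2 (CIELAB): among stage-1 candidates, drop brown/yellowish pixels,
    # i.e. positive a* (red-green axis) combined with high b* (blue-yellow axis).
    lab = color.rgb2lab(rgb)
    brown = (lab[..., 1] > 5) & (lab[..., 2] > 20)  # illustrative cut-offs
    return stage1 & ~brown

# Gap fraction -> LAI via the Beer-Lambert relation commonly used in cover
# photography, LAI = -ln(gap_fraction) / k, with extinction coefficient k.
def lai_from_mask(mask, k=0.5):
    gap_fraction = 1.0 - mask.mean()
    return -np.log(gap_fraction) / k
```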


Subject(s)
Forests , Image Processing, Computer-Assisted , Photography , Plant Leaves , Plant Leaves/anatomy & histology , Photography/methods , Image Processing, Computer-Assisted/methods , Trees , Color
2.
BMJ Open Ophthalmol ; 9(1)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969362

ABSTRACT

OBJECTIVES: This study aimed to quantitatively evaluate optic nerve head and retinal vascular parameters in children with hyperopia in relation to age and spherical equivalent refraction (SER) using artificial intelligence (AI)-based analysis of colour fundus photographs (CFP). METHODS AND ANALYSIS: This cross-sectional study included 324 children with hyperopia aged 3-12 years. Participants were divided into low hyperopia (SER +0.5 D to +2.0 D) and moderate-to-high hyperopia (SER ≥ +2.0 D) groups. Fundus parameters, such as optic disc area and mean vessel diameter, were automatically and quantitatively detected using AI. Significant variables (p<0.05) in the univariate analysis were included in a stepwise multiple linear regression. RESULTS: Overall, 324 children were included, 172 with low and 152 with moderate-to-high hyperopia. The median optic disc area and vessel diameter were 1.42 mm² and 65.09 µm, respectively. Children with high hyperopia had larger superior neuroretinal rim (NRR) width and larger vessel diameter than those with low and moderate hyperopia. In the univariate analysis, axial length was significantly associated with smaller superior NRR width (β=-3.030, p<0.001), smaller temporal NRR width (β=-1.469, p=0.020) and smaller vessel diameter (β=-0.076, p<0.001). A mild inverse correlation with age was observed for both optic disc area and vertical disc diameter. CONCLUSION: AI-based CFP analysis showed that children with high hyperopia had larger mean vessel diameter but smaller vertical cup-to-disc ratio than those with low hyperopia. This suggests that AI can provide quantitative data on fundus parameters in children with hyperopia.
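
The analysis pipeline, univariate screening followed by multivariable regression, can be sketched with statsmodels (which has no built-in stepwise routine, so a simple screen-then-fit is shown; column names are hypothetical):

```python
# Illustrative univariate screening followed by a multivariable OLS fit.
import statsmodels.api as sm

def univariate_screen(df, outcome, predictors, alpha=0.05):
    """Keep predictors whose univariate regression p-value is below alpha."""
    keep = []
    for p in predictors:
        model = sm.OLS(df[outcome], sm.add_constant(df[[p]])).fit()
        if model.pvalues[p] < alpha:
            keep.append(p)
    return keep

# Hypothetical usage; 'vessel_diameter', 'axial_length', etc. are assumed columns:
# significant = univariate_screen(df, 'vessel_diameter', ['axial_length', 'age', 'ser'])
# final = sm.OLS(df['vessel_diameter'], sm.add_constant(df[significant])).fit()
# print(final.summary())
```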


Subject(s)
Artificial Intelligence , Hyperopia , Optic Disk , Photography , Retinal Vessels , Humans , Hyperopia/diagnosis , Hyperopia/physiopathology , Cross-Sectional Studies , Male , Child , Female , Child, Preschool , Optic Disk/diagnostic imaging , Optic Disk/pathology , Optic Disk/blood supply , Retinal Vessels/diagnostic imaging , Retinal Vessels/pathology , Photography/methods , Fundus Oculi , Visual Acuity/physiology , Refraction, Ocular/physiology
3.
F1000Res ; 13: 360, 2024.
Article in English | MEDLINE | ID: mdl-39045173

ABSTRACT

Invasive plant species pose ecological threats to native ecosystems, particularly in areas adjacent to roadways, since roadways form lengthy corridors through which invasive species can propagate. Traditional manual survey methods for monitoring invasive plants are labor-intensive and limited in coverage. This paper introduces a high-speed camera system, named CamAlien, designed to be mounted on vehicles for efficient monitoring of invasive plant species along roadways. The camera system captures high-quality images at rapid intervals so that the full roadside is covered while moving at traffic speed. The system uses a global shutter sensor to reduce distortion and geotagging for precise localization. The camera system makes it possible to collect extensive datasets, which can be used both for a digital library of invasive species and their locations and for subsequent training of machine learning algorithms for automated species recognition.
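
The claim that rapid capture intervals cover the full roadside at traffic speed reduces to a simple coverage constraint; a back-of-envelope sketch with illustrative numbers, not CamAlien specifications:

```python
# Frame-rate requirement for gap-free roadside coverage (numbers illustrative).
def min_fps(speed_kmh: float, coverage_m: float, overlap: float = 0.2) -> float:
    """Frames per second needed so consecutive frames overlap by `overlap`."""
    speed_ms = speed_kmh / 3.6                # vehicle speed in m/s
    advance_m = coverage_m * (1.0 - overlap)  # new ground covered per frame
    return speed_ms / advance_m

# e.g. 80 km/h with a 5 m along-road footprint -> about 5.6 fps
print(f"{min_fps(80, 5):.1f} fps")
```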


Subject(s)
Introduced Species , Plants , Environmental Monitoring/methods , Environmental Monitoring/instrumentation , Photography/instrumentation , Photography/methods , Ecosystem
4.
BMC Oral Health ; 24(1): 828, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039499

ABSTRACT

BACKGROUND: Dental caries is a global public health concern, and early detection is essential. Traditional methods, particularly visual examination, face access and cost challenges. Teledentistry, as an emerging technology, offers the possibility of overcoming such barriers, and its assessment should be a high priority for optimizing the performance of oral healthcare systems. The aim of this study was to systematically review the literature evaluating the diagnostic accuracy of teledentistry using photographs taken by digital single-lens reflex (DSLR) and smartphone cameras against visual clinical examination in either primary or permanent dentition. METHODS: The review followed PRISMA-DTA guidelines, and the PubMed, Scopus, and Embase databases were searched through December 2022. Original in-vivo studies comparing dental caries diagnosis via images taken by DSLR or smartphone cameras with clinical examination were included. QUADAS-2 was used to assess the risk of bias and concerns regarding applicability. Meta-analysis was not performed due to heterogeneity among the studies; the data were therefore analyzed narratively by the research team. RESULTS: In the 19 included studies, sensitivity and specificity ranged from 48 to 98.3% and from 83 to 100%, respectively. The variability in performance was attributed to factors such as study design and diagnostic criteria. Specific tooth surfaces and lesion stages must be considered when interpreting outcomes. Smartphones were commonly used for dental photography owing to their convenience and accessibility. Remote screening by mid-level dental providers yielded results comparable to those of dentists. Potential bias in patient selection was noted, suggesting a need for improved study designs. CONCLUSION: The diagnostic accuracy of teledentistry for caries detection is comparable to that of traditional clinical examination. The findings support teledentistry's effectiveness, particularly in lower-income settings or areas with access problems. While the results of this review are promising, more rigorous, well-designed studies are needed to fully validate the diagnostic accuracy of teledentistry for dental caries and make oral healthcare provision more efficient and equitable. REGISTRATION: This study was registered with PROSPERO (CRD42023417437).
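
For reference, the reported ranges are the standard diagnostic-accuracy metrics computed from a 2×2 confusion matrix; a minimal sketch with hypothetical counts:

```python
# Sensitivity and specificity from a 2x2 confusion matrix (counts hypothetical).
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=59, fn=2, tn=100, fp=0)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```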


Subject(s)
Dental Caries , Photography, Dental , Humans , Dental Caries/diagnosis , Photography, Dental/methods , Photography, Dental/instrumentation , Telemedicine , Photography/methods , Smartphone , Sensitivity and Specificity
5.
Sci Rep ; 14(1): 15517, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969757

ABSTRACT

CorneAI for iOS is an artificial intelligence (AI) application that classifies the condition of the cornea and cataract into nine categories: normal, infectious keratitis, non-infection keratitis, scar, tumor, deposit, acute primary angle closure, lens opacity, and bullous keratopathy. We evaluated its performance in classifying multiple conditions of the cornea and cataract across various races using images published in the journal Cornea. The positive predictive value (PPV) of the top classification with the highest predictive score was 0.75, and the PPV for the top three classifications exceeded 0.80. For individual diseases, the highest PPVs were 0.91, 0.73, 0.42, 0.72, 0.77, and 0.55 for infectious keratitis, normal, non-infection keratitis, scar, tumor, and deposit, respectively. CorneAI for iOS achieved an area under the receiver operating characteristic curve of 0.78 (95% confidence interval [CI] 0.5-1.0) for normal, 0.76 (95% CI 0.67-0.85) for infectious keratitis, 0.81 (95% CI 0.64-0.97) for non-infection keratitis, 0.55 (95% CI 0.41-0.69) for scar, 0.62 (95% CI 0.27-0.97) for tumor, and 0.71 (95% CI 0.53-0.89) for deposit. CorneAI performed well in classifying various conditions of the cornea and cataract when diagnosing journal images, including those with variable imaging conditions, ethnicities, and rare cases.
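
The headline metrics are top-k classification performance and per-class one-vs-rest AUC; a minimal sketch of how such figures can be computed, assuming hypothetical score and label arrays (the paper's exact evaluation protocol may differ):

```python
# Illustrative top-k accounting and one-vs-rest AUC for a 9-class classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

# probs: (n_images, 9) predicted scores; y: (n_images,) integer ground truth.
def top_k_hit_rate(probs, y, k=3):
    topk = np.argsort(probs, axis=1)[:, ::-1][:, :k]  # k highest-scoring classes
    return np.mean([y[i] in topk[i] for i in range(len(y))])

def per_class_auc(probs, y, cls):
    # One-vs-rest AUC for a single class, as reported per disease category.
    return roc_auc_score((y == cls).astype(int), probs[:, cls])
```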


Subject(s)
Cataract , Corneal Diseases , Humans , Cataract/classification , Cataract/diagnosis , Corneal Diseases/classification , Corneal Diseases/diagnosis , Photography/methods , Artificial Intelligence , Cornea/pathology , Cornea/diagnostic imaging , ROC Curve
6.
BMC Ophthalmol ; 24(1): 285, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39009964

ABSTRACT

AIM: This study aimed to differentiate moderate to high myopic astigmatism from forme fruste keratoconus using Pentacam parameters and develop a predictive model for early keratoconus detection. METHODS: We retrospectively analysed 196 eyes from 105 patients and compared Pentacam variables between myopic astigmatism (156 eyes) and forme fruste keratoconus (40 eyes) groups. Receiver operating characteristic curve analysis was used to determine the optimal cut-off values, and a logistic regression model was used to refine the diagnostic accuracy. RESULTS: Statistically significant differences were observed in most Pentacam variables between the groups (p < 0.05). Parameters such as the Index of Surface Variance (ISV), Keratoconus Index (KI), Belin/Ambrosio Deviation Display (BAD_D) and Back Elevation of the Thinnest Corneal Locale (B.Ele.Th) demonstrated promising discriminatory abilities, with BAD_D exhibiting the highest area under the curve (AUC). The logistic regression model achieved high sensitivity (92.5%), specificity (96.8%), accuracy (95.9%), and positive predictive value (88.1%). CONCLUSION: The simultaneous evaluation of BAD_D, ISV, B.Ele.Th, and KI aids in identifying forme fruste keratoconus cases. Optimal cut-off points demonstrate acceptable sensitivity and specificity, emphasizing their clinical utility pending further refinement and validation across diverse demographics.
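
A sketch of the two analysis steps named, ROC-derived optimal cut-offs (here via the Youden index) and a combined logistic-regression model, with hypothetical column names:

```python
# Youden-optimal cut-off per Pentacam index, then a combined logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true, score):
    """Threshold maximizing sensitivity + specificity - 1."""
    fpr, tpr, thr = roc_curve(y_true, score)
    return thr[np.argmax(tpr - fpr)]

# Hypothetical usage; df columns and the 'ffkc' label (1 = forme fruste KC)
# are assumptions:
# X = df[['BAD_D', 'ISV', 'KI', 'B_Ele_Th']]; y = df['ffkc']
# print({c: youden_cutoff(y, df[c]) for c in X.columns})
# model = LogisticRegression().fit(X, y)
# print('combined AUC:', roc_auc_score(y, model.predict_proba(X)[:, 1]))
```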


Subject(s)
Corneal Topography , Keratoconus , Photography , ROC Curve , Humans , Keratoconus/diagnosis , Female , Male , Retrospective Studies , Adult , Ghana , Corneal Topography/methods , Photography/methods , Young Adult , Adolescent , Cornea/pathology , Cornea/diagnostic imaging , Middle Aged , Myopia/diagnosis , Astigmatism/diagnosis , Visual Acuity
7.
PeerJ ; 12: e17577, 2024.
Article in English | MEDLINE | ID: mdl-38938602

ABSTRACT

Background: Enhancing detection of cryptic snakes is critical for the development of conservation and management strategies, yet finding methods that provide adequate detection remains challenging. Detection issues can be particularly problematic for some species, like the invasive Burmese python (Python bivittatus) in the Florida Everglades. Methods: Using multiple survey methods, we predicted that our ability to detect pythons, larger snakes, and all other snakes would be enhanced by the use of live mammalian lures (domesticated rabbits; Oryctolagus cuniculus). Specifically, we used visual surveys, python detection dogs, and time-lapse game cameras to determine whether domesticated rabbits were an effective lure. Results: Time-lapse game cameras detected almost 40 times more snakes (n = 375, treatment = 245, control = 130) than visual surveys (n = 10); we did not detect any pythons with python detection dogs. We recorded 21 independent detections of pythons at treatment pens (with lures) and one detection at a control pen (without lures). In addition, larger snakes and all other snakes were 165% and 74% more likely, respectively, to be detected at treatment pens than at control pens. Conclusions: Our study presents compelling evidence that the detection of snakes is improved by coupling live mammalian lures with time-lapse game cameras. Although identification of smaller snake species was limited by pixel resolution, this could be improved by changing the camera focal length. For larger snakes with individually distinctive patterns, this method could potentially be used to identify unique individuals and thus allow researchers to estimate population dynamics.


Subject(s)
Boidae , Snakes , Time-Lapse Imaging , Animals , Rabbits , Time-Lapse Imaging/methods , Florida , Dogs , Photography/instrumentation , Photography/methods , Predatory Behavior/physiology
8.
J Biomed Opt ; 29(7): 076001, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38912212

ABSTRACT

Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capability. Aim: This study aims to assess spectral effectiveness in color fundus photography for deep learning classification of ROP. Approach: A convolutional neural network end-to-end classifier was used for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance with individual-color-channel inputs, i.e., red, green, and blue, and with multi-color-channel fusion architectures, including early fusion, intermediate fusion, and late fusion, was quantitatively compared. Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures showed almost the same performance as the green/red channel input, and both outperformed the late-fusion architecture. Conclusions: This study reveals that classification of ROP stages can be effectively achieved using either the green or the red image alone. This finding enables the exclusion of blue images, which are acknowledged for their increased susceptibility to light toxicity.
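
A minimal PyTorch sketch of the channel-selection and fusion options being compared; the backbone here is a toy stand-in, not the study's actual network:

```python
# Early fusion vs. late fusion over color channels (toy backbone, 4 ROP classes).
import torch
import torch.nn as nn

def tiny_cnn(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 4))

class LateFusion(nn.Module):
    """One branch per color channel; logits averaged at the output."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(tiny_cnn(1) for _ in range(3))

    def forward(self, x):  # x: (N, 3, H, W)
        logits = [b(x[:, i:i+1]) for i, b in enumerate(self.branches)]
        return torch.stack(logits).mean(0)

early = tiny_cnn(3)       # early fusion: all channels concatenated at the input
green_only = tiny_cnn(1)  # single-channel input, e.g. the green plane alone
```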


Subject(s)
Deep Learning , Photography , Retinopathy of Prematurity , Retinopathy of Prematurity/diagnostic imaging , Retinopathy of Prematurity/classification , Humans , Infant, Newborn , Photography/methods , Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Color
9.
Elife ; 122024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896568

ABSTRACT

We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
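
Volumetric measurement from the resulting 3D segmentation reduces to counting labelled voxels and multiplying by voxel volume; a sketch with nibabel, assuming a hypothetical output file (FreeSurfer's actual photo-reconstruction commands are not shown here):

```python
# Region volume from a labelled 3D segmentation (file name hypothetical;
# label 17 is left hippocampus in FreeSurfer's look-up table).
import numpy as np
import nibabel as nib

seg = nib.load('photo_recon_seg.nii.gz')           # hypothetical output volume
voxel_mm3 = np.prod(seg.header.get_zooms()[:3])    # voxel volume in mm^3
labels = np.asarray(seg.dataobj)
left_hippocampus_mm3 = (labels == 17).sum() * voxel_mm3
print(left_hippocampus_mm3)
```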


Every year, thousands of human brains are donated to science. These brains are used to study normal aging, as well as neurological diseases like Alzheimer's or Parkinson's. Donated brains usually go to 'brain banks', institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as 'the hippocampus displays some atrophy', rather than concrete 'numerical' measurements. This is because the gold standard for taking three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise, especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions. To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that can be used to better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions). The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures. The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, enabling them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.


Subject(s)
Alzheimer Disease , Brain , Imaging, Three-Dimensional , Machine Learning , Humans , Imaging, Three-Dimensional/methods , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Brain/diagnostic imaging , Brain/pathology , Photography/methods , Dissection , Magnetic Resonance Imaging/methods , Neuropathology/methods , Neuroimaging/methods
10.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames, achieving a reduction in the search region of 87.3% for mild exercise and 79.0% for intense exercise.
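
A sketch of template matching restricted to a motion-predicted region of interest, which is the general mechanism described; OpenCV-based, with the template and ROI assumed given (the paper's exact algorithm may differ):

```python
# Template matching inside a predicted region of interest (OpenCV).
import cv2

def find_led(frame_gray, template, roi):
    x, y, w, h = roi                         # region predicted from the motion model
    window = frame_gray[y:y+h, x:x+w]        # search only inside the ROI
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)    # best normalized correlation
    return (x + loc[0], y + loc[1]), score   # position in full-frame coordinates

# Restricting the search window is what yields the reported ~79-87% reduction
# in the image area that must be scanned per frame.
```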


Subject(s)
Algorithms , Exercise , Wearable Electronic Devices , Humans , Exercise/physiology , Image Processing, Computer-Assisted/methods , Photography/instrumentation , Photography/methods , Delivery of Health Care
11.
Transl Vis Sci Technol ; 13(5): 20, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38780955

ABSTRACT

Purpose: We sought to develop an automatic method of quantifying optic disc pallor in fundus photographs and to determine associations with peripapillary retinal nerve fiber layer (pRNFL) thickness. Methods: We used deep learning to segment the optic disc, fovea, and vessels in fundus photographs, and measured pallor. We assessed the relationship between pallor and pRNFL thickness derived from optical coherence tomography scans in 118 participants. Separately, we used images diagnosed by clinical inspection as pale (n = 45) and assessed how measurements compared with healthy controls (n = 46). We also developed automatic rejection thresholds and tested the software for robustness to camera type, image format, and resolution. Results: We developed software that automatically quantified disc pallor across several zones in fundus photographs. Pallor was associated with pRNFL thickness globally (β = -9.81; standard error [SE] = 3.16; P < 0.05), in the temporal inferior zone (β = -29.78; SE = 8.32; P < 0.01), with the nasal/temporal ratio (β = 0.88; SE = 0.34; P < 0.05), and in the whole disc (β = -8.22; SE = 2.92; P < 0.05). Furthermore, pallor was significantly higher in the patient group. Finally, we demonstrated that the analysis is robust to camera type, image format, and resolution. Conclusions: We developed software that automatically locates and quantifies disc pallor in fundus photographs and found associations between pallor measurements and pRNFL thickness. Translational Relevance: We think our method will be useful for identifying and monitoring the progression of diseases characterized by disc pallor and optic atrophy, including glaucoma and compressive optic neuropathy, and potentially neurodegenerative disorders.


Subject(s)
Deep Learning , Nerve Fibers , Optic Disk , Photography , Software , Tomography, Optical Coherence , Humans , Optic Disk/diagnostic imaging , Optic Disk/pathology , Tomography, Optical Coherence/methods , Male , Female , Middle Aged , Nerve Fibers/pathology , Photography/methods , Adult , Retinal Ganglion Cells/pathology , Retinal Ganglion Cells/cytology , Aged , Optic Nerve Diseases/diagnostic imaging , Optic Nerve Diseases/diagnosis , Optic Nerve Diseases/pathology , Fundus Oculi
12.
Sci Rep ; 14(1): 12304, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811714

ABSTRACT

Recent advances in artificial intelligence (AI) enable the generation of realistic facial images that can be used in police lineups. The use of AI image generation offers pragmatic advantages in that it allows practitioners to generate filler images directly from the description of the culprit using text-to-image generation, avoids violating the identity rights of natural persons who are not suspects, and eliminates the constraint of being bound to a database with a limited set of photographs. However, a risk exists that using AI-generated filler images provokes more biased selection of the suspect if eyewitnesses can distinguish AI-generated filler images from the photograph of the suspect's face. Using a model-based analysis, we compared biased suspect selection directly between lineups with AI-generated filler images and lineups with database-derived filler photographs. The results show that the lineups with AI-generated filler images were perfectly fair and, in fact, led to less biased suspect selection than the lineups with database-derived filler photographs used in previous experiments. These results are encouraging with regard to the potential of AI image generation for constructing fair lineups, which should inspire more systematic research on the feasibility of adopting AI technology in forensic settings.


Subject(s)
Artificial Intelligence , Face , Humans , Image Processing, Computer-Assisted/methods , Photography/methods , Police , Databases, Factual , Forensic Sciences/methods , Female , Crime
13.
J Craniofac Surg ; 35(4): e376-e380, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38722365

ABSTRACT

OBJECTIVE: Orthognathic surgery is a viable and reproducible treatment for facial deformities. Despite the precision of skeletal surgical planning, there is little information about the relationship between hard and soft tissues in three-dimensional (3D) analysis, resulting in unpredictable soft tissue outcomes. Three-dimensional photography is a viable tool for soft tissue analysis because it is easy to use, widely available, low cost, and harmless. This review aims to establish parameters for acquiring consistent and reproducible 3D facial images. METHODS: A scoping review was conducted across the PubMed, SCOPUS, Scientific Electronic Library Online (SciELO), and Web of Science databases, adhering to "Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews" guidelines. Articles presenting 3D facial photographs in the diagnostic phase were considered. RESULTS: A total of 79 articles were identified, of which 29 were selected for analysis. CONCLUSION: The predominant use of automated systems like 3dMD and VECTRA M3 was noted. User positioning had the highest agreement among authors. Noteworthy aspects include the importance of proper lighting, facial expression, and dental positioning, with discrepancies and inconsistencies observed among authors. Finally, the authors propose a 3D image acquisition protocol based on these findings.


Subject(s)
Face , Imaging, Three-Dimensional , Photography , Humans , Imaging, Three-Dimensional/methods , Face/diagnostic imaging , Face/anatomy & histology , Photography/methods , Orthognathic Surgical Procedures/methods , Reproducibility of Results
14.
Transl Vis Sci Technol ; 13(5): 23, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809531

ABSTRACT

Purpose: To develop convolutional neural network (CNN)-based models for predicting axial length (AL) from color fundus photography (CFP) and to explore associated clinical and structural characteristics. Methods: This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. AL measurements obtained from a scanning laser interferometer served as the gold standard. Prediction accuracy was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were interpreted by integrated gradients. Results: Using age, sex, and CFP as input, the model's mean absolute error (MAE, mean ± standard deviation) for AL prediction was 0.771 ± 0.128 mm, outperforming models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. Removing relatively poor-quality CFPs resulted in a slight MAE reduction to 0.759 ± 0.120 mm without statistical significance (P = 0.24). Including age alongside CFP improved prediction accuracy by 5.59% (P = 0.043), while adding sex yielded no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused areas on the heatmaps. Conclusions: Deep learning-based prediction of AL using CFP was fairly accurate and was enhanced by including age. The optic disc and temporal peripapillary area may contain crucial structural information for AL prediction in CFP. Translational Relevance: This study might aid AL assessments and the understanding of the morphologic characteristics of the fundus related to AL.
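
Integrated gradients, as used for the heatmaps, can be sketched with Captum; the model here is assumed to output one axial-length value per image, shaped (N, 1), which is an assumption about the study's architecture:

```python
# Integrated-gradients saliency map for a regression CNN (Captum; model and
# input are hypothetical stand-ins for the study's network and fundus photo).
import torch
from captum.attr import IntegratedGradients

def attribution_map(model, image):
    """image: (3, H, W) tensor; returns a per-pixel (H, W) saliency map."""
    model.eval()
    ig = IntegratedGradients(model)
    x = image.unsqueeze(0)
    # target=0 selects the single regression output, assuming shape (N, 1).
    attr = ig.attribute(x, baselines=torch.zeros_like(x), target=0)
    return attr.squeeze(0).abs().sum(0)  # collapse channels for display
```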


Subject(s)
Axial Length, Eye , Neural Networks, Computer , Photography , Humans , Male , Female , Middle Aged , Adult , Photography/methods , Aged , Axial Length, Eye/diagnostic imaging , Fundus Oculi , Young Adult , Aged, 80 and over
15.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the color of familiar objects is perceived as more saturated, warm colors would be relatively more saturated than cool colors in still life paintings compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings than for photographs, consistent with the hypothesis.
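
The underlying comparison can be approximated by splitting pixels into warm and cool hue ranges and comparing their saturation; a crude illustrative stand-in for the paper's colorimetric analysis, with hypothetical hue cut-offs:

```python
# Warm-vs-cool saturation comparison in HSV (hue split is illustrative only).
from skimage import io, color

rgb = io.imread('still_life.png')[..., :3] / 255.0   # hypothetical image file
hsv = color.rgb2hsv(rgb)
hue, sat = hsv[..., 0], hsv[..., 1]
warm = (hue < 0.17) | (hue > 0.92)   # reds/oranges/yellows; hue in [0, 1]
print('warm saturation:', sat[warm].mean(), 'cool saturation:', sat[~warm].mean())
```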


Subject(s)
Color Perception , Fruit , Paintings , Photography , Humans , Color Perception/physiology , Photography/methods , Color , Contrast Sensitivity/physiology
16.
Indian J Ophthalmol ; 72(Suppl 4): S676-S678, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38623707

ABSTRACT

PURPOSE: To assess the prevalence of diabetic retinopathy (DR) and the need for screening and management of DR, alongside medical management of diabetes, in rural and tribal populations in Maharashtra. METHODS: Known diabetics in the rural and tribal areas were screened at the corresponding primary health centers, subcenters, and village level with the help of local healthcare workers using a portable non-mydriatic fundus camera. The prevalence of blindness among known diabetics was 1.29% in the rural area and 0.84% in the tribal area. RESULTS: In the rural area, the prevalence of DR was 5.67% (n = 776), and 18.18% of these had sight-threatening diabetic retinopathy (STDR). In the tribal area, the prevalence of DR was 7.73% (n = 711), and 30.90% of these had STDR. CONCLUSIONS: The significant risk factors identified were duration of diabetes and poor glycemic control. Implementation of targeted interventions for screening and management is required to reduce the risk of blindness among known diabetics in rural and tribal areas.


Subject(s)
Diabetic Retinopathy , Mass Screening , Rural Population , Humans , Diabetic Retinopathy/epidemiology , Diabetic Retinopathy/diagnosis , Prevalence , India/epidemiology , Rural Population/statistics & numerical data , Male , Female , Middle Aged , Mass Screening/methods , Adult , Aged , Fundus Oculi , Risk Factors , Photography/methods , Young Adult , Adolescent
17.
Ophthalmology ; 131(8): 927-942, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38613533

ABSTRACT

PURPOSE: This American Academy of Ophthalmology Ophthalmic Technology Assessment aims to assess the effectiveness of conventional teleretinal screening (TS) in detecting diabetic retinopathy (DR) and diabetic macular edema (DME). METHODS: A literature search of the PubMed database was conducted most recently in July 2023 to identify data published between 2006 and 2023 on any of the following elements related to TS effectiveness: (1) the accuracy of TS in detecting DR or DME compared with traditional ophthalmic screening with dilated fundus examination or 7-standard field Early Treatment Diabetic Retinopathy Study photography, (2) the impact of TS on DR screening compliance rates or other patient behaviors, and (3) cost-effectiveness and patient satisfaction of TS compared with traditional DR screening. Identified studies were then rated based on the Oxford Centre for Evidence-Based Medicine grading system. RESULTS: Eight level I studies, 14 level II studies, and 2 level III studies were identified in total. Although cross-study comparison is challenging because of differences in reference standards and grading methods, TS demonstrated acceptable sensitivity and good specificity in detecting DR; moderate to good agreement between TS and reference-standard DR grading was observed. Performance of TS was not as robust in detecting DME, although the number of studies evaluating DME specifically was limited. Two level I studies, 5 level II studies, and 1 level III study supported that TS had a positive impact on overall DR screening compliance, even increasing it by more than 2-fold in one study. Studies assessing cost-effectiveness and patient satisfaction were not graded formally, but they generally showed that TS was cost-effective and preferred by patients over traditional surveillance. CONCLUSIONS: Conventional TS is an effective approach to DR screening not only for its accuracy in detecting referable-level disease, but also for improving screening compliance in a cost-effective manner that may be preferred by patients. Further research is needed to elucidate the ideal approach of TS that may involve integration of artificial intelligence or other imaging technologies in the future. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.


Subject(s)
Academies and Institutes , Diabetic Retinopathy , Macular Edema , Ophthalmology , Photography , Telemedicine , Humans , Diabetic Retinopathy/diagnosis , Macular Edema/diagnosis , Photography/economics , Photography/methods , United States , Cost-Benefit Analysis , Technology Assessment, Biomedical , Sensitivity and Specificity , Diagnostic Techniques, Ophthalmological , Mass Screening/methods , Mass Screening/economics , Reproducibility of Results
18.
Ecology ; 105(6): e4299, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38650359

ABSTRACT

Information on tropical Asian vertebrates has traditionally been sparse, particularly when it comes to cryptic species inhabiting the dense forests of the region. Vertebrate populations are declining globally due to land-use change and hunting, the latter frequently referred to as "defaunation." This is especially true in tropical Asia, where there is extensive land-use change and high human density. Robust monitoring requires that large volumes of vertebrate population data be made available for use by the scientific and applied communities. Camera traps have emerged as an effective, non-invasive, widespread, and common approach to surveying vertebrates in their natural habitats. However, camera-derived datasets remain scattered across a wide array of sources, including published scientific literature, gray literature, and unpublished works, making it challenging for researchers to harness the full potential of cameras for ecology, conservation, and management. In response, we collated and standardized observations from 239 camera trap studies conducted in tropical Asia. There were 278,260 independent records of 371 distinct species, comprising 232 mammals, 132 birds, and seven reptiles. The total trapping effort accumulated in this data paper consisted of 876,606 trap nights, distributed among Indonesia, Singapore, Malaysia, Bhutan, Thailand, Myanmar, Cambodia, Laos, Vietnam, Nepal, and far eastern India. The relatively standardized deployment methods in the region provide a consistent, reliable, and rich count data set relative to other large-scale presence-only data sets, such as the Global Biodiversity Information Facility (GBIF) or citizen science repositories (e.g., iNaturalist); the data set is thus most similar to eBird. To facilitate the use of these data, we also provide mammalian species trait information and 13 environmental covariates calculated at three spatial scales around the camera survey centroids (within 10-, 20-, and 30-km buffers). We will update the dataset to include broader coverage of temperate Asia and add newer surveys and covariates as they become available. This dataset unlocks immense opportunities for single-species ecological or conservation studies as well as applied ecology, community ecology, and macroecology investigations. The data are fully available to the public for utilization and research. Please cite this data paper when utilizing the data.


Subject(s)
Forests , Tropical Climate , Vertebrates , Animals , Vertebrates/physiology , Photography/methods , Asia , Biodiversity
19.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models for rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes, graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R²) prediction of eye muscle area (EMA) (R² = 0.89, RMSEP = 4.3 cm², slope = 0.96, bias = 0.7), MSA marbling (R² = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R² = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, in exact agreement with expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
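
The validation statistics quoted (R², RMSEP, slope, bias) can be reproduced from predicted and observed values; a sketch assuming the common convention of regressing predicted on observed (the study's exact computation may differ):

```python
# R^2, RMSEP, slope, and bias from generic predicted/observed arrays.
import numpy as np

def validation_stats(pred, obs):
    rmsep = np.sqrt(np.mean((pred - obs) ** 2))     # root mean squared error
    r2 = np.corrcoef(pred, obs)[0, 1] ** 2          # coefficient of determination
    slope, bias = np.polyfit(obs, pred, 1)          # pred ≈ slope * obs + bias
    return r2, rmsep, slope, bias
```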


Subject(s)
Adipose Tissue , Color , Muscle, Skeletal , Photography , Red Meat , Animals , Australia , Cattle , Red Meat/analysis , Red Meat/standards , Photography/methods , Calibration , Phenotype , Reproducibility of Results , Ribs
20.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that assists users in taking standardized clinical photographs. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgery/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, Epic Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal) were built into digital templates and are user-selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses piloted the application in the outpatient clinic setting using ImageAssist on their smartphones. An internal survey was then used to gather feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area. CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and integrates into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of current image-capture functionality and development of a stand-alone mobile device application.


Subject(s)
Mobile Applications , Plastic Surgery Procedures , Surgery, Plastic , Humans , United States , Smartphone , Photography/methods