Results 1 - 20 of 5,240
1.
J Biomed Opt ; 29(7): 076001, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38912212

ABSTRACT

Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capabilities. Aim: This study aims to assess the spectral effectiveness of color fundus photography for the deep learning classification of ROP. Approach: An end-to-end convolutional neural network classifier was used for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. Classification performance was quantitatively compared across individual-color-channel inputs, i.e., red, green, and blue, and multi-color-channel fusion architectures, including early fusion, intermediate fusion, and late fusion. Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). Among the fusion options, the early-fusion and intermediate-fusion architectures performed almost identically to the green- and red-channel inputs, and both outperformed the late-fusion architecture. Conclusions: This study reveals that ROP stages can be classified effectively using either the green or red image alone. This finding enables the exclusion of blue images, which are acknowledged for their increased susceptibility to light toxicity.
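As a minimal illustration of the channel-wise inputs and the late-fusion option compared above, the sketch below (plain Python; the pixel values and classifier scores are hypothetical, not from the study) extracts a single color plane and averages per-channel classifier scores:

```python
def extract_channel(rgb_image, channel):
    """Return one color plane from an RGB image.

    rgb_image: nested list of rows of (R, G, B) tuples, values 0-255.
    channel: 0 = red, 1 = green, 2 = blue.
    """
    return [[pixel[channel] for pixel in row] for row in rgb_image]

def late_fusion(scores_per_channel):
    """Average class scores from independent per-channel classifiers."""
    n = len(scores_per_channel)
    return [sum(scores[i] for scores in scores_per_channel) / n
            for i in range(len(scores_per_channel[0]))]

# Tiny 2x2 RGB fundus-like image (hypothetical values).
image = [[(10, 200, 30), (20, 180, 40)],
         [(15, 190, 35), (25, 170, 45)]]
green = extract_channel(image, 1)  # the plane a green-only classifier sees

# Hypothetical class scores (normal, stage 1, stage 2, stage 3)
# from red, green, and blue channel classifiers.
fused = late_fusion([[0.6, 0.2, 0.1, 0.1],
                     [0.7, 0.1, 0.1, 0.1],
                     [0.2, 0.4, 0.2, 0.2]])
```

Early fusion, by contrast, would stack the selected planes into one input tensor before any convolution; this sketch only shows the two endpoints of that design space.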


Subject(s)
Deep Learning, Photography, Retinopathy of Prematurity, Retinopathy of Prematurity/diagnostic imaging, Retinopathy of Prematurity/classification, Humans, Infant, Newborn, Photography/methods, Fundus Oculi, Image Interpretation, Computer-Assisted/methods, Color
2.
Elife ; 12: 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896568

ABSTRACT

We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
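The volumetric measurements described above reduce, at their simplest, to counting labeled voxels and multiplying by the per-voxel volume (which depends on in-plane pixel spacing and slice thickness). A minimal sketch with hypothetical values, not the FreeSurfer implementation:

```python
def structure_volume(segmentation, label, voxel_mm3):
    """Volume of one labeled region: voxel count times per-voxel volume.

    segmentation: 3D nested list of integer labels (slice, row, col).
    voxel_mm3: volume of one voxel in mm^3 (in-plane spacing squared
    times slice thickness).
    """
    count = sum(val == label
                for sl in segmentation
                for row in sl
                for val in row)
    return count * voxel_mm3

# Two 2x2 slices; label 1 marks a hypothetical structure.
seg = [[[1, 0], [1, 1]],
       [[0, 1], [0, 0]]]
# e.g. 0.5 mm x 0.5 mm pixels with 4 mm slices -> 1.0 mm^3 per voxel
vol = structure_volume(seg, 1, voxel_mm3=1.0)
```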


Every year, thousands of human brains are donated to science. These brains are used to study normal aging as well as neurological diseases like Alzheimer's or Parkinson's. Donated brains usually go to 'brain banks', institutions where the brains are dissected to extract tissues relevant to different diseases. During this process, it is routine to take photographs of brain slices for archiving purposes. Often, studies of dead brains rely on qualitative observations, such as 'the hippocampus displays some atrophy', rather than concrete 'numerical' measurements. This is because the gold standard for taking three-dimensional measurements of the brain is magnetic resonance imaging (MRI), which is an expensive technique that requires high expertise, especially with dead brains. The lack of quantitative data means it is not always straightforward to study certain conditions. To bridge this gap, Gazula et al. have developed openly available software that can build three-dimensional reconstructions of dead brains based on photographs of brain slices. The software can also use machine learning methods to automatically extract different brain regions from the three-dimensional reconstructions and measure their size. These data can be used to take precise quantitative measurements that better describe how different conditions lead to changes in the brain, such as atrophy (reduced volume of one or more brain regions). The researchers assessed the accuracy of the method in two ways. First, they digitally sliced MRI-scanned brains and used the software to compute the sizes of different structures based on these synthetic data, comparing the results to the known sizes. Second, they used brains for which both MRI data and dissection photographs existed and compared the measurements taken by the software to the measurements obtained with MRI images. Gazula et al. show that, as long as the photographs satisfy some basic conditions, they can provide good estimates of the sizes of many brain structures. The tools developed by Gazula et al. are publicly available as part of FreeSurfer, a widespread neuroimaging software suite that can be used by any researcher working at a brain bank. This will allow brain banks to obtain accurate measurements of dead brains, allowing them to cheaply perform quantitative studies of brain structures, which could lead to new findings relating to neurodegenerative diseases.


Subject(s)
Alzheimer Disease, Brain, Imaging, Three-Dimensional, Machine Learning, Humans, Imaging, Three-Dimensional/methods, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Brain/diagnostic imaging, Brain/pathology, Photography/methods, Dissection, Magnetic Resonance Imaging/methods, Neuropathology/methods, Neuroimaging/methods
3.
PeerJ ; 12: e17577, 2024.
Article in English | MEDLINE | ID: mdl-38938602

ABSTRACT

Background: Enhancing detection of cryptic snakes is critical for the development of conservation and management strategies, yet finding methods that provide adequate detection remains challenging. Issues with detecting snakes can be particularly problematic for some species, like the invasive Burmese python (Python bivittatus) in the Florida Everglades. Methods: Using multiple survey methods, we predicted that our ability to detect pythons, larger snakes, and all other snakes would be enhanced by the use of live mammalian lures (domesticated rabbits; Oryctolagus cuniculus). Specifically, we used visual surveys, python detection dogs, and time-lapse game cameras to determine whether domesticated rabbits were an effective lure. Results: Time-lapse game cameras detected almost 40 times more snakes (n = 375, treatment = 245, control = 130) than visual surveys (n = 10); we did not detect any pythons with python detection dogs. We recorded 21 independent detections of pythons at treatment pens (with lures) and one detection at a control pen (without lures). In addition, larger snakes and all other snakes were 165% and 74% more likely, respectively, to be detected at treatment pens compared to control pens. Conclusions: Our study presents compelling evidence that the detection of snakes is improved by coupling live mammalian lures with time-lapse game cameras. Although the identification of smaller snake species was limited, this was due to pixel resolution, which could be improved by changing the camera focal length. For larger snakes with individually distinctive patterns, this method could potentially be used to identify unique individuals and thus allow researchers to estimate population dynamics.


Subject(s)
Boidae, Snakes, Time-Lapse Imaging, Animals, Rabbits, Time-Lapse Imaging/methods, Florida, Dogs, Photography/instrumentation, Photography/methods, Predatory Behavior/physiology
4.
J Craniofac Surg ; 35(4): e376-e380, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38722365

ABSTRACT

OBJECTIVE: Orthognathic surgery is a viable and reproducible treatment for facial deformities. Despite the precision of skeletal surgical planning, there is little information about the relations between hard and soft tissues in three-dimensional (3D) analysis, resulting in unpredictable soft tissue outcomes. Three-dimensional photography is a viable tool for soft tissue analysis because it is easy to use, widely available, low in cost, and harmless. This review aims to establish parameters for acquiring consistent and reproducible 3D facial images. METHODS: A scoping review was conducted across the PubMed, SCOPUS, Scientific Electronic Library Online (SciELO), and Web of Science databases, adhering to "Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews" guidelines. Articles presenting 3D facial photographs in the diagnostic phase were considered. RESULTS: A total of 79 articles were identified, of which 29 were selected for analysis. CONCLUSION: The predominant use of automated systems such as 3dMD and VECTRA M3 was noted. User positioning had the highest agreement among authors. Noteworthy aspects include the importance of proper lighting, facial expression, and dental positioning, with discrepancies and inconsistencies observed among authors. Finally, the authors propose a 3D image acquisition protocol based on these findings.


Subject(s)
Face, Imaging, Three-Dimensional, Photography, Humans, Imaging, Three-Dimensional/methods, Face/diagnostic imaging, Face/anatomy & histology, Photography/methods, Orthognathic Surgical Procedures/methods, Reproducibility of Results
5.
Transl Vis Sci Technol ; 13(5): 23, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809531

ABSTRACT

Purpose: To develop convolutional neural network (CNN)-based models for predicting axial length (AL) using color fundus photography (CFP) and to explore associated clinical and structural characteristics. Methods: This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. AL measurements obtained from a scanning laser interferometer served as the gold standard. Prediction accuracy was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were interpreted by integrated gradients. Results: Using age, sex, and CFP as input, the model's mean absolute error (MAE, mean ± standard deviation) for AL prediction was 0.771 ± 0.128 mm, outperforming models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. Removing relatively poor-quality CFPs slightly reduced the MAE to 0.759 ± 0.120 mm, without statistical significance (P = 0.24). Adding age to CFP improved prediction accuracy by 5.59% (P = 0.043), whereas adding sex yielded no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused areas on the heatmaps. Conclusions: Deep learning-based prediction of AL using CFP was fairly accurate and was enhanced by including age. The optic disc and temporal peripapillary area may contain crucial structural information for AL prediction from CFP. Translational Relevance: This study may aid AL assessment and the understanding of fundus morphologic characteristics related to AL.
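The model comparison above rests on the mean absolute error between predicted and interferometer-measured axial lengths. A minimal sketch with hypothetical values (not the study's data):

```python
def mean_absolute_error(y_true, y_pred):
    """Mean absolute error between measured and predicted axial lengths."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical axial lengths (mm): interferometer truth vs. two models.
truth      = [23.0, 24.5, 26.0, 22.0]
model_cfp  = [23.5, 24.0, 26.5, 22.5]  # CFP-based model
model_demo = [24.0, 25.5, 24.5, 23.5]  # age/sex-only model

mae_cfp = mean_absolute_error(truth, model_cfp)
mae_demo = mean_absolute_error(truth, model_demo)
```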


Subject(s)
Axial Length, Eye, Photography, Humans, Male, Female, Middle Aged, Adult, Photography/methods, Aged, Axial Length, Eye/diagnostic imaging, Fundus Oculi, Young Adult, Aged, 80 and over
6.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, where the color of familiar objects is perceived more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings compared with photographs, consistent with the hypothesis.
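The saturation comparison described above can be approximated with HSV saturation from the standard library's `colorsys` module; the pixel values here are hypothetical, not drawn from the analyzed paintings or photographs:

```python
import colorsys

def saturation(r, g, b):
    """HSV saturation of an RGB color given 0-255 components."""
    _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return s

# Hypothetical warm (reds/oranges) vs. cool (blues) object pixels.
warm = [(200, 60, 30), (220, 120, 40)]
cool = [(60, 90, 200), (40, 120, 220)]

mean_warm = sum(saturation(*p) for p in warm) / len(warm)
mean_cool = sum(saturation(*p) for p in cool) / len(cool)
```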


Subject(s)
Color Perception, Fruit, Paintings, Photography, Humans, Color Perception/physiology, Photography/methods, Color, Contrast Sensitivity/physiology
7.
Transl Vis Sci Technol ; 13(5): 20, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38780955

ABSTRACT

Purpose: We sought to develop an automatic method of quantifying optic disc pallor in fundus photographs and to determine associations with peripapillary retinal nerve fiber layer (pRNFL) thickness. Methods: We used deep learning to segment the optic disc, fovea, and vessels in fundus photographs, and we measured pallor. We assessed the relationship between pallor and pRNFL thickness derived from optical coherence tomography scans in 118 participants. Separately, we used images diagnosed by clinical inspection as pale (n = 45) and assessed how measurements compared with healthy controls (n = 46). We also developed automatic rejection thresholds and tested the software for robustness to camera type, image format, and resolution. Results: We developed software that automatically quantified disc pallor across several zones in fundus photographs. Pallor was associated with pRNFL thickness globally (β = -9.81; standard error [SE] = 3.16; P < 0.05), in the temporal inferior zone (β = -29.78; SE = 8.32; P < 0.01), with the nasal/temporal ratio (β = 0.88; SE = 0.34; P < 0.05), and in the whole disc (β = -8.22; SE = 2.92; P < 0.05). Furthermore, pallor was significantly higher in the patient group. Lastly, we demonstrated the analysis to be robust to camera type, image format, and resolution. Conclusions: We developed software that automatically locates and quantifies disc pallor in fundus photographs and found associations between pallor measurements and pRNFL thickness. Translational Relevance: We believe our method will be useful for identifying and monitoring the progression of diseases characterized by disc pallor and optic atrophy, including glaucoma and compression, and potentially in neurodegenerative disorders.


Subject(s)
Deep Learning, Nerve Fibers, Optic Disk, Photography, Software, Tomography, Optical Coherence, Humans, Optic Disk/diagnostic imaging, Optic Disk/pathology, Tomography, Optical Coherence/methods, Male, Female, Middle Aged, Nerve Fibers/pathology, Photography/methods, Adult, Retinal Ganglion Cells/pathology, Retinal Ganglion Cells/cytology, Aged, Optic Nerve Diseases/diagnostic imaging, Optic Nerve Diseases/diagnosis, Optic Nerve Diseases/pathology, Fundus Oculi
8.
Sci Rep ; 14(1): 12304, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811714

ABSTRACT

Recent advances in artificial intelligence (AI) enable the generation of realistic facial images that can be used in police lineups. The use of AI image generation offers pragmatic advantages in that it allows practitioners to generate filler images directly from the description of the culprit using text-to-image generation, avoids the violation of identity rights of natural persons who are not suspects and eliminates the constraints of being bound to a database with a limited set of photographs. However, the risk exists that using AI-generated filler images provokes more biased selection of the suspect if eyewitnesses are able to distinguish AI-generated filler images from the photograph of the suspect's face. Using a model-based analysis, we compared biased suspect selection directly between lineups with AI-generated filler images and lineups with database-derived filler photographs. The results show that the lineups with AI-generated filler images were perfectly fair and, in fact, led to less biased suspect selection than the lineups with database-derived filler photographs used in previous experiments. These results are encouraging with regard to the potential of AI image generation for constructing fair lineups which should inspire more systematic research on the feasibility of adopting AI technology in forensic settings.


Subject(s)
Artificial Intelligence, Face, Humans, Image Processing, Computer-Assisted/methods, Photography/methods, Police, Databases, Factual, Forensic Sciences/methods, Female, Crime
9.
Sensors (Basel) ; 24(9): 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames, reducing the tracked region by 87.3% for mild exercise and 79.0% for intense exercise.
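Locating a known LED template signal within image data is, at its core, a matched-filtering search. A minimal 1D sketch (hypothetical on-off pattern, not the paper's algorithm):

```python
def best_match_offset(signal, template):
    """Offset where a template best matches a 1D signal (max dot product),
    a stand-in for locating a known LED waveform in a row of pixels."""
    best, best_score = 0, float("-inf")
    for off in range(len(signal) - len(template) + 1):
        score = sum(s * t for s, t in
                    zip(signal[off:off + len(template)], template))
        if score > best_score:
            best, best_score = off, score
    return best

template = [1, 0, 1]            # hypothetical on-off-on LED pattern
signal = [0, 0, 1, 0, 1, 0, 0]  # pattern embedded at offset 2
offset = best_match_offset(signal, template)
```

Restricting the search range (as the paper does via motion dynamics) simply shrinks the set of offsets to evaluate.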


Subject(s)
Algorithms, Exercise, Wearable Electronic Devices, Humans, Exercise/physiology, Image Processing, Computer-Assisted/methods, Photography/instrumentation, Photography/methods, Delivery of Health Care
10.
Ecology ; 105(6): e4299, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38650359

ABSTRACT

Information on tropical Asian vertebrates has traditionally been sparse, particularly for cryptic species inhabiting the dense forests of the region. Vertebrate populations are declining globally due to land-use change and hunting, the latter frequently referred to as "defaunation." This is especially true in tropical Asia, where there is extensive land-use change and high human density. Robust monitoring requires that large volumes of vertebrate population data be made available for use by the scientific and applied communities. Camera traps have emerged as an effective, non-invasive, widespread, and common approach to surveying vertebrates in their natural habitats. However, camera-derived datasets remain scattered across a wide array of sources, including published scientific literature, gray literature, and unpublished works, making it challenging for researchers to harness the full potential of cameras for ecology, conservation, and management. In response, we collated and standardized observations from 239 camera trap studies conducted in tropical Asia. There were 278,260 independent records of 371 distinct species, comprising 232 mammals, 132 birds, and seven reptiles. The total trapping effort accumulated in this data paper was 876,606 trap nights, distributed among Indonesia, Singapore, Malaysia, Bhutan, Thailand, Myanmar, Cambodia, Laos, Vietnam, Nepal, and far eastern India. The relatively standardized deployment methods in the region provide a consistent, reliable, and rich count dataset relative to other large-scale presence-only datasets, such as the Global Biodiversity Information Facility (GBIF) or citizen science repositories (e.g., iNaturalist), and it is thus most similar to eBird. To facilitate the use of these data, we also provide mammalian species trait information and 13 environmental covariates calculated at three spatial scales around the camera survey centroids (within 10-, 20-, and 30-km buffers).
We will update the dataset to include broader coverage of temperate Asia and add newer surveys and covariates as they become available. This dataset unlocks immense opportunities for single-species ecological or conservation studies as well as applied ecology, community ecology, and macroecology investigations. The data are fully available to the public for utilization and research. Please cite this data paper when utilizing the data.


Subject(s)
Forests, Tropical Climate, Vertebrates, Animals, Vertebrates/physiology, Photography/methods, Asia, Biodiversity
11.
Indian J Ophthalmol ; 72(Suppl 4): S676-S678, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38623707

ABSTRACT

PURPOSE: To assess the prevalence of diabetic retinopathy (DR) and the need for screening and management of DR alongside medical management of diabetes in a rural and a tribal population in Maharashtra. METHODS: Known diabetics in the rural and tribal areas were screened at the corresponding primary health centers, subcenters, and village level with the help of local healthcare workers using a portable non-mydriatic fundus camera. RESULTS: The prevalence of blindness among known diabetics was 1.29% in the rural area and 0.84% in the tribal area. In the rural area, the prevalence of DR was 5.67% (n = 776), of whom 18.18% had sight-threatening diabetic retinopathy (STDR). In the tribal area, the prevalence of DR was 7.73% (n = 711), of whom 30.90% had STDR. CONCLUSIONS: The significant risk factors identified were duration of diabetes and poor glycemic control. Targeted interventions for screening and management are required to reduce the risk of blindness among known diabetics in rural and tribal areas.


Subject(s)
Diabetic Retinopathy, Mass Screening, Rural Population, Humans, Diabetic Retinopathy/epidemiology, Diabetic Retinopathy/diagnosis, Prevalence, India/epidemiology, Rural Population/statistics & numerical data, Male, Female, Middle Aged, Mass Screening/methods, Adult, Aged, Fundus Oculi, Risk Factors, Photography/methods, Young Adult, Adolescent
12.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models for rib eye traits and to independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R²) prediction of eye muscle area (EMA) (R² = 0.89, RMSEP = 4.3 cm², slope = 0.96, bias = 0.7), MSA marbling (R² = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R² = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3%, and 60.8% of AUS-MEAT marbling, meat colour, and fat colour scores, respectively, equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
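The validation statistics above (RMSEP and bias) can be computed directly from paired reference and predicted values; the IMF% numbers below are hypothetical:

```python
def rmsep(reference, predicted):
    """Root mean squared error of prediction."""
    n = len(reference)
    return (sum((r - p) ** 2
                for r, p in zip(reference, predicted)) / n) ** 0.5

def bias(reference, predicted):
    """Mean signed difference (predicted minus reference)."""
    return sum(p - r for r, p in zip(reference, predicted)) / len(reference)

# Hypothetical chemical IMF% reference values vs. camera predictions.
ref  = [2.0, 4.0, 6.0, 8.0]
pred = [2.5, 4.5, 5.5, 8.5]

r = rmsep(ref, pred)
b = bias(ref, pred)
```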


Subject(s)
Adipose Tissue, Color, Muscle, Skeletal, Photography, Red Meat, Animals, Australia, Cattle, Red Meat/analysis, Red Meat/standards, Photography/methods, Calibration, Phenotype, Reproducibility of Results, Ribs
14.
Photodiagnosis Photodyn Ther ; 46: 104043, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460655

ABSTRACT

PURPOSE: To evaluate the use of the Pentacam to analyse the presence or absence of fluid pockets under the anterior capsule and their significance for surgical management and the prevention of complications. SETTING: Abant Izzet Baysal University Hospital, Bolu, Turkey. DESIGN: Randomized, masked, prospective design. METHODS: 60 patients with mature cataracts underwent standard phacoemulsification (Phaco) and intraocular lens (IOL) implantation. Patients were divided into three groups. Group 1 underwent Phaco + IOL implantation without Pentacam imaging. Group 2 had fluid detected on preoperative Pentacam imaging and underwent Phaco + IOL implantation with the Brazilian method. Group 3 had no fluid detected on preoperative Pentacam imaging and underwent standard Phaco + IOL implantation. RESULTS: The complication rates were 15% in Group 1, 5% in Group 2, and 5% in Group 3. Pairwise comparisons of Groups 1-2, 1-3, and 2-3 yielded p < 0.01, p < 0.01, and p > 0.05, respectively. The nuclear densities of Group 2 and Group 3 were 30.2% and 29.6%, respectively (P = 0.614). For lens thickness, patients with fluid (+) had a thickness of 5.35 mm, while patients with fluid (-) had a thickness of 3.96 mm (p < 0.05). CONCLUSION: Patients not imaged with the Pentacam before surgery experienced more complications than the other groups because the presence of fluid was unknown. Central lens thickness was higher in patients with fluid, and there was no significant difference in nuclear density between the groups with and without fluid. The Pentacam can show the presence of subcapsular fluid, and we recommend that imaging tools be used more widely in cataract surgery. We believe this will enable surgeons to plan surgery more accurately and reduce the risk of complications.


Subject(s)
Cataract, Phacoemulsification, Humans, Female, Male, Prospective Studies, Aged, Middle Aged, Phacoemulsification/methods, Lens Implantation, Intraocular, Preoperative Care/methods, Photography/methods
15.
Spine Deform ; 12(4): 1079-1088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38526692

ABSTRACT

PURPOSE: Waist line asymmetry is a major cosmetic concern in patients with adolescent idiopathic scoliosis (AIS). The primary surgical goal in patients with AIS is to correct spinal deformities and prevent further progression while maintaining global alignment. Additionally, an important objective of surgical treatment is to address physical appearance by reducing asymmetry. This study aimed to evaluate changes in waistline asymmetry using digital photographs in adolescents with thoracolumbar/lumbar (TL/L) scoliosis who underwent corrective surgery. METHODS: We retrospectively analyzed the data of patients with Lenke types 5C and 6C AIS who underwent posterior fusion surgery with at least 2 years of follow-up. Waist line asymmetry was assessed using digital photography. The waist angle ratio (WAR), waist height angle (WHA), and waistline depth ratio (WLDR) were measured pre- and postoperatively. Radiographic parameters and the revised 22-item Scoliosis Research Society Questionnaire (SRS-22r) were also evaluated. RESULTS: Forty-two patients (40 females and 2 males; 34 with type 5C and 8 with type 6C) were included in the study. The WAR, WHA, and WLDR significantly improved after surgery (0.873 → 0.977, - 2.0° → 1.4°, and 0.321 → 0.899, respectively). Every waistline parameter moderately correlated with the apical vertebral translation of the TL/L curve (WAR: r = - 0.398, WHA: r = - 0.442, and WLDR: r = - 0.692), whereas no correlations were observed with the TL/L curve magnitude. No correlations were observed between the photographic parameters and SRS-22r scores. CONCLUSION: Lateral displacement of the apical vertebra on the TL/L curve correlated with waistline asymmetry. Preoperative waistline asymmetry improved with scoliosis correction. LEVEL OF EVIDENCE: Level 4.


Subject(s)
Lumbar Vertebrae, Photography, Scoliosis, Spinal Fusion, Thoracic Vertebrae, Humans, Scoliosis/surgery, Scoliosis/diagnostic imaging, Female, Adolescent, Male, Retrospective Studies, Photography/methods, Thoracic Vertebrae/surgery, Thoracic Vertebrae/diagnostic imaging, Lumbar Vertebrae/surgery, Lumbar Vertebrae/diagnostic imaging, Spinal Fusion/methods, Child, Treatment Outcome
16.
Eye (Lond) ; 38(9): 1694-1701, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38467864

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is a leading cause of blindness worldwide, affecting people with diabetes. Timely diagnosis and treatment of DR are essential in preventing vision loss. Non-mydriatic fundus cameras and artificial intelligence (AI) software have been shown to improve DR screening efficiency. However, few studies have compared the diagnostic performance of different non-mydriatic cameras and AI software. METHODS: This clinical study was conducted at the endocrinology clinic of Akdeniz University with 900 volunteer patients previously diagnosed with diabetes but not with diabetic retinopathy. Fundus images of each patient were taken using three non-mydriatic fundus cameras, and EyeCheckup AI software was used to diagnose more than mild diabetic retinopathy, vision-threatening diabetic retinopathy, and clinically significant diabetic macular oedema using images from all three cameras. Patients then underwent dilation, and four wide-field fundus photographs were taken. Three retina specialists graded the four wide-field fundus images according to the diabetic retinopathy treatment preferred practice patterns of the American Academy of Ophthalmology. The study was pre-registered on clinicaltrials.gov (ClinicalTrials.gov Identifier: NCT04805541). RESULTS: The Canon CR2 AF camera had a sensitivity and specificity of 95.65% / 95.92% for diagnosing more than mild DR, the Topcon TRC-NW400 had 95.19% / 96.46%, and the Optomed Aurora had 90.48% / 97.21%. For vision-threatening diabetic retinopathy, the Canon CR2 AF had a sensitivity and specificity of 96.00% / 96.34%, the Topcon TRC-NW400 had 98.52% / 95.93%, and the Optomed Aurora had 95.12% / 98.82%. For clinically significant diabetic macular oedema, the Canon CR2 AF had a sensitivity and specificity of 95.83% / 96.83%, the Topcon TRC-NW400 had 98.50% / 96.52%, and the Optomed Aurora had 94.93% / 98.95%.
CONCLUSION: The study demonstrates the potential of using non-mydriatic fundus cameras combined with artificial intelligence software to detect diabetic retinopathy. Several cameras were tested and, notably, each exhibited varying but adequate levels of sensitivity and specificity. The Canon CR2 AF emerged with the highest accuracy in identifying both more than mild diabetic retinopathy and vision-threatening cases, while the Topcon TRC-NW400 excelled in detecting clinically significant diabetic macular oedema. The findings from this study emphasize the importance of considering a non-mydriatic camera and artificial intelligence software for diabetic retinopathy screening. However, further research is imperative to explore additional factors influencing the efficiency of AI-based screening with non-mydriatic cameras, such as the costs involved and performance in an ethnically diverse population.
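The per-camera figures above are standard sensitivity/specificity calculations from confusion-matrix counts; the counts below are hypothetical, chosen only to land near the reported Canon CR2 AF values:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts.

    tp/fn: diseased eyes flagged / missed by the AI grader.
    tn/fp: healthy eyes cleared / falsely flagged.
    """
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one camera/AI combination.
sens, spec = sensitivity_specificity(tp=88, fn=4, tn=470, fp=20)
```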


Subject(s)
Artificial Intelligence , Diabetic Retinopathy , Photography , Sensitivity and Specificity , Adult , Aged , Female , Humans , Male , Middle Aged , Diabetic Retinopathy/diagnosis , Photography/methods , Reproducibility of Results
17.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38475784

ABSTRACT

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: At a community-based eye-disease screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were free of cataract or any other condition that could compromise the quality of fundus imaging. Participants were assigned to either the fully self-service or the traditional fundus photography group. Quantitative image analysis software was used to extract clinically relevant indicators from the fundus images, and a statistical analysis was performed to characterize the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistically significant difference between the two groups in the absolute differences or the extents of variation of the indicators. The extents of variation of all measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis were consistent with these results. CONCLUSIONS: The imaging repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise for large-scale eye-disease screening programs.
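The repeatability analysis described here (absolute differences, extents of variation, Bland-Altman plots) can be sketched as follows. The measurement values are invented for illustration; the indicator name and units are assumptions, not the study's data:

```python
import statistics

def bland_altman_limits(m1, m2):
    """Bias (mean difference) and 95% limits of agreement for paired repeat measurements."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def extent_of_variation(m1, m2):
    """Mean absolute difference as a percentage of the overall mean measurement."""
    diffs = [abs(a - b) for a, b in zip(m1, m2)]
    return 100 * statistics.mean(diffs) / statistics.mean(m1 + m2)

# Invented repeat measurements of optic disc area (mm^2), illustration only:
first = [1.90, 2.10, 2.00, 1.95, 2.05]
second = [1.92, 2.08, 2.01, 1.93, 2.07]
bias, lo, hi = bland_altman_limits(first, second)
print(f"bias = {bias:.3f}, limits of agreement = [{lo:.3f}, {hi:.3f}]")
print(f"extent of variation = {extent_of_variation(first, second):.1f}%")
```

An indicator whose extent of variation stays below 10%, as reported for all indicators except the optic cup area, would pass the study's repeatability threshold.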


Subject(s)
Community Health Services , Glaucoma , Humans , Cross-Sectional Studies , Prospective Studies , China , Photography/methods , Fundus Oculi
18.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that can assist users in taking standardized clinical photographs. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgery/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, EPIC Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal view) are built into digital templates and are user selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture the image. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses piloted the application on their smartphones in the outpatient clinic setting, and an internal survey was then used to gather feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3,400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area.
CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and integrates into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of the current image capture functionality and development of a stand-alone mobile device application.


Subject(s)
Mobile Applications , , Plastic Surgery , Humans , United States , Smartphone , Photography/methods
19.
Retina ; 44(6): 1092-1099, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38320305

ABSTRACT

PURPOSE: To assess the diagnostic value of multispectral fundus imaging (MSI) in hypertensive retinopathy (HR). METHODS: A total of 100 patients with HR were enrolled in this cross-sectional study, and all participants underwent fundus photography and MSI. Participants with severe HR also received fundus fluorescein angiography (FFA). The diagnostic consistency between fundus photography and MSI in the diagnosis of HR was calculated, and the sensitivity of MSI in the diagnosis of severe HR was calculated by comparison with FFA. The choroidal vascular index was calculated in patients with HR using MSI at 780 nm. RESULTS: MSI and fundus photography were highly concordant in the diagnosis of HR (kappa = 0.883). MSI had a sensitivity of 96% for diagnosing retinal hemorrhage, 89.47% for retinal exudation, 100% for vascular compression indentation, and 96.15% for retinal arteriosclerosis. The choroidal vascular index of patients in the HR group was significantly lower than that of the control group, whereas there was no significant difference between the affected and fellow eyes. CONCLUSION: As a noninvasive imaging modality, MSI may be a new tool for the diagnosis and assessment of HR.
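Diagnostic concordance of the kind reported here is conventionally quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with made-up labels (1 = HR finding present, 0 = absent), not the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters or methods."""
    n = len(a)
    categories = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_chance = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Made-up per-eye diagnoses from two modalities (1 = HR present, 0 = absent):
msi =    [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
fundus = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
print(f"kappa = {cohens_kappa(msi, fundus):.3f}")  # kappa = 0.800
```

By the common Landis-Koch benchmarks, the reported kappa of 0.883 falls in the "almost perfect" agreement range.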


Subject(s)
Fluorescein Angiography , Fundus Oculi , Hypertensive Retinopathy , Humans , Cross-Sectional Studies , Female , Male , Middle Aged , Fluorescein Angiography/methods , Hypertensive Retinopathy/diagnosis , Aged , Adult , Photography/methods , Retinal Vessels/diagnostic imaging , Retinal Vessels/pathology
20.
Behav Res Methods ; 56(4): 3861-3872, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38332413

ABSTRACT

Over the last 40 years, object recognition studies have moved from simple line drawings, to more detailed illustrations, to more ecologically valid photographic representations. Researchers now have access to various stimulus sets; however, existing sets do not allow item format to be manipulated independently, as the concepts depicted are unique to the set they derive from. To enable such comparisons, Rossion and Pourtois (2004) revisited Snodgrass and Vanderwart's (1980) line drawings and digitally re-drew the objects, adding texture and shading. In the current study, we took this further and created a set of stimuli that showcase the same objects in photographic form. We selected six photographs of each object (three color / three grayscale) and collected normative data and response times (RTs). Naming accuracy and agreement were high for all photographs and appeared to increase steadily with format distinctiveness. In contrast to previous data patterns for drawings, naming agreement (H values) did not differ between grayscale and color photographs, nor did familiarity ratings. However, grayscale photographs received significantly lower mental imagery agreement and visual complexity scores than color photographs. This suggests that, compared with drawings, the ecological nature of photographs may facilitate deeper critical evaluation of whether they offer a good match to a mental representation. Color may therefore play a more vital role in photographs than in drawings, aiding participants in judging the match with their mental representation. This new photographic stimulus set and corresponding normative data provide valuable materials for a wide range of experimental studies of object recognition.
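The naming-agreement H statistic mentioned above (introduced by Snodgrass and Vanderwart, 1980) is an entropy computed over the distinct names an item elicits: H = 0 when every participant produces the same name, and H grows as responses fragment. A small sketch with invented responses:

```python
import math
from collections import Counter

def naming_h(names):
    """Naming agreement H = sum_i p_i * log2(1 / p_i) over distinct names.

    H = 0 means perfect agreement; a 50/50 split over two names gives H = 1.
    """
    n = len(names)
    return sum((c / n) * math.log2(n / c) for c in Counter(names).values())

# Invented naming responses for one pictured object (not the study's data):
print(naming_h(["couch"] * 20))                             # 0.0
print(round(naming_h(["couch"] * 10 + ["sofa"] * 10), 3))   # 1.0
```

Comparing H for the same concept across line-drawing, illustration, and photograph formats is what the independent manipulation of item format enables.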


Subject(s)
Visual Pattern Recognition , Photic Stimulation , Photography , , Humans , Male , Female , Photography/methods , /physiology , Visual Pattern Recognition/physiology , Adult , Reaction Time/physiology , Young Adult , Adolescent