1.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we exploit the dynamics of controlled exercise-induced motion to confine the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames, achieving a reduction of the tracked region of 87.3% for mild exercise and 79.0% for intense exercise.


Subjects
Algorithms, Exercise, Wearable Electronic Devices, Humans, Exercise/physiology, Image Processing, Computer-Assisted/methods, Photography/instrumentation, Photography/methods, Delivery of Health Care
2.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the colors of familiar objects are perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings compared with photographs, consistent with the hypothesis.


Subjects
Color Perception, Fruit, Paintings, Photography, Humans, Color Perception/physiology, Photography/methods, Color, Contrast Sensitivity/physiology
3.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models against rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm2, slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, as equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
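The continuous-trait statistics quoted above (RMSEP, R², slope, bias) all derive from the same prediction-vs-reference pairing. A minimal sketch with made-up numbers (not the study's data) shows how each is computed:

```python
import numpy as np

# Hypothetical reference (expert) vs. camera-predicted values; NOT the
# study's data -- only an illustration of how RMSEP, R^2, slope, and bias
# relate to a set of paired measurements.
reference = np.array([60.0, 72.5, 80.0, 95.0, 110.0])   # e.g. EMA in cm^2
predicted = np.array([61.2, 71.0, 82.5, 94.0, 112.0])

rmsep = np.sqrt(np.mean((predicted - reference) ** 2))  # precision
bias = np.mean(predicted - reference)                   # mean offset
slope = np.polyfit(reference, predicted, 1)[0]          # calibration slope
ss_res = np.sum((predicted - reference) ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                              # accuracy (R^2)

print(round(rmsep, 2), round(bias, 2), round(slope, 2), round(r2, 3))
```

RMSEP captures the spread of prediction errors, while bias captures their mean offset; a slope near 1 indicates the device neither compresses nor inflates the trait scale.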


Subjects
Adipose Tissue, Color, Muscle, Skeletal, Photography, Red Meat, Animals, Australia, Cattle, Red Meat/analysis, Red Meat/standards, Photography/methods, Calibration, Phenotype, Reproducibility of Results, Ribs
4.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38475784

ABSTRACT

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: At a community-based eye disease screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were free of cataract or any other condition that could compromise the quality of fundus imaging. Participants were assigned to either the fully self-service or the traditional fundus photography group. Image quantitative analysis software was used to extract clinically relevant indicators from the fundus images. Finally, a statistical analysis was performed to assess the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistical difference in the absolute differences, or the extents of variation, of the indicators between the two groups. The extents of variation of all measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis results were consistent with the results mentioned above. CONCLUSIONS: The imaging repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise for large-scale eye disease screening programs.
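The Bland-Altman analysis mentioned in the results summarizes repeatability as a mean difference plus 95% limits of agreement. A minimal sketch on hypothetical repeated measurements (not the study's data):

```python
import numpy as np

# Bland-Altman sketch for two repeated measurements of one indicator
# (e.g. disc area in mm^2); the values are hypothetical, not study data.
m1 = np.array([2.10, 1.95, 2.30, 2.05, 2.20])  # first capture
m2 = np.array([2.15, 1.90, 2.25, 2.10, 2.18])  # repeat capture

diff = m1 - m2
mean_diff = diff.mean()                    # systematic bias between repeats
half_width = 1.96 * diff.std(ddof=1)       # 95% limits-of-agreement half-width
lower, upper = mean_diff - half_width, mean_diff + half_width
print(round(mean_diff, 4), round(lower, 4), round(upper, 4))
```

If nearly all paired differences fall between `lower` and `upper` and the interval is narrow relative to clinical tolerance, the two capture methods are considered interchangeable.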


Subjects
Community Health Services, Glaucoma, Humans, Cross-Sectional Studies, Prospective Studies, China, Photography/methods, Fundus Oculi
5.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that can assist users in taking a standardized clinical photograph. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgeons/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, Epic Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal) were built into digital templates and are user selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses piloted the application in the outpatient clinic setting using ImageAssist on their smartphones. After using the app, an internal survey was used to gather feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3,400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area.
CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and is integrated into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of current image capture functionality and development of a stand-alone mobile device application.


Subjects
Mobile Applications, Plastic Surgery Procedures, Surgery, Plastic, Humans, United States, Smartphone, Photography/methods
6.
Ophthalmic Surg Lasers Imaging Retina ; 55(5): 263-269, 2024 May.
Article in English | MEDLINE | ID: mdl-38408222

ABSTRACT

BACKGROUND AND OBJECTIVE: Color fundus photography is an important imaging modality that is currently limited by a narrow dynamic range. We describe a post-image processing technique to generate high dynamic range (HDR) retinal images with enhanced detail. PATIENTS AND METHODS: This was a retrospective, observational case series evaluating fundus photographs of patients with macular pathology. Photographs were acquired with three or more exposure values using a commercially available camera (Topcon 50-DX). Images were aligned and imported into HDR processing software (Photomatix Pro). Fundus detail was compared between HDR and raw photographs. RESULTS: Sixteen eyes from 10 patients (5 male, 5 female; mean age 59.4 years) were analyzed. Clinician graders preferred the HDR image 91.7% of the time (44/48 image comparisons), with good grader agreement (81.3%, 13/16 eyes). CONCLUSIONS: HDR fundus imaging is feasible using images from existing fundus cameras and may be useful for enhanced visualization of retinal detail in a variety of pathologic states. [Ophthalmic Surg Lasers Imaging Retina 2024;55:263-269.].
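The study merged bracketed exposures with commercial software (Photomatix Pro); the underlying idea of exposure fusion can be conveyed with a toy Mertens-style weighted average in pure NumPy. This sketch uses random stand-in frames, not fundus data:

```python
import numpy as np

# Toy Mertens-style exposure fusion: each pixel of the fused image is a
# weighted average across exposures, favoring values near mid-gray so that
# detail from both under- and over-exposed frames survives.
rng = np.random.default_rng(0)
base = rng.random((4, 4))                                 # scene radiance proxy
exposures = [np.clip(base * g, 0.0, 1.0) for g in (0.5, 1.0, 2.0)]

# Well-exposedness weight: Gaussian around 0.5 (sigma = 0.2).
weights = [np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2)) for img in exposures]
total = np.sum(weights, axis=0)
fused = np.sum([w * img for w, img in zip(weights, exposures)], axis=0) / total

print(fused.shape, float(fused.min()) >= 0.0, float(fused.max()) <= 1.0)
```

Real HDR pipelines add multiscale blending and tone mapping on top of this weighting, but the per-pixel well-exposedness weight is the core of why fused images show detail in both bright disc and dark macula.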


Subjects
Fundus Oculi, Photography, Humans, Female, Retrospective Studies, Male, Middle Aged, Photography/methods, Aged, Retinal Diseases/diagnosis, Image Processing, Computer-Assisted/methods, Adult, Retina/diagnostic imaging, Retina/pathology, Diagnostic Techniques, Ophthalmological
7.
Retina ; 44(6): 1092-1099, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38320305

ABSTRACT

PURPOSE: To observe the diagnostic value of multispectral fundus imaging (MSI) in hypertensive retinopathy (HR). METHODS: A total of 100 patients with HR were enrolled in this cross-sectional study, and all participants underwent fundus photography and MSI. Participants with severe HR underwent fundus fluorescein angiography (FFA). The diagnostic consistency between fundus photography and MSI in the diagnosis of HR was calculated. The sensitivity of MSI in the diagnosis of severe HR was calculated by comparison with FFA. The choroidal vascular index was calculated in patients with HR using MSI at 780 nm. RESULTS: MSI and fundus photography were highly concordant in the diagnosis of HR, with a kappa value of 0.883. MSI had a sensitivity of 96% in diagnosing retinal hemorrhage, 89.47% in diagnosing retinal exudation, 100% in diagnosing vascular compression indentation, and 96.15% in diagnosing retinal arteriosclerosis. The choroidal vascular index of patients in the HR group was significantly lower than that of the control group, whereas there was no significant difference between the affected and fellow eyes. CONCLUSION: As a noninvasive modality, MSI may be a new tool for the diagnosis and assessment of HR.
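Diagnostic concordance of the kind reported here (kappa = 0.883) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch on hypothetical binary HR / no-HR calls, not the study's data:

```python
# Cohen's kappa quantifies chance-corrected agreement between two
# diagnostic readings; the HR / no-HR calls below are hypothetical.
def cohen_kappa(a, b):
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                     # observed
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (p_obs - p_exp) / (1 - p_exp)

photo = ["HR", "HR", "no", "HR", "no", "HR", "no", "no"]
msi = ["HR", "HR", "no", "no", "no", "HR", "no", "no"]
print(cohen_kappa(photo, msi))  # agreement corrected for chance
```

Values above roughly 0.8, as in the study, are conventionally read as "almost perfect" agreement.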


Subjects
Fluorescein Angiography, Fundus Oculi, Hypertensive Retinopathy, Humans, Cross-Sectional Studies, Female, Male, Middle Aged, Fluorescein Angiography/methods, Hypertensive Retinopathy/diagnosis, Aged, Adult, Photography/methods, Retinal Vessels/diagnostic imaging, Retinal Vessels/pathology
8.
Burns ; 50(4): 966-979, 2024 May.
Article in English | MEDLINE | ID: mdl-38331663

ABSTRACT

AIM: This study was conducted to determine the segmentation, classification, object detection, and accuracy of skin burn images using artificial intelligence and a mobile application. Through the mobile application, individuals were able to determine the degree of a burn and see how to intervene. METHODS: This research was conducted between 26 October 2021 and 1 September 2023. The dataset was assembled in two stages. In the first stage, an open-access dataset was taken from https://universe.roboflow.com/, and the burn images dataset was created. In the second stage, to determine the accuracy of the developed system and artificial intelligence model, patients admitted to the hospital were assessed with our own Burn Wound Detection Android application. RESULTS: The YOLO V7 architecture was used for segmentation, classification, and object detection. The study comprised 21,018 images, of which 80% were used as training data and 20% as test data. The YOLO V7 model achieved a success rate of 75.12% on the test data. The Burn Wound Detection Android application developed in the study was used to accurately detect burn images from individuals. CONCLUSION: In this study, skin burn images were segmented, classified, and object-detected, and a mobile application was developed using artificial intelligence. First aid is crucial in burn cases, and it is an important development for public health that people living in peripheral areas can quickly determine the degree of a burn through the mobile application and provide first aid according to its instructions.


Subjects
Artificial Intelligence, Burns, Mobile Applications, Burns/classification, Burns/diagnostic imaging, Burns/pathology, Humans, Photography/methods
9.
Int Ophthalmol ; 44(1): 41, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38334896

ABSTRACT

Diabetic retinopathy (DR) is the leading global cause of vision loss, accounting for 4.8% of global blindness cases as estimated by the World Health Organization (WHO). Fundus photography is crucial in ophthalmology as a diagnostic tool for capturing retinal images. However, resource and infrastructure constraints limit access to traditional tabletop fundus cameras in developing countries. Additionally, these conventional cameras are expensive, bulky, and not easily transportable. In contrast, the newer generation of handheld and smartphone-based fundus cameras offers portability, user-friendliness, and affordability. Despite their potential, there is a lack of comprehensive review studies examining the clinical utilities of these handheld (e.g., Zeiss Visuscout 100, Volk Pictor Plus, Volk Pictor Prestige, Remidio NMFOP, FC161) and smartphone-based (e.g., D-EYE, iExaminer, Peek Retina, Volk iNview, Volk Vistaview, oDocs visoScope, oDocs Nun, oDocs Nun IR) fundus cameras. This review evaluates the feasibility and practicality of the available handheld and smartphone-based cameras in medical settings, emphasizing their advantages over traditional tabletop fundus cameras. By highlighting various clinical settings and use scenarios, it fills this gap by evaluating the efficiency, feasibility, cost-effectiveness, and remote capabilities of handheld and smartphone fundus cameras, ultimately enhancing the accessibility of ophthalmic services.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Eye Diseases, Humans, Diabetic Retinopathy/diagnosis, Smartphone, Fundus Oculi, Retina, Eye Diseases/diagnosis, Photography/methods, Blindness
10.
Diabetes Care ; 47(2): 304-319, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38241500

ABSTRACT

BACKGROUND: Diabetic macular edema (DME) is the leading cause of vision loss in people with diabetes. Application of artificial intelligence (AI) in interpreting fundus photography (FP) and optical coherence tomography (OCT) images allows prompt detection and intervention. PURPOSE: To evaluate the performance of AI in detecting DME from FP or OCT images and identify potential factors affecting model performance. DATA SOURCES: We searched seven electronic libraries up to 12 February 2023. STUDY SELECTION: We included studies using AI to detect DME from FP or OCT images. DATA EXTRACTION: We extracted study characteristics and performance parameters. DATA SYNTHESIS: Fifty-three studies were included in the meta-analysis. FP-based algorithms from 25 studies yielded a pooled area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 0.964, 92.6%, and 91.1%, respectively. OCT-based algorithms from 28 studies yielded a pooled AUROC, sensitivity, and specificity of 0.985, 95.9%, and 97.9%, respectively. Potential factors improving model performance included the use of deep learning techniques and larger, more diverse training data sets. Models demonstrated better performance when validated internally than externally, and those trained with multiple data sets showed better results upon external validation. LIMITATIONS: Analyses were limited by unstandardized algorithm outcomes and insufficient data on patient demographics, OCT volumetric scans, and external validation. CONCLUSIONS: This meta-analysis demonstrates satisfactory performance of AI in detecting DME from FP or OCT images. External validation is warranted for future studies to evaluate model generalizability. Further investigations may estimate optimal sample size, the effect of class balance, patient demographics, and the additional benefits of OCT volumetric scans.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/complications, Macular Edema/diagnostic imaging, Macular Edema/etiology, Artificial Intelligence, Tomography, Optical Coherence/methods, Photography/methods
12.
Retina ; 44(6): 1034-1044, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38261816

ABSTRACT

BACKGROUND/PURPOSE: To evaluate the performance of a deep learning algorithm for the automated detection and grading of vitritis on ultrawide-field imaging. METHODS: Cross-sectional noninterventional study. Ultrawide-field fundus retinophotographs of uveitis patients were used. Vitreous haze was defined according to the six steps of the Standardization of Uveitis Nomenclature (SUN) classification. The deep learning framework TensorFlow and the DenseNet121 convolutional neural network were used to perform the classification task. The best-fitted model was tested in a validation study. RESULTS: One thousand one hundred eighty-one images were included. The performance of the model for the detection of vitritis was good, with a sensitivity of 91%, a specificity of 89%, an accuracy of 0.90, and an area under the receiver operating characteristic curve of 0.97. When used on an external set of images, the accuracy for the detection of vitritis was 0.78. The accuracy for classifying vitritis into one of the six SUN grades was limited (0.61) but improved to 0.75 when the grades were grouped into three categories. When accepting an error of one grade, the accuracy for the six-class classification increased to 0.90, suggesting the need for a larger sample to improve model performance. CONCLUSION: We describe a new deep learning model based on ultrawide-field fundus imaging that provides an efficient tool for the detection of vitritis. The performance of the model for grading into three categories of increasing vitritis severity was acceptable. The performance for the six-class grading of vitritis was limited but can probably be improved with a larger set of images.
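The gap between exact accuracy (0.61) and "within one grade" accuracy (0.90) reflects how ordinal metrics are scored. A sketch with hypothetical grades on a six-step 0-5 scale, not the study's outputs:

```python
# Exact vs. "within one grade" accuracy on an ordinal scale such as the
# six-step SUN vitreous haze grades (0-5); grades below are hypothetical.
true_grades = [0, 1, 2, 3, 4, 5, 2, 1, 0, 3]
pred_grades = [0, 2, 2, 3, 5, 5, 1, 1, 0, 2]

n = len(true_grades)
exact = sum(t == p for t, p in zip(true_grades, pred_grades)) / n
within_one = sum(abs(t - p) <= 1 for t, p in zip(true_grades, pred_grades)) / n
print(exact, within_one)  # within-one accuracy is never below exact accuracy
```

Because most deep-learning misgradings on ordinal scales are off by a single step, within-one accuracy is often substantially higher than exact accuracy, as the abstract reports.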


Subjects
Deep Learning, Fundus Oculi, Humans, Cross-Sectional Studies, Female, Male, Photography/methods, Vitreous Body/pathology, Vitreous Body/diagnostic imaging, Adult, ROC Curve, Middle Aged, Eye Diseases/diagnosis, Eye Diseases/classification, Eye Diseases/diagnostic imaging, Uveitis/diagnosis, Uveitis/classification, Algorithms, Neural Networks, Computer
13.
IEEE Trans Med Imaging ; 43(5): 1945-1957, 2024 May.
Article in English | MEDLINE | ID: mdl-38206778

ABSTRACT

Color fundus photography (CFP) and optical coherence tomography (OCT) are two of the most widely used imaging modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for the automated diagnosis of eye diseases effectively utilize correlated and complementary information from multiple modalities. This paper explores how to leverage information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named the geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between an OCT slice and the corresponding CFP region to learn correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike existing multimodal learning methods, GeCoM-Net is the first method to explicitly formulate the geometric relationships between an OCT slice and the corresponding region of the CFP image for CFP and OCT fusion. Experiments were conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA), and glaucoma. The empirical results show that our method outperforms current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9%, and 2.9% for DME, VA, and glaucoma detection, respectively.


Subjects
Image Interpretation, Computer-Assisted, Multimodal Imaging, Tomography, Optical Coherence, Humans, Tomography, Optical Coherence/methods, Multimodal Imaging/methods, Image Interpretation, Computer-Assisted/methods, Algorithms, Retinal Diseases/diagnostic imaging, Retina/diagnostic imaging, Machine Learning, Photography/methods, Diagnostic Techniques, Ophthalmological, Databases, Factual
14.
Klin Monbl Augenheilkd ; 241(1): 75-83, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38242135

ABSTRACT

Cataract is among the leading causes of visual impairment worldwide. Innovations in treatment have drastically improved patient outcomes, but to be properly implemented, it is necessary to have the right diagnostic tools. This review explores the cataract grading systems developed by researchers in recent decades and provides insight into both merits and limitations. To this day, the gold standard for cataract classification is the Lens Opacity Classification System III. Different cataract features are graded according to standard photographs during slit lamp examination. Although widely used in research, its clinical application is rare, and it is limited by its subjective nature. Meanwhile, recent advancements in imaging technology, notably Scheimpflug imaging and optical coherence tomography, have opened the possibility of objective assessment of lens structure. With the use of automatic lens anatomy detection software, researchers demonstrated a good correlation to functional and surgical metrics such as visual acuity, phacoemulsification energy, and surgical time. The development of deep learning networks has further increased the capability of these grading systems by improving interpretability and increasing robustness when applied to norm-deviating cases. These classification systems, which can be used for both screening and preoperative diagnostics, are of value for targeted prospective studies, but still require implementation and validation in everyday clinical practice.


Subjects
Cataract, Lens, Crystalline, Phacoemulsification, Humans, Prospective Studies, Photography/methods, Cataract/diagnosis, Visual Acuity, Phacoemulsification/methods
15.
BMC Med Inform Decis Mak ; 24(1): 25, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38273286

ABSTRACT

BACKGROUND: The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages; therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in CFP. METHODS: This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. A generative model using StyleGAN2 was trained on single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned the healthcare center data to development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS: StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of ERM. The proposed method with StyleGAN2-based augmentation outperformed typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved detection performance and helped the model focus on the location of the ERM. CONCLUSIONS: We propose an ERM detection model that synthesizes realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve more accurate detection of ERM in limited-data settings.


Subjects
Deep Learning, Epiretinal Membrane, Humans, Epiretinal Membrane/diagnostic imaging, Retrospective Studies, Diagnostic Techniques, Ophthalmological, Photography/methods
16.
J Biomed Opt ; 29(Suppl 1): S11524, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38292055

ABSTRACT

Significance: Compressed ultrafast photography (CUP) is currently the world's fastest single-shot imaging technique. Through the integration of compressed sensing and streak imaging, CUP can capture a transient event in a single camera exposure at imaging speeds from thousands to trillions of frames per second, at micrometer-level spatial resolutions, and over broad sensing spectral ranges. Aim: This tutorial aims to provide a comprehensive review of CUP's fundamental methods, system implementations, biomedical applications, and future prospects. Approach: A step-by-step guideline to CUP's forward model and representative image reconstruction algorithms is presented with sample code and illustrations in Matlab and Python. CUP's hardware implementation is then described with a focus on the representative techniques, advantages, and limitations of the three key components: the spatial encoder, the temporal shearing unit, and the two-dimensional sensor. Furthermore, four representative biomedical applications enabled by CUP are discussed, followed by the prospects for CUP's technical advancement. Conclusions: CUP has emerged as a state-of-the-art ultrafast imaging technology. Its advanced imaging ability and versatility contribute to unprecedented observations and new applications in biomedicine. CUP holds great promise for improving technical specifications and facilitating the investigation of biomedical processes.


Subjects
Image Processing, Computer-Assisted, Photography, Photography/methods, Image Processing, Computer-Assisted/methods, Algorithms
17.
Indian J Ophthalmol ; 72(Suppl 2): S280-S296, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38271424

ABSTRACT

PURPOSE: To compare the quantification of intraretinal hard exudate (HE) using en face optical coherence tomography (OCT) and fundus photography. METHODS: Consecutive en face images and corresponding fundus photographs from 13 eyes of 10 patients with macular edema associated with diabetic retinopathy or Coats' disease were analyzed using the machine-learning-based image analysis tool "ilastik." RESULTS: The overall measured HE area was greater with en face images than with fundus photographs (en face: 0.49 ± 0.35 mm2 vs. fundus photograph: 0.34 ± 0.34 mm2, P < 0.001). However, there was an excellent correlation between the two measurements (intraclass correlation coefficient [ICC] = 0.844). There was a negative correlation between HE area and concurrent central macular thickness (CMT) (r = -0.292, P = 0.001). However, HE area showed a positive correlation with CMT from several months earlier, especially in eyes treated with anti-vascular endothelial growth factor (VEGF) therapy (CMT 3 months before: r = 0.349, P = 0.001; CMT 4 months before: r = 0.287, P = 0.012). CONCLUSION: Intraretinal HE can be reliably quantified from either en face OCT images or fundus photography with the aid of an interactive machine-learning-based image analysis tool. Changes in HE area lagged several months behind changes in CMT, especially in eyes treated with anti-VEGF injections.
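The pattern reported above (one modality reads systematically larger yet the two track each other closely) separates into a correlation and a mean offset. A sketch on hypothetical paired HE areas, not the study's measurements:

```python
import numpy as np

# Pearson correlation and mean offset between paired HE-area measurements
# from two modalities; the areas (mm^2) are hypothetical, not study data.
en_face = np.array([0.20, 0.45, 0.60, 0.35, 0.80])
fundus = np.array([0.12, 0.30, 0.48, 0.25, 0.66])

r = np.corrcoef(en_face, fundus)[0, 1]     # strength of linear agreement
offset = float(np.mean(en_face - fundus))  # en face reads larger on average
print(round(r, 3), round(offset, 3))
```

A high correlation with a nonzero offset, as in the abstract, means the modalities rank eyes consistently but differ in absolute scale, so either can be used for longitudinal tracking if the same modality is used throughout.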


Subjects
Diabetic Retinopathy, Tomography, Optical Coherence, Humans, Tomography, Optical Coherence/methods, Retrospective Studies, Diagnostic Techniques, Ophthalmological, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/complications, Photography/methods, Exudates and Transudates/metabolism
18.
Vasc Med ; 29(2): 215-222, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38054219

ABSTRACT

This study aimed to review the current literature exploring the utility of noninvasive ocular imaging for the diagnosis of peripheral artery disease (PAD). Our search was conducted in early April 2022 and included the databases Medline, Scopus, Embase, Cochrane, and others. Five articles were included in the final review. Of the five studies that used ocular imaging in PAD, two studies used retinal color fundus photography, one used optical coherence tomography (OCT), and two used optical coherence tomography angiography (OCTA) to assess the ocular changes in PAD. PAD was associated with both structural and functional changes in the retina. Structural alterations around the optic disc and temporal retinal vascular arcades were seen in color fundus photography of patients with PAD compared to healthy individuals. The presence of retinal hemorrhages, exudates, and microaneurysms in color fundus photography was associated with an increased future risk of PAD, especially the severe form of the disease. The retinal nerve fiber layer (RNFL) was significantly thinner in the nasal quadrant in patients with PAD compared to age-matched healthy individuals in OCT. Similarly, the choroidal thickness in the subfoveal region was significantly thinner in patients with PAD compared to controls. Patients with PAD also had a significant reduction in the retinal and choroidal circulation in OCTA compared to healthy controls. As PAD causes thinning and ischemic changes in retinal vessels, examination of the retinal vessels using retinal imaging techniques can provide useful information about early microvascular damage in PAD. Ocular imaging could potentially serve as a biomarker for PAD. PROSPERO ID: CRD42022310637.


Subjects
Optic Disk, Peripheral Arterial Disease, Humans, Tomography, Optical Coherence/methods, Photography/methods, Peripheral Arterial Disease/diagnostic imaging, Biomarkers, Retinal Vessels/diagnostic imaging
19.
Community Ment Health J ; 60(3): 457-469, 2024 04.
Article in English | MEDLINE | ID: mdl-37874437

ABSTRACT

The importance of community involvement for both older adults and individuals coping with mental illness is well documented. Yet, barriers to community integration for adults with mental illness such as social stigma, discrimination, and economic marginalization are often exacerbated by increased health and mobility challenges among older adults. Using photovoice, nine older adults with mental illness represented their views of community in photographs and group discussions over a six-week period. Participant themes of community life included physical spaces, valued social roles, and access to resources in the community. Themes were anchored by older adults' perceptions of historical and cultural time comparisons between 'how things used to be' and 'how things are now.' Barriers to community integration were often related to factors such as age, mobility, and resources rather than to mental health status. Program evaluation results suggest photovoice can promote self-reflection, learning, and collaboration among older adults with mental illness.


Subjects
Mental Disorders, Photography, Humans, Aged, Photography/methods, Social Stigma, Mental Disorders/psychology, Coping Skills, Learning