1.
Sci Rep ; 14(1): 8242, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589440

ABSTRACT

The aim of this study was to introduce a novel vector field analysis for the quantitative measurement of retinal displacement after epiretinal membrane (ERM) removal. We developed a framework to measure retinal displacement from retinal fundus images as follows: (1) rigid registration of preoperative retinal fundus images in reference to postoperative retinal fundus images, (2) extraction of retinal vessel segmentation masks from these retinal fundus images, (3) non-rigid registration of preoperative vessel masks in reference to postoperative vessel masks, and (4) calculation of the transformation matrix required for non-rigid registration for each pixel. These pixel-wise vector field results were summarized according to 24 predefined sectors after standardization. We applied this framework to 20 patients who underwent ERM removal to obtain their retinal displacement vector fields between retinal fundus images taken preoperatively and at 1, 4, 10, and 22 months postoperatively. The mean direction of the displacement vectors was nasal. The mean standardized magnitudes of retinal displacement between the preoperative examination and postoperative month 1, and between postoperative months 1 and 4, 4 and 10, and 10 and 22, were 38.6, 14.9, 7.6, and 5.4, respectively. In conclusion, the proposed method provides a computerized, reproducible, and scalable way to analyze structural changes in the retina with a powerful visualization tool. Retinal structural changes were mostly concentrated in the early postoperative period and tended to move nasally.
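As a rough illustration of step (4) and the sector summary, the Python sketch below bins a pixel-wise displacement field into 24 angular sectors around an assumed fovea center; the function name, sector layout, and synthetic field are illustrative assumptions, not the authors' implementation.

import numpy as np

def summarize_sectors(displacement, center, n_sectors=24):
    """Mean displacement vector (dx, dy) per angular sector around center (x, y)."""
    h, w, _ = displacement.shape
    ys, xs = np.mgrid[0:h, 0:w]
    angles = np.arctan2(ys - center[1], xs - center[0])              # range [-pi, pi]
    sector_ids = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    means = np.zeros((n_sectors, 2))
    for s in range(n_sectors):
        means[s] = displacement[sector_ids == s].mean(axis=0)
    return means

# Example: a synthetic field with a uniform 3-pixel shift toward negative x.
field = np.zeros((512, 512, 2))
field[..., 0] = -3.0
print(summarize_sectors(field, center=(256, 256))[:3])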


Subjects
Epiretinal Membrane, Humans, Epiretinal Membrane/surgery, Visual Acuity, Retina/diagnostic imaging, Retina/surgery, Retinal Vessels, Fundus Oculi, Vitrectomy, Tomography, Optical Coherence/methods, Retrospective Studies
2.
BMC Med Inform Decis Mak ; 21(1): 9, 2021 01 06.
Article in English | MEDLINE | ID: mdl-33407448

ABSTRACT

BACKGROUND: Although ophthalmic devices have made remarkable progress and are widely used, most lack standardization of both image review and results reporting systems, making interoperability unachievable. We developed and validated new software for extracting, transforming, and storing information from report images produced by ophthalmic examination devices to generate standardized, structured, and interoperable information to assist ophthalmologists in eye clinics. RESULTS: We selected report images derived from optical coherence tomography (OCT). The new software consists of three parts: (1) the Area Explorer, which determines whether a designated area in the configuration file contains numeric values or tomographic images; (2) the Value Reader, which converts the numeric regions of ophthalmic measurements from image to text; and (3) the Finding Classifier, which classifies pathologic findings from the tomographic images included in the report. After assessment of Value Reader accuracy by human experts, all report images were converted and stored in a database. We applied the Value Reader, which achieved 99.67% accuracy, to a total of 433,175 OCT report images acquired in a single tertiary hospital from 07/04/2006 to 08/31/2019. The Finding Classifier provided pathologic findings (e.g., macular edema and subretinal fluid) and disease activity. Patient longitudinal data could be easily reviewed to document changes in measurements over time. The final results were loaded into a common data model (CDM), and the cropped tomographic images were loaded into the Picture Archive Communication System. CONCLUSIONS: The newly developed software extracts valuable information from OCT images and may be extended to other types of report image files produced by medical devices. Furthermore, powerful databases such as the CDM may be implemented or augmented by adding the information captured through our program.
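A minimal sketch of the configuration-driven flow described above, assuming each report layout is defined by named regions that hold either a numeric value or a tomographic panel; the region names, coordinates, and the read_number stub are hypothetical placeholders, not the published Area Explorer / Value Reader / Finding Classifier code.

from PIL import Image

# Hypothetical configuration: region name -> (left, top, right, bottom, kind)
CONFIG = {
    "central_macular_thickness": (120, 300, 220, 330, "value"),
    "horizontal_bscan": (250, 100, 750, 400, "image"),
}

def read_number(crop):
    # Placeholder for the "Value Reader" step: a real pipeline might apply
    # an OCR engine or a small CNN trained on the device's digit font.
    return None

def extract_report(path):
    report = Image.open(path)
    values, panels = {}, {}
    for name, (left, top, right, bottom, kind) in CONFIG.items():
        crop = report.crop((left, top, right, bottom))
        if kind == "value":
            values[name] = read_number(crop)   # numeric field -> text/number
        else:
            panels[name] = crop                # B-scan crop for the Finding Classifier
    return values, panels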


Subjects
Macular Edema, Humans, Software, Tomography, Optical Coherence
3.
Sci Rep ; 10(1): 4623, 2020 03 12.
Article in English | MEDLINE | ID: mdl-32165702

ABSTRACT

Retinal fundus images are used to detect organ damage from vascular diseases (e.g., diabetes mellitus and hypertension) and to screen for ocular diseases. We aimed to assess convolutional neural network (CNN) models that predict age and sex from retinal fundus images in normal participants and in participants with underlying vascular-altered status. In addition, we investigated clues to the differences between normal ageing and pathologic vascular changes using the CNN models. In this study, we developed CNN age and sex prediction models using 219,302 fundus images from normal participants without hypertension, diabetes mellitus (DM), or any smoking history. The trained models were assessed in four test-sets with 24,366 images from normal participants, 40,659 images from hypertension participants, 14,189 images from DM participants, and 113,510 images from smokers. The CNN model accurately predicted age in normal participants; the correlation between predicted age and chronologic age was R2 = 0.92, and the mean absolute error (MAE) was 3.06 years. MAEs in the test-sets with hypertension (3.46 years), DM (3.55 years), and smoking (2.65 years) were similar to that of normal participants; however, R2 values were relatively low (hypertension, R2 = 0.74; DM, R2 = 0.75; smoking, R2 = 0.86). In subgroups of participants over 60 years of age, MAEs increased above 4.0 years and accuracies declined in all test-sets. Fundus-predicted sex demonstrated acceptable accuracy (area under curve > 0.96) in all test-sets. Retinal fundus images from participants with underlying vascular-altered conditions (hypertension, DM, or smoking) showed similar MAEs but low coefficients of determination (R2) between predicted and chronologic age, suggesting that the ageing process and pathologic vascular changes exhibit different features. Our models demonstrated the best performance reported to date and provided clues to the relationship and differences between ageing and the pathologic changes caused by underlying systemic vascular conditions. Systemic vascular diseases appear to affect the fundus differently from ageing.
Research in context. Evidence before this study: The human retina and optic disc change continuously with ageing, and they share physiologic and pathologic characteristics with the brain and systemic vasculature. Because retinal fundus images provide high-resolution in-vivo images of retinal vessels and parenchyma without any invasive procedure, they have been used to screen for ocular diseases and have attracted significant attention as a predictive biomarker for cerebral and systemic vascular diseases. Recently, deep neural networks have revolutionised the field of medical image analysis, including retinal fundus images, and have shown reliable results in predicting age, sex, and the presence of cardiovascular disease.
Added value of this study: This is the first study demonstrating how a CNN trained on retinal fundus images from normal participants measures the age of participants with underlying vascular conditions such as hypertension, DM, or a history of smoking, using a large database, SBRIA, which contains 412,026 retinal fundus images from 155,449 participants. Our results indicated that the model accurately predicted age in normal participants, whereas correlations (coefficient of determination, R2) in the test-sets with hypertension, DM, and smoking were relatively low. Additionally, a subgroup analysis indicated that MAEs increased and accuracies declined significantly in subgroups of participants over 60 years of age, both in normal participants and in participants with vascular-altered conditions. These results suggest that the pathologic retinal vascular changes occurring in systemic vascular diseases differ from the changes of the spontaneous ageing process, and that the ageing process observed in retinal fundus images may saturate at about 60 years of age.
Implications of all available evidence: Based on this study and previous reports, a CNN can accurately and reliably predict age and sex from retinal fundus images. The fact that retinal changes caused by ageing and by systemic vascular diseases occur differently motivates a deeper understanding of the retina. With further development, deep learning-based fundus image reading may become a more useful tool for screening and diagnosing systemic and ocular diseases.
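The two headline metrics can be reproduced on synthetic data with a few lines of NumPy; the sketch below also illustrates why R2 can drop in a subgroup while the MAE stays similar (a narrower true-age spread shrinks the variance to be explained). The data are simulated for illustration only and are not the authors' evaluation code.

import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
for lo, hi in [(20, 80), (55, 80)]:                   # wide vs. narrow age range
    age = rng.uniform(lo, hi, size=5000)
    pred = age + rng.normal(0.0, 4.0, size=5000)      # ~3-year mean absolute error
    print(f"ages {lo}-{hi}: MAE={mae(age, pred):.2f} y, R2={r_squared(age, pred):.2f}")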


Subjects
Diabetes Mellitus/epidemiology, Fundus Oculi, Hypertension/epidemiology, Retina/diagnostic imaging, Smoking/epidemiology, Adult, Aged, Algorithms, Area Under Curve, Diabetes Mellitus/pathology, Female, Humans, Hypertension/pathology, Image Processing, Computer-Assisted/methods, Male, Middle Aged, Neural Networks, Computer, Public Health Surveillance, ROC Curve, Republic of Korea, Retina/pathology
4.
PLoS One ; 10(12): e0143725, 2015.
Article in English | MEDLINE | ID: mdl-26630496

ABSTRACT

In this paper, we present a novel cascaded classification framework for automatic detection of individual and clustered microcalcifications (µCs). Our framework comprises three classification stages: i) a random forest (RF) classifier using simple features that capture the second-order local structure of individual µCs, which efficiently eliminates non-µC pixels in the target mammogram; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for the µC candidates retained by the RF stage, which automatically learns the detailed morphology of µC appearances for improved discriminative power; and iii) a detector that groups the individual µC detections into clusters using two different criteria. With the two-stage RF-DRBM classifier, we are able to distinguish µCs using explicitly computed features as well as implicitly learned features that further discriminate confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. The proposed method is shown to outperform comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for the detection of individual µCs, and the free-response receiver operating characteristic (FROC) curve for the detection of clustered µCs.
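The cascade's control flow can be sketched as below, with a generic second-stage classifier standing in for the DRBM and a simple distance-based grouping standing in for the two clustering criteria; the names, thresholds, and synthetic data are illustrative assumptions, not the published implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import DBSCAN

def detect_clusters(features, coords, stage1, stage2, radius=50.0, min_mcs=3):
    # Stage 1: fast RF screen removes most non-microcalcification candidates.
    keep = stage1.predict_proba(features)[:, 1] > 0.1
    # Stage 2: stronger classifier re-scores the surviving candidates.
    pos = np.zeros(len(features), dtype=bool)
    pos[keep] = stage2.predict(features[keep]) == 1
    if not pos.any():
        return []
    # Stage 3: group nearby individual detections into clusters.
    labels = DBSCAN(eps=radius, min_samples=min_mcs).fit_predict(coords[pos])
    return [coords[pos][labels == c] for c in set(labels) if c != -1]

# Toy usage on synthetic candidate features and pixel coordinates.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)
coords = rng.uniform(0, 500, size=(200, 2))
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)
print(len(detect_clusters(X, coords, rf, lr)))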


Subjects
Breast Neoplasms/diagnostic imaging, Calcinosis/diagnostic imaging, Mammography/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Calcinosis/classification, Databases, Factual, Female, Humans, Machine Learning, Mammography/statistics & numerical data, Radiographic Image Enhancement/methods, Seoul