Results 1 - 3 of 3
1.
Lancet Digit Health; 3(1): e29-e40, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33735066

ABSTRACT

BACKGROUND: In current approaches to vision screening in the community, a simple and efficient process is needed to identify individuals who should be referred to tertiary eye care centres for vision loss related to eye diseases. The emergence of deep learning technology offers new opportunities to revolutionise this clinical referral pathway. We aimed to assess the performance of a newly developed deep learning algorithm for detection of disease-related visual impairment.

METHODS: In this proof-of-concept study, using retinal fundus images from 15 175 eyes with complete data related to best-corrected visual acuity or pinhole visual acuity from the Singapore Epidemiology of Eye Diseases Study, we first developed a single-modality deep learning algorithm based on retinal photographs alone for detection of any disease-related visual impairment (defined as eyes from patients with major eye diseases and best-corrected visual acuity of <20/40), and moderate or worse disease-related visual impairment (eyes with disease and best-corrected visual acuity of <20/60). After development of the algorithm, we tested it internally, using a new set of 3803 eyes from the Singapore Epidemiology of Eye Diseases Study. We then tested it externally using three population-based studies (the Beijing Eye Study [6239 eyes], Central India Eye and Medical Study [6526 eyes], and Blue Mountains Eye Study [2002 eyes]), and two clinical studies (the Chinese University of Hong Kong's Sight Threatening Diabetic Retinopathy study [971 eyes] and the Outram Polyclinic Study [1225 eyes]). The algorithm's performance in each dataset was assessed on the basis of the area under the receiver operating characteristic curve (AUC).

FINDINGS: In the internal test dataset, the AUC for detection of any disease-related visual impairment was 94·2% (95% CI 93·0-95·3; sensitivity 90·7% [87·0-93·6]; specificity 86·8% [85·6-87·9]). The AUC for moderate or worse disease-related visual impairment was 93·9% (95% CI 92·2-95·6; sensitivity 94·6% [89·6-97·6]; specificity 81·3% [80·0-82·5]). Across the five external test datasets (16 993 eyes), the algorithm achieved AUCs ranging between 86·6% (83·4-89·7; sensitivity 87·5% [80·7-92·5]; specificity 70·0% [66·7-73·1]) and 93·6% (92·4-94·8; sensitivity 87·8% [84·1-90·9]; specificity 87·1% [86·2-88·0]) for any disease-related visual impairment, and the AUCs for moderate or worse disease-related visual impairment ranged between 85·9% (81·8-90·1; sensitivity 84·7% [73·0-92·8]; specificity 74·4% [71·4-77·2]) and 93·5% (91·7-95·3; sensitivity 90·3% [84·2-94·6]; specificity 84·2% [83·2-85·1]).

INTERPRETATION: This proof-of-concept study shows the potential of a single-modality, function-focused tool in identifying visual impairment related to major eye diseases, providing more timely and pinpointed referral of patients with disease-related visual impairment from the community to tertiary eye hospitals.

FUNDING: National Medical Research Council, Singapore.
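The reported figures are standard binary-classification metrics. As a minimal sketch (not the study's actual pipeline), the snippet below shows how AUC, sensitivity, and specificity could be computed for such a detector, assuming hypothetical per-eye labels `y_true` (visual impairment yes/no) and model probabilities `y_score`; the 0.5 threshold is an illustrative choice, not the study's operating point.

```python
# Sketch: AUC plus sensitivity/specificity at a chosen threshold for a
# binary visual-impairment detector. Inputs are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_vi_detector(y_true, y_score, threshold=0.5):
    """Return (AUC, sensitivity, specificity) for binary labels and scores."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    auc = roc_auc_score(y_true, y_score)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return auc, sensitivity, specificity

# Toy data for illustration only (not study data).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)
print(evaluate_vi_detector(y_true, y_score))
```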


Subjects
Algorithms , Deep Learning , Eye Diseases/complications , Vision Disorders/diagnosis , Vision Disorders/etiology , Aged , Area Under Curve , Asian People , Female , Humans , Male , Middle Aged , Photography/methods , Proof of Concept Study , ROC Curve , Sensitivity and Specificity , Singapore/epidemiology
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 4016-4019, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946752

ABSTRACT

Cardiac segmentation is the first and most important step in assessing cardiac diseases, yet it remains challenging owing to the complexity of the myocardial boundary. In this work, we investigate deep learning approaches for fully automatic segmentation of the left ventricular (LV) endocardium in cardiac magnetic resonance (CMR) images. Deep convolutional neural network architectures, specifically GoogLeNet and U-Net, are modified and deployed to extract features and then classify each pixel as either endocardium or background. Because adjacent frames of a given slice are imaged over a short time period across a cardiac cycle, the LV endocardium exhibits strong temporal correlation. To exploit this temporal information about heart motion, we construct multi-channel cardiac images by combining adjacent frames with the current frame and use them as inputs to the deep learning models, allowing the models to learn spatial and temporal information automatically. The performance of the constructed networks is evaluated with the Dice metric by comparing the segmented areas against the manually segmented ground truth. The experiments show that the multi-channel approaches converge more rapidly and achieve higher segmentation accuracy than the single-channel approach.
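As a hedged sketch of the multi-channel idea described above (not the authors' exact configuration), the snippet below stacks the temporally adjacent frames of a cine slice with the current frame as extra input channels and scores a predicted mask against ground truth with the Dice metric; the array shapes and the three-channel choice are illustrative assumptions.

```python
# Sketch: build a 3-channel input from adjacent cine frames and compute Dice.
import numpy as np

def make_multichannel_input(cine, t):
    """Stack frames t-1, t, t+1 of one slice (shape [T, H, W]) into [3, H, W]."""
    T = cine.shape[0]
    prev_f = cine[(t - 1) % T]   # the cardiac cycle is periodic, so wrap around
    next_f = cine[(t + 1) % T]
    return np.stack([prev_f, cine[t], next_f], axis=0)

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy usage: 20 frames of a 128x128 slice, dummy masks (illustration only).
cine = np.random.rand(20, 128, 128).astype(np.float32)
x = make_multichannel_input(cine, t=5)            # network input, shape (3, 128, 128)
pred = np.zeros((128, 128), dtype=np.uint8); pred[40:80, 40:80] = 1
truth = np.zeros((128, 128), dtype=np.uint8); truth[45:85, 45:85] = 1
print(x.shape, round(dice_coefficient(pred, truth), 3))
```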


Subjects
Deep Learning , Endocardium/diagnostic imaging , Heart Ventricles/diagnostic imaging , Image Processing, Computer-Assisted , Algorithms , Humans , Neural Networks, Computer
3.
J Nucl Med; 56(8): 1285-91, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26135111

ABSTRACT

This study aimed to investigate image quality for a comprehensive set of isotopes (¹⁸F, ¹¹C, ⁸⁹Zr, ¹²⁴I, ⁶⁸Ga, and ⁹⁰Y) on two clinical scanners: a PET/CT scanner and a PET/MR scanner.

METHODS: Image quality and spatial resolution were tested according to National Electrical Manufacturers Association standard NU 2-2007. An image-quality phantom was used to measure contrast recovery, residual bias in a cold area, and background variability. Reconstruction methods available on the two scanners were compared, including point-spread-function correction for both scanners and time of flight for the PET/CT scanner. Spatial resolution was measured using point sources and filtered backprojection reconstruction.

RESULTS: With the exception of ⁹⁰Y, only small differences were seen in hot-sphere contrast recovery across the isotopes. Cold-sphere contrast recovery was similar across isotopes for all reconstructions, with an improvement seen with time of flight on the PET/CT scanner. The ⁹⁰Y scans, which had lower count statistics, yielded substantially lower contrast recovery than the other isotopes. Measured spatial resolution did not differ between isotopes, except for PET/MR axial spatial resolution, which was significantly higher for ¹²⁴I and ⁶⁸Ga.

CONCLUSION: Overall, both scanners produced good images with ¹⁸F, ¹¹C, ⁸⁹Zr, ¹²⁴I, ⁶⁸Ga, and ⁹⁰Y.
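For context, the NEMA NU 2-2007 image-quality metrics named above reduce to simple ratios of mean ROI values. The sketch below shows those calculations under stated assumptions: the 4:1 sphere-to-background activity ratio is a typical value, not taken from the paper, and the input numbers are placeholders rather than measured data.

```python
# Sketch: NEMA NU 2-2007 style image-quality metrics from mean ROI counts.
import numpy as np

def hot_contrast_recovery(mean_sphere, mean_background, activity_ratio=4.0):
    """Percent contrast recovery for a hot sphere, given the sphere:background activity ratio."""
    return 100.0 * (mean_sphere / mean_background - 1.0) / (activity_ratio - 1.0)

def cold_contrast_recovery(mean_sphere, mean_background):
    """Percent contrast recovery for a cold sphere (ideal value: 100%)."""
    return 100.0 * (1.0 - mean_sphere / mean_background)

def background_variability(background_roi_means):
    """Percent background variability across the background ROIs."""
    rois = np.asarray(background_roi_means, dtype=float)
    return 100.0 * rois.std(ddof=1) / rois.mean()

# Toy numbers for illustration only.
print(hot_contrast_recovery(mean_sphere=3.1, mean_background=1.0))   # ~70%
print(cold_contrast_recovery(mean_sphere=0.2, mean_background=1.0))  # 80%
print(background_variability([0.98, 1.02, 1.01, 0.99, 1.00]))
```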


Subjects
Fluorodeoxyglucose F18/chemistry , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Neoplasms/diagnostic imaging , Positron-Emission Tomography/instrumentation , Positron-Emission Tomography/methods , Tomography, X-Ray Computed/methods , Carbon Radioisotopes/chemistry , Contrast Media/chemistry , Fluorine Radioisotopes/chemistry , Gallium Radioisotopes/chemistry , Humans , Image Processing, Computer-Assisted , Iodine Radioisotopes/chemistry , Phantoms, Imaging , Radioisotopes/chemistry , Radiopharmaceuticals/chemistry , Yttrium Radioisotopes/chemistry , Zirconium/chemistry