Results 1 - 4 of 4
1.
J Am Med Inform Assoc ; 30(12): 1904-1914, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37659103

ABSTRACT

OBJECTIVE: To develop a deep learning algorithm (DLA) to detect diabetic kidney disease (DKD) from retinal photographs of patients with diabetes, and to evaluate its performance in multiethnic populations.

MATERIALS AND METHODS: We trained 3 models: (1) an image-only model; (2) a risk factor (RF)-only multivariable logistic regression (LR) model adjusted for age, sex, ethnicity, diabetes duration, HbA1c, and systolic blood pressure; and (3) a hybrid multivariable LR model combining RF data with standardized z-scores from the image-only model. Data from the Singapore Integrated Diabetic Retinopathy Program (SiDRP) were used to develop (6066 participants with diabetes, primary-care-based) and internally validate (5-fold cross-validation) the models. External testing was performed on 2 independent Singapore datasets: (1) the Singapore Epidemiology of Eye Diseases (SEED) study (1885 participants with diabetes, population-based); and (2) the Singapore Macroangiopathy and Microvascular Reactivity in Type 2 Diabetes (SMART2D) study (439 participants with diabetes, cross-sectional). Supplementary external testing was performed on 2 Caucasian cohorts: (3) the Australian Eye and Heart Study (AHES) (460 participants with diabetes, cross-sectional) and (4) the Northern Ireland Cohort for the Longitudinal Study of Ageing (NICOLA) (265 participants with diabetes, cross-sectional).

RESULTS: In SiDRP validation, the area under the curve (AUC) was 0.826 (95% CI 0.818-0.833) for the image-only model, 0.847 (0.840-0.854) for RF-only, and 0.866 (0.859-0.872) for hybrid. In SEED, estimates were 0.764 (0.743-0.785) for image-only, 0.802 (0.783-0.822) for RF-only, and 0.828 (0.810-0.846) for hybrid. In SMART2D, the AUC was 0.726 (0.686-0.765) for image-only, 0.701 (0.660-0.741) for RF-only, and 0.761 (0.724-0.797) for hybrid.

DISCUSSION AND CONCLUSION: A DLA using retinal images has potential as a screening adjunct for DKD among individuals with diabetes. This could add value to existing DLA systems that diagnose diabetic retinopathy from retinal images, facilitating primary screening for DKD.


Subjects
Deep Learning , Diabetes Mellitus, Type 2 , Diabetic Nephropathies , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Diabetes Mellitus, Type 2/complications , Cross-Sectional Studies , Longitudinal Studies , Australia , Algorithms
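The hybrid model described in the abstract above (a risk-factor logistic regression augmented with the image-only model's standardized z-score) can be sketched as follows. This is a minimal illustration, not the study's model: the coefficients and variable scaling are hypothetical placeholders.

```python
import math

def hybrid_dkd_risk(z_image, age, hba1c, sbp, duration, coef=None):
    """Hypothetical hybrid logistic model: combines an image-only
    deep-learning score (standardized z-score) with clinical risk
    factors. All coefficients below are illustrative placeholders,
    not values reported in the paper."""
    if coef is None:
        coef = {"intercept": -4.0, "z_image": 0.8, "age": 0.03,
                "hba1c": 0.2, "sbp": 0.01, "duration": 0.05}
    logit = (coef["intercept"]
             + coef["z_image"] * z_image   # standardized image score
             + coef["age"] * age           # years
             + coef["hba1c"] * hba1c       # %
             + coef["sbp"] * sbp           # mmHg
             + coef["duration"] * duration)  # diabetes duration, years
    return 1.0 / (1.0 + math.exp(-logit))  # predicted DKD probability
```

The design point is that the image model is reduced to a single standardized score, so it enters the regression on the same footing as the clinical risk factors.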
2.
NPJ Digit Med ; 3: 40, 2020.
Article in English | MEDLINE | ID: mdl-32219181

ABSTRACT

Deep learning (DL) has been shown to be effective in developing diabetic retinopathy (DR) algorithms, possibly tackling the financial and manpower challenges hindering implementation of DR screening. However, our systematic review of the literature reveals that few studies have examined the impact of different factors on these DL algorithms, factors that are important for clinical deployment in real-world settings. Using 455,491 retinal images, we evaluated two technical and three image-related factors in the detection of referable DR. For technical factors, we evaluated the performance of four DL models (VGGNet, ResNet, DenseNet, Ensemble) and two computational frameworks (Caffe, TensorFlow); for image-related factors, we evaluated image compression levels (reducing image size to 350, 300, 250, 200, and 150 KB), number of fields (7-field, 2-field, 1-field), and media clarity (pseudophakic vs phakic). In detecting referable DR, the four DL models showed comparable diagnostic performance (AUC 0.936-0.944). For the VGGNet model, the two computational frameworks yielded similar AUCs (0.936). DL performance dropped when image size decreased below 250 KB (AUC 0.936 vs 0.900, p < 0.001). DL performance improved with an increased number of fields (dataset 1: 2-field vs 1-field, AUC 0.936 vs 0.908, p < 0.001; dataset 2: 7-field vs 2-field vs 1-field, AUC 0.949 vs 0.911 vs 0.895). DL performed better in pseudophakic than in phakic eyes (AUC 0.918 vs 0.833, p < 0.001). Image-related factors play a more significant role than technical factors in determining diagnostic performance, suggesting the importance of robust training and testing datasets for DL training and deployment in real-world settings.
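The AUC values compared throughout the abstract above are rank-based statistics: the probability that a randomly chosen positive image scores higher than a randomly chosen negative one. A minimal sketch of that estimate (not the study's evaluation code, and without the DeLong-style significance testing the p-values would require):

```python
def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC estimate: the fraction of
    positive/negative pairs where the positive scores higher,
    with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Comparing two model variants then reduces to computing this statistic on the same test labels with each variant's scores.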

3.
Lancet Digit Health ; 1(1): e35-e44, 2019 05.
Article in English | MEDLINE | ID: mdl-33323239

ABSTRACT

BACKGROUND: Radical measures are required to identify and reduce blindness due to diabetes in order to achieve the Sustainable Development Goals by 2030. We therefore evaluated the accuracy of an artificial intelligence (AI) model using deep learning in a population-based diabetic retinopathy screening programme in Zambia, a lower-middle-income country.

METHODS: We adopted an ensemble AI model consisting of a combination of two convolutional neural networks (an adapted VGGNet architecture and a residual neural network architecture) for classifying retinal colour fundus images. We trained our model on 76 370 retinal fundus images from 13 099 patients with diabetes who had participated in the Singapore Integrated Diabetic Retinopathy Program between 2010 and 2013, as published previously. In this clinical validation study, we included all patients with a diagnosis of diabetes who attended a mobile screening unit in five urban centres in the Copperbelt province of Zambia from Feb 1 to June 31, 2012. In our model, referable diabetic retinopathy was defined as moderate non-proliferative diabetic retinopathy or worse, diabetic macular oedema, and ungradable images. Vision-threatening diabetic retinopathy comprised severe non-proliferative and proliferative diabetic retinopathy. We calculated the area under the curve (AUC), sensitivity, and specificity for referable diabetic retinopathy, and the sensitivities for vision-threatening diabetic retinopathy and diabetic macular oedema, compared with grading by retinal specialists. We did a multivariate analysis of systemic risk factors and referable diabetic retinopathy for both AI and human graders.

FINDINGS: A total of 1574 Zambians with diabetes were prospectively recruited, contributing 4504 retinal fundus images from 3093 eyes. Referable diabetic retinopathy was found in 697 (22·5%) eyes, vision-threatening diabetic retinopathy in 171 (5·5%) eyes, and diabetic macular oedema in 249 (8·1%) eyes. The AUC of the AI system for referable diabetic retinopathy was 0·973 (95% CI 0·969-0·978), with corresponding sensitivity of 92·25% (90·10-94·12) and specificity of 89·04% (87·85-90·28). Sensitivity for vision-threatening diabetic retinopathy was 99·42% (99·15-99·68) and for diabetic macular oedema was 97·19% (96·61-97·77). The AI model and human graders showed similar outcomes in detecting referable diabetic retinopathy prevalence and in the associated systemic risk factors. Both the AI model and human graders identified longer duration of diabetes, higher glycated haemoglobin, and higher systolic blood pressure as risk factors associated with referable diabetic retinopathy.

INTERPRETATION: The AI system shows clinically acceptable performance in detecting referable diabetic retinopathy, vision-threatening diabetic retinopathy, and diabetic macular oedema in population-based diabetic retinopathy screening. This shows the potential for application and adoption of such AI technology in an under-resourced African population to reduce the incidence of preventable blindness, even when the model is trained on a different population.

FUNDING: National Medical Research Council Health Service Research Grant, Large Collaborative Grant, Ministry of Health, Singapore; the SingHealth Foundation; and the Tanoto Foundation.


Subjects
Artificial Intelligence , Deep Learning , Diabetic Retinopathy/diagnosis , Mass Screening , Adult , Area Under Curve , Female , Humans , Male , Neural Networks, Computer , Photography , Prospective Studies , Retina/physiopathology , Sensitivity and Specificity , Zambia
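The sensitivity and specificity figures reported in the abstract above follow the standard definitions against the reference grading (here, retinal specialists). A minimal sketch; the counts in the usage example are invented for illustration, not taken from the study:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix
    against a reference standard:
      sensitivity = TP / (TP + FN)  -- referable eyes correctly flagged
      specificity = TN / (TN + FP)  -- non-referable eyes correctly passed
    """
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only (not the study's data):
sens, spec = sensitivity_specificity(tp=90, fp=10, tn=90, fn=10)
```

Confidence intervals around such proportions (as quoted in the abstract) would typically come from a binomial method such as Wilson or Clopper-Pearson, which this sketch omits.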
4.
Biotechnol Prog ; 22(3): 763-9, 2006.
Article in English | MEDLINE | ID: mdl-16739960

ABSTRACT

The capsid of infectious bursal disease virus (IBDV), 60-65 nm in size, is formed by an initial processing of the polyprotein (pVP2-VP4-VP3) by VP4, subsequent assembly of pVP2 and VP3, and the maturation of VP2. In Sf9 cells, processing of the expressed polyprotein was arrested at the stage of VP2 maturation, leading to limited production of capsids, i.e., IBDV-like particles (VLPs). In the present study, another insect cell line, High-Five (Hi-5) cells, was demonstrated to produce VLPs efficiently. In this system, the polyprotein was processed to pVP2 and VP3, and pVP2 was further processed to the mature form of VP2. Consequently, Hi-5 cells outperform Sf9 cells in polyprotein processing and VLP formation. In addition to the processing of pVP2, VP3 was also degraded. With insufficient intact VP3 available for VLP formation, the excess VP2 formed subviral particles (SVPs) about 25 nm in size. The ratio of VLPs to SVPs depends on the multiplicity of infection (MOI) used, and an optimal MOI was found for the production of both particles. VLPs were separated from SVPs by a combination of ultracentrifugation and gel-filtration chromatography, and a large number of purified particles of both types were obtained. In conclusion, the insect cell line and MOI were optimized for VLP production, and pure VLPs with morphology similar to that of the wild-type virus can be prepared effectively. The efficient production and purification of VLPs benefits not only the development of an antiviral vaccine against IBDV but also the understanding of the structure of this economically important avian virus.


Subjects
Capsid/metabolism , Infectious bursal disease virus/chemistry , Polyproteins/metabolism , Viral Structural Proteins/biosynthesis , Animals , Cell Culture Techniques/methods , Cell Line , Cells, Cultured , Infectious bursal disease virus/metabolism , Polyproteins/biosynthesis , Recombinant Proteins/biosynthesis