Results 1 - 3 of 3
1.
J Pediatr Urol ; 20(2): 257-264, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37980211

ABSTRACT

INTRODUCTION: The radiographic grading of voiding cystourethrogram (VCUG) images is often used to determine the clinical course and appropriate treatment in patients with vesicoureteral reflux (VUR). However, image-based evaluation of VUR remains highly subjective, so we developed a supervised machine learning model to grade VCUG data automatically and objectively. STUDY DESIGN: A total of 113 VCUG images were gathered from public sources to compile the dataset for this study. For each image, VUR severity was graded by four pediatric radiologists and three pediatric urologists (low severity scored 1-3; high severity 4-5). Ground truth for each image was assigned based on the grade diagnosed by a majority of the expert assessors. Nine features were extracted from each VCUG image, then six machine learning models were trained, validated, and tested using leave-one-out cross-validation. All features were compared and contrasted, with the highest-ranked then being used to train the final models. RESULTS: The F1-score is a metric often used to indicate the performance accuracy of machine learning models. When using the highest-ranked VCUG image features, F1-scores for the support vector machine (SVM) and multi-layer perceptron (MLP) classifiers were 90.27% and 91.14%, respectively, indicating a high level of accuracy. When using all features combined, F1-scores were 89.37% for SVM and 90.27% for MLP. DISCUSSION: These findings indicate that a distorted pattern of renal calyces is an accurate predictor of high-grade VUR. Machine learning protocols can be enhanced in the future to improve objective grading of VUR.
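The evaluation protocol described in the abstract can be sketched as follows: leave-one-out cross-validation of SVM and MLP classifiers on tabular image features, scored with F1. This is only a minimal illustration of the setup; the synthetic feature matrix and labels below are stand-ins, since the paper's nine VCUG features and expert grades are not available in this listing.

```python
# Sketch of leave-one-out evaluation of SVM and MLP classifiers,
# assuming a 113 x 9 feature matrix as described in the abstract.
# The data here is synthetic, not the study's actual features.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(113, 9))  # 113 images x 9 extracted features
# Binary label: low severity (grades 1-3) = 0, high severity (4-5) = 1
y = (X[:, 0] + rng.normal(scale=0.5, size=113) > 0).astype(int)

loo = LeaveOneOut()
for name, clf in [("SVM", SVC()),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=1000, random_state=0))]:
    # Each of the 113 samples is predicted by a model trained on the other 112
    pred = cross_val_predict(clf, X, y, cv=loo)
    print(f"{name} F1: {f1_score(y, pred):.4f}")
```

Leave-one-out is a natural choice for a dataset of this size, since it maximizes the training data available in each fold at the cost of fitting the model once per sample.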

3.
Front Pediatr ; 11: 1149318, 2023.
Article in English | MEDLINE | ID: mdl-37138577

ABSTRACT

Objective: To develop a reliable, automated, deep learning-based method for accurate measurement of penile curvature (PC) using 2-dimensional images. Materials and methods: A set of nine 3D-printed models was used to generate a batch of 913 images of penile curvature (PC) with varying configurations (curvature range 18° to 86°). The penile region was initially localized and cropped using a YOLOv5 model, after which the shaft area was extracted using a UNet-based segmentation model. The penile shaft was then divided into three distinct predefined regions: the distal zone, curvature zone, and proximal zone. To measure PC, we identified four distinct locations on the shaft that reflected the mid-axes of the proximal and distal segments, then trained an HRNet model to predict these landmarks and calculate the curvature angle in both the 3D-printed models and masked segmented images derived from them. Finally, the optimized HRNet model was applied to quantify PC in medical images of real human patients, and the accuracy of this novel method was determined. Results: We obtained a mean absolute error (MAE) of angle measurement <5° for both penile model images and their derivative masks. For real patient images, AI predictions deviated from assessment by a clinical expert by between 1.7° (for cases of ∼30° PC) and approximately 6° (for cases of 70° PC). Discussion: This study demonstrates a novel approach to the automated, accurate measurement of PC that could significantly improve patient assessment by surgeons and hypospadiology researchers. This method may overcome current limitations encountered when applying conventional methods of measuring arc-type PC.
