1.
Ultrasound Med Biol; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38845332

ABSTRACT

OBJECTIVE: To develop an algorithm for the automated localization and measurement of levator hiatus (LH) dimensions (AI-LH) using 3-D pelvic floor ultrasound.

METHODS: AI-LH combines a 3-D plane regression model with a 2-D segmentation model, first localizing the plane of minimal LH dimensions (C-plane) automatically and then measuring the hiatal area (HA) on maximum Valsalva on the rendered LH images rather than on the C-plane. The dataset comprised 600 volumetric scans. In the testing dataset (n = 240), we compared the AI-LH versus sonographer difference (ASD) with the inter-sonographer difference (IESD). The assessment encompassed the mean absolute error (MAE) for the C-plane angle and center-point distance, the Dice coefficient, MAE, and intra-class correlation coefficient (ICC) for HA, and the time consumption.

RESULTS: For the C-plane, the ASD had an MAE of 4.81 ± 2.47° in angle and 1.92 ± 1.54 mm in center-point distance. AI-LH achieved a mean Dice coefficient of 0.93 for LH segmentation. The MAE on HA of the ASD (1.44 ± 1.12 mm²) was lower than that of the IESD (1.63 ± 1.58 mm²), and the ICC on HA of the ASD (0.964) was higher than that of the IESD (0.949). The average time costs of AI-LH and manual measurement were 2.00 ± 0.22 s and 59.60 ± 2.63 s, respectively (t = 18.87, p < 0.01).

CONCLUSION: AI-LH is accurate, reliable, and robust in the localization and measurement of LH dimensions; it shortens measurement time, simplifies the operation process, and has good value in clinical applications.
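The Dice coefficient reported above measures overlap between a predicted and a reference binary mask. A minimal sketch of its standard definition (the function name and toy masks are illustrative, not from the paper):

```python
# Dice coefficient between two binary masks, the overlap metric used
# to score segmentation quality. Toy masks below are illustrative.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 squares overlapping in a 3x3 region.
a = np.zeros((8, 8), dtype=np.uint8); a[2:6, 2:6] = 1  # 16 pixels
b = np.zeros((8, 8), dtype=np.uint8); b[3:7, 3:7] = 1  # 16 pixels, 9 shared
print(round(dice_coefficient(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

A Dice of 0.93, as reported for the LH segmentation, corresponds to near-complete overlap with the sonographer's reference contour.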

2.
Med Image Anal; 92: 103061, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38086235

ABSTRACT

The Segment Anything Model (SAM) is the first foundation model for general image segmentation and has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2-D images, and 6033K masks. We comprehensively analyzed different models and strategies on this COSMOS 1050K dataset. Our main findings are as follows:
(1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or failed entirely in other situations.
(2) SAM with the large ViT-H backbone showed better overall performance than with the small ViT-B.
(3) SAM performed better with manual hints, especially boxes, than in Everything mode.
(4) SAM could assist human annotation, yielding high labeling quality in less time.
(5) SAM was sensitive to randomness in the center-point and tight-box prompts, which can cause a serious performance drop.
(6) SAM outperformed interactive methods given one or a few points, but was outpaced as the number of points increased.
(7) SAM's performance correlated with several factors, including boundary complexity and intensity differences.
(8) Fine-tuning SAM on specific medical tasks improved its average Dice performance by 4.39% for ViT-B and 6.68% for ViT-H.
Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope this comprehensive report helps researchers explore the potential of SAM applications in MIS and guides how to appropriately use and develop SAM.
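A benchmark like COSMOS 1050K reports Dice averaged per object-modality target and then across targets, so that heavily represented targets do not dominate the overall score. A minimal sketch of that aggregation, with invented target names and scores (nothing here is taken from the paper's actual results):

```python
# Per-target mean Dice aggregation for a multi-target segmentation
# benchmark. Target names and scores below are invented for illustration.
from collections import defaultdict
from statistics import mean

# Each record: (object-modality target, Dice score of one image).
records = [
    ("liver-CT", 0.92), ("liver-CT", 0.88),
    ("lung-XRay", 0.75), ("lung-XRay", 0.81),
    ("tumor-MRI", 0.40),
]

per_target = defaultdict(list)
for target, score in records:
    per_target[target].append(score)

# Mean Dice per target, then the unweighted mean across targets.
target_means = {t: mean(scores) for t, scores in per_target.items()}
overall = mean(target_means.values())
print(target_means)
print(round(overall, 4))  # (0.90 + 0.78 + 0.40) / 3 = 0.6933
```

The unweighted per-target average is one common convention; averaging over all images instead would weight targets by their image counts.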


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods