1.
Article in English | MEDLINE | ID: mdl-38957573

ABSTRACT

Medical image auto-segmentation techniques are fundamental and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentations, auto-segmentations are expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice Coefficient (DC) focuses on the truly segmented areas, while the Hausdorff Distance (HD) only measures the maximal distance between the auto-segmentation boundary and the ground truth boundary. Boundary length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish correctly predicted boundary pixels from incorrect ones. It is uncertain whether these metrics can reliably indicate the required manual mending effort when applied in segmentation research. Therefore, in this paper, the potential of the above four metrics, as well as of a novel metric called the Mendability Index (MI), to predict human correction effort is studied with linear and support vector regression models. A total of 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground truth segmentations, are utilized to train and test the prediction models. The five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with prediction errors varying across objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.
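For readers who want to see how the two classical metrics contrasted above behave in practice, the following is a minimal sketch (not taken from the paper) of DC and HD computed on binary masks with NumPy and SciPy; voxel spacing and edge cases such as empty masks are ignored for brevity.

```python
# Minimal sketch of the two classical metrics discussed above; assumes both
# masks are non-empty binary arrays of the same shape and ignores voxel spacing.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """2*|P∩G| / (|P| + |G|); 1.0 means perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in voxels) between the two mask boundaries."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_border = pred & ~binary_erosion(pred)
    gt_border = gt & ~binary_erosion(gt)
    # Distance from every boundary voxel of one mask to the nearest boundary
    # voxel of the other, via Euclidean distance transforms.
    dist_to_gt = distance_transform_edt(~gt_border)
    dist_to_pred = distance_transform_edt(~pred_border)
    return max(dist_to_gt[pred_border].max(), dist_to_pred[gt_border].max())
```

Both functions reward geometric agreement with the ground truth, which, as the abstract notes, does not necessarily reflect how much manual mending a reader would still have to do.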

2.
medRxiv ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38947045

ABSTRACT

Auto-segmentation is one of the critical and foundational steps for medical image analysis. The quality of auto-segmentation techniques influences the efficiency of precision radiology and radiation oncology, since high-quality auto-segmentations usually require only limited manual correction. Segmentation metrics are necessary and important for evaluating auto-segmentation results and guiding the development of auto-segmentation techniques. Currently widely applied segmentation metrics usually compare the auto-segmentation with the ground truth in terms of the overlapping area (e.g., Dice Coefficient (DC)) or the distance between boundaries (e.g., Hausdorff Distance (HD)). However, these metrics may not well indicate the manual mending effort required when the auto-segmentation results are reviewed in clinical practice. In this article, we study different segmentation metrics to explore how to evaluate auto-segmentations in a way that reflects clinical demands. The time experts spend correcting auto-segmentations is recorded to indicate the required mending effort. Five well-defined metrics, the overlapping area-based metric DC, the segmentation boundary distance-based metric HD, the segmentation boundary length-based metrics surface DC (surDC) and Added Path Length (APL), and a newly proposed hybrid metric, the Mendability Index (MI), are discussed in a correlation analysis experiment and a regression experiment. In addition to these explicitly defined metrics, we also preliminarily explore the feasibility of using deep learning models, which take segmentation masks and the original images as input, to predict the mending effort. Experiments are conducted using datasets of 7 objects from 3 institutions, which contain the original computed tomography (CT) images, the ground truth segmentations, the auto-segmentations, the corrected segmentations, and the recorded mending time. According to the correlation analysis and regression experiments for the five well-defined metrics, the variant of MI shows the best performance in indicating the mending effort for sparse objects, while the variant of HD works best when assessing the mending effort for non-sparse objects. Moreover, the deep learning models predict the effort required to mend auto-segmentations well, even without ground truth segmentations, demonstrating the potential of a novel and convenient way to evaluate and boost auto-segmentation techniques.
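As a concrete illustration of the regression experiment described above, the sketch below fits linear and support vector regression models to predict recorded mending time from per-case metric values under five-fold cross-validation. The arrays are random placeholders and the feature column names are assumptions; none of the study's actual data or settings are reproduced here.

```python
# Hedged sketch of a metrics-to-mending-time regression experiment.
# All data below are random placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((265, 5))      # hypothetical columns: [DC, HD, surDC, APL, MI]
y = 600.0 * rng.random(265)   # hypothetical expert mending time in seconds

cv = KFold(n_splits=5, shuffle=True, random_state=0)
models = [
    ("linear", LinearRegression()),
    ("svr", make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))),
]
for name, model in models:
    mae = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: 5-fold mean absolute error = {mae:.1f} s")
```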

3.
Med Biol Eng Comput ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38898202

ABSTRACT

Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes is a pivotal challenge in medical image segmentation, driven by the demands of clinical applications such as disease localization and quantification. In this study, a novel segmentation model is specifically designed for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory (LSTM) network. U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module, inserted into the model's bottom layer, which obtains feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function that consists of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and ARIA.
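The abstract only names the three terms of the hybrid loss, so the sketch below shows the weighted-sum structure with simplified stand-ins: binary cross-entropy for the cross-entropy term, a distance-map surrogate for the boundary term, and a neighbourhood-gap proxy for the connectivity term. These stand-ins are illustrative assumptions, not the paper's actual formulations.

```python
# Hedged sketch of a weighted hybrid loss (cross-entropy + boundary + connectivity).
# Only the overall composition follows the abstract; the boundary and connectivity
# terms are simplified stand-ins, not the paper's definitions.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, signed_dist, w_ce=1.0, w_bd=0.5, w_conn=0.5):
    """logits, target: (N, 1, H, W) float tensors; signed_dist: precomputed
    signed distance map of the ground-truth boundary (negative inside)."""
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    # Boundary term: a distance-map surrogate that penalizes probability mass
    # placed far outside the true boundary.
    bd = (prob * signed_dist).mean()
    # Connectivity term (illustrative proxy): penalize pixels on true vessels
    # whose probability falls below the maximum over their 3x3 neighbourhood,
    # i.e. discourage small gaps along the vessel.
    neigh = F.max_pool2d(prob, kernel_size=3, stride=1, padding=1)
    conn = ((neigh - prob) * target).mean()
    return w_ce * ce + w_bd * bd + w_conn * conn
```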

4.
Article in English | MEDLINE | ID: mdl-37256076

ABSTRACT

Auto-segmentation of medical images is critical for boosting the efficiency of precision radiology and radiation oncology, thereby improving the quality of care for both health care practitioners and patients. An appropriate metric for evaluating auto-segmentation results is one of the significant tools necessary for building an effective, robust, and practical auto-segmentation technique. However, currently widely used metrics, which compare the predicted segmentation with the ground truth, usually focus on the overlapping area (Dice Coefficient) or the most severe shift of the boundary (Hausdorff Distance), which seems inconsistent with human reader behavior. Human readers usually verify and correct auto-segmentation contours and then apply the modified segmentation masks to guide clinical application in diagnosis or treatment. A metric called the Mendability Index (MI) is proposed to better estimate the effort required to manually edit the auto-segmentations of objects of interest in medical images so that the segmentations become acceptable for the application at hand. Considering that humans handle different errors differently, MI classifies auto-segmentation errors into three types with different quantitative behaviors. The fluctuation of subjective human delineation is also considered in MI. A total of 505 3D computed tomography (CT) auto-segmentations covering 6 objects from 3 institutions, with the corresponding ground truth and the manual mending time recorded by experts, are used to validate the performance of the proposed MI. The correlation between editing time and the segmentation metrics demonstrates that MI is generally more suitable for indicating mending effort than the Dice Coefficient or the Hausdorff Distance, suggesting that MI may be an effective metric for quantifying the clinical value of auto-segmentations.
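The correlation analysis referred to above can be illustrated with a short sketch that computes the Spearman rank correlation between recorded editing time and each candidate metric; the arrays below are random placeholders rather than the study's measurements.

```python
# Hedged sketch of a rank-correlation analysis between editing time and metrics.
# All values are random placeholders, not the study's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
editing_time = 600.0 * rng.random(505)                     # hypothetical, seconds
metrics = {m: rng.random(505) for m in ("DC", "HD", "surDC", "APL", "MI")}

for name, values in metrics.items():
    rho, p = spearmanr(values, editing_time)
    print(f"{name}: Spearman rho = {rho:+.2f} (p = {p:.3f})")
```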
