ABSTRACT
Objective: To investigate the value of combined intravoxel incoherent motion (IVIM) and enhanced T2*-weighted angiography (ESWAN) imaging for preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC). Materials and methods: Seventy-six patients with pathologically confirmed HCC were retrospectively enrolled and divided into an MVI-positive group (n=26) and an MVI-negative group (n=50). Conventional MRI, IVIM, and ESWAN sequences were performed. Three regions of interest (ROIs) were placed on the maximum axial slice of the lesion on the D, D*, and f maps derived from the IVIM sequence and on the R2* map derived from the ESWAN sequence; the intratumoral susceptibility signal (ITSS) was also automatically measured from the ESWAN phase map. Receiver operating characteristic (ROC) curves were drawn to evaluate the ability to predict MVI. Univariate and multivariate logistic regression were used to screen independent risk predictors among the clinical and imaging variables. DeLong's test was used to compare differences between areas under the curve (AUCs). Results: The D and D* values of the MVI-negative group were significantly higher than those of the MVI-positive group (P=0.038 and P=0.023): 0.892×10⁻³ (0.760×10⁻³, 1.303×10⁻³) mm²/s and 0.055 (0.025, 0.100) mm²/s in the MVI-negative group versus 0.591×10⁻³ (0.372×10⁻³, 0.824×10⁻³) mm²/s and 0.028 (0.006, 0.050) mm²/s in the MVI-positive group, respectively. The R2* and ITSS values of the MVI-negative group were significantly lower than those of the MVI-positive group (P=0.034 and P=0.005): 29.290 (23.117, 35.228) Hz and 0.146 (0.086, 0.236) in the MVI-negative group versus 43.696 (34.914, 58.083) Hz and 0.199 (0.155, 0.245) in the MVI-positive group, respectively. After univariate and multivariate analyses, only AFP (odds ratio, 0.183; 95% CI, 0.041-0.823; P=0.027) was an independent risk factor for predicting MVI status. The AUCs of AFP, D, D*, R2*, and ITSS for predicting MVI were 0.652, 0.739, 0.707, 0.798, and 0.657, respectively. The AUCs of IVIM (D+D*), ESWAN (R2*+ITSS), and their combination (D+D*+R2*+ITSS) were 0.772, 0.800, and 0.855, respectively. When IVIM was combined with ESWAN, performance improved to a sensitivity of 73.1% and a specificity of 92.0% (cut-off value: 0.502), and the AUC was significantly higher than that of AFP (P=0.001), D (P=0.038), D* (P=0.023), R2* (P=0.034), and ITSS (P=0.005). Conclusion: The IVIM and ESWAN parameters showed good efficacy in predicting MVI in patients with HCC, and their combination may be useful for noninvasive prediction of MVI before surgery.
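For readers unfamiliar with how the D, D*, and f maps are obtained, the sketch below fits the standard bi-exponential IVIM signal model to a single voxel's multi-b-value signal. The b-values, noise level, and fitting bounds are illustrative assumptions, not the acquisition protocol of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard bi-exponential IVIM model:
#   S(b)/S0 = f * exp(-b * D*) + (1 - f) * exp(-b * D)
def ivim_model(b, f, d_star, d):
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

# Illustrative b-values (s/mm^2); the study's scheme may differ.
b_values = np.array([0, 20, 50, 100, 200, 400, 600, 800], dtype=float)

# Synthetic voxel signal with parameters in the range reported above.
true_f, true_dstar, true_d = 0.25, 0.055, 0.9e-3
signal = ivim_model(b_values, true_f, true_dstar, true_d)
signal += np.random.default_rng(0).normal(0, 0.005, signal.size)  # noise

# Least-squares fit with physically plausible bounds (D* >> D).
popt, _ = curve_fit(
    ivim_model, b_values, signal,
    p0=[0.2, 0.05, 1e-3],
    bounds=([0, 1e-3, 1e-4], [1, 1, 1e-2]),
)
f_hat, dstar_hat, d_hat = popt
print(f"f={f_hat:.3f}, D*={dstar_hat:.4f} mm^2/s, "
      f"D={d_hat * 1e3:.3f} x10^-3 mm^2/s")
```

Repeating this fit voxel by voxel yields the D, D*, and f parameter maps on which the ROIs are placed.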
ABSTRACT
PURPOSE: Training deep medical image segmentation networks usually requires a large amount of human-annotated data. To alleviate the burden of human labor, many semi-supervised and unsupervised methods have been developed. However, due to the complexity of clinical scenarios, insufficient training labels still cause inaccurate segmentation in difficult local areas such as heterogeneous tumors and fuzzy boundaries. METHODS: We propose an annotation-efficient training approach that only requires scribble guidance in the difficult areas. A segmentation network is initially trained with a small amount of fully annotated data and then used to produce pseudo labels for more training data. Human supervisors draw scribbles in the areas of incorrect pseudo labels (i.e., the difficult areas), and the scribbles are converted into pseudo label maps using a probability-modulated geodesic transform. To reduce the influence of potential errors in the pseudo labels, a confidence map of the pseudo labels is generated by jointly considering the pixel-to-scribble geodesic distance and the network output probability. The pseudo labels and confidence maps are iteratively optimized as the network is updated, and in turn they promote the network training. RESULTS: Cross-validation on two data sets (brain tumor MRI and liver tumor CT) showed that our method significantly reduces annotation time while maintaining the segmentation accuracy of difficult areas (e.g., tumors). Using 90 scribble-annotated training images (annotation time: ~9 h), our method achieved the same performance as 45 fully annotated images (annotation time: >100 h) at a fraction of the annotation cost. CONCLUSION: Compared to conventional full annotation, the proposed method significantly reduces annotation effort by focusing human supervision on the most difficult regions. It provides an annotation-efficient way to train medical image segmentation networks in complex clinical scenarios.
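The abstract does not give the exact fusion rule, so the following is only a minimal sketch of one plausible way to build such a confidence map: trust in the scribble decays with distance from it, and the network's own decisiveness fills in elsewhere. For brevity, a Euclidean distance transform stands in for the true geodesic transform (which would additionally weight paths by image gradients); `tau` and the fusion rule are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def confidence_map(prob, scribble_mask, tau=10.0):
    """Confidence of pseudo labels from scribble distance and network output.

    prob          -- network foreground probability map, shape (H, W)
    scribble_mask -- binary mask of user scribbles, shape (H, W)
    tau           -- distance decay scale (assumed hyperparameter)

    NOTE: a true geodesic transform would weight path length by image
    gradients; the Euclidean distance transform below is a simplified
    stand-in for illustration.
    """
    # Distance from every pixel to the nearest scribble pixel.
    dist = distance_transform_edt(~scribble_mask.astype(bool))
    scribble_conf = np.exp(-dist / tau)   # high near scribbles, decays away
    net_conf = np.abs(2.0 * prob - 1.0)   # high where the network is decisive
    # Assumed fusion rule: trust scribbles near them, the network elsewhere.
    return np.maximum(scribble_conf, net_conf * (1.0 - scribble_conf))
```

The resulting map can then weight the per-pixel training loss so that uncertain pseudo labels contribute less to the network update.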
Subject(s)
Brain Neoplasms; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Neuroimaging; Probability; Research Design; Image Processing, Computer-Assisted
ABSTRACT
PURPOSE: Training deep neural networks usually requires a large amount of human-annotated data. For organ segmentation from volumetric medical images, human annotation is tedious and inefficient. To save human labour and accelerate the training process, the strategy of annotation by iterative deep learning (AID) has recently become popular in the research community. However, due to the lack of domain knowledge or efficient human-interaction tools, current AID methods still suffer from long training times and a high annotation burden. METHODS: We develop a contour-based AID algorithm that uses boundary representation instead of voxel labels to incorporate high-level organ shape knowledge. We propose a contour segmentation network with a multi-scale feature extraction backbone to improve boundary detection accuracy. We also develop a contour-based human-intervention method to facilitate easy adjustment of organ boundaries. By combining the contour segmentation network and the contour-adjustment intervention method, our algorithm achieves fast few-shot learning and efficient human proofreading. RESULTS: For validation, two human operators independently annotated four abdominal organs in computed tomography (CT) images using our method and two comparison methods, i.e., a traditional contour-interpolation method and a state-of-the-art (SOTA) convolutional neural network (CNN) method based on voxel label representation. Compared to these methods, our approach considerably saved annotation time and reduced inter-rater variability. Our contour detection network also outperforms the SOTA nnU-Net in producing anatomically plausible organ shapes with only a small training set. CONCLUSION: Taking advantage of the boundary shape prior and the contour representation, our method is more efficient, more accurate, and less prone to inter-operator variability than SOTA AID methods for organ segmentation from volumetric medical images. Its good shape learning ability and flexible boundary adjustment make it suitable for fast annotation of organ structures with regular shapes.
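As a concrete illustration of the boundary representation, the sketch below rasterizes a closed 2D contour into a voxel mask and applies a local point-drag edit with Gaussian falloff, the kind of lightweight adjustment contour representations make possible. The interaction model is assumed for illustration; it is not the paper's actual tool.

```python
import numpy as np
from skimage.draw import polygon

def contour_to_mask(contour, shape):
    """Rasterize a closed 2D contour (N x 2 array of (row, col) points)
    into a binary mask -- the voxel-label form a segmentation network
    would consume, derived here from the boundary representation."""
    rr, cc = polygon(contour[:, 0], contour[:, 1], shape)
    mask = np.zeros(shape, dtype=bool)
    mask[rr, cc] = True
    return mask

def adjust_contour(contour, idx, new_point, sigma=5.0):
    """Illustrative point-drag edit: move control point `idx` to
    `new_point` and let nearby points follow with a Gaussian falloff,
    mimicking a human proofreader's local boundary correction.
    (Assumed interaction model, not the paper's implementation.)"""
    shift = np.asarray(new_point, dtype=float) - contour[idx]
    dists = np.abs(np.arange(len(contour)) - idx)
    dists = np.minimum(dists, len(contour) - dists)  # closed-contour wrap
    weights = np.exp(-dists**2 / (2 * sigma**2))
    return contour + weights[:, None] * shift
```

Because an edit touches only a handful of contour points rather than thousands of voxels, proofreading stays fast and the adjusted boundary remains smooth.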
Subject(s)
Deep Learning; Humans; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Algorithms; Image Processing, Computer-Assisted/methods
ABSTRACT
The development of medical image analysis (MIA) algorithms is a complex process comprising multiple sub-steps: model training, data visualization, human-computer interaction, and graphical user interface (GUI) construction. To accelerate this process, algorithm developers need a software tool that assists with all of these sub-steps so that they can focus on implementing the core functionality. In particular, for the development of deep learning (DL) algorithms, a software tool supporting training data annotation and GUI construction is highly desirable. In this work, we constructed AnatomySketch, an extensible open-source software platform with a friendly GUI and a flexible plugin interface for integrating user-developed algorithm modules. Through the plugin interface, algorithm developers can quickly create a GUI-based software prototype for clinical validation. AnatomySketch supports image annotation using a stylus and a multi-touch screen. It also provides efficient tools to facilitate collaboration between human experts and artificial intelligence (AI) algorithms. We demonstrate four exemplar applications: customized MRI image diagnosis, interactive lung lobe segmentation, human-AI collaborative spine disc segmentation, and annotation by iterative deep learning (AID) for DL model training. Using AnatomySketch, the gap between laboratory prototyping and clinical testing is bridged and the development of MIA algorithms is accelerated. The software is open-sourced at https://github.com/DlutMedimgGroup/AnatomySketch-Software.
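AnatomySketch's actual plugin interface is defined in the repository above; purely to illustrate the general pattern such platforms use, here is a generic, hypothetical plugin-discovery sketch (the `register()` convention is invented for this example and is not the AnatomySketch API).

```python
import importlib.util
import pathlib

def load_plugins(plugin_dir="plugins"):
    """Generic plugin-discovery pattern (hypothetical, NOT the actual
    AnatomySketch interface): import every module in `plugin_dir` and
    collect whatever each one exposes via a module-level register()."""
    plugins = {}
    for path in pathlib.Path(plugin_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "register"):
            # e.g. register() returns a callable implementing the algorithm
            plugins[path.stem] = module.register()
    return plugins
```

The appeal of this style of interface is that an algorithm developer only writes the module; the host application supplies the GUI, visualization, and data I/O around it.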
Subject(s)
Software; User-Computer Interface; Humans; Algorithms; Artificial Intelligence; Magnetic Resonance Imaging/methods
ABSTRACT
Statistical Parametric Mapping (SPM) is a computational approach for analysing functional brain images such as positron emission tomography (PET). When performing SPM analysis for different patient populations, brain PET template images representing population-specific brain morphometry and metabolism are helpful. However, most currently available brain PET templates were constructed from Caucasian data. To enrich the family of publicly available brain PET templates, we created Chinese-specific template images based on 116 [18F]-fluorodeoxyglucose ([18F]-FDG) PET images of normal participants. These images were warped into a common averaged space, in which both the mean and the standard deviation templates were computed. We also developed SPM analysis programmes to facilitate easy use of the templates. The templates were validated through SPM analysis of images from Alzheimer's and Parkinson's disease patients. The resultant SPM t-maps accurately depicted the disease-related brain regions with abnormal [18F]-FDG uptake, demonstrating the templates' effectiveness for brain function impairment analysis.
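As a simplified illustration of how the mean and standard deviation templates support such analyses, the sketch below computes a voxelwise z-score map for a spatially normalised patient image (a full SPM t-map additionally involves the cohort design matrix and smoothing). The file names and the use of nibabel are assumptions for illustration.

```python
import numpy as np
import nibabel as nib  # assumed I/O library; file names below are illustrative

# Load the spatially normalised patient image and the cohort templates.
mean_img = nib.load("fdg_mean_template.nii.gz")
mean_tpl = mean_img.get_fdata()
std_tpl = nib.load("fdg_std_template.nii.gz").get_fdata()
patient = nib.load("patient_fdg_norm.nii.gz").get_fdata()

# Intensity-normalise to the global mean to remove dose/scanner scaling.
patient /= patient[patient > 0].mean()

# Voxelwise z-score against the normal cohort: z = (x - mean) / std.
z_map = (patient - mean_tpl) / (std_tpl + 1e-6)

# Voxels with |z| above a chosen threshold flag abnormal [18F]-FDG uptake.
nib.save(nib.Nifti1Image(z_map.astype(np.float32), mean_img.affine),
         "patient_zmap.nii.gz")
```

This per-voxel comparison is exactly why the standard deviation template is published alongside the mean: without it, population-typical variability could not be separated from disease effects.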