Results 1 - 4 of 4
1.
Eur Radiol ; 33(9): 6020-6032, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37071167

ABSTRACT

OBJECTIVE: To assess the performance of convolutional neural networks (CNNs) for semiautomated segmentation of hepatocellular carcinoma (HCC) tumors on MRI.

METHODS: This retrospective single-center study included 292 patients (237 M/55 F, mean age 61 years) with pathologically confirmed HCC diagnosed between 08/2015 and 06/2019 who underwent MRI before surgery. The dataset was randomly divided into training (n = 195), validation (n = 66), and test (n = 31) sets. Volumes of interest (VOIs) were manually placed on index lesions by 3 independent radiologists on different sequences (T2-weighted imaging [WI]; T1WI pre- and post-contrast on arterial [AP], portal venous [PVP], delayed [DP, 3 min post-contrast], and hepatobiliary [HBP, when using gadoxetate] phases; and diffusion-weighted imaging [DWI]). Manual segmentation served as ground truth to train and validate a CNN-based pipeline. For semiautomated segmentation, a random pixel inside the VOI was selected as a seed, and the CNN produced two outputs: a single-slice segmentation and a volumetric segmentation. Segmentation performance and inter-observer agreement were analyzed using the 3D Dice similarity coefficient (DSC).

RESULTS: A total of 261 HCCs were segmented in the training/validation sets and 31 in the test set. The median lesion size was 3.0 cm (IQR 2.0-5.2 cm). Mean DSC on the test set varied by MRI sequence, ranging from 0.442 (ADC) to 0.778 (high b-value DWI) for single-slice segmentation and from 0.305 (ADC) to 0.667 (T1WI pre-contrast) for volumetric segmentation. Comparison of the two models showed better performance for single-slice segmentation, with statistically significant differences on T2WI, T1WI-PVP, DWI, and ADC. Inter-observer reproducibility yielded a mean DSC of 0.71 for lesions between 1 and 2 cm, 0.85 for lesions between 2 and 5 cm, and 0.82 for lesions > 5 cm.

CONCLUSION: CNN models show fair to good performance for semiautomated HCC segmentation, depending on the sequence and tumor size, with better performance for the single-slice approach. Refinement of volumetric approaches is needed in future studies.

KEY POINTS:
• Semiautomated single-slice and volumetric segmentation using convolutional neural network (CNN) models provided fair to good performance for hepatocellular carcinoma segmentation on MRI.
• CNN model performance for HCC segmentation depends on the MRI sequence and tumor size, with the best results on diffusion-weighted imaging and pre-contrast T1-weighted imaging, and for larger lesions.
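The 3D Dice similarity coefficient used as the evaluation metric above is straightforward to compute from two binary masks. A minimal sketch in Python/NumPy follows; the array names and shapes are illustrative, not taken from the study's pipeline.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """3D Dice similarity coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Illustrative use with random 3D masks (slices x height x width)
rng = np.random.default_rng(0)
pred_mask = rng.random((32, 128, 128)) > 0.5
truth_mask = rng.random((32, 128, 128)) > 0.5
print(f"DSC = {dice_coefficient(pred_mask, truth_mask):.3f}")
```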


Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Humans , Middle Aged , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Retrospective Studies , Reproducibility of Results , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
2.
J Acoust Soc Am ; 140(4): 2829, 2016 10.
Article in English | MEDLINE | ID: mdl-27794304

ABSTRACT

Laser-induced Lamb modes can be optimized by matching the spatial source distribution to the mode wavelength (λ). The excitability of zero-group-velocity (ZGV) resonances in isotropic plates is investigated both theoretically and experimentally for axially symmetric sources. Optimal parameters and amplitude gains are derived analytically for spot and annular sources with either Gaussian or rectangular energy profiles. For a Gaussian spot source, the optimal radius is found to be λZGV/π. Annular sources increase the amplitude by at least a factor of 3 compared to the optimal Gaussian spot source, and rectangular energy profiles provide higher gain than Gaussian ones. These predictions are confirmed by semi-analytical simulation of the thermoelastic generation of Lamb waves, including the effect of material attenuation. Experimentally, Gaussian ring sources of controlled width and radius are produced with an axicon-lens system. The measured optimal geometric parameters for Gaussian and annular beams are in good agreement with theoretical predictions, and a ZGV resonance amplification factor of 2.1 is obtained with the Gaussian ring. Such a source should facilitate the inspection of highly attenuating plates made of low-ablation-threshold materials such as composites.
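As a purely numerical illustration of the optimal-radius result λZGV/π, a minimal sketch is given below; the plate thickness and the wavelength-to-thickness ratio are hypothetical placeholders, not values from the paper.

```python
import math

# Hypothetical example values -- not taken from the paper
plate_thickness_mm = 1.0                        # plate thickness
zgv_wavelength_mm = 2.6 * plate_thickness_mm    # assumed lambda_ZGV for the first ZGV mode

# Optimal radius of a Gaussian spot source for exciting the ZGV resonance
optimal_gaussian_radius_mm = zgv_wavelength_mm / math.pi
print(f"Optimal Gaussian spot radius ≈ {optimal_gaussian_radius_mm:.2f} mm")
```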

3.
Eur J Cancer ; 174: 90-98, 2022 10.
Article in English | MEDLINE | ID: mdl-35985252

ABSTRACT

BACKGROUND: The need for new biomarkers is increasing with the emergence of many targeted therapies. Artificial intelligence (AI) algorithms have shown great promise for building predictive models from medical imaging. We developed a prognostic model for patients with solid tumours using AI on multimodal data.

PATIENTS AND METHODS: This retrospective study included examinations of patients with seven different cancer types performed between 2003 and 2017 in 17 different hospitals. Radiologists annotated all metastases on baseline computed tomography (CT) and ultrasound (US) images. Imaging features were extracted using AI models and used along with patient and treatment metadata. A Cox regression was fitted to predict prognosis. Performance was assessed on a left-out test set with 1000 bootstraps.

RESULTS: The model was built on 436 patients and tested on 196 patients (mean age 59, IQR: 51-6; 411 men out of 616 patients). In total, 1147 US images were annotated with lesion delineations, and 632 thorax-abdomen-pelvis CTs (301,975 slices) were fully annotated, for a total of 9516 lesions. The model reached an average concordance index of 0.71 (95% CI: 0.67-0.76). Using the median predicted risk as a threshold, the model significantly (log-rank test P < 0.001) separated high-risk from low-risk patients (median OS of 11 and 31 months, respectively), with a hazard ratio of 3.5 (95% CI: 2.4-5.2).

CONCLUSION: AI was able to extract prognostic features from imaging data which, combined with clinical data, allowed accurate stratification of patients by prognosis.
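A generic sketch of the survival-modelling step described above, using the lifelines library: fit a Cox model on simulated imaging and clinical features, then split patients at the median predicted risk and compare the two groups with a log-rank test. The feature names and data are illustrative, not taken from the study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Simulated cohort -- placeholders for the study's imaging and clinical features
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "imaging_feature": rng.normal(size=n),      # e.g. an AI-extracted lesion descriptor
    "age": rng.normal(60, 10, size=n),
    "os_months": rng.exponential(20, size=n),   # overall survival time
    "event": rng.integers(0, 2, size=n),        # 1 = death observed, 0 = censored
})

# Fit a Cox proportional hazards model and report the concordance index
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
print(f"Concordance index: {cph.concordance_index_:.2f}")

# Stratify patients at the median predicted risk and test the separation
risk = np.asarray(cph.predict_partial_hazard(df)).ravel()
high = risk > np.median(risk)
result = logrank_test(
    df.loc[high, "os_months"], df.loc[~high, "os_months"],
    event_observed_A=df.loc[high, "event"], event_observed_B=df.loc[~high, "event"],
)
print(f"Log-rank p-value: {result.p_value:.3g}")
```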


Subject(s)
Artificial Intelligence , Neoplasms , Biomarkers , Humans , Male , Middle Aged , Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods
4.
Nat Commun ; 12(1): 634, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33504775

ABSTRACT

The SARS-CoV-2 pandemic has put pressure on intensive care units, making the identification of predictors of disease severity a priority. We collect 58 clinical and biological variables, together with chest CT scan data, from 1003 coronavirus-infected patients in two French hospitals. We train a deep learning model based on CT scans to predict severity, then construct the multimodal AI-severity score, which combines 5 clinical and biological variables (age, sex, oxygenation, urea, platelets) with the deep learning model. We show that neural-network analysis of CT scans brings unique prognostic information, although it is correlated with other markers of severity (oxygenation, LDH, and CRP), which explains the measurable but limited 0.03 increase in AUC obtained when adding CT-scan information to the clinical variables. When compared with 11 existing severity scores, AI-severity shows significantly improved prognostic performance; it could therefore rapidly become a reference scoring approach.
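A generic sketch of this kind of multimodal scoring: combine a hypothetical neural-network CT score with clinical variables in a logistic regression and measure the AUC gain from adding the imaging score. The simulated features merely stand in for the study's variables; this is not the authors' AI-severity implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data -- placeholders for clinical variables and a CT deep learning score
rng = np.random.default_rng(0)
n = 1000
clinical = rng.normal(size=(n, 5))    # e.g. age, sex, oxygenation, urea, platelets
ct_score = rng.normal(size=(n, 1))    # output of a CT-based deep learning model
logit = clinical @ np.array([0.8, 0.3, -0.9, 0.5, -0.4]) + 0.6 * ct_score.ravel()
severity = (logit + rng.normal(size=n) > 0).astype(int)

X_clin_tr, X_clin_te, X_ct_tr, X_ct_te, y_tr, y_te = train_test_split(
    clinical, ct_score, severity, test_size=0.3, random_state=0)

# Clinical-only model vs. clinical + CT-score model
clin_only = LogisticRegression(max_iter=1000).fit(X_clin_tr, y_tr)
multimodal = LogisticRegression(max_iter=1000).fit(np.hstack([X_clin_tr, X_ct_tr]), y_tr)

auc_clin = roc_auc_score(y_te, clin_only.predict_proba(X_clin_te)[:, 1])
auc_multi = roc_auc_score(y_te, multimodal.predict_proba(np.hstack([X_clin_te, X_ct_te]))[:, 1])
print(f"AUC clinical only: {auc_clin:.3f} | AUC clinical + CT score: {auc_multi:.3f}")
```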


Subject(s)
COVID-19/diagnosis , COVID-19/physiopathology , Deep Learning , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Artificial Intelligence , COVID-19/classification , Humans , Models, Biological , Multivariate Analysis , Prognosis , Radiologists , Severity of Illness Index