1.
Med Phys ; 51(5): 3173-3183, 2024 May.
Article in English | MEDLINE | ID: mdl-38536107

ABSTRACT

BACKGROUND: Stereotactic body radiotherapy of thoracic and abdominal tumors has to account for respiratory intrafractional tumor motion. Commonly, an external breathing signal is continuously acquired that serves as a surrogate of the tumor motion and forms the basis of strategies like breathing-guided imaging and gated dose delivery. However, due to inherent system latencies, there exists a temporal lag between the acquired respiratory signal and the system response. Respiratory signal prediction models aim to compensate for these time delays and to improve imaging and dose delivery. PURPOSE: The present study explores and compares six state-of-the-art machine and deep learning-based prediction models, focusing on real-time and real-world applicability. All models and data are provided as open source to ensure reproducibility of the results and to foster reuse. METHODS: The study was based on 2502 breathing signals (t_total ≈ 90 h) acquired during clinical routine, split into independent training (50%), validation (20%), and test (30%) sets. Input signal values were sampled from noisy signals, and the target signal values were selected from corresponding denoised signals. A standard linear prediction model (Linear), two state-of-the-art models in general univariate signal prediction (Dlinear, Xgboost), and three deep learning models (Lstm, Trans-Enc, Trans-TSF) were chosen. The prediction performance was evaluated for three different prediction horizons (480, 680, and 920 ms). Moreover, the robustness of the different models when applied to atypical, that is, out-of-distribution (OOD) signals, was analyzed. RESULTS: The Lstm model achieved the lowest normalized root mean square error for all prediction horizons. The prediction errors only slightly increased for longer horizons. However, a substantial spread of the error values across the test signals was observed.
Compared to typical, that is, in-distribution test signals, the prediction accuracy of all models decreased when applied to OOD signals. The more complex deep learning models Lstm and Trans-Enc showed the least performance loss, while the performance of simpler models like Linear dropped the most. Except for Trans-Enc, inference times for the different models allowed for real-time application. CONCLUSION: The application of the Lstm model achieved the lowest prediction errors. Simpler prediction filters suffer from limited signal history access, resulting in a drop in performance for OOD signals.
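The Linear baseline discussed above can be sketched as a least-squares FIR filter that maps the last k signal samples to the value one prediction horizon ahead. Below is a minimal numpy illustration on a synthetic breathing-like signal; the window length, sampling rate, and sample-based horizon are illustrative assumptions, not the study's settings:

```python
import numpy as np

def fit_linear_predictor(signal, k=16, horizon=12):
    """Least-squares FIR predictor: estimate x[t + horizon] from the last k samples."""
    X = np.stack([signal[i:i + k] for i in range(len(signal) - k - horizon + 1)])
    y = signal[k + horizon - 1:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(signal, w, horizon=12):
    """Apply the fitted filter to every k-sample window of the signal."""
    k = len(w)
    X = np.stack([signal[i:i + k] for i in range(len(signal) - k - horizon + 1)])
    return X @ w

# Synthetic quasi-periodic "breathing" signal: 25 Hz sampling, ~4 s period,
# so horizon=12 samples corresponds to 480 ms
t = np.arange(4000) / 25.0
x = np.sin(2 * np.pi * t / 4.0)
w = fit_linear_predictor(x[:3000])
pred = predict(x[3000:], w)
true = x[3000 + 16 + 12 - 1:]                 # targets aligned with the windows
nrmse = np.sqrt(np.mean((pred - true) ** 2)) / (true.max() - true.min())
```

On a perfectly periodic signal such a linear filter is near-exact; the study's point is that its accuracy degrades on irregular (OOD) breathing, where the deep models retain more of their performance.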


Subject(s)
Benchmarking , Machine Learning , Radiosurgery , Respiration , Radiosurgery/methods , Humans , Time Factors , Deep Learning , Four-Dimensional Computed Tomography
2.
Med Phys ; 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38055336

ABSTRACT

BACKGROUND: 4D CT imaging is an essential component of radiotherapy of thoracic and abdominal tumors. 4D CT images are, however, often affected by artifacts that compromise treatment planning quality and image information reliability. PURPOSE: In this work, deep learning (DL)-based conditional inpainting is proposed to restore anatomically correct image information of artifact-affected areas. METHODS: The restoration approach consists of a two-stage process: DL-based detection of common interpolation (INT) and double structure (DS) artifacts, followed by conditional inpainting applied to the artifact areas. In this context, conditional refers to a guidance of the inpainting process by patient-specific image data to ensure anatomically reliable results. The study is based on 65 in-house 4D CT images of lung cancer patients (48 with only slight artifacts, 17 with pronounced artifacts) and two publicly available 4D CT data sets that serve as independent external test sets. RESULTS: Automated artifact detection revealed a ROC-AUC of 0.99 for INT and of 0.97 for DS artifacts (in-house data). The proposed inpainting method decreased the average root mean squared error (RMSE) by 52 % (INT) and 59 % (DS) for the in-house data. For the external test data sets, the RMSE improvement is similar (50 % and 59 %, respectively). Applied to 4D CT data with pronounced artifacts (not part of the training set), 72 % of the detectable artifacts were removed. CONCLUSIONS: The results highlight the potential of DL-based inpainting for restoration of artifact-affected 4D CT data. Compared to recent 4D CT inpainting and restoration approaches, the proposed methodology illustrates the advantages of exploiting patient-specific prior image information.
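The RMSE improvements quoted above are naturally measured inside the detected artifact regions. The following toy numpy sketch shows such a masked RMSE and the resulting relative reduction; images, mask, and noise levels are fabricated for illustration:

```python
import numpy as np

def masked_rmse(img, ref, mask):
    """Root mean squared error restricted to the artifact region (boolean mask)."""
    d = img[mask].astype(float) - ref[mask].astype(float)
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))                 # artifact-free reference image
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                       # detected artifact area
artifact = ref + mask * rng.normal(scale=1.0, size=(64, 64))   # degraded input
restored = ref + mask * rng.normal(scale=0.4, size=(64, 64))   # after inpainting
# Relative RMSE reduction achieved by the (simulated) restoration step
reduction = 1.0 - masked_rmse(restored, ref, mask) / masked_rmse(artifact, ref, mask)
```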

3.
Cancers (Basel) ; 15(11)2023 May 23.
Article in English | MEDLINE | ID: mdl-37296843

ABSTRACT

Discordance and conversion of receptor expression between metastatic lesions and primary tumors are often observed in patients with brain metastases from breast cancer. Therefore, personalized therapy requires continuous monitoring of receptor expression and dynamic adaptation of applied targeted treatment options. Radiological in vivo techniques may allow receptor status tracking at high frequency at low risk and cost. The present study aims to investigate the potential of receptor status prediction through machine-learning-based analysis of radiomic MR image features. The analysis is based on 412 brain metastasis samples from 106 patients acquired between 09/2007 and 09/2021. Inclusion criteria were as follows: diagnosed cerebral metastases from breast cancer; histopathology reports on progesterone (PR), estrogen (ER), and human epidermal growth factor receptor 2 (HER2) status; and availability of MR imaging data. In total, 3367 quantitative features of T1 contrast-enhanced, T1 non-enhanced, and FLAIR images and corresponding patient age were evaluated utilizing random forest algorithms. Feature importance was assessed using Gini impurity measures. Predictive performance was tested using 10 permuted 5-fold cross-validation sets employing the 30 most important features of each training set. Receiver operating characteristic areas under the curve of the validation sets were 0.82 (95% confidence interval [0.78; 0.85]) for ER+, 0.73 [0.69; 0.77] for PR+, and 0.74 [0.70; 0.78] for HER2+. These observations indicate that MR image features employed in a machine learning classifier could provide high discriminatory accuracy in predicting the receptor status of brain metastases from breast cancer.
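The evaluation pipeline described above (random forest, Gini-based feature ranking, top-30 features re-selected per training fold) can be sketched with scikit-learn. The data below are a synthetic stand-in for the 3367-feature radiomics matrix, and all sizes and settings are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for the radiomics matrix: 200 "lesions" x 100 features,
# of which the first 5 carry signal (illustrative numbers only)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 100))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=200) > 0).astype(int)

aucs = []
for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    # Rank features by Gini importance on the training fold only
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[tr], y[tr])
    top = np.argsort(rf.feature_importances_)[::-1][:30]
    # Refit on the 30 selected features and score the held-out fold
    rf_sel = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[tr][:, top], y[tr])
    aucs.append(roc_auc_score(y[te], rf_sel.predict_proba(X[te][:, top])[:, 1]))
mean_auc = float(np.mean(aucs))
```

Selecting features inside each fold, as here, avoids the selection bias that arises when the top-30 ranking is computed on the full data set.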

4.
Strahlenther Onkol ; 199(7): 686-691, 2023 07.
Article in English | MEDLINE | ID: mdl-37000223

ABSTRACT

PURPOSE: 4D CT imaging is an integral part of 4D radiotherapy workflows. However, 4D CT data often contain motion artifacts that compromise treatment planning. Recently, breathing-adapted 4D CT (i4DCT) was introduced into clinical practice, promising the artifact reduction demonstrated in in-silico and phantom studies. Here, we present an image quality comparison study, pooling clinical patient data from two centers: a new i4DCT and a conventional spiral 4D CT patient cohort. METHODS: The i4DCT cohort comprises 129 and the conventional spiral 4D CT cohort 417 4D CT data sets of lung and liver tumor patients. All data were acquired for treatment planning. The study consists of three parts: illustration of image quality in selected patients of the two cohorts with similar breathing patterns; an image quality expert rater study; and automated analysis of artifact frequency. RESULTS: Image data of the patients with similar breathing patterns underline the artifact reduction achieved by i4DCT compared to conventional spiral 4D CT. Based on a subgroup of 50 patients with irregular breathing patterns, the rater study reveals a fraction of almost artifact-free scans of 89% for i4DCT but only 25% for conventional 4D CT; the quantitative analysis indicated a reduction of artifact frequency by 31% for i4DCT. CONCLUSION: The results of this first image quality comparison study based on corresponding clinical data demonstrate that breathing-adapted 4D CT improves 4D CT image quality for patients with irregular breathing patterns.


Subject(s)
Four-Dimensional Computed Tomography , Lung Neoplasms , Humans , Four-Dimensional Computed Tomography/methods , Respiration , Lung , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Motion
5.
Neuro Oncol ; 24(10): 1790-1798, 2022 10 03.
Article in English | MEDLINE | ID: mdl-35426432

ABSTRACT

BACKGROUND: Patients with neurofibromatosis type 1 (NF1) develop benign (BPNST), premalignant atypical (ANF), and malignant (MPNST) peripheral nerve sheath tumors. Radiological differentiation of these entities is challenging. Therefore, we aimed to evaluate the value of a magnetic resonance imaging (MRI)-based radiomics machine-learning (ML) classifier for differentiation of these three entities of internal peripheral nerve sheath tumors in NF1 patients. METHODS: MRI was performed at 3T in 36 NF1 patients (20 male; age: 31 ± 11 years). Segmentation of 117 BPNSTs, 17 MPNSTs, and 8 ANFs was manually performed using T2w spectral attenuated inversion recovery sequences. One hundred seven features per lesion were extracted using PyRadiomics and applied for BPNST versus MPNST differentiation. A 5-feature radiomics signature was defined based on the most important features and tested for signature-based BPNST versus MPNST classification (random forest [RF] classification, leave-one-patient-out evaluation). In a second step, signature feature expressions for BPNSTs, ANFs, and MPNSTs were evaluated for radiomics-based classification for these three entities. RESULTS: The mean area under the receiver operator characteristic curve (AUC) for the radiomics-based BPNST versus MPNST differentiation was 0.94, corresponding to correct classification of on average 16/17 MPNSTs and 114/117 BPNSTs (sensitivity: 94%, specificity: 97%). Exploratory analysis with the eight ANFs revealed intermediate radiomic feature characteristics in-between BPNST and MPNST tumor feature expression. CONCLUSION: In this proof-of-principle study, ML using MRI-based radiomics characteristics allows sensitive and specific differentiation of BPNSTs and MPNSTs in NF1 patients. Feature expression of premalignant atypical tumors was distributed in-between benign and malignant tumor feature expressions, which illustrates biological plausibility of the considered radiomics characteristics.
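Because several lesions come from the same patient, the leave-one-patient-out evaluation above must group folds by patient ID so that no patient contributes to both training and test data. A hedged scikit-learn sketch on synthetic data (patient counts, feature dimensions, and labels are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: 60 lesions from 12 patients, 5 lesions each; grouping by patient
# ID keeps all lesions of one patient out of the training folds
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)
groups = np.repeat(np.arange(12), 5)

preds = np.empty(60, dtype=int)
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[tr], y[tr])
    preds[te] = rf.predict(X[te])
accuracy = float((preds == y).mean())
```

Evaluating per lesion but splitting per patient, as here, prevents within-patient leakage from inflating the reported sensitivity and specificity.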


Subject(s)
Nerve Sheath Neoplasms , Neurofibromatosis 1 , Neurofibrosarcoma , Adult , Female , Humans , Male , Young Adult , Magnetic Resonance Imaging/methods , Nerve Sheath Neoplasms/diagnostic imaging , Nerve Sheath Neoplasms/pathology , Neurofibromatosis 1/diagnostic imaging , Neurofibromatosis 1/pathology
6.
Med Image Anal ; 70: 101996, 2021 05.
Article in English | MEDLINE | ID: mdl-33647783

ABSTRACT

Histopathologic diagnosis relies on simultaneous integration of information from a broad range of scales, ranging from nuclear aberrations (≈O(0.1µm)) through cellular structures (≈O(10µm)) to the global tissue architecture (⪆O(1mm)). To explicitly mimic how human pathologists combine multi-scale information, we introduce a family of multi-encoder fully-convolutional neural networks with deep fusion. We present a simple block for merging model paths with differing spatial scales in a spatial relationship-preserving fashion, which can readily be included in standard encoder-decoder networks. Additionally, a context classification gate block is proposed as an alternative for the incorporation of global context. Our experiments were performed on three publicly available whole-slide image datasets from recent challenges (PAIP 2019: hepatocellular carcinoma segmentation; BACH 2020: breast cancer segmentation; CAMELYON 2016: metastasis detection in lymph nodes). The multi-scale architectures consistently outperformed the baseline single-scale U-Nets by a large margin. They benefit from local as well as global context and particularly from a combination of both. If feature maps from different scales are fused, doing so in a manner that preserves spatial relationships was found to be beneficial. Deep guidance by a context classification loss appeared to improve model training at low computational cost. All multi-scale models had a reduced GPU memory footprint compared to ensembles of individual U-Nets trained on different image scales. Additional path fusions were shown to be possible at low computational cost, opening up possibilities for further, systematic and task-specific architecture optimisation. The findings demonstrate the potential of the presented family of human-inspired, end-to-end trainable, multi-scale multi-encoder fully-convolutional neural networks to improve deep histopathologic diagnosis by extensive integration of largely different spatial scales.
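The core idea of the merging block, fusing encoder paths of different spatial scales while keeping locations aligned, can be illustrated without a deep learning framework: upsample the coarse map so each position corresponds to the same tissue location in the fine map, then concatenate along the channel axis. A numpy sketch with nearest-neighbour upsampling (the actual block is a learned, trainable component):

```python
import numpy as np

def fuse_scales(fine, coarse):
    """Merge two feature maps of shape (channels, H, W) from different scales.

    The coarse map (lower resolution, larger field of view) is upsampled by
    nearest-neighbour repetition so that each spatial location lines up with
    the corresponding location in the fine map; the channel dimensions are
    then concatenated. A minimal sketch of spatial relationship-preserving
    fusion; real implementations would use learned upsampling.
    """
    factor = fine.shape[1] // coarse.shape[1]
    up = coarse.repeat(factor, axis=1).repeat(factor, axis=2)
    return np.concatenate([fine, up], axis=0)

fine = np.zeros((8, 32, 32))     # high-resolution path: 8 channels
coarse = np.ones((16, 8, 8))     # low-resolution path: 16 channels
fused = fuse_scales(fine, coarse)
```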


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Breast Neoplasms/diagnostic imaging , Cell Nucleus , Female , Humans , Neural Networks, Computer
7.
Med Image Anal ; 67: 101854, 2021 01.
Article in English | MEDLINE | ID: mdl-33091742

ABSTRACT

The Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set with broad accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to use PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms in two different tasks: Task 1 involved liver cancer segmentation and Task 2 involved viable tumor burden estimation. Performance on the two tasks was strongly correlated: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the easily predicted images for cancer segmentation and of the challenging images for viable tumor burden estimation. Of the 231 registered challenge participants, a total of 64 submissions were received from 28 teams. The submitted algorithms segmented liver cancer in WSIs with a score of up to 0.78. The PAIP challenge was created in an effort to address the lack of digital pathology research on liver cancer. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnoses. However, the provided dataset and evaluation metrics have the potential to aid the development and benchmarking of cancer diagnosis and segmentation methods.


Subject(s)
Artificial Intelligence , Liver Neoplasms , Algorithms , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging , Tumor Burden
8.
Phys Med Biol ; 66(1)2021 01 08.
Article in English | MEDLINE | ID: mdl-33171441

ABSTRACT

4D CT imaging is a cornerstone of 4D radiotherapy treatment. Clinical 4D CT data are, however, often affected by severe artifacts. The artifacts are mainly caused by breathing irregularity and the retrospective correlation of breathing phase information and acquired projection data, which leads to insufficient projection data coverage for proper reconstruction of 4D CT phase images. The recently introduced 4D CT approach i4DCT (intelligent 4D CT sequence scanning) aims to overcome this problem by breathing signal-driven tube control. The present motion phantom study describes the first in-depth evaluation of i4DCT in a real-world scenario. Twenty-eight 4D CT breathing curves of lung and liver tumor patients with pronounced breathing irregularity were selected to program the motion phantom. For every motion pattern, 4D CT imaging was performed with i4DCT and a conventional spiral 4D CT mode. For qualitative evaluation, the reconstructed 4D CT images were presented to clinical experts, who scored image quality. Further quantitative evaluation was based on established image intensity-based artifact metrics that measure the (dis)similarity of neighboring image slices. In addition, beam-on and scan times of the scan modes were analyzed. The expert rating revealed a significantly higher image quality for the i4DCT data. The quantitative evaluation supported the qualitative findings: while 20% of the slices of the conventional spiral 4D CT images were found to be artifact-affected, the corresponding fraction was only 4% for i4DCT. The beam-on time (a surrogate of imaging dose) did not significantly differ between i4DCT and spiral 4D CT. Overall i4DCT scan times (time between the first and last beam-on event, including scan breaks to compensate for breathing irregularity) were, on average, 53% longer than for spiral CT.
Thus, the results underline that i4DCT significantly improves 4D CT image quality compared to standard spiral CT scanning in the case of breathing irregularity during scanning.
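An intensity-based artifact metric of the kind referenced above can be sketched as the mean absolute intensity difference between neighbouring slices: duplicated or misaligned slices show up as outlier jumps in this profile. A toy numpy example, where the volume, the injected artifact, and the outlier threshold are illustrative choices:

```python
import numpy as np

def slice_dissimilarity(volume):
    """Mean absolute intensity difference between neighbouring slices (axis 0)."""
    return np.abs(np.diff(volume.astype(float), axis=0)).mean(axis=(1, 2))

# Toy volume: smooth intensity progression with one "artifact" slice at z=10
z = np.linspace(0, 1, 20)[:, None, None]
vol = np.tile(z, (1, 16, 16))
vol[10] += 5.0

d = slice_dissimilarity(vol)
# Flag slice transitions whose dissimilarity is an outlier of the profile
artifact_slices = np.where(d > d.mean() + 2 * d.std())[0]
```

The jump shows up at both transitions adjacent to the corrupted slice (indices 9 and 10 of the difference profile), which is exactly the neighbouring-slice (dis)similarity signature such metrics exploit.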


Subject(s)
Four-Dimensional Computed Tomography , Tomography, Spiral Computed , Four-Dimensional Computed Tomography/methods , Humans , Phantoms, Imaging , Respiration , Retrospective Studies , Tomography, Spiral Computed/methods
9.
Med Phys ; 47(11): 5619-5631, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33063329

ABSTRACT

PURPOSE: Four-dimensional cone-beam computed tomography (4D CBCT) imaging has been suggested as a solution to account for interfraction motion variability during radiotherapy (RT) of moving targets like lung and liver tumors. However, due to severe sparse view sampling artifacts, current 4D CBCT data lack sufficient image quality for accurate motion quantification. In the present paper, we introduce a deep learning-based framework for boosting the image quality of 4D CBCT image data that can be combined with any CBCT reconstruction approach and clinical 4D CBCT workflow. METHODS: Boosting is achieved by learning the relationship between so-called sparse view pseudo-time-average CBCT images, obtained by a projection selection scheme introduced to mimic phase image sparse view artifact characteristics, and corresponding time-average CBCT images obtained by full view reconstruction. The employed convolutional neural network architecture is the residual dense network (RDN). The underlying hypothesis is that the RDN learns the appearance of the streaking artifacts typical for 4D CBCT phase images and removes them without influencing the anatomical image information. After training, the RDN can be applied to the 4D CBCT phase images to enhance the image quality without affecting the contained temporal and motion information. In contrast to existing approaches, no patient-specific prior knowledge about anatomy or motion characteristics is needed; that is, the proposed approach is self-contained. RESULTS: Application of the trained network to reconstructed phase images of an external (SPARE challenge) as well as an in-house 4D CBCT patient and motion phantom data set reduces the phase image streak artifacts consistently for all patients and state-of-the-art reconstruction approaches.
Using the SPARE data set, we show that the root mean squared error compared to the ground truth data provided by the challenge is reduced by approximately 50%, while the normalized cross correlation of reconstruction and ground truth is improved by up to 10%. Compared to direct deep learning-based 4D CBCT to 4D CT mapping, our proposed method performs better because no potentially inappropriate prior knowledge about the patient anatomy and physiology is imposed. Moreover, the image quality enhancement leads to more plausible motion fields estimated by deformable image registration (DIR) in the 4D CBCT image sequences. CONCLUSIONS: The presented framework enables significant boosting of 4D CBCT image quality as well as improved DIR and motion field consistency. Thus, the proposed method facilitates extraction of motion information from severely artifact-affected images, which is one of the key challenges of integrating 4D CBCT imaging into RT workflows.
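Both similarity measures used in the evaluation above are straightforward to compute; normalized cross correlation (NCC) between a reconstruction and the ground truth, for instance, is the mean product of the two standardized images. A numpy sketch on synthetic data, where the noise levels are fabricated to mimic a before/after boosting comparison:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation: 1.0 means identical up to affine intensity scaling."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
gt = rng.normal(size=(64, 64))                          # ground-truth stand-in
noisy = gt + rng.normal(scale=1.0, size=(64, 64))       # streak-degraded stand-in
boosted = gt + rng.normal(scale=0.3, size=(64, 64))     # after quality boosting
```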


Subject(s)
Deep Learning , Algorithms , Cone-Beam Computed Tomography , Four-Dimensional Computed Tomography , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging
10.
Radiother Oncol ; 148: 229-234, 2020 07.
Article in English | MEDLINE | ID: mdl-32442870

ABSTRACT

BACKGROUND AND PURPOSE: 4D CT images often contain artifacts that are suspected to affect treatment planning quality and the clinical outcome of lung and liver SBRT. The present study investigates the correlation between the presence of artifacts in SBRT planning 4D CT data and local metastasis control. MATERIALS AND METHODS: The study includes 62 patients with 102 metastases (49 in the lung and 53 in the liver), treated between 2012 and 2016 with SBRT with mainly curative intent. For each patient, 10-phase 4D CT images were acquired and used for ITV definition and treatment planning. Follow-up took place 3 weeks after treatment and every 3-6 months thereafter. Based on the number and type of image artifacts, a strict rule-based two-class artifact score was introduced and assigned to the individual 4D CT data sets. The correlation between local control and artifact score (consensus rating based on two independent observers) was analyzed using uni- and multivariable Cox proportional hazards models with random effects. Metastatic site, target volume, metastasis motion, breathing irregularity-related measures, and clinical data (chemotherapy prior to SBRT, target dose, treatment fractionation) were considered as covariates. RESULTS: Local recurrence was observed in 17/102 (17%) metastases. Significant univariable factors for local control were artifact score (severe CT artifacts vs. few CT artifacts; hazard ratio 8.22; 95%-CI 2.04-33.18) and mean patient breathing period (>4.8 s vs. ≤4.8 s; hazard ratio 3.58; 95%-CI 1.18-10.84). Following multivariable analysis, artifact score remained the dominant prognostic factor, although statistically not significant (hazard ratio 10.28; 95%-CI 0.57-184.24). CONCLUSION: The results support the hypothesis that image artifacts in 4D CT treatment planning data negatively influence clinical outcome in SBRT of lung and liver metastases, underlining the need to account for 4D CT artifacts and to improve image quality.


Subject(s)
Liver Neoplasms , Lung Neoplasms , Radiosurgery , Artifacts , Four-Dimensional Computed Tomography , Humans , Liver Neoplasms/diagnostic imaging , Lung , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/surgery , Neoplasm Recurrence, Local , Radiotherapy Planning, Computer-Assisted , Respiration
11.
Med Phys ; 47(6): 2408-2412, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32115724

ABSTRACT

PURPOSE: Four-dimensional (4D) computed tomography (CT) imaging is an essential part of current 4D radiotherapy treatment planning workflows, but clinical 4D CT images are often affected by artifacts. The artifacts are mainly caused by breathing irregularity during data acquisition, which leads to projection data coverage issues for currently available commercial 4D CT protocols. Improving projection data coverage by online respiratory signal analysis and signal-guided CT tube control has been proposed before, but related work has remained theoretical, presented as pure in silico studies. The present work demonstrates a first CT prototype implementation, along with respective phantom measurements, of the recently introduced intelligent 4D CT (i4DCT) sequence scanning concept (https://doi.org/10.1002/mp.13632). METHODS: i4DCT was implemented on the Siemens SOMATOM go platform. 4D CT measurements were performed using the CIRS motion phantom. Motion curves were programmed to vary systematically from regular to very irregular, covering typical irregular patterns that are known to result in image artifacts with standard 4D CT imaging protocols. Corresponding measurements were performed using i4DCT and routine spiral 4D CT with similar imaging parameters (e.g., mAs setting, gantry rotation time, and retrospective ten-phase reconstruction) to allow for a direct comparison of the image data. RESULTS: Following the technological implementation of i4DCT on the clinical CT scanner platform, 4D CT motion artifacts were significantly reduced for all investigated levels of breathing irregularity when compared to routine spiral 4D CT scanning. CONCLUSIONS: The present study confirms the feasibility of fully automated respiratory signal-guided 4D CT scanning by means of a first implementation of i4DCT on a CT scanner.
The measurements thereby support the conclusions of respective in silico studies and demonstrate that respiratory signal-guided 4D CT (here: i4DCT) is ready for integration into clinical CT scanners.


Subject(s)
Four-Dimensional Computed Tomography , Lung Neoplasms , Artifacts , Humans , Lung Neoplasms/diagnostic imaging , Phantoms, Imaging , Respiration , Retrospective Studies
12.
IEEE Trans Biomed Eng ; 67(2): 495-503, 2020 02.
Article in English | MEDLINE | ID: mdl-31071016

ABSTRACT

OBJECTIVE: This paper addresses two key problems of skin lesion classification. The first problem is the effective use of high-resolution images with pretrained standard architectures for image classification. The second problem is the high class imbalance encountered in real-world multi-class datasets. METHODS: To use high-resolution images, we propose a novel patch-based attention architecture that provides global context between small, high-resolution patches. We modify three pretrained architectures and study the performance of patch-based attention. To counter class imbalance problems, we compare oversampling, balanced batch sampling, and class-specific loss weighting. Additionally, we propose a novel diagnosis-guided loss weighting method that takes the method used for ground-truth annotation into account. RESULTS: Our patch-based attention mechanism outperforms previous methods and improves the mean sensitivity by [Formula: see text]. Class balancing significantly improves the mean sensitivity, and we show that our diagnosis-guided loss weighting method improves the mean sensitivity by [Formula: see text] over normal loss balancing. CONCLUSION: The novel patch-based attention mechanism can be integrated into pretrained architectures and provides global context between local patches while outperforming other patch-based methods. Hence, pretrained architectures can be readily used with high-resolution images without downsampling. The new diagnosis-guided loss weighting method outperforms other methods and allows for effective training when facing class imbalance. SIGNIFICANCE: The proposed methods improve automatic skin lesion classification. They can be extended to other clinical applications where high-resolution image data and class imbalance are relevant.
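Of the compared counter-measures, class-specific loss weighting is the easiest to sketch: weight each class inversely to its frequency so that rare diagnoses contribute as much to the loss as common ones. A numpy illustration with a made-up label distribution; the paper's diagnosis-guided variant additionally weights by how the ground-truth label was obtained, which is not modelled here:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights proportional to 1/frequency.

    Normalized so that weighting each sample by its class weight gives every
    class the same total contribution to the loss.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# Heavily imbalanced toy label set: 90 / 9 / 1 samples per class
labels = np.array([0] * 90 + [1] * 9 + [2] * 1)
w = inverse_frequency_weights(labels, 3)
```

With these weights, the rarest class receives the largest per-sample weight, and the weighted sample count is identical for all three classes.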


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Skin Neoplasms/diagnostic imaging , Databases, Factual , Dermoscopy , Humans , Skin/diagnostic imaging , Skin Neoplasms/classification , Skin Neoplasms/pathology
13.
Med Phys ; 46(8): 3462-3474, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31140606

ABSTRACT

PURPOSE: Four-dimensional (4D) CT imaging is a central part of current treatment planning workflows in 4D radiotherapy (RT). However, clinical 4D CT image data often suffer from severe artifacts caused by insufficient projection data coverage due to the inability of current commercial 4D CT imaging protocols to adapt to breathing irregularity. We propose an intelligent sequence mode 4D CT imaging protocol (i4DCT) that builds on online breathing curve analysis and respiratory signal-guided selection of beam on/off periods during scan time in order to fulfill projection data coverage requirements. i4DCT performance is evaluated and compared to standard clinical sequence mode 4D CT (seq4DCT) and spiral 4D CT (spiral4DCT) approaches. METHODS: i4DCT consists of three main blocks: (a) an initial learning period to establish a patient-specific reference breathing cycle representation for data-driven i4DCT parameter selection, (b) online respiratory signal-guided sequence mode scanning (i4DCT core), and (c) rapid breathing record analysis and quality control after scanning to trigger potential local rescanning (i4DCT rescan). Based on a phase space representation of the patient's breathing signal, i4DCT core implements real-time analysis of the signal to appropriately switch projection data acquisition on and off even during irregular breathing. Performance evaluation was based on 189 clinical breathing records acquired during spiral 4D CT scanning for RT planning (data acquisition period: 2013-2017; Siemens Somatom with Varian RPM system). For each breathing record, i4DCT, seq4DCT, and spiral4DCT scanning protocol variants were simulated. Evaluation measures were the local projection data coverage β_cov; the number ε_total of local projection data coverage failures; the number ε_pat of patients with coverage failures; the average beam-on time t_beam-on as a surrogate for imaging dose; and the total patient-on-table time t_table as the time between the first and last beam-on signal. 
RESULTS: Using i4DCT, mean inhalation and exhalation projection data coverage β_cov increased significantly compared to standard spiral 4D CT scanning as applied for the original clinical data acquisition and to conventional 4D CT sequence scanning modes. The improved projection data coverage translated into a reduction of coverage failures ε_total by 89% without and 93% with rescanning at up to five z-positions when compared to spiral scanning, and by between 76% and 82% without and 85% and 89% with rescanning when compared to seq4DCT. Similar numbers were observed for ε_pat. Simultaneously, i4DCT (without rescanning) reduced the beam-on time on average by 3%-17% compared to standard spiral 4D CT. In turn, the patient-on-table time increased by between 35% and 66%. Allowing for rescanning led, on average, to an additional 5.9 s of beam-on time and 10.6 s of patient-on-table time. CONCLUSIONS: i4DCT outperformed currently implemented clinical fixed beam-on period 4D CT scanning approaches by means of a significantly smaller data coverage failure rate without requiring additional beam-on time compared to, for example, conventional spiral 4D CT protocols.
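The i4DCT core idea, switching projection acquisition on and off depending on whether the breathing signal tracks the learned reference cycle, can be caricatured in a few lines. This is a deliberately simplified sketch: the actual algorithm operates in a phase space of signal value and slope and handles phase drift, while the tolerance band and cycle model below are invented for illustration:

```python
import numpy as np

def beam_gate(signal, reference, tol=0.15):
    """Beam-on mask: True while the signal stays within a tolerance band
    around the patient-specific reference cycle, False during irregular
    excursions (a toy stand-in for the phase-space analysis of i4DCT core)."""
    phase = np.arange(len(signal)) % len(reference)
    return np.abs(signal - reference[phase]) < tol

ref = np.sin(2 * np.pi * np.arange(100) / 100)   # one reference breathing cycle
sig = np.tile(ref, 5).copy()                     # five regular cycles
sig[220:260] += 0.5                              # injected irregular excursion
gate = beam_gate(sig, ref)
duty_cycle = float(gate.mean())                  # fraction of time beam is on
```

The gate drops exactly during the irregular excursion, which is the mechanism by which i4DCT trades a longer on-table time for complete projection data coverage without extra beam-on time.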


Subject(s)
Four-Dimensional Computed Tomography/methods , Respiratory-Gated Imaging Techniques
14.
Radiology ; 290(2): 479-487, 2019 02.
Article in English | MEDLINE | ID: mdl-30526358

ABSTRACT

Purpose To investigate the feasibility of tumor type prediction with MRI radiomic image features of different brain metastases in a multiclass machine learning approach for patients with unknown primary lesion at the time of diagnosis. Materials and methods This single-center retrospective analysis included radiomic features of 658 brain metastases from T1-weighted contrast material-enhanced, T1-weighted nonenhanced, and fluid-attenuated inversion recovery (FLAIR) images in 189 patients (101 women, 88 men; mean age, 61 years; age range, 32-85 years). Images were acquired over a 9-year period (from September 2007 through December 2016) with different MRI units, reflecting heterogeneous image data. Included metastases originated from breast cancer (n = 143), small cell lung cancer (n = 151), non-small cell lung cancer (n = 225), gastrointestinal cancer (n = 50), and melanoma (n = 89). A total of 1423 quantitative image features and basic clinical data were evaluated by using random forest machine learning algorithms. Validation was performed with model-external fivefold cross validation. Comparative analysis of 10 randomly drawn cross-validation sets verified the stability of the results. The classifier performance was compared with predictions from a respective conventional reading by two radiologists. Results Areas under the receiver operating characteristic curve of the five-class problem ranged between 0.64 (for non-small cell lung cancer) and 0.82 (for melanoma); all P values were less than .01. Prediction performance of the classifier was superior to the radiologists' readings. Highest differences were observed for melanoma, with a 17-percentage-point gain in sensitivity compared with the sensitivity of both readers; P values were less than .02. Conclusion Quantitative features of routine brain MR images used in a machine learning classifier provided high discriminatory accuracy in predicting the tumor type of brain metastases. 
© RSNA, 2018. Online supplemental material is available for this article.
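The per-class discrimination reported above corresponds to one-vs-rest ROC analysis on out-of-fold class probabilities. A scikit-learn sketch on a synthetic five-class stand-in for the tumor-type problem; the data, feature count, and model settings are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Toy 5-class data: 300 "metastases" x 20 features with class-dependent signal
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 5, size=300)
X[np.arange(300), y] += 2.0      # feature i is shifted for class i

# Out-of-fold class probabilities from 5-fold cross-validation
proba = cross_val_predict(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, method="predict_proba")

# One-vs-rest ROC AUC, macro-averaged over the five classes
auc_macro = roc_auc_score(y, proba, multi_class="ovr", average="macro")
```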


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/secondary , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Adult , Aged , Aged, 80 and over , Algorithms , Brain Neoplasms/classification , Brain Neoplasms/epidemiology , Female , Humans , Machine Learning , Male , Middle Aged , Neoplasms/pathology , Retrospective Studies