Results 1 - 20 of 66
1.
Sci Data ; 11(1): 483, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729970

ABSTRACT

The Sparsely Annotated Region and Organ Segmentation (SAROS) dataset was created using data from The Cancer Imaging Archive (TCIA) to provide a large open-access CT dataset with high-quality annotations of body landmarks. In-house segmentation models were employed to generate annotation proposals on randomly selected cases from TCIA. The dataset includes 13 semantic body region labels (abdominal/thoracic cavity, bones, brain, breast implant, mediastinum, muscle, parotid/submandibular/thyroid glands, pericardium, spinal cord, subcutaneous tissue) and six body part labels (left/right arm/leg, head, torso). Case selection was based on the DICOM series description, gender, and imaging protocol, resulting in 882 patients (438 female) for a total of 900 CTs. Manual review and correction of the proposals were conducted in a continuous quality control cycle. Only every fifth axial slice was annotated, yielding 20,150 annotated slices from 28 data collections. For reproducibility on downstream tasks, five cross-validation folds and a test set were pre-defined. The SAROS dataset serves as an open-access resource for training and evaluating novel segmentation models, covering various scanner vendors and diseases.


Subject(s)
Tomography, X-Ray Computed , Whole Body Imaging , Female , Humans , Male , Image Processing, Computer-Assisted
3.
Med Image Anal ; 94: 103143, 2024 May.
Article in English | MEDLINE | ID: mdl-38507894

ABSTRACT

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, the task is challenging due to nuclei variance in staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in combination with large-scale pre-training in this domain. Therefore, we introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.


Subject(s)
Cell Nucleus , Neural Networks, Computer , Humans , Eosine Yellowish-(YS) , Hematoxylin , Staining and Labeling , Image Processing, Computer-Assisted
4.
Med Image Anal ; 93: 103100, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38340545

ABSTRACT

With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with applications ranging from disease diagnosis to therapy monitoring. When a dataset is sufficiently large, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable. For example, rare diseases and privacy issues can lead to restricted data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using Generative Adversarial Networks (GANs). The existence of these mechanisms is a good asset, especially in healthcare, as the data must be of good quality, realistic, and free of privacy issues. Therefore, most publications on volumetric GANs are within the medical domain. In this review, we provide a summary of works that generate realistic volumetric synthetic data using GANs. We outline GAN-based methods in these areas with common architectures, loss functions, and evaluation metrics, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.


Subject(s)
Algorithms , Data Analysis , Humans , Rare Diseases
5.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.


Subject(s)
Artificial Intelligence
6.
Nat Methods ; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint-a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Machine Learning , Semantics
7.
Comput Methods Programs Biomed ; 245: 108013, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38262126

ABSTRACT

The recent release of ChatGPT, a chatbot research project/product for natural language processing (NLP) by OpenAI, stirred up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications for general readers, healthcare professionals, and NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. The review finds that the current release of ChatGPT has achieved only moderate or 'passing' performance in a variety of tests and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.


Subject(s)
Artificial Intelligence , Physicians , Humans , Databases, Factual , Natural Language Processing , PubMed
8.
ArXiv ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-36945687

ABSTRACT

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

9.
Eur Radiol ; 34(1): 330-337, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37505252

ABSTRACT

OBJECTIVES: To provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports using natural language processing (NLP) machine learning models. METHODS: We evaluated seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact match score of 71.63% for answerable questions and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulations of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS: • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports. • The best-performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
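The exact match and F1 scores reported above follow the standard SQuAD-style definitions: exact string match after normalization, and a token-level F1 over the overlap between predicted and reference answer spans. A minimal sketch of these two metrics (function names and the simple normalization are illustrative, not taken from the paper):

```python
from collections import Counter

def exact_match(prediction: str, truth: str) -> bool:
    # Exact match after trivial normalization (case and surrounding whitespace).
    return prediction.strip().lower() == truth.strip().lower()

def token_f1(prediction: str, truth: str) -> float:
    # SQuAD-style token-level F1: harmonic mean of precision and recall
    # over the multiset of tokens shared by prediction and reference.
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    n_common = sum(common.values())
    if n_common == 0:
        return 0.0
    precision = n_common / len(pred_tokens)
    recall = n_common / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)
```

In SQuAD 2.0-style evaluation, unanswerable questions are scored separately: a prediction counts as correct only if the model also outputs "no answer", which is what the 96.01% accuracy figure above refers to.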


Subject(s)
Information Storage and Retrieval , Radiology , Humans , Language , Machine Learning , Natural Language Processing
10.
IEEE J Biomed Health Inform ; 28(1): 100-109, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37624724

ABSTRACT

Recently, deep learning has been demonstrated to be feasible for eliminating the use of gadolinium-based contrast agents (GBCAs) by synthesizing gadolinium-free contrast-enhanced MRI (GFCE-MRI) from contrast-free MRI sequences, providing the community with an alternative that avoids GBCA-associated safety issues in patients. Nevertheless, generalizability assessment of GFCE-MRI models has been largely challenged by the high inter-institutional heterogeneity of MRI data, on top of the scarcity of multi-institutional data itself. Although various data normalization methods have been adopted to address the heterogeneity issue, they have been limited to single-institutional investigations, and there is presently no standard normalization approach. In this study, we investigated the generalizability of GFCE-MRI models using data from seven institutions by manipulating the heterogeneity of MRI data under five popular normalization approaches. Three state-of-the-art neural networks were applied to map from T1-weighted and T2-weighted MRI to contrast-enhanced MRI (CE-MRI) for GFCE-MRI synthesis in patients with nasopharyngeal carcinoma. MRI data from three institutions were used separately to generate three uni-institution models and jointly for a tri-institution model. The five normalization methods were applied to normalize the data of each model. MRI data from the remaining four institutions served as external cohorts for model generalizability assessment. Quality of GFCE-MRI was quantitatively evaluated against ground-truth CE-MRI using mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results showed that the performance of all uni-institution models dropped remarkably on the external cohorts. By contrast, the model trained on multi-institutional data with Z-score normalization yielded the best improvement in generalizability.
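The Z-score normalization that performed best in this study is, in its simplest per-volume form, a one-line transform to zero mean and unit variance. A hedged sketch assuming plain NumPy volumes (the function name and the per-volume scope are illustrative; the study's exact preprocessing may differ, e.g. masking out background voxels first):

```python
import numpy as np

def z_score_normalize(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # Per-volume Z-score normalization: subtract the mean intensity and
    # divide by the standard deviation, yielding zero mean and unit variance.
    # A common way to reduce inter-scanner intensity shifts before training.
    volume = volume.astype(np.float64)
    return (volume - volume.mean()) / (volume.std() + eps)
```

Because the statistics are computed per volume, the transform adapts to each scanner's intensity range, which is one plausible reason it transfers better across institutions than fixed-range scaling.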


Subject(s)
Gadolinium , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Signal-To-Noise Ratio
11.
Sci Rep ; 13(1): 21231, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38040865

ABSTRACT

Cerebral organoids recapitulate the structure and function of the developing human brain in vitro, offering great potential for personalized therapeutic strategies. The enormous growth of this research area over the past decade, with its capability for clinical translation, makes a non-invasive, automated analysis pipeline for organoids highly desirable. This work presents a novel non-invasive approach to monitor and analyze cerebral organoids over time using high-field magnetic resonance imaging and state-of-the-art tools for automated image analysis. Three specific objectives are addressed: (I) organoid segmentation to investigate organoid development over time, (II) global cysticity classification, and (III) local cyst segmentation for organoid quality assessment. We show that organoid growth can be monitored reliably over time and that cystic and non-cystic organoids can be separated with high accuracy, with on-par or better performance compared to state-of-the-art tools applied to brightfield imaging. Local cyst segmentation is feasible but could be further improved in the future. Overall, these results highlight the potential of the pipeline for clinical application to larger-scale comparative organoid analysis.


Subject(s)
Cysts , Organoids , Humans , Organoids/pathology , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Cysts/pathology , Artificial Intelligence
12.
Sci Data ; 10(1): 796, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37951957

ABSTRACT

The availability of computational hardware and developments in (medical) machine learning (MML) increase the clinical usability of medical mixed reality (MMR). Medical instruments have played a vital role in surgery for ages. To further accelerate the implementation of MML and MMR, three-dimensional (3D) datasets of instruments should be publicly available. The proposed data collection consists of 103 3D-scanned medical instruments from the clinical routine, scanned with structured light scanners. The collection includes, for example, retractors, forceps, and clamps. The collection can be augmented by generating similar models using 3D software, resulting in an enlarged dataset for analysis. The collection can be used for general instrument detection and tracking in operating room settings, or for freeform marker-less instrument registration for tool tracking in augmented reality, as well as for medical simulation or training scenarios in virtual reality and medical diminishing reality in mixed reality. We hope to ease research in the fields of MMR and MML, but also to motivate the release of a wider variety of needed surgical instrument datasets.


Subject(s)
Imaging, Three-Dimensional , Surgical Instruments , Virtual Reality , Computer Simulation , Software
13.
BMC Med Imaging ; 23(1): 174, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37907876

ABSTRACT

BACKGROUND: With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to show, as a proof of concept, that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS: Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) images were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images. The change in tumor volume can be calculated with this map. The neural networks were a form of Wasserstein-GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS: The model achieved an AUC score of 0.87 for tumor change. We also introduced a modified RANO criterion, for which an accuracy of 66% was achieved. CONCLUSIONS: We show a novel approach to deep learning that uses data from just one patient to train deep neural networks to monitor tumor change. Using two different datasets to evaluate the results shows the potential of the method to generalize.


Subject(s)
Glioblastoma , Neural Networks, Computer , Humans , Magnetic Resonance Imaging , Brain , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods
14.
Sci Rep ; 13(1): 20229, 2023 11 19.
Article in English | MEDLINE | ID: mdl-37981641

ABSTRACT

Traditional convolutional neural network (CNN) methods rely on dense tensors, which makes them suboptimal for spatially sparse data. In this paper, we propose a CNN model based on sparse tensors for efficient processing of high-resolution shapes represented as binary voxel occupancy grids. In contrast to a dense CNN that takes the entire voxel grid as input, a sparse CNN processes only the non-empty voxels, thus reducing the memory and computation overhead caused by the sparse input data. We evaluate our method on two clinically relevant skull reconstruction tasks: (1) given a defective skull, reconstruct the complete skull (i.e., skull shape completion), and (2) given a coarse skull, reconstruct a high-resolution skull with fine geometric details (shape super-resolution). Our method outperforms its dense CNN-based counterparts on the skull reconstruction tasks quantitatively and qualitatively, while requiring substantially less memory for training and inference. We observed that, on the 3D skull data, the overall memory consumption of the sparse CNN grows approximately linearly during inference with respect to the image resolution. During training, the memory usage increases clearly more slowly than the image resolution: an [Formula: see text] increase in voxel number leads to less than an [Formula: see text] increase in memory requirements. Our study demonstrates the effectiveness of using a sparse CNN for skull reconstruction tasks, and our findings can be applied to other spatially sparse problems. We prove this with additional experimental results on other sparse medical datasets, such as the aorta and the heart. Project page: https://github.com/Jianningli/SparseCNN.
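The memory advantage described above comes from storing only the coordinates of occupied voxels rather than the full dense occupancy grid. A minimal COO-style sketch of that representation (illustrative only; the paper's implementation relies on dedicated sparse CNN libraries, not this helper):

```python
import numpy as np

def to_sparse_coo(voxels: np.ndarray):
    # Convert a binary occupancy grid to sparse COO form:
    # an (N, 3) array of occupied-voxel coordinates plus the grid shape.
    # For a skull occupying a small fraction of the grid, N is far
    # smaller than the number of voxels in the dense tensor.
    coords = np.argwhere(voxels)
    return coords, voxels.shape

def to_dense(coords: np.ndarray, shape) -> np.ndarray:
    # Reconstruct the dense binary grid from the sparse coordinates.
    dense = np.zeros(shape, dtype=np.uint8)
    dense[tuple(coords.T)] = 1
    return dense
```

A sparse convolution then visits only these coordinates (and their active neighbors), which is why memory scales with the number of occupied voxels rather than with the full grid resolution.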


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Skull/diagnostic imaging , Head
15.
Healthcare (Basel) ; 11(17)2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37685411

ABSTRACT

Data-driven machine learning in medical research and diagnostics needs large-scale datasets curated by clinical experts. The generation of large datasets can be challenging in terms of resource consumption and time effort, while the generalizability and validation of the developed models benefit significantly from variety in data sources. Training algorithms on smaller decentralized datasets through federated learning can reduce this effort, but requires the implementation of a specific and ambitious infrastructure to share data, algorithms, and computing time. Additionally, it offers the opportunity to keep the data locally; data safety issues can thus be avoided because patient data need not be shared. Machine learning models are trained on local data, and only the model is shared through an established network. In addition to commercial applications, numerous academic and customized implementations of network infrastructures are also available. The configuration of these networks differs, yet adheres to a standard framework composed of fundamental components. In this technical note, we propose basic infrastructure requirements for data governance, data science workflows, and local node set-up, and report on the advantages and experienced pitfalls in implementing the local infrastructure with the German Radiological Cooperative Network initiative as the use case example. We show how the infrastructure can be built upon a few base components to reflect the needs of a federated learning network and how they can be implemented considering both local and global network requirements. After analyzing the deployment process in different settings and scenarios, we recommend integrating the local node into an existing clinical IT infrastructure. This approach offers benefits in terms of maintenance and deployment effort compared to external integration in a separate environment (e.g., the radiology department). This proposed groundwork can be taken as an exemplary development guideline for future applications of federated learning networks in clinical and scientific environments.

17.
JCO Clin Cancer Inform ; 7: e2300038, 2023 08.
Article in English | MEDLINE | ID: mdl-37527475

ABSTRACT

PURPOSE: Quantifying treatment response of gastroesophageal junction (GEJ) adenocarcinomas is crucial to providing an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate whether deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS: This diagnostic study recruited 67 patients with stage I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial, including three German university hospitals: TUM (University Hospital Rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS: The resulting models were trained on TUM (n = 25 pretherapy, n = 47 on-therapy) patients and evaluated on our internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Compared with multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88). The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and an MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION: Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy using WSIs with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
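The response label the networks are trained to predict is the study's ≥35% SUVmax-decrease rule, which can be written as a tiny helper (the function name, signature, and default threshold constant are illustrative, not from the study code):

```python
def is_pet_responder(suv_max_baseline: float, suv_max_followup: float,
                     threshold: float = 0.35) -> bool:
    # Treatment response as defined in the study: a decrease in SUVmax of
    # at least 35% on the follow-up PET/CT relative to the baseline scan.
    decrease = (suv_max_baseline - suv_max_followup) / suv_max_baseline
    return decrease >= threshold
```

For example, a drop from SUVmax 10.0 at baseline to 6.0 on therapy is a 40% decrease and would be labeled a responder, whereas a drop to 8.0 (20%) would not.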


Subject(s)
Adenocarcinoma , Deep Learning , Humans , Neoadjuvant Therapy , Positron Emission Tomography Computed Tomography , Adenocarcinoma/pathology , Esophagogastric Junction/pathology
18.
Rofo ; 195(8): 665, 2023 08.
Article in German | MEDLINE | ID: mdl-37490886
19.
Med Image Anal ; 88: 102865, 2023 08.
Article in English | MEDLINE | ID: mdl-37331241

ABSTRACT

Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to be available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering for the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training, and 110 for evaluation), and Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms across diverse defect patterns. Track 2 also made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against post-craniectomy imaging data as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement.
This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.


Subject(s)
Prostheses and Implants , Skull , Humans , Skull/diagnostic imaging , Skull/surgery , Craniotomy/methods , Head
20.
Eur Urol Open Sci ; 54: 28-32, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37361199

ABSTRACT

In this prospective two-center feasibility study, we evaluate the diagnostic value of intraoperative ex vivo specimenPET/CT imaging of radical prostatectomy (RP) and lymphadenectomy specimens. Ten patients with high-risk prostate cancer underwent clinical prostate-specific membrane antigen (PSMA) positron emission tomography/computed tomography (PET/CT) preoperatively on the day of surgery. Six patients received 68Ga-PSMA-11 and four received 18F-PSMA-1007. Radioactivity of the resected specimen was measured again using a novel specimenPET/CT device (AURA10; XEOS Medical, Gent, Belgium) developed for intraoperative margin assessment. All index lesions of staging multiparametric magnetic resonance imaging could be visualized. Overall, specimenPET/CT correlated well with conventional PET/CT regarding detection of suspicious tracer foci (Pearson coefficient 0.935). In addition, specimenPET/CT demonstrated all lymph node metastases detected on conventional PET/CT (n = 3), as well as three previously undetected lymph node metastases. Importantly, all positive or close (<1 mm) surgical margins could be visualized in agreement with histopathology. In conclusion, specimenPET/CT enables detection of PSMA-avid lesions and warrants further investigation to tailor RP, based on a good correlation with final pathology. Future trials will prospectively compare ex vivo specimenPET/CT with frozen section analysis for the detection of positive surgical margins and assessment of biochemical recurrence-free survival. Patient summary: In this report, we examined prostatectomy and lymphadenectomy specimens for suspicious positron emission tomography (PET) signals after preoperative tracer injection. It was found that in all cases, a good signal could be visualized, with a promising correlation of surface assessment compared with histopathology. We conclude that specimenPET imaging is feasible and may help improve oncological outcomes in the future.
