ABSTRACT
Photoacoustic imaging potentially allows for the real-time visualization of functional human tissue parameters such as oxygenation but is subject to a challenging underlying quantification problem. While in silico studies have revealed the great potential of deep learning (DL) methodology in solving this problem, the inherent lack of an efficient gold standard method for model training and validation remains a grand challenge. This work investigates whether DL can be leveraged to accurately and efficiently simulate photon propagation in biological tissue, enabling photoacoustic image synthesis. Our approach is based on estimating the initial pressure distribution of the photoacoustic waves from the underlying optical properties using a back-propagatable neural network trained on synthetic data. In proof-of-concept studies, we validated the performance of two complementary neural network architectures, namely a conventional U-Net-like model and a Fourier Neural Operator (FNO) network. Our in silico validation on multispectral human forearm images shows that DL methods can speed up image generation by a factor of 100 when compared to Monte Carlo simulations with 5×10⁸ photons. While the FNO is slightly more accurate than the U-Net, both architectures achieve accuracy equivalent to that of Monte Carlo simulations performed with a reduced number of photons (5×10⁶). In contrast to Monte Carlo simulations, the proposed DL models can be used as inherently differentiable surrogate models in the photoacoustic image synthesis pipeline, allowing for back-propagation of the synthesis error and gradient-based optimization over the entire pipeline. Due to their efficiency, they have the potential to enable large-scale training data generation that can expedite the clinical application of photoacoustic imaging.
Subjects
Deep Learning, Humans, Spectral Analysis, Forearm, Monte Carlo Method, Neural Networks (Computer)
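As a minimal sketch of the technique named above, the spectral convolution at the core of an FNO can be written in a few lines of PyTorch (the layer sizes and simplified mode handling here are illustrative assumptions, not the validated model itself):

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier layer of an FNO: FFT, learned mixing of the lowest
    frequency modes, inverse FFT. Every operation is differentiable."""

    def __init__(self, in_channels, out_channels, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes1, modes2,
                                dtype=torch.cfloat))

    def forward(self, x):                        # x: (batch, c_in, H, W)
        x_ft = torch.fft.rfft2(x)                # (batch, c_in, H, W//2 + 1)
        out_ft = torch.zeros(x.shape[0], self.weights.shape[1],
                             x.shape[2], x.shape[3] // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        # Mix channels on the retained low-frequency modes only.
        # (A full FNO also keeps the negative-frequency rows; omitted here.)
        out_ft[:, :, :self.modes1, :self.modes2] = torch.einsum(
            "bixy,ioxy->boxy",
            x_ft[:, :, :self.modes1, :self.modes2], self.weights)
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])
```

Because such a layer is built entirely from differentiable tensor operations, a synthesis error computed on the simulated image can be back-propagated through the surrogate to its inputs, which is precisely what Monte Carlo photon transport does not offer.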
ABSTRACT
Intelligent systems in interventional healthcare depend on the reliable perception of the environment. In this context, photoacoustic tomography (PAT) has emerged as a non-invasive, functional imaging modality with great clinical potential. Current research focuses on converting the high-dimensional spectral data, which are not directly human-interpretable, into the underlying functional information, specifically blood oxygenation. One of the largely unexplored issues stalling clinical advances is the fact that the quantification problem is ambiguous, i.e. radically different tissue parameter configurations could lead to almost identical photoacoustic spectra. In the present work, we tackle this problem with conditional Invertible Neural Networks (cINNs). Going beyond traditional point estimates, our network is used to compute an approximation of the conditional posterior density of tissue parameters given the photoacoustic spectrum. To this end, an automatic mode detection algorithm extracts the plausible solutions from the sample-based posterior. According to a comprehensive validation study based on both synthetic and real images, our approach is well-suited for exploring ambiguity in quantitative PAT.
Subjects
Algorithms, Image Processing (Computer-Assisted), Neural Networks (Computer), Oxygen, Photoacoustic Techniques, Photoacoustic Techniques/methods, Humans, Image Processing (Computer-Assisted)/methods, Oxygen/blood, Oxygen/metabolism, Phantoms (Imaging)
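The extraction of plausible solutions from the sample-based posterior can be illustrated with a kernel density estimate over the cINN's samples (a hypothetical 1-D example for a single tissue parameter; the paper's actual mode detection algorithm is not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde

def detect_modes(posterior_samples, grid_size=512):
    """Locate local maxima of a Gaussian KDE fitted to posterior samples;
    each maximum is one plausible tissue parameter value."""
    kde = gaussian_kde(posterior_samples)
    grid = np.linspace(posterior_samples.min(), posterior_samples.max(),
                       grid_size)
    density = kde(grid)
    is_peak = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return grid[1:-1][is_peak]

# Stand-in for cINN samples of one pixel's oxygenation: a bimodal posterior,
# i.e. two radically different yet plausible explanations of the spectrum.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.55, 0.03, 3000),
                          rng.normal(0.92, 0.02, 2000)])
print(detect_modes(samples))  # approximately [0.55, 0.92]
```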
ABSTRACT
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that involves subdividing the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of corresponding optical and acoustic properties. The former is achieved with Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step for deep learning-based quantitative PAT (qPAT).
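Step (2), the pixel-wise assignment of properties to a generated morphology, can be sketched as a randomized per-class lookup (the tissue classes and absorption ranges below are illustrative assumptions, not the study's property models):

```python
import numpy as np

# Hypothetical absorption coefficient ranges (cm^-1) per semantic class.
MUA_RANGES = {0: (0.01, 0.1),   # background / coupling medium
              1: (0.05, 0.3),   # skin
              2: (1.0, 5.0)}    # blood vessel

def assign_optical_properties(label_map, rng=None):
    """Draw a plausible absorption value per pixel from its class range."""
    rng = rng or np.random.default_rng()
    mua = np.zeros(label_map.shape, dtype=np.float32)
    for cls, (low, high) in MUA_RANGES.items():
        mask = label_map == cls
        mua[mask] = rng.uniform(low, high, size=int(mask.sum()))
    return mua

labels = np.random.default_rng(1).integers(0, 3, (256, 256))  # stand-in GAN sample
absorption = assign_optical_properties(labels)
```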
ABSTRACT
Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data serve as reference and enable the supervised training of a deep learning-based segmentation algorithm. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.
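A single supervised training step on multispectral inputs can be sketched as follows (a generic PyTorch example in which the wavelengths enter as input channels; the toy network, 16 wavelengths, and 4 tissue classes are assumptions, not the study's setup):

```python
import torch
import torch.nn as nn

n_wavelengths, n_classes = 16, 4                 # assumed dimensions
model = nn.Sequential(                           # toy stand-in for a U-Net
    nn.Conv2d(n_wavelengths, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, n_classes, 1))                 # per-pixel class scores
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(8, n_wavelengths, 128, 128)     # multispectral PA stack
labels = torch.randint(0, n_classes, (8, 128, 128))  # manual annotations

optimizer.zero_grad()
loss = loss_fn(model(images), labels)                # pixel-wise supervision
loss.backward()
optimizer.step()
```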
ABSTRACT
SIGNIFICANCE: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustic and optical properties in realistic settings.
AIM: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards.
APPROACH: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged, as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models.
RESULTS: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations.
CONCLUSIONS: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Subjects
Optics and Photonics, Software, Acoustics, Dimethylpolysiloxanes, Image Processing (Computer-Assisted)/methods
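The modular pipeline design described in the APPROACH section above can be illustrated with a generic sketch (all class names below are hypothetical stand-ins, not SIMPA's actual API; see the linked repository for the real interface):

```python
from abc import ABC, abstractmethod
import numpy as np

class SimulationModule(ABC):
    """Common interface; any forward model honouring it can be swapped in."""
    @abstractmethod
    def run(self, data: dict) -> dict: ...

class ToyOpticalModel(SimulationModule):       # hypothetical optical module
    def run(self, data):
        # Placeholder: a real module would call e.g. a Monte Carlo solver.
        data["initial_pressure"] = data["absorption"] * np.exp(-data["depth"])
        return data

class ToyAcousticModel(SimulationModule):      # hypothetical acoustic module
    def run(self, data):
        # Placeholder: a real module would propagate acoustic waves.
        data["time_series"] = np.cumsum(data["initial_pressure"], axis=-1)
        return data

def simulate(pipeline, data):
    for module in pipeline:                    # modules run in order
        data = module.run(data)
    return data

volume = {"absorption": np.random.rand(64, 64), "depth": np.linspace(0, 2, 64)}
result = simulate([ToyOpticalModel(), ToyAcousticModel()], volume)
```

Exchanging a forward model then amounts to replacing one element of the pipeline list, which is the abstraction the toolkit builds on.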
ABSTRACT
Photoacoustic imaging (PAI) is an emerging modality that has shown promise for improving patient management in a range of applications. Unfortunately, the current lack of uniformity in PAI data formats compromises inter-user data exchange and comparison, which impedes technological progress, effective research collaboration, and efforts to deliver multi-centre clinical trials. To overcome this challenge, the International Photoacoustic Standardisation Consortium (IPASC) has established a data format with a defined consensus metadata structure and developed an open-source software application programming interface (API) to enable conversion from proprietary file formats into the IPASC format. The format is based on Hierarchical Data Format 5 (HDF5) and designed to store photoacoustic raw time series data. Internal quality control mechanisms are included to ensure completeness and consistency of the converted data. By unifying the variety of proprietary data and metadata definitions into a consensus format, IPASC hopes to facilitate the exchange and comparison of PAI data.
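How raw time series data and consensus metadata end up in a single HDF5 file can be sketched with h5py (the dataset and attribute names below are illustrative, not the normative IPASC schema; real conversions should go through the consortium's API):

```python
import h5py
import numpy as np

# Raw photoacoustic time series: detector elements x time samples.
time_series = np.random.randn(256, 2048).astype(np.float32)

with h5py.File("pai_scan.hdf5", "w") as f:
    f.create_dataset("binary_time_series_data", data=time_series,
                     compression="gzip")
    meta = f.create_group("meta_data")           # consensus-style metadata
    meta.attrs["acquisition_wavelengths_nm"] = [700, 750, 800, 850]
    meta.attrs["sampling_rate_hz"] = 4.0e7

with h5py.File("pai_scan.hdf5", "r") as f:
    print(f["binary_time_series_data"].shape)    # (256, 2048)
```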
ABSTRACT
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and their adaptability to a wide range of problems. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
ABSTRACT
PURPOSE: Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes.
METHODS: In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can be smoothly integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. The pattern is designed such that a single tomographic image of it enables recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding.
RESULTS: To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking.
CONCLUSIONS: In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach requires neither complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.
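Recovering a probe pose from pattern landmarks detected in a single tomographic image can be illustrated with the Kabsch algorithm (a generic least-squares sketch of the pose-recovery idea, not the paper's exact method):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P + t - Q|| for 3xN point
    sets: landmarks in probe coordinates (P) and pattern coordinates (Q)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Given one such (R, t) per acquisition, every 2D tomographic slice can be mapped into the pattern's global coordinate system and compounded into a 3D volume.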