ABSTRACT
Accurate measurement of optical absorption coefficients from photoacoustic imaging (PAI) data would enable direct mapping of molecular concentrations, providing vital clinical insight. Recovering absorption coefficients is an ill-posed inverse problem, and the domain gap between simulation and experiment has so far prevented PAI from achieving this goal in living systems. To bridge this gap, we introduce a collection of experimentally well-characterised imaging phantoms and their digital twins. This first-of-a-kind phantom data set enables supervised training of a U-Net on experimental data for pixel-wise estimation of absorption coefficients. We show that training on simulated data results in artefacts and biases in the estimates, reinforcing the existence of a domain gap between simulation and experiment. Training on experimentally acquired data, however, yielded more accurate and robust estimates of optical absorption coefficients. We compare the results to fluence correction with a Monte Carlo model based on reference optical properties of the materials, which yields a quantification error of approximately 20%. Application of the trained U-Nets to a blood flow phantom demonstrated spectral biases when training on simulated data, while application to a mouse model highlighted the ability of both learning-based approaches to recover the depth-dependent loss of signal intensity. We demonstrate that training on experimental phantoms can restore the correlation of signal amplitudes measured in depth. While the absolute quantification error remains high and further improvements are needed, our results highlight the promise of deep learning to advance quantitative PAI.
Subject(s)
Photoacoustic Techniques; Animals; Mice; Phantoms, Imaging; Photoacoustic Techniques/methods; Diagnostic Imaging; Computer Simulation; Monte Carlo Method
ABSTRACT
Significance: Photoacoustic imaging (PAI) provides contrast based on the concentration of optical absorbers in tissue, enabling the assessment of functional physiological parameters such as blood oxygen saturation (sO2). Recent evidence suggests that variation in melanin levels in the epidermis leads to measurement biases in optical technologies, which could potentially limit the application of these biomarkers in diverse populations. Aim: To examine the effects of skin melanin pigmentation on PAI and oximetry. Approach: We evaluated the effects of skin tone in PAI using a computational skin model, two-layer melanin-containing tissue-mimicking phantoms, and mice of a consistent genetic background with varying pigmentations. The computational skin model was validated by simulating the diffuse reflectance spectrum using the adding-doubling method, allowing us to assign our simulation parameters to approximate Fitzpatrick skin types. Monte Carlo simulations and acoustic simulations were run to obtain idealized photoacoustic images of our skin model. Photoacoustic images of the phantoms and mice were acquired using a commercial instrument. Reconstructed images were processed with linear spectral unmixing to estimate blood oxygenation. Linear unmixing results were compared with a learned unmixing approach based on gradient-boosted regression. Results: Our computational skin model was consistent with representative literature for in vivo skin reflectance measurements. We observed consistent spectral coloring effects across all model systems, with an overestimation of sO2 and more image artifacts observed with increasing melanin concentration. The learned unmixing approach reduced the measurement bias, but predictions made at lower blood sO2 still suffered from a skin tone-dependent effect. Conclusion: PAI demonstrates measurement bias, including an overestimation of blood sO2, in higher Fitzpatrick skin types. 
Future research should aim to characterize this effect in humans to ensure equitable application of the technology.
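Linear spectral unmixing, used above to estimate blood oxygenation, reduces for two chromophores to a least-squares fit of known extinction spectra at the measured wavelengths. A minimal sketch follows; the extinction values are illustrative placeholders, not tabulated reference data:

```python
# Minimal two-chromophore linear unmixing sketch.
# Extinction values below are illustrative placeholders only.

def unmix_so2(spectrum, eps_hbo2, eps_hb):
    """Least-squares fit of a measured spectrum to two known extinction
    spectra; returns the estimated oxygen saturation sO2."""
    # Normal equations for the model: spectrum ~ a*eps_hbo2 + b*eps_hb
    s_oo = sum(e * e for e in eps_hbo2)
    s_hh = sum(e * e for e in eps_hb)
    s_oh = sum(o * h for o, h in zip(eps_hbo2, eps_hb))
    s_yo = sum(y * o for y, o in zip(spectrum, eps_hbo2))
    s_yh = sum(y * h for y, h in zip(spectrum, eps_hb))
    det = s_oo * s_hh - s_oh * s_oh
    c_hbo2 = (s_yo * s_hh - s_yh * s_oh) / det
    c_hb = (s_yh * s_oo - s_yo * s_oh) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Illustrative extinction spectra at four wavelengths:
eps_hbo2 = [1.0, 0.8, 1.2, 2.0]
eps_hb = [2.0, 1.5, 1.0, 0.8]
# A spectrum synthesised from 70% HbO2 / 30% Hb unmixes back to sO2 = 0.7:
y = [0.7 * o + 0.3 * h for o, h in zip(eps_hbo2, eps_hb)]
print(round(unmix_so2(y, eps_hbo2, eps_hb), 3))  # → 0.7
```

Spectral coloring violates exactly the assumption this fit relies on: the per-wavelength fluence is no longer constant, so the measured spectrum is no longer a fixed linear combination of the extinction spectra.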
Subject(s)
Photoacoustic Techniques; Skin Pigmentation; Humans; Animals; Mice; Oxygen; Melanins; Photoacoustic Techniques/methods; Oximetry/methods; Phantoms, Imaging
ABSTRACT
Significance: The estimation of tissue optical properties using diffuse optics has found a range of applications in disease detection, therapy monitoring, and general health care. Biomarkers derived from the estimated optical absorption and scattering coefficients can reflect the underlying progression of many biological processes in tissues. Aim: Complex light-tissue interactions make it challenging to disentangle the absorption and scattering coefficients, so dedicated measurement systems are required. We aim to help readers understand the measurement principles and practical considerations needed when choosing between different estimation methods based on diffuse optics. Approach: The estimation methods can be categorized as: steady state, time domain, time frequency domain (FD), spatial domain, and spatial FD. The experimental measurements are coupled with models of light-tissue interactions, which enable inverse solutions for the absorption and scattering coefficients from the measured tissue reflectance and/or transmittance. Results: The estimation of tissue optical properties has been applied to characterize a variety of ex vivo and in vivo tissues, as well as tissue-mimicking phantoms. Choosing an estimation method for a given application requires trading off its advantages and limitations. Conclusion: Optical absorption and scattering property estimation is an increasingly important and accessible approach for medical diagnosis and health monitoring.
Subject(s)
Phantoms, Imaging; Scattering, Radiation; Humans; Light; Optical Imaging/methods; Animals; Absorption, Radiation; Algorithms
ABSTRACT
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging due to the time-consuming and error-prone nature of current methods. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced, the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
Subject(s)
Deep Learning; Imaging, Three-Dimensional; Photoacoustic Techniques; Photoacoustic Techniques/methods; Humans; Imaging, Three-Dimensional/methods; Animals; Mice; Microvessels/diagnostic imaging; Breast Neoplasms/diagnostic imaging
ABSTRACT
Significance: Photoacoustic imaging (PAI) promises to measure spatially resolved blood oxygen saturation but suffers from a lack of accurate and robust spectral unmixing methods to deliver on this promise. Accurate blood oxygenation estimation could have important clinical applications from cancer detection to quantifying inflammation. Aim: We address the inflexibility of existing data-driven methods for estimating blood oxygenation in PAI by introducing a recurrent neural network architecture. Approach: We created 25 simulated training dataset variations to assess neural network performance. We used a long short-term memory network to implement a wavelength-flexible network architecture and proposed the Jensen-Shannon divergence to predict the most suitable training dataset. Results: The network architecture can flexibly handle the input wavelengths and outperforms linear unmixing and the previously proposed learned spectral decoloring method. Small changes in the training data significantly affect the accuracy of our method, but we find that the Jensen-Shannon divergence correlates with the estimation error and is thus suitable for predicting the most appropriate training datasets for any given application. Conclusions: A flexible data-driven network architecture combined with the Jensen-Shannon divergence to predict the best training data set provides a promising direction that might enable robust data-driven photoacoustic oximetry for clinical use cases.
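The Jensen-Shannon divergence used above to rank candidate training datasets is a symmetrised, bounded variant of the Kullback-Leibler divergence. A minimal sketch on discrete distributions (e.g. histogrammed spectral features) under the base-2 convention, where 0 means identical and 1 means fully disjoint support:

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete probability
    distributions, in bits: JSD = (KL(p||m) + KL(q||m)) / 2, m = (p+q)/2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # 0 * log(0/x) is taken as 0, hence the ai > 0 guard.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # identical → 0.0
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # disjoint → 1.0
```

Because the divergence is symmetric and always finite, it can compare a measured spectral distribution against each simulated training distribution even when their supports only partially overlap.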
Subject(s)
Neural Networks, Computer; Oximetry; Photoacoustic Techniques; Photoacoustic Techniques/methods; Oximetry/methods; Humans; Oxygen/blood; Oxygen Saturation/physiology; Algorithms
ABSTRACT
Objective: The formation of functional vasculature in solid tumours enables delivery of oxygen and nutrients, and is vital for effective treatment with chemotherapeutic agents. Longitudinal characterisation of vascular networks can be enabled using mesoscopic photoacoustic imaging, but requires accurate image co-registration to precisely assess local changes across disease development or in response to therapy. Co-registration in photoacoustic imaging is challenging due to the complex nature of the generated signal, including the sparsity of data, artefacts related to the illumination/detection geometry, scan-to-scan technical variability, and biological variability, such as transient changes in perfusion. To better inform the choice of co-registration algorithms, we compared five open-source methods, in physiological and pathological tissues, with the aim of aligning evolving vascular networks in tumours imaged over growth at different time-points. Approach: Co-registration techniques were applied to 3D vascular images acquired with photoacoustic mesoscopy from murine ears and breast cancer patient-derived xenografts, at a fixed time-point and longitudinally. Images were pre-processed and segmented using an unsupervised generative adversarial network. To compare co-registration quality in different settings, pairs of fixed and moving intensity images and/or segmentations were fed into five methods split into the following categories: affine intensity-based using (1) mutual information (MI) or (2) normalised cross-correlation (NCC) as optimisation metrics, affine shape-based using (3) NCC applied to distance-transformed segmentations or (4) the iterative closest point algorithm, and deformable weakly supervised deep learning-based using (5) LocalNet co-registration.
Percent-changes in Dice coefficients, surface distances, MI, structural similarity index measure and target registration errors were evaluated. Main results: Co-registration using MI or NCC provided similar alignment performance, better than shape-based methods. LocalNet provided accurate co-registration of substructures by optimising subfield deformation throughout the volumes, outperforming other methods, especially in the longitudinal breast cancer xenograft dataset by minimising target registration errors. Significance: We showed the feasibility of co-registering repeatedly or longitudinally imaged vascular networks in photoacoustic mesoscopy, taking a step towards longitudinal quantitative characterisation of these complex structures. These tools open new outlooks for monitoring tumour angiogenesis at the meso-scale and for quantifying treatment-induced co-localised alterations in the vasculature.
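Among the evaluation metrics above, the Dice coefficient has a one-line definition: twice the overlap of two segmentations divided by their total size. A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flat binary masks
    (1 = foreground voxel, 0 = background)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2 * inter / total if total else 1.0

# Fixed vs. moving vessel mask sharing 2 of their 3 foreground voxels:
a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
print(round(dice(a, b), 3))  # → 0.667
```

In the co-registration setting, a rising Dice coefficient between the fixed and warped moving segmentations indicates that the alignment has brought the two vessel networks into closer agreement.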
ABSTRACT
To date, the training required for reproducible operation of multispectral optoacoustic tomography (MSOT) has been poorly discussed. The aim of this study was therefore to assess the teachability of MSOT imaging. Five operators (two experienced and three inexperienced) performed repositioning imaging experiments. The inexperienced operators each received one of the following introductions: personal supervision, a video meeting, or printed instructions. The task was to image the exact same position on the calf muscle seven times on five volunteers in two rounds of investigations. Operators used ultrasound guidance during measurements in the first session and only photoacoustic data in the second. The performance comparison was carried out with full-reference image quality measures to quantitatively assess the difference between repeated scans. The study demonstrates that, given personal supervision and hybrid real-time ultrasound imaging during MSOT measurements, inexperienced operators are able to achieve the same level of repositioning accuracy as experienced operators.
Subject(s)
Photoacoustic Techniques; Tomography; Humans; Image Processing, Computer-Assisted/methods
ABSTRACT
Establishing tissue-mimicking biophotonic phantom materials that provide long-term stability is imperative to enable the comparison of biomedical imaging devices across vendors and institutions, support the development of internationally recognized standards, and assist the clinical translation of novel technologies. Here, a manufacturing process is presented that results in a stable, low-cost, tissue-mimicking copolymer-in-oil material for use in photoacoustic, optical, and ultrasound standardization efforts. The base material consists of mineral oil and a copolymer with defined Chemical Abstract Service (CAS) numbers. The protocol presented here yields a representative material with a speed of sound c(f) = 1,481 ± 0.4 m·s⁻¹ at 5 MHz (corresponding to the speed of sound of water at 20 °C), acoustic attenuation α(f) = 6.1 ± 0.06 dB·cm⁻¹ at 5 MHz, optical absorption µa(λ) = 0.05 ± 0.005 mm⁻¹ at 800 nm, and optical scattering µs'(λ) = 1 ± 0.1 mm⁻¹ at 800 nm. The material allows independent tuning of the acoustic and optical properties by varying the polymer concentration or the light-scattering (titanium dioxide) and absorbing (oil-soluble dye) agents, respectively. The fabrication of different phantom designs is displayed and the homogeneity of the resulting test objects is confirmed using photoacoustic imaging. Due to its facile, repeatable fabrication process and durability, as well as its biologically relevant properties, the material recipe has high promise in multimodal acoustic-optical standardization initiatives.
Subject(s)
Diagnostic Imaging; Mineral Oil; Phantoms, Imaging; Ultrasonography/methods; Acoustics; Polymers/chemistry
ABSTRACT
Photoacoustic imaging (PAI), also referred to as optoacoustic imaging, has shown promise in early-stage clinical trials in a range of applications from inflammatory diseases to cancer. While the first PAI systems have recently received regulatory approvals, successful adoption of PAI technology into healthcare systems for clinical decision making must still overcome a range of barriers, from education and training to data acquisition and interpretation. The International Photoacoustic Standardisation Consortium (IPASC) undertook a community exercise in 2022 to identify and understand these barriers, and then developed a roadmap of strategic plans to address them. Here, we outline the nature and scope of the barriers that were identified, along with the short-, medium- and long-term community efforts required to overcome them, both within and beyond the IPASC group.
ABSTRACT
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that involves subdividing the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of corresponding optical and acoustic properties. The former is achieved with Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step for deep learning-based quantitative PAT (qPAT).
ABSTRACT
Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.
ABSTRACT
SIGNIFICANCE: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings. AIM: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards. APPROACH: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models. RESULTS: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations. CONCLUSIONS: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Subject(s)
Optics and Photonics; Software; Acoustics; Dimethylpolysiloxanes; Image Processing, Computer-Assisted/methods
ABSTRACT
Mesoscopic photoacoustic imaging (PAI) enables non-invasive visualisation of tumour vasculature. The visual or semi-quantitative 2D measurements typically applied to mesoscopic PAI data fail to capture the 3D vessel network complexity and lack robust ground truths for assessment of accuracy. Here, we developed a pipeline for quantifying 3D vascular networks captured using mesoscopic PAI and tested the preservation of blood volume and network structure with topological data analysis. Ground truth data of in silico synthetic vasculatures and a string phantom indicated that learning-based segmentation best preserves vessel diameter and blood volume at depth, while rule-based segmentation with vesselness image filtering accurately preserved network structure in superficial vessels. Segmentation of vessels in breast cancer patient-derived xenografts (PDXs) compared favourably to ex vivo immunohistochemistry. Furthermore, our findings underscore the importance of validating segmentation methods when applying mesoscopic PAI as a tool to evaluate vascular networks in vivo.
ABSTRACT
Photoacoustic imaging (PAI) is an emerging modality that has shown promise for improving patient management in a range of applications. Unfortunately, the current lack of uniformity in PAI data formats compromises inter-user data exchange and comparison, which impedes technological progress, effective research collaboration, and efforts to deliver multi-centre clinical trials. To overcome this challenge, the International Photoacoustic Standardisation Consortium (IPASC) has established a data format with a defined consensus metadata structure and developed an open-source software application programming interface (API) to enable conversion from proprietary file formats into the IPASC format. The format is based on Hierarchical Data Format 5 (HDF5) and designed to store photoacoustic raw time series data. Internal quality control mechanisms are included to ensure completeness and consistency of the converted data. By unifying the variety of proprietary data and metadata definitions into a consensus format, IPASC hopes to facilitate the exchange and comparison of PAI data.
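The completeness checks described above can be illustrated with a toy container holding raw time-series data alongside a metadata dictionary that is validated before export. The field names below are hypothetical placeholders for illustration only, not the actual IPASC consensus schema:

```python
# Toy sketch of a raw-data container with a metadata completeness check.
# Field names are hypothetical placeholders, not the IPASC schema.

REQUIRED_META = {"device_id", "wavelengths_nm", "sampling_rate_hz"}

def check_completeness(meta):
    """Return the set of required metadata keys missing from a record."""
    return REQUIRED_META - set(meta)

record = {
    "time_series": [[0.0, 0.1, 0.3], [0.0, 0.2, 0.1]],  # detectors x samples
    "meta": {"device_id": "dev-01", "wavelengths_nm": [700, 850]},
}

missing = check_completeness(record["meta"])
print(sorted(missing))  # → ['sampling_rate_hz']
```

A converter from a proprietary format would populate such a record and refuse (or warn) when required metadata cannot be recovered, which is the spirit of the internal quality control the consortium describes.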
ABSTRACT
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
ABSTRACT
The ability of photoacoustic imaging to measure functional tissue properties, such as blood oxygenation sO2, enables a wide variety of possible applications. sO2 can be computed from the ratio of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb), which can be distinguished by multispectral photoacoustic imaging due to their distinct wavelength-dependent absorption. However, current methods for estimating sO2 yield inaccurate results in realistic settings, due to the unknown and wavelength-dependent influence of the light fluence on the signal. In this work, we propose learned spectral decoloring to enable blood oxygenation measurements to be inferred from multispectral photoacoustic imaging. The method computes sO2 pixel-wise, directly from initial pressure spectra, which represent the initial pressure values at a fixed spatial location over all recorded wavelengths. The method is compared to linear unmixing approaches, as well as to pO2 and blood gas analysis reference measurements. Experimental results suggest that the proposed method is able to obtain sO2 estimates from multispectral photoacoustic measurements in silico, in vitro, and in vivo.
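Per-pixel initial pressure spectra of the kind described above are typically normalised before being fed to a model, so that the unknown global scaling of the signal drops out and only the spectral shape remains. A minimal sketch; L2 normalisation is shown as one common choice, not necessarily the exact preprocessing of the method above:

```python
from math import sqrt

def normalise_spectrum(p0):
    """Scale a per-pixel initial pressure spectrum to unit L2 norm,
    removing the unknown overall scaling of the signal amplitude."""
    norm = sqrt(sum(v * v for v in p0))
    return [v / norm for v in p0]

# Two spectra differing only by a global scale map to the same input:
a = normalise_spectrum([1.0, 2.0, 2.0])
b = normalise_spectrum([10.0, 20.0, 20.0])
print(a == b)  # → True
```

After normalisation, the remaining wavelength-to-wavelength variation encodes both the chromophore mixture and the spectral coloring of the fluence, which is exactly what a learned model must disentangle to estimate sO2.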
ABSTRACT
PURPOSE: Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes. METHODS: In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can smoothly be integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. This pattern is designed in a way that a single tomographic image of it enables the recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding. RESULTS: To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables the accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking. CONCLUSIONS: In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach neither requires complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.
Subject(s)
Imaging, Three-Dimensional/methods; Phantoms, Imaging; Tattooing/methods; Tomography, X-Ray Computed/methods; Ultrasonography/methods; Humans
ABSTRACT
Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.
Subject(s)
Colon, Sigmoid/surgery; Proctocolectomy, Restorative/instrumentation; Rectum/surgery; Surgical Navigation Systems; Data Science; Humans; Laparoscopy
ABSTRACT
The erratum corrects an error in the published article.
ABSTRACT
Spreading depolarization (SD) is a self-propagating wave of near-complete neuronal depolarization that is abundant in a wide range of neurological conditions, including stroke. SD was only recently documented in humans and is now considered a therapeutic target for brain injury, but the mechanisms related to SD in complex brains are not well understood. While there are numerous approaches to interventional imaging of SD on the exposed brain surface, measuring SD deep in the brain is so far only possible with low spatiotemporal resolution and poor contrast. Here, we show that photoacoustic imaging enables the study of SD and its hemodynamics deep in the gyrencephalic brain with high spatiotemporal resolution. As rapid neuronal depolarization causes tissue hypoxia, we achieve this by continuously estimating blood oxygenation with an intraoperative hybrid photoacoustic and ultrasonic imaging system. Due to its high resolution, promising imaging depth and high contrast, this novel approach to SD imaging can yield new insights into SD and thereby lead to advances in stroke and brain injury research.