1 - 20 of 31
1.
Photoacoustics ; 33: 100555, 2023 Oct.
Article En | MEDLINE | ID: mdl-38021286

Photoacoustic (PA) imaging has the potential to deliver non-invasive diagnostic information. However, skin tone differences bias PA target visualization, as the elevated optical absorption of melanated skin decreases optical fluence within the imaging plane and increases the presence of acoustic clutter. This paper demonstrates that short-lag spatial coherence (SLSC) beamforming mitigates this bias. PA data from the forearm of 18 volunteers were acquired with 750-, 810-, and 870-nm wavelengths. Skin tones ranging from light to dark were objectively quantified using the individual typology angle (ITA°). The signal-to-noise ratio (SNR) of the radial artery (RA) and surrounding clutter were measured. Clutter was minimal (e.g., -16 dB relative to the RA) with lighter skin tones and increased to -8 dB with darker tones, which compromised RA visualization in conventional PA images. SLSC beamforming achieved a median SNR improvement of 3.8 dB, resulting in better RA visualization for all skin tones.
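For readers unfamiliar with SLSC beamforming, it displays the normalized spatial coherence of delayed channel data, summed over short element lags, instead of the summed amplitude used by conventional beamformers. The following is a minimal illustrative sketch, not the authors' implementation; the array shapes, kernel length, and lag count are all assumptions:

```python
import numpy as np

def slsc_image(channel_data, max_lag=5, kernel=8):
    """Short-lag spatial coherence (SLSC) sketch.

    channel_data: focused (time-delayed) RF data with shape
    (elements, depth_samples, lines). For each lag m, the normalized
    correlation between channels i and i+m is computed over a short
    axial kernel; per-lag coherence is summed over lags 1..max_lag
    and averaged.
    """
    n_elem, n_samp, _ = channel_data.shape
    out = np.zeros(channel_data.shape[1:])
    half = kernel // 2
    for m in range(1, max_lag + 1):
        a = channel_data[:n_elem - m]   # channel i
        b = channel_data[m:]            # channel i + m
        for s in range(half, n_samp - half):
            sl = slice(s - half, s + half)
            num = np.sum(a[:, sl, :] * b[:, sl, :], axis=(0, 1))
            den = np.sqrt(np.sum(a[:, sl, :] ** 2, axis=(0, 1)) *
                          np.sum(b[:, sl, :] ** 2, axis=(0, 1)))
            out[s] += num / (den + 1e-12)
    return out / max_lag
```

Coherent targets approach a value of 1 while incoherent clutter and noise fall toward 0, which is why SLSC suppresses the skin-tone-dependent clutter described above.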

2.
Biomed Opt Express ; 14(8): 4349-4368, 2023 Aug 01.
Article En | MEDLINE | ID: mdl-37799699

Photoacoustic imaging has demonstrated recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, whereas a flexible array can deform and maintain complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. The target depth agreement with ground truth ranged from 98.35% to 99.69%. The mean lateral and axial target sizes when imaging 600 µm-core-diameter optical fibers inserted within the phantoms ranged from 0.98 to 2.14 mm and from 1.61 to 2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80 ± 0.48 mm and 2.17 ± 0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged from 6.92 to 24.42 dB, from 46.50 to 67.51 dB, and from 0.76 to 1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.

3.
BME Front ; 2022.
Article En | MEDLINE | ID: mdl-36714302

The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.

4.
Biomed Opt Express ; 12(11): 7049-7050, 2021 Nov 01.
Article En | MEDLINE | ID: mdl-34858698

[This corrects the article on p. 1205 in vol. 12, PMID: 33796347.].

5.
Front Oncol ; 11: 759811, 2021.
Article En | MEDLINE | ID: mdl-34804959

PURPOSE: We propose a Haar feature-based method for tracking an endoscopic ultrasound (EUS) probe in diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) scans to guide hydrogel injection without external tracking hardware. This study aimed to assess the feasibility of implementing our method with phantom and patient images. MATERIALS AND METHODS: Our method consists of a pre-simulation step and a Haar feature extraction step. First, a simulated EUS set was generated based on anatomic information from interpolated CT/MRI images. Second, efficient Haar features were extracted from the simulated EUS images to create a Haar feature dictionary. The relative EUS probe position was estimated by searching for the dictionary Haar feature vector that best matched the Haar feature vector of the target EUS image. The utility of this method was validated using EUS phantom and patient CT/MRI images. RESULTS: In the phantom experiment, we showed that our Haar feature-based EUS probe tracking method can find the best-matched simulated EUS image from a dictionary of 123 simulated images. The errors of all four target points between the real EUS image and the best-matched EUS images were within 1 mm. In the patient CT/MRI scans, the best-matched simulated EUS image was selected accurately by our method, thereby confirming the probe location. However, our method is not always robust in MRI images due to their low resolution. CONCLUSIONS: Our Haar feature-based method is capable of finding the best-matched simulated EUS image from the dictionary. We demonstrated the feasibility of tracking an EUS probe without external tracking hardware, thereby guiding hydrogel injection between the head of the pancreas and the duodenum.
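The efficient Haar features mentioned in the METHODS are conventionally computed with an integral image (summed-area table), which turns any rectangular pixel sum into four lookups. A generic sketch under that assumption; the abstract does not specify the exact feature set, so the two-rectangle feature below is illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero-padded border:
    ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle with top-left corner (r, c),
    using four lookups into the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle (left minus right) Haar-like feature of total
    width 2*w; large magnitudes indicate a vertical edge."""
    return rect_sum(ii, r, c, h, w) - rect_sum(ii, r, c + w, h, w)
```

A dictionary entry would then be a vector of such features evaluated at fixed positions and scales in each simulated EUS image.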

6.
Biomed Opt Express ; 12(7): 4115-4118, 2021 Jul 01.
Article En | MEDLINE | ID: mdl-34457402

This feature issue of Biomedical Optics Express covered all aspects of translational photoacoustic research. Application areas include screening and diagnosis of diseases, imaging of disease progression and therapeutic response, and image-guided treatment, such as surgery, drug delivery, and photothermal/photodynamic therapy. The feature issue also covers relevant developments in photoacoustic instrumentation, contrast agents, image processing and reconstruction algorithms.

7.
IEEE Trans Med Imaging ; 40(11): 3178-3189, 2021 Nov.
Article En | MEDLINE | ID: mdl-34101588

Ultrasound imaging has been developed for tumor tracking in image-guided radiotherapy, and the flexible array transducer is a promising tool for this task, as it can reduce the user dependence and anatomical deformation associated with traditional ultrasound transducers. However, due to its flexible geometry, the conventional delay-and-sum (DAS) beamformer may apply incorrect time delays to the radio-frequency (RF) data and produce B-mode images with considerable defocusing and distortion. To address this problem, we propose a novel end-to-end deep learning approach that may replace the conventional DAS beamformer when the transducer geometry is unknown. Different deep neural networks (DNNs) were designed to learn the proper time delays for each channel and to reconstruct undistorted, high-quality B-mode images directly from RF channel data. We compared the DNN results to the standard DAS beamformed results using simulation and flexible array transducer scan data. With the proposed DNN approach, the averaged full-width-at-half-maximum (FWHM) of point scatterers is 1.80 mm and 1.31 mm lower in simulation and scan results, respectively; the contrast-to-noise ratio (CNR) of the anechoic cyst in simulation and phantom scans is improved by 0.79 dB and 1.69 dB, respectively; and the aspect ratios of all the cysts are closer to 1. The evaluation results show that the proposed approach can effectively reduce distortion and improve the lateral resolution and contrast of the reconstructed B-mode images.
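For context, the conventional DAS beamformer that the DNNs replace computes geometric time-of-flight delays from known element positions, which is exactly what becomes unreliable when a flexible array's geometry is unknown. A minimal sketch for an arbitrary receive geometry, with one-way delays and nearest-sample rounding as simplifying assumptions (not the paper's implementation):

```python
import numpy as np

def das_delays(elem_xy, pixel_xy, c=1540.0):
    """Receive delays (seconds) from every image pixel to every element,
    valid for any element layout (flat, curved, or flexed).
    elem_xy:  (n_elem, 2) element positions in meters.
    pixel_xy: (n_pix, 2) pixel positions in meters.
    """
    dist = np.linalg.norm(pixel_xy[:, None, :] - elem_xy[None, :, :], axis=-1)
    return dist / c                                  # (n_pix, n_elem)

def das_beamform(rf, delays, fs):
    """Delay-and-sum: sample each channel at its delayed time, then sum.
    rf: (n_elem, n_samples) channel data sampled at fs (Hz)."""
    n_elem, n_samp = rf.shape
    idx = np.clip(np.round(delays * fs).astype(int), 0, n_samp - 1)
    return rf[np.arange(n_elem), idx].sum(axis=1)    # (n_pix,)
```

When the true geometry deviates from the assumed `elem_xy`, these delays are wrong and the image defocuses, which is the failure mode the DNN approach is designed to avoid.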


Deep Learning , Image Processing, Computer-Assisted , Phantoms, Imaging , Transducers , Ultrasonography
8.
Biomed Opt Express ; 12(4): 2079-2117, 2021 Apr 01.
Article En | MEDLINE | ID: mdl-33996218

Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a promising method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.

9.
Biomed Opt Express ; 12(3): 1205-1216, 2021 Mar 01.
Article En | MEDLINE | ID: mdl-33796347

Photoacoustic imaging is a promising technique to provide guidance during multiple surgeries and procedures. One challenge with this technique is that major blood vessels in the liver are difficult to differentiate from surrounding tissue within current safety limits, which only exist for human skin and eyes. In this paper, we investigate the safety of raising this limit for liver tissue excited with a 750 nm laser wavelength and approximately 30 mJ laser energy (corresponding to approximately 150 mJ/cm2 fluence). Laparotomies were performed on six swine to empirically investigate potential laser-related liver damage. Laser energy was applied for temporal durations of 1 minute, 10 minutes, and 20 minutes. Lasered liver lobes were excised either immediately after laser application (3 swine) or six weeks after surgery (3 swine). Cell damage was assessed using liver damage blood biomarkers and histopathology analyses of 41 tissue samples total. The biomarkers were generally normal over a 6 week post-surgical in vivo study period. Histopathology revealed no cell death, although additional pathology was present (i.e., hemorrhage, inflammation, fibrosis) due to handling, sample resection, and fibrous adhesions as a result of the laparotomy. These results support a new protocol for studying laser-related liver damage, indicating the potential to raise the safety limit for liver photoacoustic imaging to approximately 150 mJ/cm2 with a laser wavelength of 750 nm and for imaging durations up to 10 minutes without causing cell death. This investigation and protocol may be applied to other tissues and extended to additional wavelengths and energies, which is overall promising for introducing new tissue-specific laser safety limits for photoacoustic-guided surgery.

10.
Cell ; 184(3): 561-565, 2021 Feb 04.
Article En | MEDLINE | ID: mdl-33503447

Our nationwide network of BME women faculty collectively argue that racial funding disparity by the National Institutes of Health (NIH) remains the most insidious barrier to success of Black faculty in our profession. We thus refocus attention on this critical barrier and suggest solutions on how it can be dismantled.


Biomedical Research/economics , Black or African American , Financial Management , Research Personnel/economics , Humans , National Institutes of Health (U.S.)/economics , Racial Groups , United States
11.
Biomed Opt Express ; 11(7): 3684-3698, 2020 Jul 01.
Article En | MEDLINE | ID: mdl-33014560

The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images. Although gCNR was initially demonstrated with ultrasound images, the metric is theoretically applicable to multiple types of medical images. In this paper, the applicability of gCNR to photoacoustic images is investigated. The gCNR was computed for both simulated and experimental photoacoustic images generated by amplitude-based (i.e., delay-and-sum) and coherence-based (i.e., short-lag spatial coherence) beamformers. These gCNR measurements were compared to three more traditional image quality metrics (i.e., contrast, contrast-to-noise ratio, and signal-to-noise ratio) applied to the same datasets. An increase in qualitative target visibility generally corresponded with increased gCNR. In addition, gCNR magnitude was more directly related to the separability of photoacoustic signals from their background, which degraded with the presence of limited bandwidth artifacts and increased levels of channel noise. At high gCNR values (i.e., 0.95-1), contrast, contrast-to-noise ratio, and signal-to-noise ratio varied by up to 23.7-56.2 dB, 2.0-3.4, and 26.5-7.6×10^20, respectively, for simulated, experimental phantom, and in vivo data. Therefore, these traditional metrics can experience large variations when a target is fully detectable, and additional increases in these values would have no impact on photoacoustic target detectability. In addition, gCNR is robust to changes in traditional metrics introduced by applying a minimum threshold to image amplitudes. In tandem with other photoacoustic image quality metrics and with a defined range of 0 to 1, gCNR has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques and when reporting quantitative performance without an opportunity to qualitatively assess corresponding images (e.g., in text-only abstracts).

12.
J Appl Phys ; 128(6): 060904, 2020 Aug 14.
Article En | MEDLINE | ID: mdl-32817994

Minimally invasive surgeries often require complicated maneuvers and delicate hand-eye coordination and ideally would incorporate "x-ray vision" to see beyond tool tips and underneath tissues prior to making incisions. Photoacoustic imaging has the potential to offer this feature but not with ionizing x-rays. Instead, optical fibers and acoustic receivers enable photoacoustic sensing of major structures-such as blood vessels and nerves-that are otherwise hidden from view. This imaging process is initiated by transmitting laser pulses that illuminate regions of interest, causing thermal expansion and the generation of sound waves that are detectable with conventional ultrasound transducers. The recorded signals are then converted to images through the beamforming process. Photoacoustic imaging may be implemented to both target and avoid blood-rich surgical contents (and in some cases simultaneously or independently visualize optical fiber tips or metallic surgical tool tips) in order to prevent accidental injury and assist device operators during minimally invasive surgeries and interventional procedures. Novel light delivery systems, counterintuitive findings, and robotic integration methods introduced by the Photoacoustic & Ultrasonic Systems Engineering Lab are summarized in this invited Perspective, setting the foundation and rationale for the subsequent discussion of the author's views on possible future directions for this exciting frontier known as photoacoustic-guided surgery.

13.
Photoacoustics ; 19: 100183, 2020 Sep.
Article En | MEDLINE | ID: mdl-32695578

Real-time intraoperative guidance during minimally invasive neurosurgical procedures (e.g., endonasal transsphenoidal surgery) is often limited to endoscopy and CT-guided image navigation, which can be suboptimal at locating underlying blood vessels and nerves. Accidental damage to these critical structures can have severe surgical complications, including patient blindness and death. Photoacoustic image guidance was previously proposed as a method to prevent accidental injury. While the proposed technique remains promising, the original light delivery and sound reception components of this technology require alterations to make the technique suitable for patient use. This paper presents simulation and experimental studies performed with both an intact human skull (which was cleaned from tissue attachments) and a complete human cadaver head (with contents and surrounding tissue intact) in order to investigate optimal locations for ultrasound probe placement during photoacoustic imaging and to test the feasibility of a modified light delivery design. Volumetric x-ray CT images of the human skull were used to create k-Wave simulations of acoustic wave propagation within this cranial environment. Photoacoustic imaging of the internal carotid artery (ICA) was performed with this same skull. Optical fibers emitting 750 nm light were inserted into the nasal cavity for ICA illumination. The ultrasound probe was placed on three optimal regions identified by simulations: (1) nasal cavity, (2) ocular region, and (3) 1 mm-thick temporal bone (which received 9.2%, 4.7%, and 3.8% of the initial photoacoustic pressure, respectively, in simulations). For these three probe locations, the contrast of the ICA in comparative experimental photoacoustic images was 27 dB, 19 dB, and 12 dB, respectively, with delay-and-sum (DAS) beamforming and laser pulse energies of 3 mJ, 5 mJ, and 4.2 mJ, respectively. 
Short-lag spatial coherence (SLSC) beamforming improved the contrast of these DAS images by up to 15 dB, enabled visualization of multiple cross-sectional ICA views in a single image, and enabled the use of lower laser energies. Combined simulation and experimental results with the emptied skull and >1 mm-thick temporal bone indicated that the ocular and nasal regions were more optimal probe locations than the temporal ultrasound probe location. Results from both the same skull filled with ovine brains and eyes and the human cadaver head validate the ocular region as an optimal acoustic window for our current system setup, producing high-contrast (i.e., up to 35 dB) DAS and SLSC photoacoustic images within the laser safety limits of a novel, compact light delivery system design that is independent of surgical tools (i.e., a fiber bundle with 6.8 mm outer diameter, 2 mm-diameter optical aperture, and an air gap spacing between the sphenoid bone and fiber tips). These results are promising toward identifying, quantifying, and overcoming major system design barriers to proceed with future patient testing.

14.
Article En | MEDLINE | ID: mdl-32396084

Single plane wave transmissions are promising for automated imaging tasks requiring high ultrasound frame rates over an extended field of view. However, a single plane wave insonification typically produces suboptimal image quality. To address this limitation, we are exploring the use of deep neural networks (DNNs) as an alternative to delay-and-sum (DAS) beamforming. The objectives of this work are to obtain information directly from raw channel data and to simultaneously generate both a segmentation map for automated ultrasound tasks and a corresponding ultrasound B-mode image for interpretable supervision of the automation. We focus on visualizing and segmenting anechoic targets surrounded by tissue and ignoring or deemphasizing less important surrounding structures. DNNs trained with Field II simulations were tested with simulated, experimental phantom, and in vivo data sets that were not included during training. With unfocused input channel data (i.e., prior to the application of receive time delays), simulated, experimental phantom, and in vivo test data sets achieved mean ± standard deviation Dice similarity coefficients of 0.92 ± 0.13, 0.92 ± 0.03, and 0.77 ± 0.07, respectively, and generalized contrast-to-noise ratios (gCNRs) of 0.95 ± 0.08, 0.93 ± 0.08, and 0.75 ± 0.14, respectively. With subaperture beamformed channel data and a modification to the input layer of the DNN architecture to accept these data, the fidelity of image reconstruction increased (e.g., the mean gCNR of multiple acquisitions of two in vivo breast cysts ranged from 0.89 to 0.96), but DNN display frame rates were reduced from 395 to 287 Hz. Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
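The Dice similarity coefficient reported above measures segmentation overlap as twice the intersection divided by the summed mask sizes; a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```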


Deep Learning , Image Processing, Computer-Assisted/methods , Ultrasonography/methods , Algorithms , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Phantoms, Imaging
15.
Article En | MEDLINE | ID: mdl-31796398

In the last 30 years, the contrast-to-noise ratio (CNR) has been used to estimate the contrast and lesion detectability in ultrasound images. Recent studies have shown that the CNR cannot be used with modern beamformers, as dynamic range alterations can produce arbitrarily high CNR values with no real effect on the probability of lesion detection. We generalize the definition of CNR based on the overlap area between two probability density functions. This generalized CNR (gCNR) is robust against dynamic range alterations; it can be applied to all kinds of images, units, or scales; it provides a quantitative measure for contrast; and it has a simple statistical interpretation, i.e., the success rate that can be expected from an ideal observer at the task of separating pixels. We test gCNR on several state-of-the-art imaging algorithms and, in addition, on a trivial compression of the dynamic range. We observe that CNR varies greatly between the state-of-the-art methods, with improvements larger than 100%. We observe that trivial compression leads to a CNR improvement of over 200%. The proposed index, however, yields the same value for compressed and uncompressed images. The tested methods showed mismatched performance in terms of lesion detectability, with variations in gCNR ranging from -0.08 to +0.29. This new metric fixes a methodological flaw in the way we study contrast and allows us to assess the relevance of new imaging algorithms.
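Per the definition above, gCNR is one minus the overlap area of the two pixel-value probability density functions (inside vs. outside the lesion). A histogram-based sketch, where the bin count is an assumed parameter:

```python
import numpy as np

def gcnr(inside, outside, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap area of
    the pixel-value distributions inside and outside the target.
    Works on any scale or unit, since only the histograms matter."""
    lo = min(inside.min(), outside.min())
    hi = max(inside.max(), outside.max())
    p, _ = np.histogram(inside, bins=bins, range=(lo, hi))
    q, _ = np.histogram(outside, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return 1.0 - np.minimum(p, q).sum()
```

Identical distributions give gCNR = 0 (an undetectable lesion) and fully separated distributions give gCNR = 1, matching the ideal-observer success-rate interpretation; any monotonic dynamic-range compression leaves the overlap, and hence gCNR, unchanged up to binning effects.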


Image Processing, Computer-Assisted/methods , Ultrasonography/methods , Algorithms , Cysts/diagnostic imaging , Models, Biological , Phantoms, Imaging , Ultrasonography/instrumentation , Ultrasonography/standards
16.
Phys Med Biol ; 64(18): 185006, 2019 Sep 11.
Article En | MEDLINE | ID: mdl-31323649

We have previously developed a robotic ultrasound imaging system for motion monitoring in abdominal radiation therapy. Owing to the slow speed of ultrasound image processing, our previous system could only track abdominal motion under breath-hold. To overcome this limitation, a novel 2D image processing method for tracking intra-fraction respiratory motion is proposed. Fifty-seven anatomical features acquired from 27 sets of 2D ultrasound sequences were used in this study. Three of the 2D ultrasound sequences were acquired with the robotic ultrasound system from three healthy volunteers; the remaining datasets were provided by the 2015 MICCAI Challenge on Liver Ultrasound Tracking. All datasets were preprocessed to extract the feature points, and a patient-specific motion pattern was extracted by principal component analysis and slow feature analysis (SFA). The tracking step finds the most similar training frame (the indexed frame) via a k-dimensional-tree-based nearest-neighbor search to estimate the tracked object's location. A template image was updated dynamically from the indexed frame to perform fast template matching (TM) within a learned, smaller search region on the incoming frame. The mean tracking error between manually annotated landmarks and the location extracted from the indexed training frame is 1.80 ± 1.42 mm. Adding the fast TM procedure within a small search region reduces the mean tracking error to 1.14 ± 1.16 mm. The tracking time per frame is 15 ms, well below the frame acquisition time. Furthermore, anatomical reproducibility was measured by analyzing the anatomical landmark's location relative to the probe; the position-controlled probe has better reproducibility and yields a smaller mean error across all three volunteer cases than the force-controlled probe (2.69 versus 11.20 mm in the superior-inferior direction and 1.19 versus 8.21 mm in the anterior-posterior direction). Our method significantly reduces the processing time for tracking respiratory motion, which can reduce delivery uncertainty.
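At its core, the indexed-frame search described above reduces each training frame to a compact feature vector and finds the nearest stored vector for each incoming frame. A brute-force NumPy sketch of the PCA-based dictionary and lookup (the paper accelerates the search with a k-dimensional tree; the frame shapes and component count here are assumptions):

```python
import numpy as np

def build_dictionary(frames, n_components=8):
    """Project training frames (n_frames, n_pixels) onto their top
    principal components to obtain compact feature vectors."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD-based PCA: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                 # (n_components, n_pixels)
    return mean, basis, centered @ basis.T    # stored feature vectors

def index_frame(target, mean, basis, features):
    """Return the index of the most similar training frame.
    Brute-force nearest neighbor; a k-d tree would replace this at scale."""
    f = (target - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(features - f, axis=1)))
```

The matched index then seeds the fast template-matching refinement within a small search region, as described above.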


Abdomen/diagnostic imaging , Abdomen/radiation effects , Dose Fractionation, Radiation , Machine Learning , Movement , Radiotherapy, Image-Guided/methods , Respiration , Healthy Volunteers , Humans , Image Processing, Computer-Assisted , Radiotherapy Planning, Computer-Assisted , Reproducibility of Results , Ultrasonography
17.
Article En | MEDLINE | ID: mdl-30507500

Ultrasound is frequently used in conjunction with mammography in order to detect breast cancer as early as possible. However, due largely to the heterogeneity of breast tissue, ultrasound images are plagued with clutter that obstructs important diagnostic features. Short-lag spatial coherence (SLSC) imaging has proven to be effective at clutter reduction in noisy ultrasound images. M-Weighted SLSC and Robust-SLSC (R-SLSC) imaging were recently introduced to further improve image quality at higher lag values, while R-SLSC imaging has the added benefit of enabling the adjustment of tissue texture to produce a tissue signal-to-noise ratio (SNR) that is quantitatively similar to B-mode speckle SNR. This paper investigates the initial application of SLSC, M-Weighted SLSC, and R-SLSC imaging to nine targets in the female breast [two simple cysts, one complicated cyst, two fibroadenomas, one hematoma, one complex cystic and solid mass, one invasive ductal carcinoma (IDC), and one ductal carcinoma in situ (DCIS)]. As expected, R-SLSC beamforming improves cyst and hematoma contrast by up to 6.35 and 1.55 dB, respectively, when compared to the original B-mode image, and similar improvements are achieved with SLSC and M-Weighted SLSC imaging. However, an interesting finding from this initial investigation is that the solid masses (i.e., fibroadenoma, complex cystic and solid mass, IDC, and DCIS), which appear as hypoechoic in the B-mode image, have similarly high coherence to that of surrounding tissue in coherence-based images. This work holds promise for using SLSC, M-Weighted SLSC, and/or R-SLSC imaging to distinguish between fluid-filled and solid hypoechoic breast masses.


Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Ultrasonography, Mammary/methods , Female , Humans , Signal-To-Noise Ratio
18.
Biomed Opt Express ; 9(11): 5566-5582, 2018 Nov 01.
Article En | MEDLINE | ID: mdl-30460147

Directly displaying the spatial coherence of photoacoustic signals (i.e., coherence-based photoacoustic imaging) remarkably improves image contrast, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and imaging depth when compared to conventional amplitude-based reconstruction techniques (e.g., backprojection, delay-and-sum beamforming, and Fourier-based reconstruction). We recently developed photoacoustic-specific theory to describe the spatial coherence process as a function of the element spacing on a receive acoustic aperture to enable photoacoustic image optimization without requiring experiments. However, this theory lacked noise models, which contributed to significant departures in coherence measurements when compared to experimental data, particularly at higher values of element separation. In this paper, we develop and implement two models based on experimental observations of noise in photoacoustic spatial coherence measurements to improve our existing spatial coherence theory. These models were derived to describe the effects of incident fluence variations, low-energy light sources (e.g., pulsed laser diodes and light-emitting diodes), averaging multiple signals from low-energy light sources, and imaging with light sources that are > 5mm from photoacoustic targets. Results qualitatively match experimental coherence functions and provide similar contrast, SNR, and CNR to experimental SLSC images. In particular, the added noise affects image quality metrics by introducing large variations in target contrast and significantly reducing target CNR and SNR when compared to minimal-noise cases. These results provide insight into additional requirements for optimization of coherence-based photoacoustic image quality.

19.
Sci Rep ; 8(1): 15519, 2018 Oct 19.
Article En | MEDLINE | ID: mdl-30341371

In intraoperative settings, the presence of acoustic clutter and reflection artifacts from metallic surgical tools often reduces the effectiveness of ultrasound imaging and complicates the localization of surgical tool tips. We propose an alternative approach for tool tracking and navigation in these challenging acoustic environments by augmenting ultrasound systems with a light source (to perform photoacoustic imaging) and a robot (to autonomously and robustly follow a surgical tool regardless of the tissue medium). The robotically controlled ultrasound probe continuously visualizes the location of the tool tip by segmenting and tracking photoacoustic signals generated from an optical fiber inside the tool. System validation in the presence of fat, muscle, brain, skull, and liver tissue with and without the presence of an additional clutter layer resulted in mean signal tracking errors <2 mm, mean probe centering errors <1 mm, and successful recovery from ultrasound perturbations, representing either patient motion or switching from photoacoustic images to ultrasound images to search for a target of interest. A detailed analysis of channel SNR in controlled experiments with and without significant acoustic clutter revealed that the detection of a needle tip is possible with photoacoustic imaging, particularly in cases where ultrasound imaging traditionally fails. Results show promise for guiding surgeries and procedures in acoustically challenging environments with this novel robotic and photoacoustic system combination.


Image Processing, Computer-Assisted/methods , Light , Photoacoustic Techniques/trends , Surgery, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Adipose Tissue/diagnostic imaging , Algorithms , Animals , Cattle , Chickens , Muscles/diagnostic imaging , Needles , Optical Fibers , Robotics , Spectrum Analysis
20.
Med Phys ; 45(11): 4986-5003, 2018 Nov.
Article En | MEDLINE | ID: mdl-30168159

PURPOSE: Compensation for respiratory motion is important during abdominal cancer treatments. In this work, we report the results of the 2015 MICCAI Challenge on Liver Ultrasound Tracking and extend the 2D results to relate them to clinical relevance in the form of reduced treatment margins, and hence spared healthy tissue, while maintaining a full duty cycle. METHODS: We describe methodologies for estimating and temporally predicting respiratory liver motion from continuous ultrasound imaging, as used during ultrasound-guided radiation therapy. Furthermore, we investigated the trade-off between tracking accuracy and runtime in combination with temporal prediction strategies and their impact on treatment margins. RESULTS: Based on 2D ultrasound sequences from 39 volunteers, a mean tracking accuracy of 0.9 mm was achieved when combining the results from the 4 challenge submissions (1.2 to 3.3 mm). The two submissions for the 3D sequences from 14 volunteers provided mean accuracies of 1.7 and 1.8 mm. In combination with temporal prediction, using the faster (41 vs 228 ms) but less accurate (1.4 vs 0.9 mm) tracking method resulted in substantially reduced treatment margins (70% vs 39%) relative to mid-ventilation margins, as it avoided non-linear temporal prediction by keeping the treatment system latency low (150 vs 400 ms). Accelerating the best tracking method would improve the margin reduction to 75%. CONCLUSIONS: Liver motion estimation and prediction during free breathing from 2D ultrasound images can substantially reduce in-plane motion uncertainty and hence treatment margins. Employing an accurate tracking method while avoiding non-linear temporal prediction would be favorable. This approach has the potential to shorten treatment time compared to breath-hold and gated approaches, and to increase treatment efficiency and safety.


Algorithms , Imaging, Three-Dimensional/methods , Liver/diagnostic imaging , Liver/radiation effects , Radiotherapy, Image-Guided/methods , Adult , Healthy Volunteers , Humans , Ultrasonography , Young Adult
...