Results 1 - 20 of 33
1.
BMC Musculoskelet Disord ; 25(1): 412, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802774

ABSTRACT

BACKGROUND: Dysfunctional gliding of deep fascia and muscle layers forms the basis of myofascial pain and dysfunction, which can cause chronic shoulder pain. Ultrasound shear strain imaging may offer a non-invasive tool to quantitatively evaluate the extent of muscular dysfunctional gliding and its correlation with pain. This case study is the first to use ultrasound shear strain imaging to report the shear strain between the pectoralis major and minor muscles in shoulders with and without chronic pain. CASE PRESENTATION: The shear strain between the pectoralis major and minor muscles during shoulder rotation in a volunteer with chronic shoulder pain was measured with ultrasound shear strain imaging. The results show that the mean ± standard deviation shear strain was 0.40 ± 0.09 on the affected side, compared to 1.09 ± 0.18 on the unaffected side (p < 0.05). The results suggest that myofascial dysfunction may cause the muscles to adhere together, thereby reducing shear strain on the affected side. CONCLUSION: Our findings elucidate a potential pathophysiology of myofascial dysfunction in chronic shoulder pain and reveal the potential utility of ultrasound imaging to provide a useful biomarker for shear strain evaluation between the pectoralis major and minor muscles.
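As a rough illustration of the quantity reported above, the following sketch (Python/NumPy) estimates shear strain as the relative sliding displacement between two muscle layers divided by their separation, then summarizes it per side. The displacement traces, layer gap, and frame count are hypothetical stand-ins for speckle-tracking output, not the authors' processing pipeline.

```python
import numpy as np

def mean_shear_strain(disp_layer_a_mm, disp_layer_b_mm, gap_mm):
    """Shear strain per frame: relative sliding displacement between two
    layers divided by the distance separating them (dimensionless)."""
    strain = np.abs(disp_layer_a_mm - disp_layer_b_mm) / gap_mm
    return strain.mean(), strain.std()

rng = np.random.default_rng(0)
n_frames = 200
gap_mm = 2.0  # hypothetical distance between the pectoralis major and minor layers

# Hypothetical tracked lateral displacements (mm) during shoulder rotation.
unaffected_major = np.sin(np.linspace(0, np.pi, n_frames)) * 3.0
unaffected_minor = unaffected_major - 2.2 + 0.1 * rng.standard_normal(n_frames)
affected_major = np.sin(np.linspace(0, np.pi, n_frames)) * 3.0
affected_minor = affected_major - 0.8 + 0.1 * rng.standard_normal(n_frames)  # layers glide less

for side, (a, b) in {"unaffected": (unaffected_major, unaffected_minor),
                     "affected": (affected_major, affected_minor)}.items():
    mu, sd = mean_shear_strain(a, b, gap_mm)
    print(f"{side}: shear strain = {mu:.2f} +/- {sd:.2f}")
```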


Subject(s)
Chronic Pain, Shoulder Pain, Ultrasonography, Humans, Shoulder Pain/diagnostic imaging, Shoulder Pain/physiopathology, Shoulder Pain/etiology, Chronic Pain/diagnostic imaging, Chronic Pain/physiopathology, Ultrasonography/methods, Myofascial Pain Syndromes/diagnostic imaging, Myofascial Pain Syndromes/physiopathology, Adult, Male, Pectoralis Muscles/diagnostic imaging, Pectoralis Muscles/physiopathology, Female, Shear Strength
2.
Cancers (Basel) ; 16(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610923

ABSTRACT

To develop ultrasound-guided radiotherapy, we proposed an assistant structure with embedded markers along with a novel alternative method, the Aligned Peak Response (APR) method, to alter the conventional delay-and-sum (DAS) beamformer for reconstructing ultrasound images obtained from a flexible array. We simulated imaging targets in Field-II using point-target phantoms with targets at different locations. In the experimental phantom study, RF image data were acquired with a flexible transducer and in-house assistant structures embedded with needle targets to test the accuracy of the APR method. The lateral full width at half maximum (FWHM) values of the objective point target (OPT) in ground truth ultrasound images, APR-delayed ultrasound images with a flat shape, and images acquired with curved transducer radii of 500 mm and 700 mm were 3.96 mm, 4.95 mm, 4.96 mm, and 4.95 mm, respectively. The corresponding axial FWHM values were 1.52 mm, 4.08 mm, 5.84 mm, and 5.92 mm, respectively. These results demonstrate that the proposed assistant structure and the APR method have the potential to construct accurate delay curves without external shape sensing, thereby enabling a flexible ultrasound array for tracking pancreatic tumor targets in real time for radiotherapy.
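For reference, lateral and axial FWHM values such as those reported above can be measured from an envelope-detected point-target image by locating the half-maximum crossings of the profiles through the peak. The sketch below is a generic measurement routine applied to a synthetic Gaussian point spread function; the pixel spacings and test image are assumptions, not the study's data or code.

```python
import numpy as np

def fwhm(profile, spacing):
    """Full width at half maximum of a 1-D profile via linear interpolation
    of the half-maximum crossings on either side of the peak."""
    p = profile - profile.min()
    half = p.max() / 2.0
    peak = int(np.argmax(p))
    left = peak
    while left > 0 and p[left] > half:
        left -= 1
    right = peak
    while right < len(p) - 1 and p[right] > half:
        right += 1
    # fractional positions of the half-maximum crossings
    xl = left + (half - p[left]) / (p[left + 1] - p[left])
    xr = right - (half - p[right]) / (p[right - 1] - p[right])
    return (xr - xl) * spacing

# Synthetic envelope image of a point target (rows = axial, cols = lateral).
dz_mm, dx_mm = 0.05, 0.1  # assumed pixel spacings
z = np.arange(200) * dz_mm
x = np.arange(128) * dx_mm
zz, xx = np.meshgrid(z - z.mean(), x - x.mean(), indexing="ij")
env = np.exp(-0.5 * (zz / 0.6) ** 2 - 0.5 * (xx / 1.7) ** 2)

r, c = np.unravel_index(np.argmax(env), env.shape)
print(f"axial FWHM   = {fwhm(env[:, c], dz_mm):.2f} mm")
print(f"lateral FWHM = {fwhm(env[r, :], dx_mm):.2f} mm")
```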

3.
Commun Med (Lond) ; 4(1): 41, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38467808

ABSTRACT

BACKGROUND: Deep neural networks (DNNs) to detect COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo images suffer from limited access to required manual labeling of thousands of training image examples, and simulated images can suffer from poor generalizability to in vivo images due to domain differences. We address these limitations and identify the best training strategy. METHODS: We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and a combination of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients. RESULTS: Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (i.e., 0.482 ± 0.211), compared with using only simulated B-mode image training data (i.e., 0.464 ± 0.230) or only external in vivo B-mode training data (i.e., 0.407 ± 0.177). Additional maximization is achieved when a separate subset of the internal in vivo B-mode images is included in the training dataset, with the greatest maximization of DSC (and minimization of required training time, or epochs) obtained after mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (i.e., 0.735 ± 0.187). CONCLUSIONS: DNNs trained with simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
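The Dice similarity coefficient (DSC) reported above compares a predicted segmentation mask against a ground-truth mask. A minimal NumPy implementation is sketched below on hypothetical binary masks; it is not the study's evaluation code.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Toy example: a predicted feature region overlapping a ground-truth region.
truth = np.zeros((128, 128), dtype=bool)
truth[40:90, 60:70] = True
pred = np.zeros_like(truth)
pred[45:95, 58:69] = True
print(f"DSC = {dice(pred, truth):.3f}")
```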


Computational tools are often used to aid detection of COVID-19 from lung ultrasound images. However, this type of detection method can be prone to misdiagnosis if the computational tool is not properly trained and validated to detect image features associated with COVID-19 positive lungs. Here, we devise and test seven different strategies that include real patient data and simulated patient data to train the computational tool on how to correctly diagnose image features with high accuracy. Simulated data were created with software that models ultrasound physics and acoustic wave propagation. We find that incorporating simulated data in the training process improves training efficiency and detection accuracy, indicating that a properly curated simulated dataset can be used when real patient data are limited.

4.
IEEE Trans Biomed Eng ; 71(4): 1298-1307, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38048239

ABSTRACT

Flexible array transducers can adapt to patient-specific geometries during real-time ultrasound (US) image-guided therapy monitoring, making the system radiation-free and less user-dependent. Precise estimation of the flexible transducer's geometry is crucial for the delay-and-sum (DAS) beamforming algorithm to reconstruct B-mode US images. The primary innovation of this research is a system named FLexible transducer with EXternal tracking (FLEX) that estimates the position of each element of the flexible transducer and reconstructs precise US images. FLEX utilizes customized optical markers and a tracker to monitor the probe's geometry, employing a polygon fitting algorithm to estimate the position and azimuth angle of each transducer element. Subsequently, the traditional DAS algorithm computes delays from the tracked element positions and reconstructs US images from radio-frequency (RF) channel data. The proposed method underwent evaluation on phantoms and cadaveric specimens, demonstrating its clinical feasibility. Deviations in tracked probe geometry compared to ground truth were minimal, measuring 0.50 ± 0.29 mm for the CIRS phantom, 0.54 ± 0.35 mm for the deformable phantom, and 0.36 ± 0.24 mm on the cadaveric specimen. Reconstructing the US image using tracked probe geometry significantly outperformed the untracked geometry, as indicated by a Dice score of 95.1 ± 3.3% versus 62.3 ± 9.2% for the CIRS phantom. The proposed method achieved high accuracy (<0.5 mm error) in tracking the element positions for various random curvatures applicable to clinical deployment. The evaluation results show that the radiation-free proposed method can effectively reconstruct US images and assist in monitoring image-guided therapy with minimal user dependency.
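Conceptually, once each element position is known from tracking, DAS reconstruction reduces to computing propagation delays from the tracked element coordinates to every pixel. The sketch below shows a one-way (receive-only) DAS pass over arbitrary element positions with synthetic channel data; the sampling rate, sound speed, geometry, and the simplified one-way delay model are assumptions, and this is not the FLEX implementation.

```python
import numpy as np

fs = 40e6          # sampling rate in Hz (assumed)
c = 1540.0         # sound speed in m/s (assumed)
n_samp = 2048

# Tracked element positions (m): a gently curved 64-element array (hypothetical).
n_elem = 64
theta = np.linspace(-0.2, 0.2, n_elem)                 # radians along a 100 mm radius
elem_xz = np.column_stack([0.1 * np.sin(theta), 0.1 * (1 - np.cos(theta))])

# One-way synthetic channel data from a single point source at (0 mm, 40 mm).
src = np.array([0.0, 0.04])
rf = np.zeros((n_elem, n_samp))
tof = np.linalg.norm(elem_xz - src, axis=1) / c
rf[np.arange(n_elem), np.round(tof * fs).astype(int)] = 1.0

# DAS over a pixel grid using delays computed from the tracked geometry.
xs = np.linspace(-0.02, 0.02, 201)
zs = np.linspace(0.02, 0.06, 201)
img = np.zeros((zs.size, xs.size))
for e in range(n_elem):
    dx = xs[None, :] - elem_xz[e, 0]
    dz = zs[:, None] - elem_xz[e, 1]
    idx = np.round(np.sqrt(dx ** 2 + dz ** 2) / c * fs).astype(int)
    idx = np.clip(idx, 0, n_samp - 1)
    img += rf[e, idx]

peak_z, peak_x = np.unravel_index(np.argmax(img), img.shape)
print(f"peak at x = {xs[peak_x] * 1e3:.1f} mm, z = {zs[peak_z] * 1e3:.1f} mm")
```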


Subject(s)
Algorithms, Transducers, Humans, Ultrasonography, Imaging Phantoms, Cadaver
5.
Article in English | MEDLINE | ID: mdl-37956000

ABSTRACT

When compared to fundamental B-mode imaging, coherence-based beamforming and harmonic imaging are independently known to reduce acoustic clutter, distinguish solid from fluid content in indeterminate breast masses, and thereby reduce unnecessary biopsies during breast cancer diagnosis. However, a systematic investigation of independent and combined coherence beamforming and harmonic imaging approaches is necessary for the clinical deployment of the most optimal approach. Therefore, we compare the performance of fundamental and harmonic images created with short-lag spatial coherence (SLSC), M-weighted SLSC (M-SLSC), SLSC combined with robust principal component analysis with no M-weighting (r-SLSC), and r-SLSC with M-weighting (R-SLSC), relative to traditional fundamental and harmonic B-mode images, when distinguishing solid from fluid breast masses. Raw channel data acquired from 40 total breast masses (28 solid, 7 fluid, 5 mixed) were beamformed and analyzed. The contrast of fluid masses was better with fundamental rather than harmonic coherence imaging, due to the lower spatial coherence within the fluid masses in the fundamental coherence images. Relative to SLSC imaging, M-SLSC, r-SLSC, and R-SLSC imaging provided similar contrast across multiple masses (with the exception of clinically challenging complicated cysts) and minimized the range of generalized contrast-to-noise ratios (gCNRs) of fluid masses, yet required additional computational resources. Among the eight coherence imaging modes compared, fundamental SLSC imaging best identified fluid versus solid breast mass contents, outperforming fundamental and harmonic B-mode imaging. With fundamental SLSC images, the specificity and sensitivity to identify fluid masses using the reader-independent metrics of contrast difference, mean lag one coherence (LOC), and gCNR were 0.86 and 1, 1 and 0.89, and 1 and 1, respectively. Results demonstrate that fundamental SLSC imaging and gCNR (or LOC if no coherence image or background region of interest is introduced) have the greatest potential to impact clinical decisions and improve the diagnostic certainty of breast mass contents. These observations are additionally anticipated to extend to masses in other organs.
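Short-lag spatial coherence (SLSC) imaging, referenced throughout this abstract, replaces the summed-amplitude pixel value of DAS with the normalized spatial correlation of focused channel signals, averaged over the first M lags. A compact per-column sketch on already-delayed channel data follows; the kernel length, M, and toy data are assumptions, not the authors' implementation.

```python
import numpy as np

def slsc_column(delayed, m_max=10, kernel=8):
    """SLSC values for one image column.

    delayed: (n_elements, n_samples) focused RF channel data for this column.
    Returns an (n_samples,) array of coherence values averaged over lags 1..m_max.
    """
    n_elem, n_samp = delayed.shape
    out = np.zeros(n_samp)
    half = kernel // 2
    for s in range(half, n_samp - half):
        seg = delayed[:, s - half:s + half]            # (n_elem, kernel) axial kernel
        seg = seg - seg.mean(axis=1, keepdims=True)
        norm = np.sqrt((seg ** 2).sum(axis=1)) + 1e-12
        total = 0.0
        for m in range(1, m_max + 1):
            num = (seg[:-m] * seg[m:]).sum(axis=1)     # correlation at lag m
            den = norm[:-m] * norm[m:]
            total += np.mean(num / den)
        out[s] = total / m_max
    return out

# Toy example: coherent signal plus incoherent channel noise on 64 elements.
rng = np.random.default_rng(1)
n_elem, n_samp = 64, 400
signal = np.tile(rng.standard_normal(n_samp), (n_elem, 1))
noise = rng.standard_normal((n_elem, n_samp))
slsc = slsc_column(signal + 0.5 * noise)
print(f"mean short-lag coherence: {slsc[50:-50].mean():.2f}")
```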


Subject(s)
Breast Neoplasms, Breast Ultrasonography, Female, Humans, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Principal Component Analysis, Acoustics
6.
J Biomed Opt ; 29(Suppl 1): S11505, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38076439

ABSTRACT

Significance: Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, photoacoustic imaging can potentially be combined with robotic visual servoing, with initial demonstrations requiring segmentation of catheter tips. However, typical segmentation algorithms applied to conventional image formation methods are susceptible to problematic reflection artifacts, which compromise the required detectability and localization of the catheter tip. Aim: We describe a convolutional neural network and the associated customizations required to successfully detect and localize in vivo photoacoustic signals from a catheter tip received by a phased array transducer, which is a common transducer for transthoracic cardiac imaging applications. Approach: We trained a network with simulated photoacoustic channel data to identify point sources, which appropriately model photoacoustic signals from the tip of an optical fiber inserted in a cardiac catheter. The network was validated with an independent simulated dataset, then tested on data from the tips of cardiac catheters housing optical fibers and inserted into ex vivo and in vivo swine hearts. Results: When validated with simulated data, the network achieved an F1 score of 98.3% and Euclidean errors (mean ± one standard deviation) of 1.02 ± 0.84 mm for target depths of 20 to 100 mm. When tested on ex vivo and in vivo data, the network achieved F1 scores as large as 100.0%. In addition, for target depths of 40 to 90 mm in the ex vivo and in vivo data, up to 86.7% of axial and 100.0% of lateral position errors were lower than the axial and lateral resolution, respectively, of the phased array transducer. Conclusions: These results demonstrate the promise of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications.
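The F1 scores and Euclidean errors above can be computed by matching each detected source location to the nearest ground-truth location within a tolerance and counting true positives, false positives, and misses. The sketch below uses a simple greedy nearest-neighbor matching with an assumed tolerance; it is a generic evaluation routine, not the study's code.

```python
import numpy as np

def detection_metrics(detected_mm, truth_mm, tol_mm=10.0):
    """Greedy nearest-neighbor matching of detections to ground-truth points.
    Returns the F1 score and the Euclidean errors of matched detections (mm)."""
    detected = [np.asarray(d, dtype=float) for d in detected_mm]
    truth = [np.asarray(t, dtype=float) for t in truth_mm]
    errors, tp = [], 0
    for d in detected:
        if not truth:
            break
        dists = [np.linalg.norm(d - t) for t in truth]
        j = int(np.argmin(dists))
        if dists[j] <= tol_mm:
            errors.append(dists[j])
            truth.pop(j)   # each ground-truth point matches at most once
            tp += 1
    fp = len(detected_mm) - tp
    fn = len(truth_mm) - tp
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return f1, np.array(errors)

# Hypothetical (lateral, axial) positions in mm.
truth = [(0.0, 40.0), (5.0, 60.0), (-3.0, 80.0)]
detected = [(0.4, 40.8), (5.2, 59.1), (10.0, 95.0)]
f1, err = detection_metrics(detected, truth)
print(f"F1 = {f1:.2f}, mean error = {err.mean():.2f} mm")
```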


Subject(s)
Deep Learning, Animals, Swine, Catheters, Heart/diagnostic imaging, Neural Networks (Computer), Algorithms
7.
J Biomed Opt ; 28(9): 097001, 2023 09.
Article in English | MEDLINE | ID: mdl-37671115

ABSTRACT

Significance: Multispectral photoacoustic imaging has the potential to identify lipid-rich, myelinated nerve tissue in an interventional or surgical setting (e.g., to guide intraoperative decisions when exposing a nerve during reconstructive surgery by limiting operations to nerves needing repair, with no impact to healthy or regenerating nerves). Lipids have two optical absorption peaks within the NIR-II and NIR-III windows (i.e., 1000 to 1350 nm and 1550 to 1870 nm wavelength ranges, respectively) which can be exploited to obtain photoacoustic images. However, nerve visualization within the NIR-III window is more desirable due to higher lipid absorption peaks and a corresponding valley in the optical absorption of water. Aim: We present the first known optical absorption characterizations, photoacoustic spectral demonstrations, and histological validations to support in vivo photoacoustic nerve imaging in the NIR-III window. Approach: Four in vivo swine peripheral nerves were excised, and the optical absorption spectra of these fresh ex vivo nerves were characterized at wavelengths spanning 800 to 1880 nm, to provide the first known nerve optical absorbance spectra and to enable photoacoustic amplitude spectra characterization with the most optimal wavelength range. Prior to excision, the latter two of the four nerves were surrounded by aqueous, lipid-free, agarose blocks (i.e., 3% w/v agarose) to enhance acoustic coupling during in vivo multispectral photoacoustic imaging using the optimal NIR-III wavelengths (i.e., 1630 to 1850 nm) identified in the ex vivo studies. Results: There was a verified characteristic lipid absorption peak at 1725 nm for each ex vivo nerve. Results additionally suggest that the 1630 to 1850 nm wavelength range can successfully visualize and differentiate lipid-rich nerves from surrounding water-containing and lipid-deficient tissues and materials. Conclusions: Photoacoustic imaging using the optimal wavelengths identified and demonstrated for nerves holds promise for detection of myelination in exposed and isolated nerve tissue during a nerve repair surgery, with possible future implications for other surgeries and other optics-based technologies.


Subject(s)
Acoustics, Myelin Sheath, Animals, Swine, Sepharose, Spectrum Analysis, Water
8.
Cancers (Basel) ; 15(13)2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37444403

ABSTRACT

Pancreatic cancer, with a 3-year survival rate below 10%, is one of the deadliest cancer types and greatly benefits from enhanced radiotherapy (RT). Organ motion monitoring helps spare normal tissue from high radiation and, in turn, enables dose escalation to the target, which has been shown to improve the effectiveness of RT by doubling or tripling the post-RT survival rate. The flexible array transducer is a novel and promising solution to address the limitations of conventional US probes. We proposed a novel shape estimation method for the flexible array transducer using two sequential algorithms: (i) an optical tracking-based system that uses the coordinates of optical markers attached to the probe at specific positions to estimate the array shape in real time and (ii) a fully automatic shape optimization algorithm that searches for the optimal array shape that results in the highest-quality reconstructed image. We conducted phantom and in vivo experiments to evaluate the estimated array shapes and the accuracy of the reconstructed US images. The proposed method reconstructed US images with low full-width-at-half-maximum (FWHM) values for the point scatterers, a correct aspect ratio for the cyst, and high matching scores with the ground truth. Our results demonstrate that the proposed methods reconstruct high-quality ultrasound images with significantly less defocusing and distortion compared with those reconstructed without any correction. Specifically, the automatic optimization method reduced the array shape estimation error to less than half the wavelength of the transmitted wave, resulting in a high-quality reconstructed image.
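The second, image-based stage described above amounts to a search over candidate array shapes, scoring each candidate by the quality of the resulting beamformed image. The skeleton below illustrates only that search loop: beamform_quality is a stand-in for "beamform the RF data with this assumed shape and score the image sharpness", the one-parameter curvature family is a simplification, and nothing here reflects the authors' actual optimizer or quality metric.

```python
import numpy as np

def beamform_quality(radius_mm):
    """Stand-in scoring function: a synthetic peaked function whose maximum
    plays the role of the (unknown) true array curvature."""
    true_radius_mm = 500.0
    return -abs(radius_mm - true_radius_mm)

def search_shape(radii_mm, refine_steps=3):
    """Coarse-to-fine 1-D search for the array radius giving the best score."""
    best = max(radii_mm, key=beamform_quality)
    step = (radii_mm[1] - radii_mm[0]) / 2.0
    for _ in range(refine_steps):
        candidates = [best - step, best, best + step]
        best = max(candidates, key=beamform_quality)
        step /= 2.0
    return best

coarse = list(np.arange(300.0, 801.0, 50.0))   # candidate radii in mm (hypothetical)
print(f"estimated radius: {search_shape(coarse):.1f} mm")
```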

9.
Annu Rev Biomed Eng ; 25: 207-232, 2023 06 08.
Article in English | MEDLINE | ID: mdl-37000966

ABSTRACT

Photoacoustic techniques have shown promise in identifying molecular changes in bone tissue and visualizing tissue microstructure. This capability represents significant advantages over gold standards (i.e., dual-energy X-ray absorptiometry) for bone evaluation without requiring ionizing radiation. Instead, photoacoustic imaging uses light to penetrate through bone, followed by acoustic pressure generation, resulting in highly sensitive optical absorption contrast in deep biological tissues. This review covers multiple bone-related photoacoustic imaging contributions to clinical applications, spanning bone cancer, joint pathologies, spinal disorders, osteoporosis, bone-related surgical guidance, consolidation monitoring, and transsphenoidal and transcranial imaging. We also present a summary of photoacoustic-based techniques for characterizing biomechanical properties of bone, including temperature, guided waves, spectral parameters, and spectroscopy. We conclude with a future outlook based on the current state of technological developments, recent achievements, and possible new directions.


Subject(s)
Bone Neoplasms, Photoacoustic Techniques, Humans, Photoacoustic Techniques/methods, X-Ray Computed Tomography, Bones/diagnostic imaging, Spectrum Analysis
10.
Ultrasound Med Biol ; 49(1): 256-268, 2023 01.
Article in English | MEDLINE | ID: mdl-36333154

ABSTRACT

Traditional breast ultrasound imaging is a low-cost, real-time and portable method to assist with breast cancer screening and diagnosis, with particular benefits for patients with dense breast tissue. We previously demonstrated that incorporating coherence-based beamforming additionally improves the distinction of fluid-filled from solid breast masses, based on qualitative image interpretation by board-certified radiologists. However, variable sensitivity (range: 0.71-1.00 when detecting fluid-filled masses) was achieved by the individual radiologist readers. Therefore, we propose two objective coherence metrics, lag-one coherence (LOC) and coherence length (CL), to quantitatively determine the content of breast masses without requiring reader assessment. Data acquired from 31 breast masses were analyzed. Ideal separation (i.e., 1.00 sensitivity and specificity) was achieved between fluid-filled and solid breast masses based on the mean or median LOC value within each mass. When separated based on mean and median CL values, the sensitivity/specificity decreased to 1.00/0.95 and 0.92/0.89, respectively. The greatest sensitivity and specificity were achieved in dense, rather than non-dense, breast tissue. These results support the introduction of an objective, reader-independent method for automated diagnoses of cystic breast masses.
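Lag-one coherence (LOC) here is the normalized correlation between signals on adjacent array elements after focusing, and a mass can then be labeled fluid or solid by thresholding the mean LOC within the mass. The sketch below shows that two-step idea on toy channel data; the kernel size, threshold, and data are assumptions, not the study's parameters.

```python
import numpy as np

def lag_one_coherence(delayed, kernel=16):
    """Mean normalized correlation between adjacent channels over axial kernels.

    delayed: (n_elements, n_samples) focused RF data within the mass region.
    """
    n_elem, n_samp = delayed.shape
    vals = []
    for s in range(0, n_samp - kernel, kernel):
        seg = delayed[:, s:s + kernel]
        num = (seg[:-1] * seg[1:]).sum(axis=1)
        den = np.sqrt((seg[:-1] ** 2).sum(axis=1) * (seg[1:] ** 2).sum(axis=1)) + 1e-12
        vals.append(np.mean(num / den))
    return float(np.mean(vals))

def classify_mass(delayed, loc_threshold=0.5):
    """Hypothetical rule: low mean LOC suggests fluid (spatially incoherent) content."""
    return "fluid" if lag_one_coherence(delayed) < loc_threshold else "solid"

rng = np.random.default_rng(2)
n_elem, n_samp = 64, 512
solid_like = (np.tile(rng.standard_normal(n_samp), (n_elem, 1))
              + 0.3 * rng.standard_normal((n_elem, n_samp)))   # coherent speckle-like data
fluid_like = rng.standard_normal((n_elem, n_samp))              # incoherent noise
print(classify_mass(solid_like), classify_mass(fluid_like))
```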


Subject(s)
Breast Neoplasms, Mammography, Female, Humans, Mammography/methods, Breast Density, Breast Neoplasms/diagnostic imaging, Breast Ultrasonography/methods, Ultrasonography, Sensitivity and Specificity
11.
Front Oncol ; 12: 996537, 2022.
Article in English | MEDLINE | ID: mdl-36237341

ABSTRACT

Purpose: In this study, we aim to further evaluate the accuracy of ultrasound tracking for intra-fraction pancreatic tumor motion during radiotherapy through a phantom-based study. Methods: Twelve patients with pancreatic cancer who were treated with stereotactic body radiation therapy were enrolled in this study. The displacement points of the respiratory cycle were acquired from 4DCT and transferred to a motion platform to mimic realistic breathing movements in our phantom study. An ultrasound abdominal phantom was placed and fixed in the motion platform. The ground truth of phantom movement was recorded by tracking an optical tracker attached to this phantom. One tumor inside the phantom was the tracking target. In the evaluation of the results, the monitoring results from the ultrasound system were compared with the phantom motion results from the infrared camera. Differences between infrared-monitored motion and ultrasound-tracked motion were analyzed by calculating the root-mean-square error. Results: For 82.2% of the ultrasound tracking measurements, the difference between the ultrasound-tracked displacement and the infrared-monitored motion was within 0.5 mm; only 0.7% of the measurements failed to track accurately (difference > 2.5 mm). By linear regression analysis, these differences between ultrasound-tracked and infrared-monitored motion did not correlate with respiratory displacement, velocity, or acceleration. Conclusions: The highly accurate monitoring results of this phantom study indicate that the ultrasound tracking system may be a potential method for real-time target monitoring, allowing more accurate delivery of radiation doses.
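The comparison described in the Methods reduces to aligning the two motion traces in time and computing error statistics such as the root-mean-square error and the fraction of samples within a tolerance. A minimal sketch follows; the sinusoidal breathing surrogate and noise level are synthetic assumptions, not the phantom data.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 1800)                        # 60 s at 30 Hz (assumed)
infrared_mm = 5.0 * np.sin(2 * np.pi * t / 4.0)     # ground-truth platform motion
ultrasound_mm = infrared_mm + 0.3 * rng.standard_normal(t.size)  # tracked motion

diff = ultrasound_mm - infrared_mm
rmse = np.sqrt(np.mean(diff ** 2))
within_half_mm = np.mean(np.abs(diff) <= 0.5) * 100
failed = np.mean(np.abs(diff) > 2.5) * 100
print(f"RMSE = {rmse:.2f} mm, within 0.5 mm: {within_half_mm:.1f}%, failures: {failed:.2f}%")
```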

12.
Article in English | MEDLINE | ID: mdl-35446763

ABSTRACT

The successful integration of computer vision, robotic actuation, and photoacoustic imaging to find and follow targets of interest during surgical and interventional procedures requires accurate photoacoustic target detectability. This detectability has traditionally been assessed with image quality metrics, such as contrast, contrast-to-noise ratio, and signal-to-noise ratio (SNR). However, predicting target tracking performance expectations when using these traditional metrics is difficult due to unbounded values and sensitivity to image manipulation techniques like thresholding. The generalized contrast-to-noise ratio (gCNR) is a recently introduced alternative target detectability metric, with previous work dedicated to empirical demonstrations of applicability to photoacoustic images. In this article, we present theoretical approaches to model and predict the gCNR of photoacoustic images with an associated theoretical framework to analyze relationships between imaging system parameters and computer vision task performance. Our theoretical gCNR predictions are validated with histogram-based gCNR measurements from simulated, experimental phantom, ex vivo, and in vivo datasets. The mean absolute errors between predicted and measured gCNR values ranged from 3.2 × 10⁻³ to 2.3 × 10⁻² for each dataset, with channel SNRs ranging from -40 to 40 dB and laser energies ranging from 0.07 [Formula: see text] to 68 mJ. Relationships among gCNR, laser energy, target and background image parameters, target segmentation, and threshold levels were also investigated. Results provide a promising foundation to enable predictions of photoacoustic gCNR and visual servoing segmentation accuracy. The efficiency of precursory surgical and interventional tasks (e.g., energy selection for photoacoustic-guided surgeries) may also be improved with the proposed framework.
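The histogram-based gCNR measurement referenced above quantifies target detectability as one minus the overlap between the target and background pixel-amplitude histograms. A generic NumPy sketch follows; the bin count and the synthetic Rayleigh-distributed samples are assumptions.

```python
import numpy as np

def gcnr(target_vals, background_vals, bins=256):
    """Generalized contrast-to-noise ratio: 1 - histogram overlap."""
    lo = min(target_vals.min(), background_vals.min())
    hi = max(target_vals.max(), background_vals.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(target_vals, bins=edges)
    q, _ = np.histogram(background_vals, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 1.0 - np.minimum(p, q).sum()

rng = np.random.default_rng(4)
background = rng.rayleigh(scale=1.0, size=20000)   # speckle-like background amplitudes
target = rng.rayleigh(scale=3.0, size=20000)       # brighter photoacoustic target amplitudes
print(f"gCNR = {gcnr(target, background):.3f}")
```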


Subject(s)
Photoacoustic Techniques, Robotics, Computers, Computer-Assisted Image Processing/methods, Imaging Phantoms, Signal-to-Noise Ratio, Spectrum Analysis
13.
Article in English | MEDLINE | ID: mdl-34224351

ABSTRACT

Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier in ultrasound image formation, with much promise to balance both image quality and display speed. Despite this promise, one challenge with identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the challenge on ultrasound beamforming with deep learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used for training of future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The integration of CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).


Subject(s)
Deep Learning, Computer-Assisted Image Processing, Neural Networks (Computer), Imaging Phantoms, Ultrasonography
14.
J Biomed Opt ; 26(7)2021 07.
Article in English | MEDLINE | ID: mdl-34272841

ABSTRACT

SIGNIFICANCE: Simulations have the potential to be a powerful tool when planning the placement of photoacoustic imaging system components for surgical guidance. While elastic simulations (which include both compressional and shear waves) are expected to more accurately represent the physical transcranial acoustic wave propagation process, these simulations are more time-consuming and memory-intensive than the compressional-wave-only simulations that our group previously used to identify optimal acoustic windows for transcranial photoacoustic imaging. AIM: We present qualitative and quantitative comparisons of compressional and elastic wave simulations to determine which option is more suitable for preoperative surgical planning. APPROACH: Compressional and elastic photoacoustic k-Wave simulations were performed based on a computed tomography volume of a human cadaver head. Photoacoustic sources were placed in the locations of the internal carotid arteries and likely positions of neurosurgical instrument tips. Transducers received signals from three previously identified optimal acoustic windows (i.e., the ocular, nasal, and temporal regions). Target detectability, image-based target size estimates, and target-to-instrument distances were measured using the generalized contrast-to-noise ratio (gCNR), resolution, and relative source distances, respectively, for each simulation method. RESULTS: The gCNR was equivalent between compressional and elastic simulations. The areas of the -6 dB contours of point spread functions utilized to measure resolution differed by 0.33 to 3.35 mm². Target-to-instrument distance measurements were within 1.24 mm of the true distances. CONCLUSIONS: These results indicate that it is likely sufficient to utilize the less time-consuming, less memory-intensive compressional wave simulations for presurgical planning.


Subject(s)
Neurosurgery, Acoustics, Computer Simulation, Humans, Imaging Phantoms, Sound
15.
Article in English | MEDLINE | ID: mdl-34310298

ABSTRACT

This work demonstrates that the combination of multi-line transmission (MLT) and short-lag spatial coherence (SLSC) imaging improves the contrast of highly coherent structures within soft tissues when compared to both traditional SLSC imaging and conventional delay and sum (DAS) beamforming. Experimental tests with small (i.e., [Formula: see text]-3 mm) targets embedded in homogeneous and heterogeneous backgrounds were conducted. DAS or SLSC images were reconstructed when implementing MLT with varying numbers of simultaneously transmitted beams. In images degraded by acoustic clutter, MLT SLSC achieved up to 34.1 dB better target contrast and up to 16 times higher frame rates when compared to the more conventional single-line transmission SLSC images, with lateral resolution improvements as large as 38.2%. MLT SLSC thus represents a promising technique for clinical applications in which ultrasound visualization of highly coherent targets is required (e.g., breast microcalcifications, kidney stones, and percutaneous biopsy needle tracking) and would otherwise be challenging due to the strong presence of acoustic clutter.


Subject(s)
Acoustics, Diagnostic Imaging, Computer-Assisted Image Processing, Imaging Phantoms, Ultrasonography
16.
Lasers Surg Med ; 53(6): 748-775, 2021 08.
Article in English | MEDLINE | ID: mdl-34015146

ABSTRACT

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.


Subject(s)
Deep Learning, Microscopy, Optical Imaging, Optics and Photonics, Optical Coherence Tomography
17.
IEEE Trans Med Imaging ; 40(12): 3279-3292, 2021 12.
Article in English | MEDLINE | ID: mdl-34018931

ABSTRACT

Hysterectomy (i.e., surgical removal of the uterus) requires severing the main blood supply to the uterus (i.e., the uterine arteries) while preserving the nearby, often overlapping, ureters. In this paper, we investigate dual-wavelength and audiovisual photoacoustic imaging-based approaches to visualize and differentiate the ureter from the uterine artery and to provide the real-time information needed to avoid accidental ureteral injuries during hysterectomies. Dual-wavelength 690/750 nm photoacoustic imaging was implemented during laparoscopic and open hysterectomies performed on human cadavers, with a custom display approach designed to visualize the ureter and uterine artery. The proximity of the surgical tool to the ureter was calculated and conveyed by tracking the surgical tool in photoacoustic images and mapping distance to auditory signals. The dual-wavelength display showed up to 10 dB contrast differences between the ureter and uterine artery at three separation distances (i.e., 4 mm, 5 mm, and 6 mm) during the open hysterectomy. During the laparoscopic hysterectomy, the ureter and uterine artery were visualized in the dual-wavelength image with up to 24 dB contrast differences. Distances between the ureter and the surgical tool ranged from 2.47 to 7.31 mm. These results are promising for the introduction of dual-wavelength photoacoustic imaging to differentiate the ureter from the uterine artery, estimate the position of the ureter relative to a surgical tool tip, map photoacoustic-based distance measurements to auditory signals, and ultimately guide hysterectomy procedures to reduce the risk of accidental ureteral injuries.
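One practical detail of the audiovisual approach above is mapping the tool-to-ureter distance to an auditory cue. A simple, hypothetical mapping is sketched below (distance linearly mapped to beep pitch and repetition rate within a warning range); the ranges, pitches, and linear form are illustrative assumptions, not the authors' design.

```python
import numpy as np

def distance_to_tone(distance_mm, warn_mm=10.0, min_hz=440.0, max_hz=1760.0):
    """Map tool-to-ureter distance to a beep frequency and repetition rate.
    Closer distances give higher pitch and faster beeps; beyond warn_mm, silence."""
    if distance_mm >= warn_mm:
        return None  # no auditory warning
    closeness = 1.0 - np.clip(distance_mm / warn_mm, 0.0, 1.0)
    freq_hz = min_hz + closeness * (max_hz - min_hz)
    beeps_per_s = 1.0 + 9.0 * closeness
    return freq_hz, beeps_per_s

# Example distances spanning the 2.47-7.31 mm range reported above, plus one silent case.
for d in (12.0, 7.31, 5.0, 2.47):
    tone = distance_to_tone(d)
    if tone is None:
        print(f"{d:5.2f} mm -> silent")
    else:
        print(f"{d:5.2f} mm -> {tone[0]:.0f} Hz, {tone[1]:.1f} beeps/s")
```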


Subject(s)
Laparoscopy, Ureter, Cadaver, Female, Humans, Hysterectomy, Ureter/diagnostic imaging, Ureter/surgery, Uterus
18.
IEEE Trans Biomed Eng ; 68(8): 2479-2489, 2021 08.
Article in English | MEDLINE | ID: mdl-33347403

ABSTRACT

OBJECTIVE: Spinal fusion surgeries require accurate placement of pedicle screws in anatomic corridors without breaching bone boundaries. We are developing a combined ultrasound and photoacoustic image guidance system to avoid pedicle screw misplacement and accidental bone breaches, which can lead to nerve damage. METHODS: Pedicle cannulation was performed on a human cadaver, with co-registered photoacoustic and ultrasound images acquired at various time points during the procedure. Bony landmarks obtained from coherence-based ultrasound images of lumbar vertebrae were registered to post-operative CT images. Registration methods were additionally tested on an ex vivo caprine vertebra. RESULTS: Locally weighted short-lag spatial coherence (LW-SLSC) ultrasound imaging enhanced the visualization of bony structures with generalized contrast-to-noise ratios (gCNRs) of 0.99 and 0.98-1.00 in the caprine and human vertebrae, respectively. Short-lag spatial coherence (SLSC) and amplitude-based delay-and-sum (DAS) ultrasound imaging generally produced lower gCNRs of 0.98 and 0.84, respectively, in the caprine vertebra and 0.84-0.93 and 0.34-0.99, respectively, in the human vertebrae. The mean ± standard deviation of the area of -6 dB contours created from DAS photoacoustic images acquired with an optical fiber inserted in prepared pedicle holes (i.e., fiber surrounded by cancellous bone) and holes created after intentional breaches (i.e., fiber exposed to cortical bone) was 10.06 ± 5.22 mm² and 2.47 ± 0.96 mm², respectively (p < 0.01). CONCLUSIONS: Coherence-based LW-SLSC and SLSC beamforming improved visualization of bony anatomical landmarks for ultrasound-to-CT registration, while amplitude-based DAS beamforming successfully distinguished photoacoustic signals within the pedicle from less desirable signals characteristic of impending bone breaches. SIGNIFICANCE: These results are promising to improve visual registration of ultrasound and photoacoustic images with CT images, as well as to assist surgeons with identifying and avoiding impending bone breaches during pedicle cannulation in spinal fusion surgeries.
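The -6 dB contour areas above can be measured by thresholding the envelope-detected photoacoustic image at half of its peak amplitude and summing the pixel areas above threshold. A minimal sketch on synthetic Gaussian spots follows; the pixel spacings and spot widths are assumptions, not the study's data.

```python
import numpy as np

def area_above_minus6db(envelope, dz_mm, dx_mm):
    """Area (mm^2) of the region within -6 dB of the image peak amplitude."""
    threshold = envelope.max() * 10 ** (-6 / 20)   # -6 dB relative to peak
    return np.count_nonzero(envelope >= threshold) * dz_mm * dx_mm

dz_mm, dx_mm = 0.05, 0.05
z = (np.arange(200) - 100) * dz_mm
x = (np.arange(200) - 100) * dx_mm
zz, xx = np.meshgrid(z, x, indexing="ij")
broad_spot = np.exp(-0.5 * (zz / 1.5) ** 2 - 0.5 * (xx / 1.5) ** 2)   # e.g., diffuse in-pedicle signal
tight_spot = np.exp(-0.5 * (zz / 0.7) ** 2 - 0.5 * (xx / 0.7) ** 2)   # e.g., compact near-breach signal
for name, img in (("broad", broad_spot), ("tight", tight_spot)):
    print(f"{name}: -6 dB area = {area_above_minus6db(img, dz_mm, dx_mm):.2f} mm^2")
```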


Subject(s)
Spinal Fusion, Computer-Assisted Surgery, Animals, Catheterization, Goats, Humans, Lumbar Vertebrae/diagnostic imaging, Lumbar Vertebrae/surgery, Ultrasonography
19.
Article in English | MEDLINE | ID: mdl-32746173

ABSTRACT

The photoacoustic effect relies on optical transmission, which causes thermal expansion and generates acoustic signals. Coherence-based photoacoustic signal processing is often preferred over more traditional signal processing methods due to improved signal-to-noise ratios, imaging depth, and resolution in applications such as cell tracking, blood flow estimation, and imaging. However, these applications lack a theoretical spatial coherence model to support their implementation. In this article, the photoacoustic spatial coherence theory is derived to generate theoretical spatial coherence functions. These theoretical spatial coherence functions are compared with k-Wave simulated data and experimental data from point and circular targets (0.1-12 mm in diameter) with generally good agreement, particularly in the shorter spatial lag region. The derived theory was used to hypothesize and test previously unexplored principles for optimizing photoacoustic short-lag spatial coherence (SLSC) images, including the influence of the incident light profile on photoacoustic spatial coherence functions and associated SLSC image contrast and resolution. Results also confirm previous trends from experimental observations, including changes in SLSC image resolution and contrast as a function of the first M lags summed to create SLSC images. For example, small targets (e.g., <1-4 mm in diameter) can be imaged with larger M values to boost target contrast and resolution, and contrast can be further improved by reducing the illuminating beam to a size that is smaller than the target size. Overall, the presented theory provides a promising foundation to support a variety of coherence-based photoacoustic signal processing methods, and the associated theory-based simulation methods are more straightforward than the existing k-Wave simulation methods for SLSC images.


Subject(s)
Photoacoustic Techniques/methods, Computer-Assisted Signal Processing, Animals, Liver/blood supply, Liver/diagnostic imaging, Imaging Phantoms, Swine
20.
J Biomed Opt ; 25(7): 1-19, 2020 07.
Article in English | MEDLINE | ID: mdl-32713168

ABSTRACT

SIGNIFICANCE: Photoacoustic-based visual servoing is a promising technique for surgical tool tip tracking and automated visualization of photoacoustic targets during interventional procedures. However, one outstanding challenge has been the reliability of obtaining segmentations using low-energy light sources that operate within existing laser safety limits. AIM: We developed the first known graphical processing unit (GPU)-based real-time implementation of short-lag spatial coherence (SLSC) beamforming for photoacoustic imaging and applied this real-time algorithm to improve signal segmentation during photoacoustic-based visual servoing with low-energy lasers. APPROACH: A 1-mm-core-diameter optical fiber was inserted into ex vivo bovine tissue. Photoacoustic-based visual servoing was implemented as the fiber was manually displaced by a translation stage, which provided ground truth measurements of the fiber displacement. GPU-SLSC results were compared with a central processing unit (CPU)-SLSC approach and an amplitude-based delay-and-sum (DAS) beamforming approach. Performance was additionally evaluated with in vivo cardiac data. RESULTS: The GPU-SLSC implementation achieved frame rates up to 41.2 Hz, representing a factor of 348 speedup when compared with offline CPU-SLSC. In addition, GPU-SLSC successfully recovered low-energy signals (i.e., ≤268 µJ) with mean ± standard deviation of signal-to-noise ratios of 11.2 ± 2.4 (compared with 3.5 ± 0.8 with conventional DAS beamforming). When energies were lower than the safety limit for skin (i.e., 394.6 µJ for 900-nm wavelength laser light), the median and interquartile range (IQR) of visual servoing tracking errors obtained with GPU-SLSC were 0.64 and 0.52 mm, respectively (which were lower than the median and IQR obtained with DAS by 1.39 and 8.45 mm, respectively). GPU-SLSC additionally reduced the percentage of failed segmentations when applied to in vivo cardiac data. CONCLUSIONS: Results are promising for the use of low-energy, miniaturized lasers to perform GPU-SLSC photoacoustic-based visual servoing in the operating room with laser pulse repetition frequencies as high as 41.2 Hz.


Subject(s)
Algorithms, Photoacoustic Techniques, Animals, Cattle, Diagnostic Imaging, Imaging Phantoms, Reproducibility of Results, Signal-to-Noise Ratio, Ultrasonography