Results 1 - 20 of 31
1.
Cell ; 184(3): 561-565, 2021 02 04.
Article in English | MEDLINE | ID: mdl-33503447

ABSTRACT

Our nationwide network of BME women faculty collectively argue that racial funding disparity by the National Institutes of Health (NIH) remains the most insidious barrier to success of Black faculty in our profession. We thus refocus attention on this critical barrier and suggest solutions on how it can be dismantled.


Subjects
Biomedical Research/economics, Black or African American, Financial Management, Research Personnel/economics, Humans, National Institutes of Health (U.S.)/economics, Racial Groups, United States
2.
J Appl Clin Med Phys ; 18(4): 84-96, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28574192

ABSTRACT

PURPOSE: Stereotactic body radiation therapy (SBRT) allows high radiation doses to be delivered to pancreatic tumors with limited toxicity. Nevertheless, the respiratory motion of the pancreas introduces major uncertainty during SBRT. Ultrasound imaging is a non-ionizing, non-invasive, and real-time technique for intrafraction monitoring, but no setup configuration is currently available for placing an ultrasound probe for monitoring during pancreas SBRT. METHODS AND MATERIALS: An arm-bridge system was designed and built. A CT scan of the bridge-held ultrasound probe was acquired and fused to CTs of ten previously treated pancreatic SBRT patients to create virtual simulation CTs. Both step-and-shoot intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) planning were performed on the virtual simulation CTs. The accuracy of our tracking algorithm was evaluated with a programmable motion phantom executing simulated 3D breath-hold movement. An IRB-approved volunteer study was also performed to evaluate the feasibility of the system setup: three healthy subjects underwent the same patient setup required for pancreas SBRT with active breathing control (ABC), and 4D ultrasound images were acquired for monitoring. Ten breath-hold cycles were monitored for both the phantom and the volunteers. For the phantom study, the target motion tracked by ultrasound was compared with the motion tracked by an infrared camera. For the volunteer study, the reproducibility of ABC breath-hold was assessed. RESULTS: The volunteer study showed that the arm-bridge system allows placement of an ultrasound probe, and ultrasound monitoring showed less than 2 mm reproducibility of ABC breath-hold in healthy volunteers. The phantom monitoring accuracy was 0.14 ± 0.08 mm, 0.04 ± 0.1 mm, and 0.25 ± 0.09 mm in the three directions. For dosimetry, 100% of the virtual simulation plans passed protocol criteria. CONCLUSIONS: Our ultrasound system can potentially be used for real-time monitoring during pancreas SBRT without compromising planning quality. The phantom study showed high monitoring accuracy of the system, and the volunteer study demonstrated the feasibility of the clinical workflow.


Subjects
Organ Motion, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/radiotherapy, Radiosurgery/methods, Radiotherapy Planning, Computer-Assisted, Respiration, Ultrasonography, Interventional/methods, Algorithms, Feasibility Studies, Humans, Phantoms, Imaging, Radiotherapy, Intensity-Modulated, Reproducibility of Results
3.
Biomed Opt Express ; 14(8): 4349-4368, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37799699

ABSTRACT

Photoacoustic imaging has demonstrated recent promise for surgical guidance, enabling visualization of tool tips during surgical and non-surgical interventions. To receive photoacoustic signals, most conventional transducers are rigid, while a flexible array is able to deform and provide complete contact on surfaces with different geometries. In this work, we present photoacoustic images acquired with a flexible array transducer in multiple concave shapes in phantom and ex vivo bovine liver experiments targeted toward interventional photoacoustic applications. We validate our image reconstruction equations for known sensor geometries with simulated data, and we provide empirical elevation field-of-view, target position, and image quality measurements. The elevation field-of-view was 6.08 mm at a depth of 4 cm and greater than 13 mm at a depth of 5 cm. The target depth agreement with ground truth ranged 98.35-99.69%. The mean lateral and axial target sizes when imaging 600 µm-core-diameter optical fibers inserted within the phantoms ranged 0.98-2.14 mm and 1.61-2.24 mm, respectively. The mean ± one standard deviation of lateral and axial target sizes when surrounded by liver tissue were 1.80±0.48 mm and 2.17±0.24 mm, respectively. Contrast, signal-to-noise, and generalized contrast-to-noise ratios ranged 6.92-24.42 dB, 46.50-67.51 dB, and 0.76-1, respectively, within the elevational field-of-view. Results establish the feasibility of implementing photoacoustic-guided surgery with a flexible array transducer.

4.
Photoacoustics ; 33: 100555, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38021286

ABSTRACT

Photoacoustic (PA) imaging has the potential to deliver non-invasive diagnostic information. However, skin tone differences bias PA target visualization, as the elevated optical absorption of melanated skin decreases optical fluence within the imaging plane and increases the presence of acoustic clutter. This paper demonstrates that short-lag spatial coherence (SLSC) beamforming mitigates this bias. PA data from the forearm of 18 volunteers were acquired with 750-, 810-, and 870-nm wavelengths. Skin tones ranging from light to dark were objectively quantified using the individual typology angle (ITA°). The signal-to-noise ratio (SNR) of the radial artery (RA) and surrounding clutter were measured. Clutter was minimal (e.g., -16 dB relative to the RA) with lighter skin tones and increased to -8 dB with darker tones, which compromised RA visualization in conventional PA images. SLSC beamforming achieved a median SNR improvement of 3.8 dB, resulting in better RA visualization for all skin tones.
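Both quantities used in this abstract, the individual typology angle for ranking skin tones and the dB signal-to-noise ratio for comparing artery and clutter amplitudes, follow short standard formulas. The sketch below illustrates them in Python under common conventions; the ITA formula is the usual CIELAB-based definition, the SNR expression is one common choice (the paper's exact region selections and definitions may differ), and the example values are hypothetical.

```python
import numpy as np

def individual_typology_angle(L_star: float, b_star: float) -> float:
    """Individual typology angle (ITA, degrees) from CIELAB values:
    ITA = arctan((L* - 50) / b*) * 180 / pi."""
    return np.degrees(np.arctan2(L_star - 50.0, b_star))

def snr_db(signal_region: np.ndarray, noise_region: np.ndarray) -> float:
    """One common SNR definition: mean signal amplitude over the standard
    deviation of a background (clutter) region, in dB."""
    return 20.0 * np.log10(np.mean(np.abs(signal_region)) / np.std(noise_region))

# Hypothetical CIELAB values (not from the study): lighter skin gives a larger ITA.
print(round(individual_typology_angle(65.0, 15.0), 1))
print(round(individual_typology_angle(35.0, 20.0), 1))
```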

5.
BME Front ; 2022, 2022.
Article in English | MEDLINE | ID: mdl-36714302

ABSTRACT

The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.

6.
IEEE Trans Med Imaging ; 40(11): 3178-3189, 2021 11.
Article in English | MEDLINE | ID: mdl-34101588

ABSTRACT

Ultrasound imaging has been developed for tumor tracking in image-guided radiotherapy, and the flexible array transducer is a promising tool for this task: it can reduce user dependence and the anatomical deformation caused by traditional rigid ultrasound transducers. However, due to its flexible geometry, the conventional delay-and-sum (DAS) beamformer may apply incorrect time delays to the radio-frequency (RF) data and produce B-mode images with considerable defocusing and distortion. To address this problem, we propose a novel end-to-end deep learning approach that can replace the conventional DAS beamformer when the transducer geometry is unknown. Different deep neural networks (DNNs) were designed to learn the proper time delays for each channel and to reconstruct undistorted, high-quality B-mode images directly from RF channel data. We compared the DNN results to standard DAS beamformed results using simulation and flexible array transducer scan data. With the proposed DNN approach, the average full-width-at-half-maximum (FWHM) of point scatterers is 1.80 mm and 1.31 mm lower in the simulation and scan results, respectively; the contrast-to-noise ratio (CNR) of the anechoic cyst in the simulation and phantom scan is improved by 0.79 dB and 1.69 dB, respectively; and the aspect ratios of all the cysts are closer to 1. These evaluation results show that the proposed approach can effectively reduce distortion and improve the lateral resolution and contrast of the reconstructed B-mode images.
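The full-width-at-half-maximum metric quoted above is conventionally measured on a lateral or axial profile through a point target. A minimal sketch of that measurement, assuming a simple threshold-crossing estimate on a sampled profile (not necessarily the authors' exact procedure):

```python
import numpy as np

def fwhm_mm(profile: np.ndarray, dx_mm: float) -> float:
    """Full-width-at-half-maximum (mm) of a point-target profile sampled every dx_mm,
    estimated from the first and last samples at or above half of the peak value."""
    above = np.flatnonzero(profile >= profile.max() / 2.0)
    return (above[-1] - above[0]) * dx_mm

# Check against a Gaussian whose analytic FWHM is 2*sqrt(2*ln(2))*sigma = 2.0 mm.
x_mm = np.arange(-5.0, 5.0, 0.05)
sigma = 2.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
print(fwhm_mm(np.exp(-x_mm**2 / (2.0 * sigma**2)), dx_mm=0.05))  # approximately 2.0
```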


Assuntos
Aprendizado Profundo , Processamento de Imagem Assistida por Computador , Imagens de Fantasmas , Transdutores , Ultrassonografia
7.
Biomed Opt Express ; 12(4): 2079-2117, 2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33996218

ABSTRACT

Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a promising method to provide critical guidance of multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.

8.
Biomed Opt Express ; 12(7): 4115-4118, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34457402

ABSTRACT

This feature issue of Biomedical Optics Express covers all aspects of translational photoacoustic research. Application areas include screening and diagnosis of diseases, imaging of disease progression and therapeutic response, and image-guided treatment, such as surgery, drug delivery, and photothermal/photodynamic therapy. The feature issue also covers relevant developments in photoacoustic instrumentation, contrast agents, and image processing and reconstruction algorithms.

9.
Front Oncol ; 11: 759811, 2021.
Article in English | MEDLINE | ID: mdl-34804959

ABSTRACT

PURPOSE: We proposed a Haar feature-based method for tracking an endoscopic ultrasound (EUS) probe in diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) scans to guide hydrogel injection without external tracking hardware. This study aimed to assess the feasibility of implementing our method with phantom and patient images. MATERIALS AND METHODS: Our method consists of a pre-simulation step and a Haar feature extraction step. First, a simulated EUS image set was generated based on anatomic information from interpolated CT/MRI images. Second, efficient Haar features were extracted from the simulated EUS images to create a Haar feature dictionary. The relative EUS probe position was estimated by searching the dictionary for the Haar feature vector that best matched the feature vector of the target EUS image. The method was validated using an EUS phantom and patient CT/MRI images. RESULTS: In the phantom experiment, our Haar feature-based EUS probe tracking method found the best-matched simulated EUS image from a dictionary of 123 simulated images. The errors at all four target points between the real EUS image and the best-matched simulated EUS image were within 1 mm. In the patient CT/MRI scans, the best-matched simulated EUS image was selected accurately by our method, thereby confirming the probe location. When applied to MRI images, however, the method was not always robust due to the low image resolution. CONCLUSIONS: Our Haar feature-based method is capable of finding the best-matched simulated EUS image from the dictionary. We demonstrated the feasibility of tracking the EUS probe without external tracking hardware, thereby guiding hydrogel injection between the head of the pancreas and the duodenum.
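As a rough illustration of the dictionary-matching idea described above (integral-image-based Haar-like features compared by nearest-neighbor search), the following Python sketch uses two-rectangle features and Euclidean distance; the feature windows, image sizes, and data are hypothetical stand-ins, not the study's actual feature set or implementation.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy box sums."""
    return np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] computed from the integral image ii."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_feature_vector(img, windows):
    """Two-rectangle (left minus right) Haar-like responses for (row, col, h, w) windows."""
    ii = integral_image(img)
    return np.array([box_sum(ii, r, c, h, w // 2) - box_sum(ii, r, c + w // 2, h, w // 2)
                     for r, c, h, w in windows])

# Hypothetical dictionary lookup: choose the simulated image whose feature vector is
# closest (Euclidean distance) to the target image's feature vector.
rng = np.random.default_rng(0)
windows = [(10, 10, 16, 16), (30, 20, 8, 24), (5, 40, 12, 12)]
dictionary = [rng.random((64, 64)) for _ in range(123)]            # stand-in simulated EUS images
dict_feats = np.stack([haar_feature_vector(im, windows) for im in dictionary])
target = dictionary[42] + 0.01 * rng.standard_normal((64, 64))     # stand-in acquired EUS frame
query = haar_feature_vector(target, windows)
print(np.argmin(np.linalg.norm(dict_feats - query, axis=1)))       # expected to recover index 42
```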

10.
Biomed Opt Express ; 12(3): 1205-1216, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33796347

ABSTRACT

Photoacoustic imaging is a promising technique to provide guidance during multiple surgeries and procedures. One challenge with this technique is that major blood vessels in the liver are difficult to differentiate from surrounding tissue within current safety limits, which only exist for human skin and eyes. In this paper, we investigate the safety of raising this limit for liver tissue excited with a 750 nm laser wavelength and approximately 30 mJ laser energy (corresponding to approximately 150 mJ/cm2 fluence). Laparotomies were performed on six swine to empirically investigate potential laser-related liver damage. Laser energy was applied for durations of 1 minute, 10 minutes, and 20 minutes. Lasered liver lobes were excised either immediately after laser application (3 swine) or six weeks after surgery (3 swine). Cell damage was assessed using liver damage blood biomarkers and histopathology analyses of 41 tissue samples in total. The biomarkers were generally normal over the 6-week post-surgical in vivo study period. Histopathology revealed no cell death, although additional pathology (i.e., hemorrhage, inflammation, fibrosis) was present due to handling, sample resection, and fibrous adhesions resulting from the laparotomy. These results support a new protocol for studying laser-related liver damage, indicating the potential to raise the safety limit for liver photoacoustic imaging to approximately 150 mJ/cm2 with a laser wavelength of 750 nm and for imaging durations up to 10 minutes without causing cell death. This investigation and protocol may be applied to other tissues and extended to additional wavelengths and energies, which is overall promising for introducing new tissue-specific laser safety limits for photoacoustic-guided surgery.
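The energy-to-fluence correspondence quoted above follows from dividing the pulse energy by the illuminated area; a one-line check, assuming a beam area of about 0.2 cm2 (a value implied by the quoted numbers, not stated explicitly in the abstract):

```python
pulse_energy_mJ = 30.0      # approximate pulse energy from the abstract
beam_area_cm2 = 0.2         # assumed illuminated area implied by the quoted fluence
print(pulse_energy_mJ / beam_area_cm2)   # 150.0 mJ/cm^2, matching the ~150 mJ/cm^2 above
```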

11.
Biomed Opt Express ; 12(11): 7049-7050, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34858698

ABSTRACT

[This corrects the article on p. 1205 in vol. 12, PMID: 33796347.].

12.
J Appl Phys ; 128(6): 060904, 2020 Aug 14.
Article in English | MEDLINE | ID: mdl-32817994

ABSTRACT

Minimally invasive surgeries often require complicated maneuvers and delicate hand-eye coordination and ideally would incorporate "x-ray vision" to see beyond tool tips and underneath tissues prior to making incisions. Photoacoustic imaging has the potential to offer this feature, but not with ionizing x-rays. Instead, optical fibers and acoustic receivers enable photoacoustic sensing of major structures, such as blood vessels and nerves, that are otherwise hidden from view. This imaging process is initiated by transmitting laser pulses that illuminate regions of interest, causing thermal expansion and the generation of sound waves that are detectable with conventional ultrasound transducers. The recorded signals are then converted to images through the beamforming process. Photoacoustic imaging may be implemented to both target and avoid blood-rich surgical contents (and in some cases simultaneously or independently visualize optical fiber tips or metallic surgical tool tips) in order to prevent accidental injury and assist device operators during minimally invasive surgeries and interventional procedures. Novel light delivery systems, counterintuitive findings, and robotic integration methods introduced by the Photoacoustic & Ultrasonic Systems Engineering Lab are summarized in this invited Perspective, setting the foundation and rationale for the subsequent discussion of the author's views on possible future directions for this exciting frontier known as photoacoustic-guided surgery.
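The beamforming step described above is most commonly delay-and-sum applied to the received channel data, with one-way delays because the acoustic wave originates at the optical absorber. A minimal Python sketch under those assumptions (a simplified illustration with synthetic data, not the lab's implementation):

```python
import numpy as np

def das_photoacoustic(channel_data, element_x_m, fs_hz, c_m_s, image_x_m, image_z_m):
    """Delay-and-sum reconstruction of photoacoustic RF channel data recorded from
    the moment of the laser pulse. channel_data is (n_samples, n_elements); delays
    are one-way (pixel-to-element distance divided by the speed of sound)."""
    n_samples = channel_data.shape[0]
    image = np.zeros((len(image_z_m), len(image_x_m)))
    for iz, z in enumerate(image_z_m):
        for ix, x in enumerate(image_x_m):
            dist = np.hypot(element_x_m - x, z)                # pixel-to-element distances
            idx = np.round(dist / c_m_s * fs_hz).astype(int)   # one-way delays in samples
            valid = idx < n_samples
            image[iz, ix] = channel_data[idx[valid], np.flatnonzero(valid)].sum()
    return image

# Hypothetical usage with a synthetic point absorber at (x, z) = (0 mm, 20 mm):
fs, c = 40e6, 1540.0
elems = np.linspace(-19e-3, 19e-3, 128)
t = np.arange(2048) / fs
data = np.zeros((2048, 128))
for j, ex in enumerate(elems):
    arrival = np.hypot(ex, 20e-3) / c
    data[:, j] = np.exp(-((t - arrival) * fs) ** 2 / 8.0)      # Gaussian pulse arrival
img = das_photoacoustic(data, elems, fs, c,
                        np.linspace(-10e-3, 10e-3, 81), np.linspace(15e-3, 25e-3, 81))
print(np.unravel_index(img.argmax(), img.shape))               # peak near (40, 40), the true location
```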

13.
Photoacoustics ; 19: 100183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32695578

ABSTRACT

Real-time intraoperative guidance during minimally invasive neurosurgical procedures (e.g., endonasal transsphenoidal surgery) is often limited to endoscopy and CT-guided image navigation, which can be suboptimal at locating underlying blood vessels and nerves. Accidental damage to these critical structures can cause severe surgical complications, including patient blindness and death. Photoacoustic image guidance was previously proposed as a method to prevent accidental injury. While the proposed technique remains promising, the original light delivery and sound reception components of this technology require alterations to make the technique suitable for patient use. This paper presents simulation and experimental studies performed with both an intact human skull (which was cleaned of tissue attachments) and a complete human cadaver head (with contents and surrounding tissue intact) in order to investigate optimal locations for ultrasound probe placement during photoacoustic imaging and to test the feasibility of a modified light delivery design. Volumetric x-ray CT images of the human skull were used to create k-Wave simulations of acoustic wave propagation within this cranial environment. Photoacoustic imaging of the internal carotid artery (ICA) was performed with this same skull. Optical fibers emitting 750 nm light were inserted into the nasal cavity for ICA illumination. The ultrasound probe was placed on three optimal regions identified by simulations: (1) nasal cavity, (2) ocular region, and (3) 1 mm-thick temporal bone (which received 9.2%, 4.7%, and 3.8% of the initial photoacoustic pressure, respectively, in simulations). For these three probe locations, the contrast of the ICA in comparative experimental photoacoustic images was 27 dB, 19 dB, and 12 dB, respectively, with delay-and-sum (DAS) beamforming and laser pulse energies of 3 mJ, 5 mJ, and 4.2 mJ, respectively. Short-lag spatial coherence (SLSC) beamforming improved the contrast of these DAS images by up to 15 dB, enabled visualization of multiple cross-sectional ICA views in a single image, and enabled the use of lower laser energies. Combined simulation and experimental results with the emptied skull and >1 mm-thick temporal bone indicated that the ocular and nasal regions were better probe locations than the temporal ultrasound probe location. Results from both the same skull filled with ovine brains and eyes and the human cadaver head validate the ocular region as an optimal acoustic window for our current system setup, producing high-contrast (i.e., up to 35 dB) DAS and SLSC photoacoustic images within the laser safety limits of a novel, compact light delivery system design that is independent of surgical tools (i.e., a fiber bundle with 6.8 mm outer diameter, 2 mm-diameter optical aperture, and an air gap spacing between the sphenoid bone and fiber tips). These results are promising toward identifying, quantifying, and overcoming major system design barriers to proceed with future patient testing.

14.
Biomed Opt Express ; 11(7): 3684-3698, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-33014560

ABSTRACT

The generalized contrast-to-noise ratio (gCNR) is a relatively new image quality metric designed to assess the probability of lesion detectability in ultrasound images. Although gCNR was initially demonstrated with ultrasound images, the metric is theoretically applicable to multiple types of medical images. In this paper, the applicability of gCNR to photoacoustic images is investigated. The gCNR was computed for both simulated and experimental photoacoustic images generated by amplitude-based (i.e., delay-and-sum) and coherence-based (i.e., short-lag spatial coherence) beamformers. These gCNR measurements were compared to three more traditional image quality metrics (i.e., contrast, contrast-to-noise ratio, and signal-to-noise ratio) applied to the same datasets. An increase in qualitative target visibility generally corresponded with increased gCNR. In addition, gCNR magnitude was more directly related to the separability of photoacoustic signals from their background, which degraded with the presence of limited bandwidth artifacts and increased levels of channel noise. At high gCNR values (i.e., 0.95-1), contrast, contrast-to-noise ratio, and signal-to-noise ratio varied by up to 23.7-56.2 dB, 2.0-3.4, and 26.5-7.6×10²⁰, respectively, for simulated, experimental phantom, and in vivo data. Therefore, these traditional metrics can experience large variations when a target is fully detectable, and additional increases in these values would have no impact on photoacoustic target detectability. In addition, gCNR is robust to changes in traditional metrics introduced by applying a minimum threshold to image amplitudes. In tandem with other photoacoustic image quality metrics and with a defined range of 0 to 1, gCNR has promising potential to provide additional insight, particularly when designing new beamformers and image formation techniques and when reporting quantitative performance without an opportunity to qualitatively assess corresponding images (e.g., in text-only abstracts).
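For reference, the three traditional metrics compared against gCNR in this abstract are commonly defined as below; exact conventions vary between papers, and the final line illustrates the thresholding sensitivity noted above (clipping background amplitudes inflates SNR without changing how separable the regions are). All data here are synthetic stand-ins.

```python
import numpy as np

def contrast_db(target, background):
    """Contrast (dB): ratio of mean target to mean background amplitude."""
    return 20.0 * np.log10(np.mean(target) / np.mean(background))

def cnr(target, background):
    """Contrast-to-noise ratio (one common linear definition)."""
    return abs(np.mean(target) - np.mean(background)) / np.sqrt(np.var(target) + np.var(background))

def snr(target, background):
    """Signal-to-noise ratio: mean target amplitude over background standard deviation."""
    return np.mean(target) / np.std(background)

rng = np.random.default_rng(1)
target = rng.rayleigh(3.0, 10_000)        # stand-in envelope amplitudes in a bright target
background = rng.rayleigh(1.0, 10_000)    # stand-in background amplitudes
print(contrast_db(target, background), cnr(target, background), snr(target, background))
print(snr(target, np.maximum(background, 2.9)))   # minimum threshold applied: SNR inflates sharply
```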

15.
Article in English | MEDLINE | ID: mdl-32396084

ABSTRACT

Single plane wave transmissions are promising for automated imaging tasks requiring high ultrasound frame rates over an extended field of view. However, a single plane wave insonification typically produces suboptimal image quality. To address this limitation, we are exploring the use of deep neural networks (DNNs) as an alternative to delay-and-sum (DAS) beamforming. The objectives of this work are to obtain information directly from raw channel data and to simultaneously generate both a segmentation map for automated ultrasound tasks and a corresponding ultrasound B-mode image for interpretable supervision of the automation. We focus on visualizing and segmenting anechoic targets surrounded by tissue and ignoring or deemphasizing less important surrounding structures. DNNs trained with Field II simulations were tested with simulated, experimental phantom, and in vivo data sets that were not included during training. With unfocused input channel data (i.e., prior to the application of receive time delays), simulated, experimental phantom, and in vivo test data sets achieved mean ± standard deviation Dice similarity coefficients of 0.92 ± 0.13, 0.92 ± 0.03, and 0.77 ± 0.07, respectively, and generalized contrast-to-noise ratios (gCNRs) of 0.95 ± 0.08, 0.93 ± 0.08, and 0.75 ± 0.14, respectively. With subaperture beamformed channel data and a modification to the input layer of the DNN architecture to accept these data, the fidelity of image reconstruction increased (e.g., mean gCNR of multiple acquisitions of two in vivo breast cysts ranged 0.89-0.96), but DNN display frame rates were reduced from 395 to 287 Hz. Overall, the DNNs successfully translated feature representations learned from simulated data to phantom and in vivo data, which is promising for this novel approach to simultaneous ultrasound image formation and segmentation.
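The Dice similarity coefficient reported above is twice the intersection of the predicted and true masks divided by the sum of their sizes; a minimal sketch with hypothetical circular masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Hypothetical example: a predicted circular anechoic target slightly offset from the true one.
yy, xx = np.mgrid[:128, :128]
truth = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
pred = (yy - 66) ** 2 + (xx - 62) ** 2 < 20 ** 2
print(round(dice_coefficient(pred, truth), 3))   # high overlap, so Dice is close to 1
```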


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Ultrasonography/methods, Algorithms, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Phantoms, Imaging
16.
Article in English | MEDLINE | ID: mdl-31796398

ABSTRACT

In the last 30 years, the contrast-to-noise ratio (CNR) has been used to estimate the contrast and lesion detectability in ultrasound images. Recent studies have shown that the CNR cannot be used with modern beamformers, as dynamic range alterations can produce arbitrarily high CNR values with no real effect on the probability of lesion detection. We generalize the definition of CNR based on the overlap area between two probability density functions. This generalized CNR (gCNR) is robust against dynamic range alterations; it can be applied to all kinds of images, units, or scales; it provides a quantitative measure for contrast; and it has a simple statistical interpretation, i.e., the success rate that can be expected from an ideal observer at the task of separating pixels. We test gCNR on several state-of-the-art imaging algorithms and, in addition, on a trivial compression of the dynamic range. We observe that CNR varies greatly between the state-of-the-art methods, with improvements larger than 100%. We observe that trivial compression leads to a CNR improvement of over 200%. The proposed index, however, yields the same value for compressed and uncompressed images. The tested methods showed mismatched performance in terms of lesion detectability, with variations in gCNR ranging from -0.08 to +0.29. This new metric fixes a methodological flaw in the way we study contrast and allows us to assess the relevance of new imaging algorithms.
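The definition given above (gCNR as one minus the overlap area of the two probability density functions) translates directly into a histogram-based estimate. The sketch below is a simplified illustration of that definition, not the authors' reference code; it also shows the claimed robustness to a monotonic dynamic-range transformation such as log compression.

```python
import numpy as np

def gcnr(region_in, region_out, bins=256):
    """Generalized CNR: one minus the overlap area of the two regions' histograms,
    a discrete estimate of 1 - integral of min(p_in, p_out)."""
    lo = min(region_in.min(), region_out.min())
    hi = max(region_in.max(), region_out.max())
    p_in, _ = np.histogram(region_in, bins=bins, range=(lo, hi), density=True)
    p_out, edges = np.histogram(region_out, bins=bins, range=(lo, hi), density=True)
    return 1.0 - np.sum(np.minimum(p_in, p_out)) * (edges[1] - edges[0])

# gCNR is essentially unchanged by a monotonic dynamic-range transformation, unlike CNR:
rng = np.random.default_rng(0)
inside, outside = rng.rayleigh(1.0, 50_000), rng.rayleigh(3.0, 50_000)
print(round(gcnr(inside, outside), 3))
print(round(gcnr(np.log10(inside + 1e-6), np.log10(outside + 1e-6)), 3))   # similar value
```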


Subjects
Image Processing, Computer-Assisted/methods, Ultrasonography/methods, Algorithms, Cysts/diagnostic imaging, Models, Biological, Phantoms, Imaging, Ultrasonography/instrumentation, Ultrasonography/standards
17.
Article in English | MEDLINE | ID: mdl-30507500

ABSTRACT

Ultrasound is frequently used in conjunction with mammography in order to detect breast cancer as early as possible. However, due largely to the heterogeneity of breast tissue, ultrasound images are plagued with clutter that obstructs important diagnostic features. Short-lag spatial coherence (SLSC) imaging has proven to be effective at clutter reduction in noisy ultrasound images. M-Weighted SLSC and Robust-SLSC (R-SLSC) imaging were recently introduced to further improve image quality at higher lag values, while R-SLSC imaging has the added benefit of enabling the adjustment of tissue texture to produce a tissue signal-to-noise ratio (SNR) that is quantitatively similar to B-mode speckle SNR. This paper investigates the initial application of SLSC, M-Weighted SLSC, and R-SLSC imaging to nine targets in the female breast [two simple cysts, one complicated cyst, two fibroadenomas, one hematoma, one complex cystic and solid mass, one invasive ductal carcinoma (IDC), and one ductal carcinoma in situ (DCIS)]. As expected, R-SLSC beamforming improves cyst and hematoma contrast by up to 6.35 and 1.55 dB, respectively, when compared to the original B-mode image, and similar improvements are achieved with SLSC and M-Weighted SLSC imaging. However, an interesting finding from this initial investigation is that the solid masses (i.e., fibroadenoma, complex cystic and solid mass, IDC, and DCIS), which appear as hypoechoic in the B-mode image, have similarly high coherence to that of surrounding tissue in coherence-based images. This work holds promise for using SLSC, M-Weighted SLSC, and/or R-SLSC imaging to distinguish between fluid-filled and solid hypoechoic breast masses.
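SLSC imaging replaces each pixel's amplitude with the spatial coherence of the focused channel signals, summed over the shortest element lags. The following Python sketch illustrates that per-pixel computation under common assumptions (a simplified illustration with synthetic data, not the authors' implementation; the M-Weighted and R-SLSC variants are not shown):

```python
import numpy as np

def slsc_pixel(channel_block, max_lag):
    """Short-lag spatial coherence for one pixel. channel_block is an
    (n_samples, n_elements) block of focused (delayed) channel data over a small
    axial kernel; the return value is the normalized cross-correlation between
    element pairs, averaged per lag and summed over lags 1..max_lag."""
    n_elements = channel_block.shape[1]
    value = 0.0
    for m in range(1, max_lag + 1):
        corrs = []
        for i in range(n_elements - m):
            a, b = channel_block[:, i], channel_block[:, i + m]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
            if denom > 0:
                corrs.append(np.dot(a, b) / denom)
        value += np.mean(corrs)
    return value

# Synthetic check: a coherent wavefront plus noise scores higher than noise alone.
rng = np.random.default_rng(0)
wavefront = np.sin(2 * np.pi * 5e6 * np.arange(64) / 40e6)          # 5 MHz tone, 40 MHz sampling
coherent = wavefront[:, None] + 0.5 * rng.standard_normal((64, 128))
noise_only = rng.standard_normal((64, 128))
print(slsc_pixel(coherent, 12), slsc_pixel(noise_only, 12))
```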


Subjects
Breast Neoplasms/diagnostic imaging, Breast/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Ultrasonography, Mammary/methods, Female, Humans, Signal-To-Noise Ratio
18.
Phys Med Biol ; 64(18): 185006, 2019 09 11.
Article in English | MEDLINE | ID: mdl-31323649

ABSTRACT

We have previously developed a robotic ultrasound imaging system for motion monitoring in abdominal radiation therapy. Owing to the slow speed of ultrasound image processing, our previous system could only track abdominal motion under breath-hold. To overcome this limitation, a novel 2D-based image processing method for tracking intra-fraction respiratory motion is proposed. Fifty-seven different anatomical features acquired from 27 sets of 2D ultrasound sequences were used in this study. Three 2D ultrasound sequences were acquired with the robotic ultrasound system from three healthy volunteers; the remaining datasets were provided by the 2015 MICCAI Challenge on Liver Ultrasound Tracking. All datasets were preprocessed to extract the feature points, and a patient-specific motion pattern was extracted by principal component analysis and slow feature analysis (SFA). Tracking finds the most similar training frame (the indexed frame) through a k-dimensional-tree-based nearest neighbor search to estimate the tracked object's location. A template image is updated dynamically from the indexed frame to perform fast template matching (TM) within a learned, smaller search region on the incoming frame. The mean tracking error between manually annotated landmarks and the location extracted from the indexed training frame is 1.80 ± 1.42 mm. Adding the fast TM procedure within a small search region reduces the mean tracking error to 1.14 ± 1.16 mm. The tracking time per frame is 15 ms, which is well below the frame acquisition time. Furthermore, anatomical reproducibility was measured by analyzing the anatomical landmark's location relative to the probe; the position-controlled probe has better reproducibility and yields a smaller mean error across all three volunteer cases than the force-controlled probe (2.69 versus 11.20 mm in the superior-inferior direction and 1.19 versus 8.21 mm in the anterior-posterior direction). Our method significantly reduces the processing time for tracking respiratory motion, which can reduce delivery uncertainty.
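The two-stage search described above (nearest-neighbor lookup of an indexed training frame in feature space, followed by fast template matching within a small search region) can be sketched as follows. This is an illustrative simplification: scipy is an assumed dependency, the feature vectors and images are random stand-ins, and the real system uses PCA/SFA features and a learned search region.

```python
import numpy as np
from scipy.spatial import cKDTree  # assumed available

def build_index(training_features: np.ndarray) -> cKDTree:
    """k-d tree over per-frame feature vectors (e.g., PCA/SFA projections)."""
    return cKDTree(training_features)

def match_template(search_region: np.ndarray, template: np.ndarray):
    """Brute-force normalized cross-correlation of a template within a small
    search region; returns the (row, col) offset of the best match."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_pos = -np.inf, (0, 0)
    for r in range(search_region.shape[0] - th + 1):
        for c in range(search_region.shape[1] - tw + 1):
            patch = search_region[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = np.mean(t * p)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Hypothetical usage: find the most similar training frame, then refine locally.
rng = np.random.default_rng(0)
train_feats = rng.random((500, 8))                       # stand-in per-frame feature vectors
tree = build_index(train_feats)
_, indexed_frame = tree.query(train_feats[123] + 0.01)   # nearest training frame
region = rng.random((40, 40))                            # stand-in search region in the incoming frame
tmpl = region[12:28, 15:31].copy()                       # stand-in template
print(indexed_frame, match_template(region, tmpl))       # expect 123 and (12, 15)
```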


Subjects
Abdomen/diagnostic imaging, Abdomen/radiation effects, Dose Fractionation, Radiation, Machine Learning, Movement, Radiotherapy, Image-Guided/methods, Respiration, Healthy Volunteers, Humans, Image Processing, Computer-Assisted, Radiotherapy Planning, Computer-Assisted, Reproducibility of Results, Ultrasonography
19.
Sci Rep ; 8(1): 15519, 2018 10 19.
Article in English | MEDLINE | ID: mdl-30341371

ABSTRACT

In intraoperative settings, the presence of acoustic clutter and reflection artifacts from metallic surgical tools often reduces the effectiveness of ultrasound imaging and complicates the localization of surgical tool tips. We propose an alternative approach for tool tracking and navigation in these challenging acoustic environments by augmenting ultrasound systems with a light source (to perform photoacoustic imaging) and a robot (to autonomously and robustly follow a surgical tool regardless of the tissue medium). The robotically controlled ultrasound probe continuously visualizes the location of the tool tip by segmenting and tracking photoacoustic signals generated from an optical fiber inside the tool. System validation with fat, muscle, brain, skull, and liver tissue, with and without an additional clutter layer, resulted in mean signal tracking errors <2 mm, mean probe centering errors <1 mm, and successful recovery from ultrasound perturbations representing either patient motion or switching from photoacoustic images to ultrasound images to search for a target of interest. A detailed analysis of channel SNR in controlled experiments with and without significant acoustic clutter revealed that the detection of a needle tip is possible with photoacoustic imaging, particularly in cases where ultrasound imaging traditionally fails. Results show promise for guiding surgeries and procedures in acoustically challenging environments with this novel combination of robotic and photoacoustic systems.


Subjects
Image Processing, Computer-Assisted/methods, Light, Photoacoustic Techniques/trends, Surgery, Computer-Assisted/methods, Ultrasonography, Interventional/methods, Adipose Tissue/diagnostic imaging, Algorithms, Animals, Cattle, Chickens, Muscles/diagnostic imaging, Needles, Optical Fibers, Robotics, Spectrum Analysis
20.
Phys Med Biol ; 63(14): 144001, 2018 07 11.
Article in English | MEDLINE | ID: mdl-29923832

ABSTRACT

It is well known that there are structural differences between cortical and cancellous bone. However, spinal surgeons currently have no reliable method to non-invasively determine these differences in real-time when choosing the optimal starting point and trajectory to insert pedicle screws and avoid surgical complications associated with breached or weakened bone. This paper explores 3D photoacoustic imaging of a human vertebra to noninvasively differentiate cortical from cancellous bone for this surgical task. We observed that signals from the cortical bone tend to appear as compact, high-amplitude signals, while signals from the cancellous bone have lower amplitudes and are more diffuse. In addition, we discovered that the location of the light source for photoacoustic imaging is a critical parameter that can be adjusted to non-invasively determine the optimal entry point into the pedicle. Once inside the pedicle, statistically significant differences in the contrast and SNR of signals originating from the cancellous core of the pedicle (when compared to signals originating from the surrounding cortical bone) were obtained with laser energies of 0.23-2.08 mJ (p < 0.05). Similar quantitative differences were observed with an energy of 1.57 mJ at distances ⩾6 mm from the cortical bone of the pedicle. These quantifiable differences between cortical and cancellous bone (when imaging with an ultrasound probe in direct contact with each bone type) can potentially be used to ensure an optimal trajectory during surgery. Our results are promising for the introduction and development of photoacoustic imaging systems to overcome a wide range of longstanding challenges with spinal surgeries, including challenges with the occurrence of bone breaches due to misplaced pedicle screws.


Subjects
Cancellous Bone/diagnostic imaging, Cortical Bone/diagnostic imaging, Lumbar Vertebrae/diagnostic imaging, Photoacoustic Techniques/methods, Spinal Fusion/methods, Cancellous Bone/surgery, Cortical Bone/surgery, Humans, Lumbar Vertebrae/surgery