Results 1 - 20 of 163
1.
Article in English | MEDLINE | ID: mdl-38888820

ABSTRACT

PURPOSE: To facilitate the integration of point of gaze (POG) as an input modality for robot-assisted surgery, we introduce a robust head movement compensation gaze tracking system for the da Vinci Surgical System. Previous surgical eye gaze trackers require multiple recalibrations and suffer from accuracy loss when users move from the calibrated position. We investigate whether eye corner detection can reduce gaze estimation error in a robotic surgery context. METHODS: A polynomial regressor is first used to estimate POG after an 8-point calibration; then, using another regressor, the POG error from head movement is estimated from the shift in 2D eye corner location. Eye corners are computed by first detecting regions of interest using the You Only Look Once (YOLO) object detector trained on 1600 annotated eye images (open dataset included). Contours are then extracted from the bounding boxes, and a derivative-based curvature detector refines the eye corner. RESULTS: In a user study (n = 24), our corner-contingent head compensation algorithm reduced error in degrees of visual angle by 1.20° (p = 0.037) for the left eye and 1.26° (p = 0.079) for the right eye compared with the previous gold-standard POG error correction method. In addition, the eye corner pipeline showed a root-mean-square error of 3.57 (SD = 1.92) pixels in detecting eye corners over 201 annotated frames. CONCLUSION: We introduce an effective method of using eye corners to correct eye gaze estimation error, enabling the practical acquisition of POG in robotic surgery.
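The calibration step described above can be sketched in plain Python. This is a generic illustration on hypothetical synthetic data, not the paper's pipeline: a second-order polynomial in the pupil position is fit to screen targets by solving the normal equations; the corner-based correction regressor and YOLO detection stages are omitted.

```python
# Hedged sketch of polynomial-regression gaze calibration (hypothetical data).

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def features(px, py):
    # second-order polynomial terms of the pupil position
    return [1.0, px, py, px * py, px * px, py * py]

def fit_gaze_map(pupils, targets):
    """Least-squares fit of screen x and y as polynomials in pupil position."""
    F = [features(px, py) for px, py in pupils]
    n = len(F[0])
    coeffs = []
    for dim in (0, 1):
        # normal equations: (F^T F) w = F^T t
        A = [[sum(F[r][i] * F[r][j] for r in range(len(F))) for j in range(n)]
             for i in range(n)]
        b = [sum(F[r][i] * targets[r][dim] for r in range(len(F))) for i in range(n)]
        coeffs.append(solve(A, b))
    return coeffs

def predict(coeffs, px, py):
    f = features(px, py)
    return tuple(sum(c * v for c, v in zip(cs, f)) for cs in coeffs)
```

With calibration points that cover the workspace (the paper uses 8; any non-degenerate set of 6 or more works for this feature set), the fitted map reproduces a polynomial ground truth exactly.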

2.
JCO Clin Cancer Inform ; 8: e2300184, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38900978

ABSTRACT

PURPOSE: Prostate cancer (PCa) represents a highly heterogeneous disease that requires tools to assess oncologic risk and guide patient management and treatment planning. Current models are based on various clinical and pathologic parameters including Gleason grading, which suffers from a high interobserver variability. In this study, we determine whether objective machine learning (ML)-driven histopathology image analysis would aid us in better risk stratification of PCa. MATERIALS AND METHODS: We propose a deep learning, histopathology image-based risk stratification model that combines clinicopathologic data along with hematoxylin and eosin- and Ki-67-stained histopathology images. We train and test our model, using a five-fold cross-validation strategy, on a data set from 502 treatment-naïve PCa patients who underwent radical prostatectomy (RP) between 2000 and 2012. RESULTS: We used the concordance index as a measure to evaluate the performance of various risk stratification models. Our risk stratification model on the basis of convolutional neural networks demonstrated superior performance compared with Gleason grading and the Cancer of the Prostate Risk Assessment Post-Surgical risk stratification models. Using our model, 3.9% of the low-risk patients were correctly reclassified to be high-risk and 21.3% of the high-risk patients were correctly reclassified as low-risk. CONCLUSION: These findings highlight the importance of ML as an objective tool for histopathology image assessment and patient risk stratification. With further validation on large cohorts, the digital pathology risk classification we propose may be helpful in guiding administration of adjuvant therapy including radiotherapy after RP.
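The concordance index used above to compare risk-stratification models can be sketched as follows. This is a minimal generic implementation for right-censored survival data, not the study's evaluation code: it counts comparable patient pairs in which the higher predicted risk belongs to the patient with the earlier observed event.

```python
# Hedged sketch of the concordance index (C-index) for survival risk models.

def concordance_index(times, events, scores):
    """times: event/censoring times; events: 1 if event observed; scores: predicted risk."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if patient i has an observed event before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A C-index of 1.0 means the model ranks every comparable pair correctly; 0.5 is chance level.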


Subject(s)
Deep Learning , Neoplasm Grading , Prostatic Neoplasms , Humans , Prostatic Neoplasms/pathology , Prostatic Neoplasms/surgery , Male , Risk Assessment/methods , Prostatectomy/methods , Aged , Middle Aged , Image Processing, Computer-Assisted/methods
3.
Med Image Anal ; 96: 103197, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38805765

ABSTRACT

Graph convolutional neural networks have shown significant potential in natural and histopathology images. However, their use has been studied only at a single magnification, or across multiple magnifications with either homogeneous graphs or different node types alone. To leverage multi-magnification information and improve message passing with graph convolutional networks, we handle a separate embedding space at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relations with neighboring patches and with patches at other scales (i.e., magnifications) as a graph. We define separate message-passing neural networks based on node and edge types to pass information between the different magnification embedding spaces. We experiment on prostate cancer histopathology images to predict grade groups from the features extracted from patches. We also compare MS-RGCN with multiple state-of-the-art methods, with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types, which comprise tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of MS-RGCN.
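The relation-typed message passing described above can be sketched generically. This is a plain-Python illustration in the spirit of an R-GCN layer, not the MS-RGCN implementation: each edge type (the relation names here are hypothetical) gets its own weight matrix, incoming messages are normalized per relation, and a self-loop term is added.

```python
# Hedged sketch of one relational message-passing step (R-GCN style).

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def vadd(a, b):
    return [ai + bi for ai, bi in zip(a, b)]

def rgcn_layer(h, edges, weights, self_weight):
    """h: node features; edges: {rel: [(src, dst), ...]}; weights: {rel: matrix}."""
    out = [matvec(self_weight, hi) for hi in h]          # self-loop term
    for rel, pairs in edges.items():
        indeg = {}
        for _, dst in pairs:
            indeg[dst] = indeg.get(dst, 0) + 1
        for src, dst in pairs:
            msg = matvec(weights[rel], h[src])
            # normalize by the number of incoming edges of this relation
            out[dst] = vadd(out[dst], [m / indeg[dst] for m in msg])
    return out
```

In a multi-magnification graph, "same-scale" and "cross-scale" edges would simply be two relation keys with distinct weight matrices.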


Subject(s)
Neural Networks, Computer , Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Male , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms
4.
Article in English | MEDLINE | ID: mdl-38789882

ABSTRACT

PURPOSE: Transoral robotic surgery (TORS) is a challenging procedure due to its small workspace and complex anatomy. Ultrasound (US) image guidance has the potential to improve surgical outcomes, but an appropriate method for US probe manipulation has not been defined. This study evaluates using an additional (fourth) robotic arm on the da Vinci Surgical System to perform extracorporeal US scanning for image guidance in TORS. METHODS: A stereoscopic imaging system and a da Vinci-compatible US probe attachment were developed to enable control of the extracorporeal US probe from the surgeon console. The prototype was compared to freehand US by nine operators in three tasks on a healthy volunteer: (1) identification of the common carotid artery, (2) carotid artery scanning, and (3) identification of the submandibular gland. Operator workload and user experience were evaluated using a questionnaire. RESULTS: The robotic US tasks took longer than freehand US tasks (2.09x longer; p = 0.001) and had higher operator workload (2.12x higher; p = 0.004). However, operator-rated performance was closer (avg robotic/avg freehand = 0.66; p = 0.017), and scanning performance measured by the MRI-US average Hausdorff distance showed no statistically significant difference. CONCLUSION: Extracorporeal US scanning for intraoperative US image guidance is a convenient approach for giving the surgeon direct control over the US image plane during TORS, with little modification to the existing operating room workflow. Although robotic scanning is more time-consuming and imposes a higher operator workload, several methods have been identified to address these limitations.
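The average Hausdorff distance used above as the scanning-performance metric can be sketched on plain 2D point lists. This is the generic symmetric form, not the study's MRI-US evaluation code:

```python
import math

# Hedged sketch of the average (symmetric) Hausdorff distance between
# two contours represented as point lists.

def avg_hausdorff(a, b):
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(a, b) + one_way(b, a))
```

Unlike the maximum Hausdorff distance, the averaged form is less sensitive to a single outlier point on either contour.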

5.
Med Image Anal ; 94: 103131, 2024 May.
Article in English | MEDLINE | ID: mdl-38442528

ABSTRACT

As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include: diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update to the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review datasets provided in the field and the clinical needs that motivate their design. Then, we delve into the algorithmic side, and summarize recent developments. This summary should be especially useful for algorithm designers and to those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks in rigid tracking and mapping since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of the tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and more focus needs to be put into collecting datasets for training and evaluation.


Subject(s)
Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Algorithms , Computers
6.
IEEE Trans Med Imaging ; 43(7): 2634-2645, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38437151

ABSTRACT

Quantifying performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, visible markers, or require annotators to label salient points in videos after collection. These are respectively: not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology along with a dataset that uses said methodology, Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible spectrum algorithms. This is done by labelling tissue points with IR-fluorescent dye, indocyanine green (ICG), and then collecting visible light video clips. STIR comprises hundreds of stereo video clips in both in vivo and ex vivo scenes with start and end points labelled in the IR spectrum. With over 3,000 labelled points, STIR will help to quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple different frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
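The endpoint-error and accuracy metrics used above to score frame-based trackers can be sketched generically for the 2D case. This is an illustration of the metric definitions, not the STIR evaluation code; the threshold value below is hypothetical:

```python
import math

# Hedged sketch of 2D endpoint error and threshold accuracy for point tracking.

def endpoint_errors(pred, gt):
    """Per-point Euclidean distance between tracked and labelled endpoints."""
    return [math.dist(p, g) for p, g in zip(pred, gt)]

def accuracy_at(errors, threshold_px):
    """Fraction of tracked points within `threshold_px` of the label."""
    return sum(e <= threshold_px for e in errors) / len(errors)
```

Reporting accuracy at several thresholds gives a fuller picture than a single mean error, since a few badly lost points can dominate the mean.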


Subject(s)
Algorithms , Indocyanine Green , Tattooing , Tattooing/methods , Infrared Rays , Animals , Surgery, Computer-Assisted/methods , Humans , Image Processing, Computer-Assisted/methods , Video Recording/methods
7.
Int J Comput Assist Radiol Surg ; 19(2): 199-208, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37610603

ABSTRACT

PURPOSE: To achieve effective robot-assisted laparoscopic prostatectomy, the integration of a transrectal ultrasound (TRUS) imaging system, the most widely used modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. METHODS: An optimization-based algorithm is proposed to co-register the images from the two different imaging modalities. The principle of light propagation and an uncertainty in PM detection are incorporated in this algorithm to improve its stability and accuracy. The algorithm is validated using the previously developed US/PA image-guided system with a da Vinci surgical robot. RESULTS: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice (i.e., 1.15 ± 0.29 mm from the experimental evaluation). The result is also comparable with our previous approach (i.e., 1.05 ± 0.37 mm), and the proposed method can be implemented with a normal white-light stereo camera and does not require highly accurate localization of the PM. CONCLUSION: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
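The registration-and-TRE evaluation pattern above can be illustrated with the generic point-based analogue in 2D. The paper solves an optimization that models light propagation and marker uncertainty; the closed-form rigid fit below is only a stand-in for that step, on hypothetical marker correspondences:

```python
import math

# Hedged sketch: closed-form 2D rigid registration from marker
# correspondences, plus target-registration-error on a held-out point.

def fit_rigid_2d(src, dst):
    csx = sum(p[0] for p in src) / len(src); csy = sum(p[1] for p in src) / len(src)
    cdx = sum(p[0] for p in dst) / len(dst); cdy = sum(p[1] for p in dst) / len(dst)
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx_ - cdx, dy_ - cdy
        num += ax * by - ay * bx      # cross terms -> sin(theta)
        den += ax * bx + ay * by      # dot terms   -> cos(theta)
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def tre(transform, target_src, target_dst):
    """Target registration error: residual at a point NOT used in the fit."""
    return math.dist(transform(target_src), target_dst)
```

Evaluating TRE on held-out targets, as the paper does, avoids the optimistic bias of measuring error only at the markers used for fitting.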


Subject(s)
Laparoscopy , Prostatic Neoplasms , Robotics , Surgery, Computer-Assisted , Male , Humans , Imaging, Three-Dimensional/methods , Ultrasonography/methods , Surgery, Computer-Assisted/methods , Algorithms , Prostatectomy/methods , Prostatic Neoplasms/surgery
8.
IEEE Robot Autom Lett ; 8(3): 1287-1294, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37997605

ABSTRACT

This paper introduces the first integrated real-time intraoperative surgical guidance system, in which an endoscope camera of the da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers that are detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed laser diode attached to the surgical instrument, providing both FL and PA images of the surgical region of interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking can be conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed that the average registration and tracking errors were 0.84 mm and 1.16°, respectively. This study shows that co-registered photoacoustic marker tracking can be effectively deployed intraoperatively using TRUS+PA imaging, providing functional guidance of the surgical ROI.

9.
Biomed Opt Express ; 14(11): 6016-6030, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-38021122

ABSTRACT

Real-time transrectal ultrasound (TRUS) image guidance during robot-assisted laparoscopic radical prostatectomy has the potential to enhance surgery outcomes. Whether conventional or photoacoustic TRUS is used, the robotic system and the TRUS must be registered to each other. Accurate registration can be performed using photoacoustic markers (PMs). However, this requires a manual search by an assistant [IEEE Robot. Autom. Lett. 8, 1287 (2023); doi:10.1109/LRA.2022.3191788]. This paper introduces the first automatic search for PMs using a transrectal ultrasound robot, which effectively reduces the challenges associated with da Vinci-TRUS registration. This paper investigated the performance of three search algorithms in simulation and experiment: Weighted Average (WA), Golden Section Search (GSS), and Ternary Search (TS). For validation, a surgical prostate scenario was mimicked and various ex vivo tissues were tested. As a result, the WA algorithm achieved a 0.53° ± 0.30° average error after 9 data acquisitions, while the TS and GSS algorithms achieved 0.29° ± 0.31° and 0.48° ± 0.32° average errors after 28 data acquisitions.
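Of the three search strategies compared above, golden-section search is the classic textbook one; a generic sketch follows. The objective here is a hypothetical unimodal response curve over probe angle, not the paper's PA signal model:

```python
import math

# Hedged sketch of golden-section search for the maximum of a unimodal
# function, reusing one function evaluation per iteration.

def golden_section_max(f, lo, hi, tol=1e-5):
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                          # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Each iteration shrinks the bracketing interval by a factor of about 0.618 while costing only one new signal acquisition, which is why such searches need few probe rotations.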

10.
Med Image Anal ; 89: 102878, 2023 10.
Article in English | MEDLINE | ID: mdl-37541100

ABSTRACT

Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim at overcoming this shortcoming by offering reproducibility, while also aiming at improving dexterity, and intelligent anatomy and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also hold the potential to provide medical interventions for populations suffering from the shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence present the key techniques, which enable intelligent patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that the research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisitions could be considered as valuable and essential as the progress made in the robotic US examination itself. This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying underlying techniques. Additionally, we present the challenges that the scientific community needs to face in the coming years in order to achieve its ultimate goal of developing intelligent robotic sonographer colleagues. These colleagues are expected to be capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.


Subject(s)
Robotic Surgical Procedures , Robotics , Humans , Artificial Intelligence , Reproducibility of Results , Ultrasonography/methods
11.
Science ; 381(6654): 141-146, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37440630

ABSTRACT

Artificial intelligence (AI) applications in medical robots are bringing a new era to medicine. Advanced medical robots can perform diagnostic and surgical procedures, aid rehabilitation, and provide symbiotic prosthetics to replace limbs. The technology used in these devices, including computer vision, medical image analysis, haptics, navigation, precise manipulation, and machine learning (ML), could allow autonomous robots to carry out diagnostic imaging, remote surgery, surgical subtasks, or even entire surgical procedures. Moreover, AI in rehabilitation devices and advanced prosthetics can provide individualized support, as well as improved functionality and mobility (see the figure). The combination of extraordinary advances in robotics, medicine, materials science, and computing could bring safer, more efficient, and more widely available patient care in the future. -Gemma K. Alderton.

12.
IEEE Trans Med Imaging ; 42(11): 3436-3450, 2023 11.
Article in English | MEDLINE | ID: mdl-37342953

ABSTRACT

This article describes a novel system for quantitative and volumetric measurement of tissue elasticity in the prostate using simultaneous multi-frequency tissue excitation. Elasticity is computed by using a local frequency estimator to measure the three-dimensional local wavelengths of steady-state shear waves within the prostate gland. The shear wave is created using a mechanical voice-coil shaker which transmits simultaneous multi-frequency vibrations transperineally. Radio-frequency data are streamed directly from a BK Medical 8848 transrectal ultrasound transducer to an external computer, where tissue displacement due to the excitation is measured using a speckle tracking algorithm. Bandpass sampling is used, which eliminates the need for an ultra-fast frame rate to track the tissue motion and allows for accurate reconstruction at a sampling frequency that is below the Nyquist rate. A roll motor with computer control is used to rotate the transducer and obtain 3D data. Two commercially available phantoms were used to validate both the accuracy of the elasticity measurements and the functional feasibility of using the system for in vivo prostate imaging. The phantom measurements were compared with 3D magnetic resonance elastography (MRE), where a high correlation of 96% was achieved. In addition, the system has been used in two separate clinical studies as a method for cancer identification. Qualitative and quantitative results of 11 patients from these clinical studies are presented here. Furthermore, an AUC of 0.87 ± 0.12 was achieved for malignant vs. benign classification using a binary support vector machine classifier trained with data from the latest clinical study with leave-one-patient-out cross-validation.
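The wavelength-to-elasticity conversion underlying the local frequency estimation above can be written down directly. This is the standard textbook relation, with illustrative numbers rather than values from the paper: for tissue density rho, excitation frequency f, and locally estimated shear wavelength lam, the shear-wave speed is f*lam, the shear modulus is rho*(f*lam)^2, and (assuming a nearly incompressible medium) the Young's modulus is three times the shear modulus.

```python
# Hedged sketch: shear wavelength -> Young's modulus, assuming
# incompressible tissue with density ~1000 kg/m^3.

def youngs_modulus_kpa(freq_hz, wavelength_m, density=1000.0):
    speed = freq_hz * wavelength_m          # shear-wave speed (m/s)
    shear_modulus = density * speed ** 2    # Pa
    return 3.0 * shear_modulus / 1000.0     # kPa
```

For example, a 70 Hz excitation with a measured 2 cm local wavelength corresponds to about 5.9 kPa, within the range of soft tissue.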


Subject(s)
Elasticity Imaging Techniques , Male , Humans , Elasticity Imaging Techniques/methods , Prostate/diagnostic imaging , Ultrasonography , Elasticity , Vibration , Phantoms, Imaging
13.
Article in English | MEDLINE | ID: mdl-37235463

ABSTRACT

Real-time ultrasound imaging plays an important role in ultrasound-guided interventions. The 3-D imaging provides more spatial information compared to conventional 2-D frames by considering the volumes of data. One of the main bottlenecks of 3-D imaging is the long data acquisition time, which reduces practicality and can introduce artifacts from unwanted patient or sonographer motion. This article introduces the first shear wave absolute vibro-elastography (S-WAVE) method with real-time volumetric acquisition using a matrix array transducer. In S-WAVE, an external vibration source generates mechanical vibrations inside the tissue. The tissue motion is then estimated and used in solving a wave equation inverse problem to provide the tissue elasticity. A matrix array transducer is used with a Verasonics ultrasound machine and a frame rate of 2000 volumes/s to acquire 100 radio frequency (RF) volumes in 0.05 s. Using plane wave (PW) and compounded diverging wave (CDW) imaging methods, we estimate axial, lateral, and elevational displacements over 3-D volumes. The curl of the displacements is used with local frequency estimation to estimate elasticity in the acquired volumes. Ultrafast acquisition extends substantially the possible S-WAVE excitation frequency range, now up to 800 Hz, enabling new tissue modeling and characterization. The method was validated on three homogeneous liver fibrosis phantoms and on four different inclusions within a heterogeneous phantom. The homogeneous phantom results show less than 8% (PW) and 5% (CDW) difference between the manufacturer values and the corresponding estimated values over a frequency range of 80-800 Hz. The estimated elasticity values for the heterogeneous phantom at 400-Hz excitation frequency show average errors of 9% (PW) and 6% (CDW) compared to the average values provided by magnetic resonance elastography (MRE). Furthermore, both imaging methods were able to detect the inclusions within the elasticity volumes. An ex vivo study on a bovine liver sample shows less than 11% (PW) and 9% (CDW) difference between the elasticity ranges estimated by the proposed method and the elasticity ranges provided by MRE and acoustic radiation force impulse (ARFI).
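The bandpass-sampling idea referenced in the previous abstract (item 12) and implicit in this family of systems can be illustrated with a small sketch. The frequencies below are hypothetical examples, not acquisition settings from either paper: a narrowband tone sampled below the Nyquist rate still appears at a predictable alias frequency, from which the true motion can be recovered when the excitation frequency is known.

```python
# Hedged sketch of bandpass sampling: the apparent (aliased) frequency
# of a real tone f0 sampled at rate fs.

def alias_frequency(f0, fs):
    """Apparent frequency, folded into the baseband [0, fs/2]."""
    f = f0 % fs
    return min(f, fs - f)
```

So an 800 Hz vibration sampled at only 500 volumes/s shows up at 200 Hz; because the excitation frequency is known a priori, the elastography pipeline can undo this folding instead of requiring an ultrafast frame rate.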

14.
Opt Express ; 31(9): 13895-13910, 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37157265

ABSTRACT

A recent development in photoacoustic (PA) imaging has been the use of compact, portable, and low-cost laser diodes (LDs), but LD-based PA imaging suffers from low signal intensity recorded by conventional transducers. A common method to improve signal strength is temporal averaging, which reduces frame rate and increases laser exposure to patients. To tackle this problem, we propose a deep learning method that denoises point-source PA radio-frequency (RF) data before beamforming using very few frames, even a single one. We also present a deep learning method to automatically reconstruct point sources from noisy pre-beamformed data. Finally, we employ a strategy of combined denoising and reconstruction, which can supplement the reconstruction algorithm for very low signal-to-noise-ratio inputs.
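The trade-off the abstract describes, temporal averaging versus frame rate, follows from a standard statistical fact: averaging N independent noisy acquisitions reduces the noise standard deviation by roughly sqrt(N). A small Monte Carlo sketch (hypothetical Gaussian noise, not the paper's RF data) makes this concrete:

```python
import random
import statistics

# Hedged sketch: empirical noise reduction from averaging n_frames
# acquisitions of the same (zero) signal with Gaussian noise.

def averaged_noise_std(n_frames, noise_std=1.0, n_trials=2000, seed=0):
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0, noise_std) for _ in range(n_frames))
             for _ in range(n_trials)]
    return statistics.pstdev(means)
```

Averaging 16 frames cuts the noise standard deviation to about a quarter, but costs 16x the laser exposure and acquisition time, which is the overhead the learned denoiser aims to avoid.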

15.
Article in English | MEDLINE | ID: mdl-37027576

ABSTRACT

Quantitative tissue stiffness characterization using ultrasound (US) has been shown to improve prostate cancer (PCa) detection in multiple studies. Shear wave absolute vibro-elastography (SWAVE) allows quantitative and volumetric assessment of tissue stiffness using external multifrequency excitation. This article presents a proof of concept of a first-of-a-kind 3-D hand-operated endorectal SWAVE system designed to be used during systematic prostate biopsy. The system is developed with a clinical US machine, requiring only an external exciter that can be mounted directly to the transducer. Subsector acquisition of radio frequency (RF) data allows imaging of shear waves with a high effective frame rate (up to 250 Hz). The system was characterized using eight different quality assurance phantoms. Due to the invasive nature of prostate imaging, at this early stage of development, validation on in vivo human tissue was instead carried out by intercostally scanning the livers of n = 7 healthy volunteers. The results are compared with 3-D magnetic resonance elastography (MRE) and an existing 3-D SWAVE system with a matrix array transducer (M-SWAVE). High correlations were found with MRE (99% in phantoms, 94% in liver data) and with M-SWAVE (99% in phantoms, 98% in liver data).
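The correlation figures above are the standard Pearson correlation between paired stiffness measurements; a generic sketch follows (plain Python, not the study's analysis code):

```python
import math
import statistics

# Hedged sketch of the Pearson correlation coefficient between two
# paired measurement series (e.g. SWAVE vs. MRE stiffness values).

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den
```

A correlation of 99% between two modalities means their measurements are nearly linearly related, though it says nothing by itself about absolute bias; agreement analyses are needed for that.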


Subject(s)
Elasticity Imaging Techniques , Prostatic Neoplasms , Transducers , Humans , Male , Proof of Concept Study , Elasticity Imaging Techniques/methods , Prostatic Neoplasms/diagnostic imaging , Image-Guided Biopsy/methods , Ultrasonography
16.
Int J Comput Assist Radiol Surg ; 18(6): 1061-1068, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37103728

ABSTRACT

PURPOSE: Trans-oral robotic surgery (TORS) using the da Vinci surgical robot is a new minimally-invasive surgery method to treat oropharyngeal tumors, but it is a challenging operation. Augmented reality (AR) based on intra-operative ultrasound (US) has the potential to enhance the visualization of the anatomy and cancerous tumors to provide additional tools for decision-making in surgery. METHODS: We propose a US-guided AR system for TORS, with the transducer placed on the neck for a transcervical view. Firstly, we perform a novel MRI-to-transcervical 3D US registration study, comprising (i) preoperative MRI to preoperative US registration, and (ii) preoperative to intraoperative US registration to account for tissue deformation due to retraction. Secondly, we develop a US-robot calibration method with an optical tracker and demonstrate its use in an AR system that displays anatomy models in the surgeon's console in real-time. RESULTS: Our AR system achieves a projection error from the US to the stereo cameras of 27.14 and 26.03 pixels (image size 540×960) in a water bath experiment. The average target registration error (TRE) for MRI to 3D US is 8.90 mm for the 3D US transducer and 5.85 mm for freehand 3D US, and the TRE for preoperative-to-intraoperative US registration is 7.90 mm. CONCLUSION: We demonstrate the feasibility of each component of the first complete pipeline for MRI-US-robot-patient registration for a proof-of-concept transcervical US-guided AR system for TORS. Our results show that trans-cervical 3D US is a promising technique for TORS image guidance.


Subject(s)
Augmented Reality , Robotic Surgical Procedures , Surgery, Computer-Assisted , Humans , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Ultrasonics , Imaging, Three-Dimensional/methods
17.
Int J Comput Assist Radiol Surg ; 18(10): 1811-1818, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37093527

ABSTRACT

PURPOSE: In "human teleoperation" (HT), mixed reality (MR) and haptics are used to tightly couple an expert leader to a human follower [1]. To determine the feasibility of HT for teleultrasound, we quantify the ability of humans to track a position and/or force trajectory via MR cues. The human response time, precision, frequency response, and step response were characterized, and several rendering methods were compared. METHODS: Volunteers (n=11) performed a series of tasks as the follower in our HT system. The tasks involved tracking pre-recorded series of motions and forces while pose and force were recorded. The volunteers then performed frequency response tests and filled out a questionnaire. RESULTS: Following force and pose simultaneously was more difficult but did not lead to significant performance degradation versus following one at a time. On average, subjects tracked positions, orientations, and forces with RMS tracking errors of [Formula: see text] mm, [Formula: see text], [Formula: see text] N, steady-state errors of [Formula: see text] mm, [Formula: see text] N, and lags of [Formula: see text] ms, respectively. Performance decreased with input frequency, depending on the input amplitude. CONCLUSION: Teleoperating a person through MR is a novel concept with many possible applications. However, it is unknown what performance is achievable and which approaches work best. This paper thus characterizes human tracking ability in MR HT for teleultrasound, which is important for designing future tightly coupled guidance and training systems using MR.
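The RMS and steady-state tracking metrics reported above can be sketched generically on a 1D trace. This is an illustration of the metric definitions, not the study's analysis code; the 20% tail window is a hypothetical choice:

```python
import math

# Hedged sketch of tracking metrics: RMS error over a followed
# trajectory, and steady-state error over its final portion.

def rms_error(reference, actual):
    return math.sqrt(sum((r - a) ** 2 for r, a in zip(reference, actual))
                     / len(reference))

def steady_state_error(reference, actual, tail_fraction=0.2):
    """Mean absolute error over the last `tail_fraction` of the trace."""
    n = max(1, int(len(reference) * tail_fraction))
    return sum(abs(r - a) for r, a in zip(reference[-n:], actual[-n:])) / n
```

Separating the two matters for step responses: a follower may lag a step (large transient RMS error) yet settle exactly on the commanded value (zero steady-state error).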


Subject(s)
Augmented Reality , Robotics , Humans
18.
Int J Comput Assist Radiol Surg ; 18(6): 1093-1099, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36995513

ABSTRACT

PURPOSE: Prostate imaging to guide biopsy remains unsatisfactory, with current solutions suffering from high complexity and poor accuracy and reliability. One novel entrant into this field is micro-ultrasound (microUS), which uses a high-frequency imaging probe to achieve very high spatial resolution, and achieves prostate cancer detection rates equivalent to multiparametric magnetic resonance imaging (mpMRI). However, the ExactVu transrectal microUS probe has a unique geometry that makes it challenging to acquire controlled, repeatable three-dimensional (3D) transrectal ultrasound (TRUS) volumes. We describe the design, fabrication, and validation of a 3D acquisition system that allows for the accurate use of the ExactVu microUS device for volumetric prostate imaging. METHODS: The design uses a motorized, computer-controlled brachytherapy stepper to rotate the ExactVu transducer about its axis. We perform geometric validation using a phantom with known dimensions and compare performance with magnetic resonance imaging (MRI) using a commercial quality assurance anthropomorphic prostate phantom. RESULTS: Our geometric validation shows accuracy of 1 mm or less in all three directions, and images of an anthropomorphic phantom qualitatively match those acquired using MRI and show good agreement quantitatively. CONCLUSION: We describe the first system to acquire robotically controlled 3D microUS images using the ExactVu microUS system. The reconstructed 3D microUS images are accurate, which will allow for future applications of the ExactVu microUS system in prostate specimen and in vivo imaging.
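The geometry behind rotating a transducer about its axis to build a 3D volume, as the stepper-based system above does, reduces to a cylindrical-to-Cartesian mapping. A minimal sketch follows (generic geometry with the rotation axis along z; not the system's reconstruction code):

```python
import math

# Hedged sketch: map a sample at frame coordinates (lateral along the
# probe axis, depth radially outward) acquired at rotation angle theta
# to a Cartesian point in the reconstructed volume.

def frame_point_to_volume(lateral_mm, depth_mm, theta_rad):
    return (depth_mm * math.cos(theta_rad),
            depth_mm * math.sin(theta_rad),
            lateral_mm)
```

Because samples land on a cylindrical grid, voxels near the rotation axis are densely covered while distant ones are sparse, which is one reason controlled, repeatable stepping (rather than freehand sweeping) helps reconstruction accuracy.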


Subject(s)
Prostate , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Prostate/pathology , Reproducibility of Results , Ultrasonography/methods , Magnetic Resonance Imaging/methods , Image-Guided Biopsy/methods , Imaging, Three-Dimensional/methods , Prostatic Neoplasms/pathology
19.
NMR Biomed ; 36(7): e4899, 2023 07.
Article in English | MEDLINE | ID: mdl-36628624

ABSTRACT

Liver magnetic resonance elastography (MRE) is a noninvasive stiffness measurement technique that captures the tissue displacement in the phase of the signal. To limit the scanning time to a single breath-hold, liver MRE usually involves advanced readout techniques such as simultaneous multislice (SMS) or multishot methods. Furthermore, all these readout techniques require additional in-plane acceleration using either parallel imaging capabilities, such as sensitivity encoding (SENSE), or k -space undersampling, such as compressed sensing (CS). However, these methods apply a single regularization function on the complex image. This study aims to design and evaluate methods that use separate regularization on the magnitude and phase of MRE to exploit their distinct spatiotemporal characteristics. Specifically, we introduce two compressed sensing methods. The first method, termed phase-regularized compressed sensing (PRCS), applies a two-dimensional total variation (TV) prior to the magnitude and two-dimensional wavelet regularization to the phase. The second method, termed displacement-regularized compressed sensing (DRCS), exploits the spatiotemporal redundancy using 3D total variation on the magnitude. Additionally, DRCS includes a displacement fitting function to apply wavelet regularization to the displacement phasor. Both DRCS and PRCS were evaluated with different levels of compression factors in three datasets: an in silico abdomen dataset, an in vitro tissue-mimicking phantom, and an in vivo liver dataset. The reconstructed images were compared with the full sampled reconstruction, zero-filling reconstruction, wavelet-regularized compressed sensing, and a low rank plus sparse reconstruction. The metrics used for quantitative evaluation were the structural similarity index (SSIM) of magnitude (M-SSIM), displacement (D-SSIM), and shear modulus (S-SSIM), and mean shear modulus. 
Results from highly undersampled in silico and in vitro datasets demonstrate that the DRCS method provides higher reconstruction quality than the conventional compressed sensing method over a wide range of stiffness values. Notably, DRCS provides 24% and 22% increases in D-SSIM compared with CS for the in silico and in vitro datasets, respectively. Comparison of liver stiffness measured from fully sampled data and highly undersampled data (CR = 4) demonstrates that the DRCS method provided the strongest correlation (R² = 0.95), the second-lowest mean bias (-0.18 kPa; the lowest was -0.16 kPa for CS), and the lowest coefficient of variation (CV = 3.6%). Our results demonstrate the potential of using DRCS to improve the reconstruction quality of accelerated MRE.
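The agreement statistics quoted above (R², mean bias, CV) follow from standard definitions on paired stiffness values. A generic sketch with synthetic numbers, not the study's data, using one common convention (the paper's exact formulas may differ):

```python
import numpy as np

def agreement_metrics(reference_kpa, measured_kpa):
    """R^2 of the Pearson correlation, Bland-Altman mean bias, and
    coefficient of variation for paired stiffness measurements
    (e.g., fully sampled vs. undersampled reconstructions)."""
    ref = np.asarray(reference_kpa, dtype=float)
    mes = np.asarray(measured_kpa, dtype=float)
    r = np.corrcoef(ref, mes)[0, 1]              # Pearson correlation
    bias = np.mean(mes - ref)                    # mean signed difference
    cv = 100.0 * np.std(mes - ref, ddof=1) / np.mean((ref + mes) / 2)
    return r**2, bias, cv
```

For example, a reconstruction that underestimates every reading by a constant 0.18 kPa yields R² = 1, a bias of -0.18 kPa, and a CV of 0%.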


Subject(s)
Data Compression , Elasticity Imaging Techniques , Reproducibility of Results , Data Compression/methods , Abdomen , Phantoms, Imaging , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Algorithms
20.
Ultrasound Med Biol ; 48(12): 2486-2501, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36180312

ABSTRACT

Pregnancy complications such as pre-eclampsia (PE) and intrauterine growth restriction (IUGR) are associated with structural and functional changes in the placenta. Elastography techniques, which assess the mechanical properties of tissue, can identify and monitor the pathological state of the placenta. Currently available elastography techniques have been used with promising results to detect placental abnormalities; however, limitations include inadequate measurement depth and safety concerns from high negative pressure pulses. Previously, we described a shear wave absolute vibro-elastography (SWAVE) method that applies external low-frequency mechanical vibrations to generate shear waves, and studied 61 post-delivery clinically normal placentas to explore the feasibility of SWAVE for placental assessment and establish a measurement baseline. This next phase of the study, SWAVE 2.0, improves the previous system and its elasticity reconstruction by incorporating a multi-frequency acquisition system and a 3-D local frequency estimation (LFE) method. Compared with its 2-D counterpart, 3-D LFE was found to reduce the bias and variance of elasticity measurements in tissue-mimicking phantoms. To investigate the potential of the improved SWAVE 2.0 measurements to identify placental abnormalities, we studied 46 post-delivery placentas: 26 diseased (16 IUGR and 10 PE) and 20 normal control placentas. Using a 3.33-MHz motorized curved-array transducer, multi-frequency (80, 100 and 120 Hz) elasticity measures were obtained with 3-D LFE, and both IUGR (15.30 ± 2.96 kPa, p = 3.35e-5) and PE (12.33 ± 4.88 kPa, p = 0.017) placentas were found to be significantly stiffer than the control placentas (8.32 ± 3.67 kPa).
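The relation underlying LFE-based elasticity reconstruction is compact: LFE estimates the local spatial frequency of the shear wave, which together with the vibration frequency gives a wave speed, and hence (under a purely elastic, locally homogeneous model) a shear modulus. A minimal sketch of that conversion, assuming a tissue density of 1000 kg/m³ and ignoring viscosity; the actual 3-D LFE filter bank is not reproduced here:

```python
import numpy as np

RHO = 1000.0  # assumed tissue density, kg/m^3

def shear_modulus_from_lfe(local_spatial_freq, vibration_freq_hz):
    """Convert a local spatial frequency estimate (cycles/m) at one
    vibration frequency into a shear modulus (Pa): G = rho * (f/k)^2."""
    wave_speed = vibration_freq_hz / np.asarray(local_spatial_freq)  # m/s
    return RHO * wave_speed**2

def multifrequency_modulus(lfe_maps, freqs_hz):
    """Average single-frequency modulus maps, mimicking multi-frequency
    fusion at e.g. 80, 100 and 120 Hz."""
    mods = [shear_modulus_from_lfe(k, f) for k, f in zip(lfe_maps, freqs_hz)]
    return np.mean(mods, axis=0)
```

For instance, a local spatial frequency of 40 cycles/m at 120 Hz implies a 3 m/s shear wave and a modulus of 9 kPa, in the same range as the placental stiffness values reported above.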
A linear discriminant analysis (LDA) classifier distinguished healthy from diseased placentas with a sensitivity, specificity and accuracy of 87%, 78% and 83%, respectively, and an area under the receiver operating characteristic (ROC) curve of 0.90 (95% confidence interval: 0.8-0.99). Further, the pregnancy outcome in terms of neonatal intensive care unit admission was predicted with a sensitivity, specificity and accuracy of 70%, 71% and 71%, respectively, and an area under the ROC curve of 0.78 (confidence interval: 0.62-0.93). A viscoelastic characterization of the placentas using a fractional rheological model revealed that the viscosity parameter n was significantly higher in IUGR (2.3 ± 0.21) and PE (2.11 ± 0.52) placentas than in normal placentas (1.45 ± 0.65). This work illustrates the potential of elasticity and viscosity imaging using SWAVE 2.0 as a non-invasive technology for detecting placental abnormalities and predicting pregnancy outcomes.
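The classification statistics reported here follow from standard definitions. A minimal sketch (the LDA model itself is omitted, and the counts and scores below are illustrative, not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts
    (positive = diseased placenta)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

def auc_rank(scores_diseased, scores_control):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a diseased case scores above a control, with ties
    counted as half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_diseased for n in scores_control
    )
    return wins / (len(scores_diseased) * len(scores_control))
```

The rank formulation of AUC makes it directly interpretable: an AUC of 0.90 means a randomly chosen diseased placenta out-scores a randomly chosen control 90% of the time.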


Subject(s)
Elasticity Imaging Techniques , Placenta Diseases , Infant, Newborn , Pregnancy , Female , Humans , Elasticity Imaging Techniques/methods , Placenta/diagnostic imaging , Viscosity , Placenta Diseases/diagnostic imaging , Elasticity , Fetal Growth Retardation/diagnostic imaging , Biomarkers