Results 1 - 10 of 10
1.
Med Phys; 51(4): 2998-3009, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38060696

ABSTRACT

BACKGROUND: The static magnetic field present in magnetic resonance (MR)-guided radiotherapy systems can influence dose deposition and charged particle collection in air-filled ionization chambers. Thus, accurately quantifying the effect of the magnetic field on ionization chamber response is critical for output calibration. Formalisms for reference dosimetry in a magnetic field have been proposed, whereby a magnetic field quality conversion factor kB,Q is defined to account for the combined effects of the magnetic field on the radiation detector. Determination of kB,Q in the literature has focused on Monte Carlo simulation studies, with experimental validation limited to only a few ionization chamber models. PURPOSE: The purpose of this study is to experimentally measure kB,Q for 11 ionization chamber models in two commercially available MR-guided radiotherapy systems: Elekta Unity and ViewRay MRIdian. METHODS: Eleven ionization chamber models were characterized in this study: Exradin A12, A12S, A28, and A26, PTW T31010, T31021, and T31022, and IBA FC23-C, CC25, CC13, and CC08. The experimental method to measure kB,Q utilized cross-calibration against a reference Exradin A1SL chamber. Absorbed dose to water was measured for the reference A1SL chamber positioned parallel to the magnetic field with its centroid placed at the machine isocenter at a depth of 10 cm in water for a 10 × 10 cm2 field size at that depth. Output was subsequently measured with the test chamber at the same point of measurement. kB,Q for the test chamber was computed as the ratio of reference dose to test chamber output, with this procedure repeated for each chamber in each MR-guided radiotherapy system. For the high-field 1.5 T Elekta Unity system, the dependence of kB,Q on the chamber orientation relative to the magnetic field was quantified by rotating the chamber about the machine isocenter. 
RESULTS: Measured kB,Q values for our test dataset of ionization chamber models ranged from 0.991 to 1.002, and 0.995 to 1.004 for the Elekta Unity and ViewRay MRIdian, respectively, with kB,Q tending to increase as the chamber sensitive volume increased. Measured kB,Q values largely agreed within uncertainty to published Monte Carlo simulation data and available experimental data. kB,Q deviation from unity was minimized for ionization chamber orientation parallel or antiparallel to the magnetic field, with increased deviations observed at perpendicular orientations. Overall (k = 1) uncertainty in the experimental determination of the magnetic field quality conversion factor, kB,Q was 0.71% and 0.72% for the Elekta Unity and ViewRay MRIdian systems, respectively. CONCLUSIONS: For a high-field MR-linac, the characterization of ionization chamber performance as angular orientation varied relative to the magnetic field confirmed that the ideal orientation for output calibration is parallel. For most of these chamber models, this study represents the first experimental characterization of chamber performance in clinical MR-linac beams. This is a critical step toward accurate output calibration for MR-guided radiotherapy systems and the measured kB,Q values will be an important reference data source for forthcoming MR-linac reference dosimetry protocols.
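The cross-calibration described above reduces to a ratio: the absorbed dose to water established with the reference chamber divided by the dose the test chamber would report without any magnetic field correction. A minimal Python sketch of that ratio, with all numeric values illustrative rather than taken from the study:

```python
def k_bq(dose_ref_gy, m_test_corrected, nd_w_test, k_q_test):
    """Magnetic field quality conversion factor for a test chamber,
    cross-calibrated against a reference dose measurement:
    k_BQ = D_w(ref) / (M_test * N_D,w * k_Q)."""
    return dose_ref_gy / (m_test_corrected * nd_w_test * k_q_test)

# Illustrative numbers only (not measured values from the study):
dose_ref = 1.0            # Gy, from the reference A1SL chamber
reading = 19.84e-9        # C, fully corrected test chamber reading
nd_w = 50.4e6             # Gy/C, absorbed-dose calibration coefficient
k_q = 0.999               # beam quality conversion factor

kbq = k_bq(dose_ref, reading, nd_w, k_q)
print(f"k_BQ = {kbq:.3f}")
```

The same computation is repeated per chamber and per machine; orientation dependence comes from remeasuring the test chamber reading at each rotation angle.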


Subject(s)
Radiometry, Image-Guided Radiotherapy, Relative Biological Effectiveness, Magnetic Fields, Monte Carlo Method, Water
2.
Med Phys; 51(4): 2665-2677, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37888789

ABSTRACT

BACKGROUND: Accurate segmentation of the clinical target volume (CTV) corresponding to the prostate with or without proximal seminal vesicles is required on transrectal ultrasound (TRUS) images during prostate brachytherapy procedures. Implanted needles cause artifacts that may make this task difficult and time-consuming. Thus, previous studies have focused on the simpler problem of segmentation in the absence of needles at the cost of reduced clinical utility. PURPOSE: To use a convolutional neural network (CNN) algorithm for segmentation of the prostatic CTV in TRUS images post-needle insertion obtained from prostate brachytherapy procedures to better meet the demands of the clinical procedure. METHODS: A dataset consisting of 144 3-dimensional (3D) TRUS images with implanted metal brachytherapy needles and associated manual CTV segmentations was used for training a 2-dimensional (2D) U-Net CNN using a Dice Similarity Coefficient (DSC) loss function. These were split by patient, with 119 used for training and 25 reserved for testing. The 3D TRUS training images were resliced at radial (around the axis normal to the coronal plane) and oblique angles through the center of the 3D image, as well as axial, coronal, and sagittal planes to obtain 3689 2D TRUS images and masks for training. The network generated boundary predictions on 300 2D TRUS images obtained from reslicing each of the 25 3D TRUS images used for testing into 12 radial slices (15° apart), which were then reconstructed into 3D surfaces. Performance metrics included DSC, recall, precision, unsigned and signed volume percentage differences (VPD/sVPD), mean surface distance (MSD), and Hausdorff distance (HD). In addition, we studied whether providing algorithm-predicted boundaries to the physicians and allowing modifications increased the agreement between physicians. 
This was performed by providing a subset of 3D TRUS images of five patients to five physicians who segmented the CTV using clinical software and repeated this at least 1 week apart. The five physicians were given the algorithm boundary predictions and allowed to modify them, and the resulting inter- and intra-physician variability was evaluated. RESULTS: Median DSC, recall, precision, VPD, sVPD, MSD, and HD of the 3D-reconstructed algorithm segmentations were 87.2 [84.1, 88.8]%, 89.0 [86.3, 92.4]%, 86.6 [78.5, 90.8]%, 10.3 [4.5, 18.4]%, 2.0 [-4.5, 18.4]%, 1.6 [1.2, 2.0] mm, and 6.0 [5.3, 8.0] mm, respectively. Segmentation time for a set of 12 2D radial images was 2.46 [2.44, 2.48] s. With and without U-Net starting points, the intra-physician median DSCs were 97.0 [96.3, 97.8]%, and 94.4 [92.5, 95.4]% (p < 0.0001), respectively, while the inter-physician median DSCs were 94.8 [93.3, 96.8]% and 90.2 [88.7, 92.1]%, respectively (p < 0.0001). The median segmentation time for physicians, with and without U-Net-generated CTV boundaries, were 257.5 [211.8, 300.0] s and 288.0 [232.0, 333.5] s, respectively (p = 0.1034). CONCLUSIONS: Our algorithm performed at a level similar to physicians in a fraction of the time. The use of algorithm-generated boundaries as a starting point and allowing modifications reduced physician variability, although it did not significantly reduce the time compared to manual segmentations.
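The Dice similarity coefficient used above, both as the training loss and as an evaluation metric, is a set-overlap measure between two segmentations. A minimal sketch on toy pixel sets (not study data):

```python
def dice(a, b):
    """Dice similarity coefficient between two pixel sets:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2D masks standing in for a manual and a predicted CTV contour:
# a 6x6 block, and the same block shifted down by one row.
manual = {(r, c) for r in range(2, 8) for c in range(2, 8)}
pred = {(r, c) for r in range(3, 9) for c in range(2, 8)}
print(f"DSC = {dice(manual, pred):.3f}")  # overlap is 5x6 = 30 pixels
```

A perfect match gives 1.0; the one-row shift above costs roughly 17% of the overlap.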


Subject(s)
Brachytherapy, Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Brachytherapy/methods, Ultrasonography, Algorithms, Computer-Assisted Image Processing/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy
3.
Med Phys; 50(5): 2649-2661, 2023 May.
Article in English | MEDLINE | ID: mdl-36846880

ABSTRACT

PURPOSE: High-dose-rate (HDR) interstitial brachytherapy (BT) is a common treatment technique for localized intermediate to high-risk prostate cancer. Transrectal ultrasound (US) imaging is typically used for guiding needle insertion, including localization of the needle tip which is critical for treatment planning. However, image artifacts can limit needle tip visibility in standard brightness (B)-mode US, potentially leading to dose delivery that deviates from the planned dose. To improve intraoperative tip visualization in visually obstructed needles, we propose a power Doppler (PD) US method which utilizes a novel wireless mechanical oscillator, validated in phantom experiments and clinical HDR-BT cases as part of a feasibility clinical trial. METHODS: Our wireless oscillator contains a DC motor housed in a 3D printed case and is powered by rechargeable battery allowing the device to be operated by one person with no additional equipment required in the operating room. The oscillator end-piece features a cylindrical shape designed for BT applications to fit on top of the commonly used cylindrical needle mandrins. Phantom validation was completed using tissue-equivalent agar phantoms with the clinical US system and both plastic and metal needles. Our PD method was tested using a needle implant pattern matching a standard HDR-BT procedure as well as an implant pattern designed to maximize needle shadowing artifacts. Needle tip localization accuracy was assessed using the clinical method based on ideal reference needles as well as a comparison to computed tomography (CT) as a gold standard. Clinical validation was completed in five patients who underwent standard HDR-BT as part of a feasibility clinical trial. Needle tips positions were identified using B-mode US and PD US with perturbation from our wireless oscillator. 
RESULTS: Absolute mean ± standard deviation tip error for B-mode alone, PD alone, and B-mode combined with PD was respectively: 0.3 ± 0.3 mm, 0.6 ± 0.5 mm, and 0.4 ± 0.2 mm for the mock HDR-BT needle implant; 0.8 ± 1.7 mm, 0.4 ± 0.6 mm, and 0.3 ± 0.5 mm for the explicit shadowing implant with plastic needles; and 0.5 ± 0.2 mm, 0.5 ± 0.3 mm, and 0.6 ± 0.2 mm for the explicit shadowing implant with metal needles. The total mean absolute tip error for all five patients in the feasibility clinical trial was 0.9 ± 0.7 mm using B-mode US alone and 0.8 ± 0.5 mm when including PD US, with increased benefit observed for needles classified as visually obstructed. CONCLUSIONS: Our proposed PD needle tip localization method is easy to implement and requires no modifications or additions to the standard clinical equipment or workflow. We have demonstrated decreased tip localization error and variation for visually obstructed needles in both phantom and clinical cases, including providing the ability to visualize needles previously not visible using B-mode US alone. This method has the potential to improve needle visualization in challenging cases without burdening the clinical workflow, potentially improving treatment accuracy in HDR-BT and more broadly in any minimally invasive needle-based procedure.
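Tip localization error in this kind of validation is the Euclidean distance between the US-identified tip and the gold-standard (e.g., CT) position, summarized as mean ± standard deviation. A sketch with hypothetical coordinates, not study measurements:

```python
import math

def tip_errors(identified, reference):
    """Euclidean distance (mm) between each identified needle tip and
    its gold-standard reference position."""
    return [math.dist(p, q) for p, q in zip(identified, reference)]

def mean_std(xs):
    """Mean and (population) standard deviation of a list of errors."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(var)

# Hypothetical tip positions (mm): US-identified vs. CT reference
us_tips = [(10.2, 5.1, 30.0), (12.0, 7.3, 29.5), (8.9, 6.0, 31.1)]
ct_tips = [(10.0, 5.0, 30.2), (12.3, 7.1, 29.4), (9.0, 6.2, 31.0)]
m, s = mean_std(tip_errors(us_tips, ct_tips))
print(f"tip error: {m:.2f} ± {s:.2f} mm")
```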


Subject(s)
Brachytherapy, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/surgery, Ultrasonography, Needles, Doppler Ultrasonography
4.
Brachytherapy; 22(2): 199-209, 2023.
Article in English | MEDLINE | ID: mdl-36641305

ABSTRACT

PURPOSE: The purpose of this study was to evaluate and clinically implement a deformable surface-based magnetic resonance imaging (MRI) to three-dimensional ultrasound (US) image registration algorithm for prostate brachytherapy (BT) with the aim to reduce operator dependence and facilitate dose escalation to an MRI-defined target. METHODS AND MATERIALS: Our surface-based deformable image registration (DIR) algorithm first translates and scales to align the US- and MR-defined prostate surfaces, followed by deformation of the MR-defined prostate surface to match the US-defined prostate surface. The algorithm performance was assessed in a phantom using three deformation levels, followed by validation in three retrospective high-dose-rate BT clinical cases. For comparison, manual rigid registration and cognitive fusion by physician were also employed. Registration accuracy was assessed using the Dice similarity coefficient (DSC) and target registration error (TRE) for embedded spherical landmarks. The algorithm was then implemented intraoperatively in a prospective clinical case. RESULTS: In the phantom, our DIR algorithm demonstrated a mean DSC and TRE of 0.74 ± 0.08 and 0.94 ± 0.49 mm, respectively, significantly improving the performance compared to manual rigid registration with 0.64 ± 0.16 and 1.88 ± 1.24 mm, respectively. Clinical results demonstrated reduced variability compared to the current standard of cognitive fusion by physicians. CONCLUSIONS: We successfully validated a DIR algorithm allowing for translation of MR-defined target and organ-at-risk contours into the intraoperative environment. Prospective clinical implementation demonstrated the intraoperative feasibility of our algorithm, facilitating targeted biopsies and dose escalation to the MR-defined lesion. This method provides the potential to standardize the registration procedure between physicians, reducing operator dependence.
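The first, rigid stage of the algorithm above (translate and scale before deformation) can be sketched by matching surface centroids and RMS radii. The points below are toy data, and the deformable refinement step is not shown:

```python
import math

def centroid(pts):
    """Centroid of a list of 3D points."""
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def rms_radius(pts, c):
    """Root-mean-square distance of the points from a center c."""
    return math.sqrt(sum(math.dist(p, c) ** 2 for p in pts) / len(pts))

def translate_and_scale(mr_pts, us_pts):
    """Translate the MR surface onto the US centroid and scale it so
    the RMS radii match (a sketch of the rigid stage only)."""
    c_mr, c_us = centroid(mr_pts), centroid(us_pts)
    s = rms_radius(us_pts, c_us) / rms_radius(mr_pts, c_mr)
    return [tuple(c_us[i] + s * (p[i] - c_mr[i]) for i in range(3))
            for p in mr_pts]

# Toy surfaces: the "US" points are the "MR" points shifted and 10% larger
mr = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
us = [(5 + 1.1 * x, 2 + 1.1 * y, 1.1 * z) for (x, y, z) in mr]
aligned = translate_and_scale(mr, us)
print(max(math.dist(a, u) for a, u in zip(aligned, us)))  # near zero
```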


Subject(s)
Brachytherapy, Prostate, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Brachytherapy/methods, Retrospective Studies, Prospective Studies, Algorithms, Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods
5.
Osteoarthr Cartil Open; 4(3): 100290, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36474947

ABSTRACT

Objective: This study aimed to develop a deep learning-based approach to automatically segment the femoral articular cartilage (FAC) in 3D ultrasound (US) images of the knee to increase time efficiency and decrease rater variability. Design: Our method involved deep learning predictions on 2DUS slices sampled in the transverse plane to view the cartilage of the femoral trochlea, followed by reconstruction into a 3D surface. A 2D U-Net was modified and trained using a dataset of 200 2DUS images resliced from 20 3DUS images. Segmentation accuracy was evaluated using a holdout dataset of 50 2DUS images resliced from 5 3DUS images. Absolute and signed error metrics were computed, and FAC segmentation performance was compared between the manual segmentations of raters 1 and 2. Results: Our U-Net-based algorithm performed with mean 3D DSC, recall, precision, VPD, MSD, and HD of 73.1 ± 3.9%, 74.8 ± 6.1%, 72.0 ± 6.3%, 10.4 ± 6.0%, 0.3 ± 0.1 mm, and 1.6 ± 0.7 mm, respectively. Compared to the individual 2D predictions, our algorithm demonstrated a decrease in performance after 3D reconstruction, but these differences were not found to be statistically significant. The percent difference between the manually segmented volumes of the two raters was 3.4%, and rater 2 demonstrated the largest VPD with 14.2 ± 11.4 mm³ compared to 10.4 ± 6.0 mm³ for rater 1. Conclusion: This study investigated the use of a modified U-Net algorithm to automatically segment the FAC in 3DUS knee images of healthy volunteers, demonstrating that this segmentation method would increase the efficiency of anterior femoral cartilage volume estimation and expedite post-acquisition processing for 3DUS images of the knee.
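The volume percent difference (VPD) reported above compares segmented volumes directly; a minimal sketch of the unsigned and signed variants (the volumes are hypothetical, not study data):

```python
def vpd(v_pred, v_ref):
    """Unsigned volume percent difference between a predicted and a
    reference segmentation volume."""
    return abs(v_pred - v_ref) / v_ref * 100.0

def svpd(v_pred, v_ref):
    """Signed variant: positive when the prediction over-segments
    relative to the reference."""
    return (v_pred - v_ref) / v_ref * 100.0

# Hypothetical cartilage volumes in mm^3 (not study data)
print(vpd(95.0, 100.0), svpd(95.0, 100.0))  # 5.0 -5.0
```

The signed form preserves the direction of the bias that the unsigned form averages away.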

6.
Phys Med Biol; 67(7), 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35240585

ABSTRACT

Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated with efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. Split datasets were reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact on segmentation performance. For the complete training dataset, U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using split end-fire/side-fire datasets, U-Net++ significantly outperformed the U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. 
Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when employed on smaller datasets, supporting the potential for widespread use, even when data is scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
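The dataset-reduction experiment above (1000/500/250/100 training images) amounts to drawing nested random subsets of each full split. A sketch of one way to build such subsets; the image names and seed are placeholders, not the study's actual sampling procedure:

```python
import random

def reduced_subsets(images, sizes, seed=0):
    """Build nested reduced training sets from a full split: one
    shuffle, then prefixes of decreasing length, so each smaller
    set is contained in every larger one."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    return {n: shuffled[:n] for n in sizes}

full = [f"img_{i:04d}" for i in range(3000)]  # stand-in for 2D slices
subsets = reduced_subsets(full, [1000, 500, 250, 100])
print([len(subsets[n]) for n in [1000, 500, 250, 100]])  # [1000, 500, 250, 100]
```

Nesting keeps the comparison across sizes clean: performance differences reflect dataset size, not a fresh random draw.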


Asunto(s)
Aprendizaje Profundo , Neoplasias de la Próstata , Humanos , Masculino , Pelvis , Próstata/diagnóstico por imagen , Neoplasias de la Próstata/diagnóstico por imagen , Ultrasonografía
7.
Med Phys; 49(6): 3944-3962, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35319105

ABSTRACT

BACKGROUND: Mammographic screening has reduced mortality in women through the early detection of breast cancer. However, the sensitivity for breast cancer detection is significantly reduced in women with dense breasts, and breast density is itself an independent risk factor. Ultrasound (US) has been proven effective in detecting small, early-stage, and invasive cancers in women with dense breasts. PURPOSE: To develop an alternative, versatile, and cost-effective spatially tracked three-dimensional (3D) US system for whole-breast imaging. This paper describes the design, development, and validation of the spatially tracked 3DUS system, including its components for spatial tracking, multi-image registration and fusion, feasibility for whole-breast 3DUS imaging and multi-planar visualization in tissue-mimicking phantoms, and a proof-of-concept healthy volunteer study. METHODS: The spatially tracked 3DUS system contains (a) a six-axis manipulator and counterbalanced stabilizer, (b) an in-house quick-release 3DUS scanner, adaptable to any commercially available US system, and removable, allowing for handheld 3DUS acquisition and two-dimensional US imaging, and (c) custom software for 3D tracking, 3DUS reconstruction, visualization, and spatial-based multi-image registration and fusion of 3DUS images for whole-breast imaging. Spatial tracking of the 3D position and orientation of the system and its joints (J1-6) was evaluated in a clinically accessible workspace for bedside point-of-care (POC) imaging. Multi-image registration and fusion of acquired 3DUS images were assessed with a quadrants-based protocol in tissue-mimicking phantoms, and the target registration error (TRE) was quantified. Whole-breast 3DUS imaging and multi-planar visualization were evaluated with a tissue-mimicking breast phantom. Feasibility for spatially tracked whole-breast 3DUS imaging was assessed in a proof-of-concept healthy male and female volunteer study.
RESULTS: Mean tracking errors were 0.87 ± 0.52, 0.70 ± 0.46, 0.53 ± 0.48, 0.34 ± 0.32, 0.43 ± 0.28, and 0.78 ± 0.54 mm for joints J1-6, respectively. Lookup table (LUT) corrections minimized the error in joints J1, J2, and J5. Compound motions exercising all joints simultaneously resulted in a mean tracking error of 1.08 ± 0.88 mm (N = 20) within the overall workspace for bedside 3DUS imaging. Multi-image registration and fusion of two acquired 3DUS images resulted in a mean TRE of 1.28 ± 0.10 mm. Whole-breast 3DUS imaging and multi-planar visualization in axial, sagittal, and coronal views were demonstrated with the tissue-mimicking breast phantom. The feasibility of the whole-breast 3DUS approach was demonstrated in healthy male and female volunteers. In the male volunteer, the high-resolution whole-breast 3DUS acquisition protocol was optimized without the added complexities of curvature and tissue deformations. With small post-acquisition corrections for motion, whole-breast 3DUS imaging was performed on the healthy female volunteer, showing relevant anatomical structures and details. CONCLUSIONS: Our spatially tracked 3DUS system shows potential utility as an alternative, accurate, and feasible whole-breast approach with the capability for bedside POC imaging. Future work will focus on reducing misregistration errors due to motion and tissue deformations, developing a robust spatially tracked whole-breast 3DUS acquisition protocol, and then exploring its clinical utility for screening high-risk women with dense breasts.
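A lookup-table correction of the kind applied to joints J1, J2, and J5 can be sketched as linear interpolation over calibration measurements. The table below is hypothetical, not the study's calibration data:

```python
from bisect import bisect_left

def lut_correct(angle_deg, lut):
    """Linearly interpolate a joint-angle error correction from a
    calibration lookup table of (measured angle, error) pairs,
    clamping outside the calibrated range."""
    xs = [a for a, _ in lut]
    ys = [e for _, e in lut]
    if angle_deg <= xs[0]:
        return ys[0]
    if angle_deg >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, angle_deg)
    f = (angle_deg - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + f * (ys[i] - ys[i - 1])

# Hypothetical calibration table: (encoder angle in deg, error in deg)
lut = [(0, 0.0), (45, 0.4), (90, 0.9), (135, 0.5), (180, 0.1)]
corrected = 60.0 - lut_correct(60.0, lut)
print(f"corrected angle: {corrected:.3f} deg")
```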


Asunto(s)
Neoplasias de la Mama , Densidad de la Mama , Neoplasias de la Mama/diagnóstico por imagen , Detección Precoz del Cáncer , Femenino , Humanos , Imagenología Tridimensional/métodos , Masculino , Mamografía , Fantasmas de Imagen , Sistemas de Atención de Punto
8.
Brachytherapy; 20(1): 248-256, 2021.
Article in English | MEDLINE | ID: mdl-32900644

ABSTRACT

PURPOSE: Permanent breast seed implant (PBSI) brachytherapy is a novel technique for early-stage breast cancer. Computed tomography (CT) images are used for treatment planning and freehand 2D ultrasound for implant guidance. The multimodality imaging approach leads to discrepancies in target identification. To address this, a prototype 3D ultrasound (3DUS) system was recently developed for PBSI. In this study, we characterize the 3DUS system performance, establish QA baselines, and develop and test a method to register 3DUS images to CT images for PBSI planning. METHODS AND MATERIALS: 3DUS system performance was characterized by testing distance and volume measurement accuracy, and needle template alignment accuracy. 3DUS-CT registration was achieved through point-based registration using a 3D-printed model designed and constructed to provide visible landmarks on both images and tested on an in-house made gel breast phantom. RESULTS: The 3DUS system mean distance measurement accuracy was within 1% in axial, lateral, and elevational directions. A volumetric error of 3% was observed. The mean needle template alignment error was 1.0° ± 0.3 ° and 1.3 ± 0.5 mm. The mean 3DUS-CT registration error was within 3 mm when imaging at the breast centre or across all breast quadrants. CONCLUSIONS: This study provided baseline data to characterize the performance of a prototype 3DUS system for PBSI planning and developed and tested a method to obtain accurate 3DUS-CT image registration for PBSI planning. Future work will focus on system validation and characterization in a clinical context as well as the assessment of impact on treatment plans.
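Point-based registration with fiducial landmarks reduces, in its simplest translation-only form, to aligning landmark centroids and reporting the residual distances. A sketch with hypothetical coordinates; the study's method may also solve for rotation, which is omitted here:

```python
import math

def point_register_translation(fixed, moving):
    """Translation-only point-based registration: shift the moving
    landmarks so their centroid matches the fixed centroid, then
    report the residual distance at each matched landmark."""
    n = len(fixed)
    t = tuple(sum(f[i] for f in fixed) / n - sum(m[i] for m in moving) / n
              for i in range(3))
    moved = [tuple(m[i] + t[i] for i in range(3)) for m in moving]
    residuals = [math.dist(f, m) for f, m in zip(fixed, moved)]
    return t, residuals

# Hypothetical matched landmark coordinates (mm) on CT and 3DUS
ct = [(0, 0, 0), (40, 0, 0), (0, 40, 0), (0, 0, 40)]
us = [(5.2, 3.1, -2.0), (44.9, 3.0, -1.8), (5.0, 43.2, -2.1), (5.1, 2.9, 38.1)]
t, res = point_register_translation(ct, us)
print(f"mean registration error: {sum(res) / len(res):.2f} mm")
```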


Asunto(s)
Braquiterapia , Braquiterapia/métodos , Mama , Humanos , Imagenología Tridimensional , Fantasmas de Imagen , Ultrasonografía
9.
Med Phys; 47(6): 2413-2426, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32166768

ABSTRACT

PURPOSE: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often occurring while the patient is under sedation (biopsy) or anesthetic (brachytherapy). Minimizing procedure time with a 3D TRUS prostate segmentation method could provide physicians with a quick and accurate prostate segmentation, and allow for an efficient workflow with improved patient throughput to enable faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models to create a generalizable algorithm for needle-based prostate cancer procedures. METHODS: Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using images from 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsies and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropouts and the use of transpose convolutions instead of standard upsampling followed by convolution to reduce overfitting and improve performance, respectively. 
Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting with 2D images has the potential to lose spatial and structural information, comparisons to 3D reconstruction and optimized 3D networks including 3D V-Net, Dense V-Net, and High-resolution 3D-Net were performed following an investigation into different loss functions. An extended selection of absolute and signed error metrics were computed, including pixel map comparisons [dice similarity coefficient (DSC), recall, and precision], volume percent differences (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy. RESULTS: Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (i.e., 3D V-Net with a Dice plus cross-entropy loss function), our proposed method performed with a significant improvement across nearly all metrics. A computation time <0.7 s per prostate was observed, which is a sufficiently short segmentation time for intraoperative implementation. CONCLUSIONS: Our proposed algorithm was able to provide a fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.
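Reslicing a 3D volume radially about its central axis, as the prediction step above does, amounts to sampling 2D planes at evenly spaced rotation angles over 180°. A sketch of that slice geometry (the 12-slice spacing shown is an example):

```python
import math

def radial_slice_dirs(n_slices):
    """In-plane direction vectors for n 2D planes resliced radially
    about a central axis, spaced 180/n degrees apart (n planes cover
    the full volume, since each plane spans both sides of the axis)."""
    step = math.pi / n_slices
    return [(math.cos(i * step), math.sin(i * step)) for i in range(n_slices)]

dirs = radial_slice_dirs(12)  # 12 slices, 15 degrees apart
angle = math.degrees(math.atan2(dirs[1][1], dirs[1][0]))
print(f"{angle:.1f} degrees between adjacent slices")
```

Predictions on these planes are then reassembled into a 3D surface, which is where the spatial information lost by a purely 2D network is partially recovered.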


Asunto(s)
Braquiterapia , Aprendizaje Profundo , Neoplasias de la Próstata , Humanos , Procesamiento de Imagen Asistido por Computador , Imagenología Tridimensional , Masculino , Próstata/diagnóstico por imagen , Neoplasias de la Próstata/diagnóstico por imagen , Neoplasias de la Próstata/radioterapia , Ultrasonografía
10.
Opt Express; 24(22): 24959-24970, 2016 Oct 31.
Article in English | MEDLINE | ID: mdl-27828436

ABSTRACT

We report on a flow-through optical sensor consisting of a microcapillary with mirrored channels. Illuminating the structure from the side results in a complicated spectral interference pattern due to the different cavities formed between the inner and outer capillary walls. Using a Fourier transform technique to isolate the desired channel modes and measure their resonance shift, we obtain a refractometric detection limit of (6.3 ± 1.1) × 10⁻⁶ RIU near a center wavelength of 600 nm. This simple device demonstrates experimental refractometric sensitivities up to (5.6 ± 0.2) × 10² nm/RIU in the visible spectrum, and it is calculated to reach 1540 nm/RIU with a detection limit of 2.3 × 10⁻⁶ RIU at a wavelength of 1.55 µm. These values are comparable to or exceed those of some of the best Fabry-Perot sensors reported to date. Furthermore, the device can function as a gas or liquid sensor, or even as a pressure sensor, owing to its high refractometric sensitivity and simple operation.
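A refractometric detection limit is commonly estimated as a multiple of the resonance-wavelength measurement noise divided by the sensitivity. A sketch under that assumption; the noise figure and the k = 3 criterion below are illustrative conventions, not necessarily the paper's exact method:

```python
def detection_limit(sigma_nm, sensitivity_nm_per_riu, k=3):
    """Smallest resolvable refractive index change, estimated as k
    standard deviations of the measured resonance wavelength divided
    by the refractometric sensitivity: DL = k * sigma / S."""
    return k * sigma_nm / sensitivity_nm_per_riu

# Assumed ~1 pm wavelength noise with the calculated 1540 nm/RIU
# sensitivity quoted above (illustrative values):
dl = detection_limit(sigma_nm=1e-3, sensitivity_nm_per_riu=1540.0)
print(f"DL = {dl:.1e} RIU")
```

With these assumed numbers the estimate lands near the 10⁻⁶ RIU scale quoted in the abstract, which is the regime where wavelength-tracking precision, not raw sensitivity, limits performance.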
