Results 1 - 20 of 22
1.
Int J Comput Assist Radiol Surg ; 19(3): 493-506, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38129364

ABSTRACT

PURPOSE: We propose a large-factor super-resolution (SR) method for registered medical image datasets. Conventional SR approaches use low-resolution (LR) and high-resolution (HR) image pairs to train a deep convolutional neural network (DCN). However, LR-HR images in medical imaging are commonly acquired from different imaging devices, so obtaining LR-HR image pairs requires registration, and registered LR-HR images inevitably contain registration errors. Training an SR DCN with such pairs causes the SR results to collapse. To address these challenges, we introduce a novel SR approach designed specifically for registered LR-HR medical images. METHODS: We propose a style-subnets-assisted generative latent bank for large-factor super-resolution (SGSR) trained on registered medical image datasets. A pre-trained generative model called a generative latent bank (GLB), which stores rich image priors, can be applied in SR to generate realistic and faithful images. We improve the GLB by introducing a style-subnets-assisted GLB (S-GLB). We also propose a novel inter-uncertainty loss to boost performance. Introducing more spatial information by inputting adjacent slices further improved the results. RESULTS: SGSR outperforms state-of-the-art (SOTA) supervised SR methods qualitatively and quantitatively on multiple datasets. SGSR achieved higher reconstruction accuracy than recent supervised baselines, increasing the peak signal-to-noise ratio from 32.628 to 34.206 dB. CONCLUSION: SGSR performs large-factor SR when trained on a registered LR-HR medical image dataset containing registration errors. Its favorable quantitative and qualitative results show that its outputs have both realistic textures and accurate anatomical structures. Experiments on multiple datasets demonstrated SGSR's superiority over other SOTA methods. SR medical images generated by SGSR are expected to improve the accuracy of pre-surgical diagnosis and reduce patient burden.
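The reconstruction accuracy above is reported as peak signal-to-noise ratio (PSNR), rising from 32.628 to 34.206 dB. As a reference, a minimal sketch of how PSNR is conventionally computed is shown below; the toy arrays and the 8-bit data range are illustrative assumptions, not details from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, the metric quoted in the abstract."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy usage with a random "HR slice" and a noisy stand-in for an SR reconstruction.
rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
sr = hr + rng.normal(0.0, 5.0, size=hr.shape)
print(f"PSNR: {psnr(hr, sr):.3f} dB")
```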


Subject(s)
Image Processing, Computer-Assisted ; Neural Networks, Computer ; Humans ; Signal-to-Noise Ratio ; Image Processing, Computer-Assisted/methods
2.
Anticancer Res ; 43(9): 4155-4160, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37648314

ABSTRACT

BACKGROUND/AIM: Immunotherapy using immune checkpoint inhibitors (ICIs) has revolutionized the treatment of advanced non-small cell lung cancer (NSCLC). Although several ICI options are available, the treatment regimen for NSCLC with large tumors (large NSCLC) is controversial and the efficacy of anti-CTLA-4 antibody is unclear. This study therefore investigated potential biomarkers for CTLA-4 blockade. PATIENTS AND METHODS: The correlation between tumor diameter and treatment duration was examined in patients with advanced NSCLC treated with anti-PD-1 antibody monotherapy at our institution. In addition, the ratio of tumor-infiltrating CD8+ T cells to regulatory T (Treg) cells in small and large NSCLC was evaluated using immunohistochemical staining. Finally, the efficacy of treatment with anti-CTLA-4 antibody against large NSCLC was investigated. RESULTS: A negative correlation was found between tumor diameter and treatment duration in patients treated with anti-PD-1 antibody monotherapy. Immunohistochemical staining revealed that Treg cell infiltration was significantly higher in large NSCLC tumors than in small tumors. Among the patients with large NSCLC, ICI regimens including an anti-CTLA-4 antibody showed significant efficacy. CONCLUSION: Anti-PD-1 antibody monotherapy might be less effective against large NSCLC because of Treg cell infiltration. It might therefore be appropriate to choose a regimen that includes an anti-CTLA-4 antibody, which can target Treg cells, for large NSCLC.


Subject(s)
Carcinoma, Non-Small-Cell Lung ; Lung Neoplasms ; Humans ; Carcinoma, Non-Small-Cell Lung/drug therapy ; CD8-Positive T-Lymphocytes ; Lung Neoplasms/drug therapy ; Duration of Therapy ; Immunotherapy
3.
J Med Imaging (Bellingham) ; 9(2): 024003, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35399301

ABSTRACT

Purpose: We propose a super-resolution (SR) method, named SR-CycleGAN, for SR of clinical computed tomography (CT) images to the micro-focus X-ray CT (µCT) level. Due to the resolution limitations of clinical CT (about 500 × 500 × 500 µm³/voxel), it is challenging to obtain enough pathological information. On the other hand, µCT scanning allows the imaging of lung specimens with significantly higher resolution (about 50 × 50 × 50 µm³/voxel or higher), which allows us to obtain and analyze detailed anatomical information. As a way to obtain detailed information such as cancer invasion and bronchioles from preoperative clinical CT images of lung cancer patients, SR of clinical CT images to the µCT level is desired. Approach: Typical SR methods require aligned pairs of low-resolution (LR) and high-resolution images for training, but it is infeasible to obtain precisely aligned paired clinical CT and µCT images. To solve this problem, we propose an unpaired SR approach that can super-resolve clinical CT to the µCT level. We modify a conventional image-to-image translation network named CycleGAN into an inter-modality translation network named SR-CycleGAN. The modifications consist of three parts: (1) an innovative loss function named multi-modality super-resolution loss, (2) optimized SR network structures for enlarging the input LR image by 2^k times in width and height to obtain the SR output, and (3) sub-pixel shuffling layers for reducing computing time. Results: Experimental results demonstrated that our method successfully performed SR of lung clinical CT images. The SSIM and PSNR scores of our method were 0.54 and 17.71, higher than the conventional CycleGAN's scores of 0.05 and 13.64, respectively. Conclusions: The proposed SR-CycleGAN is usable for SR of lung clinical CT to the µCT scale, whereas the conventional CycleGAN produced images with low qualitative and quantitative scores. More lung micro-anatomy information, such as the shape of bronchiole walls, could be observed to aid diagnosis.
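The abstract mentions sub-pixel shuffling layers in the SR network. A minimal PyTorch sketch of one such 2x upsampling stage is given below; the channel count, kernel size, and activation are assumptions, and the actual SR-CycleGAN architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """One 2x upsampling stage: a convolution followed by pixel shuffling.
    Stacking k such stages enlarges width and height by 2^k; channel count,
    kernel size, and activation here are assumptions, not the paper's design."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(2)   # (C*4, H, W) -> (C, 2H, 2W)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

x = torch.randn(1, 64, 32, 32)              # dummy low-resolution feature map
print(SubPixelUpsample()(x).shape)          # torch.Size([1, 64, 64, 64])
```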

4.
Int J Comput Assist Radiol Surg ; 16(10): 1795-1804, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34392469

ABSTRACT

PURPOSE: Bronchoscopists rely on navigation systems during bronchoscopy to reduce the risk of getting lost in the complex, tree-like bronchial structure with its homogeneous-looking lumens. We propose a patient-specific branching-level estimation method for bronchoscopic navigation, because it is vital to identify which branches of the bronchial tree are being examined. METHODS: We estimate the branching level by integrating the changes in the number of bronchial orifices with the camera motion between frames. We extract the bronchial orifice regions from a depth image, which is generated from real bronchoscopic images using a cycle generative adversarial network (CycleGAN). We count the orifice regions using the vertical and horizontal projection profiles of the depth images and obtain the camera-moving direction using feature point-based camera motion estimation. The changes in the number of bronchial orifices are combined with the camera-moving direction to estimate the branching level. RESULTS: We used three in vivo cases and one phantom case to train the CycleGAN model and four in vivo cases to validate the proposed method. We manually created the ground truth of the branching level. The experimental results showed that the proposed method can estimate the branching level with an average accuracy of 87.6%. The processing time per frame was about 61 ms. CONCLUSION: The experimental results show that it is feasible to estimate the branching level from real bronchoscopic images using the number of bronchial orifices and camera-motion estimation.
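As a rough illustration of the orifice-counting step, the sketch below thresholds a depth image and counts contiguous runs in its projection profiles; the threshold, the run-counting rule, and the synthetic depth map are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def count_orifices(depth: np.ndarray, far_threshold: float = 0.7) -> int:
    """Count orifice-like regions by thresholding the depth image and counting
    contiguous runs of 'far' pixels in its projection profiles (assumed rule)."""
    far = depth > far_threshold                  # orifices appear as distant regions
    col_profile = far.any(axis=0)                # horizontal projection profile
    row_profile = far.any(axis=1)                # vertical projection profile

    def runs(profile: np.ndarray) -> int:
        # number of contiguous True runs in a 1-D boolean profile
        p = profile.astype(int)
        return int(np.count_nonzero(np.diff(p) == 1)) + int(p[0])

    return max(runs(col_profile), runs(row_profile))

depth = np.zeros((100, 100))
depth[40:60, 10:30] = 1.0                        # two synthetic orifices
depth[40:60, 70:90] = 1.0
print(count_orifices(depth))                     # 2
```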


Asunto(s)
Algoritmos , Imagenología Tridimensional , Bronquios/diagnóstico por imagen , Broncoscopía , Humanos , Fantasmas de Imagen
5.
Int J Comput Assist Radiol Surg ; 16(6): 989-1001, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34002340

ABSTRACT

PURPOSE: A three-dimensional (3D) structure extraction technique from a two-dimensional image is essential for the development of a computer-aided diagnosis (CAD) system for colonoscopy. However, a straightforward application of existing depth-estimation methods to colonoscopic images is impossible or inappropriate due to several limitations of colonoscopes. In particular, the absence of ground-truth depth for colonoscopic images hinders the application of supervised machine learning methods. To circumvent these difficulties, we developed an unsupervised and accurate depth-estimation method. METHOD: We propose a novel unsupervised depth-estimation method that introduces a Lambertian-reflection model as an auxiliary task for domain translation between real and virtual colonoscopic images. This auxiliary task contributes to accurate depth estimation by maintaining the Lambertian-reflection assumption. In our experiments, we qualitatively evaluate the proposed method by comparing it with state-of-the-art unsupervised methods. Furthermore, we present two quantitative evaluations of the proposed method: one using a measuring device and one using a new 3D reconstruction technique and measured polyp sizes. RESULTS: Our proposed method achieved accurate depth estimation, with an average estimation error of less than 1 mm for regions close to the colonoscope in both types of quantitative evaluation. Qualitative evaluation showed that the introduced auxiliary task reduces the effects of specular reflections and colon-wall textures on depth estimation, and the proposed method achieved smooth depth estimation without noise, thus validating the approach. CONCLUSIONS: We developed an accurate depth-estimation method based on a new type of unsupervised domain translation with an auxiliary task. This method is useful for the analysis of colonoscopic images and for the development of a CAD system since it can extract accurate 3D information.
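The auxiliary task relies on the Lambertian-reflection model, in which image intensity is proportional to the cosine between the surface normal and the light direction. A minimal sketch of that shading model follows; how the normals are derived from the estimated depth and how the loss is attached are not shown and would be assumptions.

```python
import numpy as np

def lambertian_shading(normals: np.ndarray, light_dir: np.ndarray, albedo: float = 1.0) -> np.ndarray:
    """Render a Lambertian image I = albedo * max(0, n . l) from per-pixel unit normals."""
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * np.clip(np.tensordot(normals, l, axes=([-1], [0])), 0.0, None)

# Toy normal map: a flat surface facing the camera, lit slightly off-axis.
normals = np.zeros((64, 64, 3))
normals[..., 2] = 1.0
image = lambertian_shading(normals, np.array([0.2, 0.0, 1.0]))
print(image.shape, round(float(image.max()), 3))
```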


Asunto(s)
Colon/diagnóstico por imagen , Enfermedades del Colon/diagnóstico , Colonoscopía/métodos , Aprendizaje Automático Supervisado , Humanos
6.
Int J Comput Assist Radiol Surg ; 15(10): 1619-1630, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32770324

ABSTRACT

PURPOSE: Due to the complex anatomical structure of the bronchi and the resembling inner surfaces of airway lumina, bronchoscopic examinations require additional 3D navigational information to assist the physician. A bronchoscopic navigation system provides the position of the endoscope in CT images together with augmented anatomical information. To overcome the shortcomings of previous navigation systems, we propose using a technique known as visual simultaneous localization and mapping (SLAM) to improve bronchoscope tracking in navigation systems. METHODS: We propose an improved version of the visual SLAM algorithm and use it to estimate the bronchoscope camera pose, taking patient-specific bronchoscopic video as input. We improve the tracking procedure by adding narrower criteria in feature matching to avoid mismatches. For validation, we collected several trials of bronchoscopic videos with a bronchoscope camera by exploring synthetic rubber bronchus phantoms. We simulated breathing by applying a periodic force to deform the phantom. We compared the camera positions from visual SLAM with a manually created ground truth of the camera pose. The number of successfully tracked frames was also compared between the original SLAM and the proposed method. RESULTS: We successfully tracked 29,559 frames at a speed of 80 ms per frame, corresponding to 78.1% of all acquired frames. The average root mean square error of our technique was 3.02 mm, while that of the original was 3.61 mm. CONCLUSION: We present a novel methodology using visual SLAM for bronchoscope tracking. Our experimental results show that it is feasible to use visual SLAM to estimate the bronchoscope camera pose during bronchoscopic navigation. Our proposed method tracked more frames and showed higher accuracy than the original technique. Future work will include combining the tracking results with virtual bronchoscopy and validation on in vivo cases.
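One common way to apply narrower criteria in feature matching is a stricter Lowe-style ratio test; the sketch below shows this with OpenCV ORB features. The feature type, the 0.6 ratio, and the synthetic frames are assumptions, since the abstract does not specify the exact criteria used.

```python
import cv2
import numpy as np

def match_features(img1: np.ndarray, img2: np.ndarray, ratio: float = 0.6):
    """ORB matching with a Lowe-style ratio test; a stricter ratio (0.6 instead of the
    common 0.75) is one way to impose narrower matching criteria (assumed choice)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                 # keep only clearly unambiguous matches
    return kp1, kp2, good

frame1 = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
frame2 = np.roll(frame1, 5, axis=1)              # synthetic small camera motion
print(len(match_features(frame1, frame2)[2]))
```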


Asunto(s)
Bronquios/diagnóstico por imagen , Broncoscopios , Broncoscopía/métodos , Algoritmos , Simulación por Computador , Humanos , Imagenología Tridimensional/métodos , Fantasmas de Imagen , Reproducibilidad de los Resultados
7.
Healthc Technol Lett ; 6(6): 214-219, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038860

ABSTRACT

A realistic image generation method for visualisation in endoscopic simulation systems is proposed in this study. Endoscopic diagnosis and treatment are performed in many hospitals. To reduce complications related to endoscope insertion, endoscopic simulation systems are used for training or rehearsal of endoscope insertions. However, current simulation systems generate non-realistic virtual endoscopic images. To improve the value of these simulation systems, the reality of their generated images must be improved. The authors propose a realistic image generation method for endoscopic simulation systems. Virtual endoscopic images are generated from a patient's CT volume using a volume rendering method. The reality of these virtual endoscopic images is then improved with a virtual-to-real image-domain translation technique. The image-domain translator is implemented as a fully convolutional network (FCN), trained by minimising a cycle consistency loss function on unpaired virtual and real endoscopic images. To obtain high-quality image-domain translation results, the authors apply image cleansing to the real endoscopic image set. The shallow U-Net, U-Net, deep U-Net, and U-Net with residual units were tested as the image-domain translator. The deep U-Net and the U-Net with residual units generated quite realistic images.
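A minimal sketch of the cycle consistency term used to train such a translator is shown below; the stand-in convolutional "generators", image sizes, and L1 formulation are assumptions, and the adversarial losses that a full CycleGAN-style training also needs are omitted.

```python
import torch
import torch.nn as nn

# Stand-in translators for virtual -> real and real -> virtual (the paper uses U-Net
# variants; single convolutions are used here only to keep the sketch short).
G_v2r = nn.Conv2d(3, 3, kernel_size=3, padding=1)
G_r2v = nn.Conv2d(3, 3, kernel_size=3, padding=1)
l1 = nn.L1Loss()

virtual_batch = torch.rand(4, 3, 128, 128)       # unpaired virtual endoscopic images
real_batch = torch.rand(4, 3, 128, 128)          # unpaired real endoscopic images

# Cycle consistency: translating to the other domain and back should recover the input.
cycle_loss = l1(G_r2v(G_v2r(virtual_batch)), virtual_batch) \
           + l1(G_v2r(G_r2v(real_batch)), real_batch)
cycle_loss.backward()                            # gradients flow into both translators
print(float(cycle_loss))
```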

8.
J Med Imaging (Bellingham) ; 4(4): 044502, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29152534

ABSTRACT

This paper presents a local intensity structure analysis based on an intensity-targeted radial structure tensor (ITRST) and a blob-like structure enhancement filter based on it (the ITRST filter) for a mediastinal lymph node detection algorithm on chest computed tomography (CT) volumes. Although a filter based on conventional radial structure tensor (RST) analysis (the RST filter) can be used to detect lymph nodes, some lymph nodes adjacent to regions with extremely high or low intensities cannot be detected. Therefore, we propose the ITRST filter, which integrates prior knowledge of the detection target's intensity range into the RST filter. Our lymph node detection algorithm consists of two steps: (1) obtaining candidate regions using the ITRST filter and (2) removing false positives (FPs) using a support vector machine classifier. We evaluated the lymph node detection performance of the ITRST filter on 47 contrast-enhanced chest CT volumes and compared it with the RST and Hessian filters. The detection rate of the ITRST filter was 84.2% with 9.1 FPs/volume for lymph nodes whose short axis was at least 10 mm, outperforming the RST and Hessian filters.
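The key idea of intensity targeting can be illustrated by weighting a blob-filter response with a window centered on the expected target intensity; the sketch below uses a Gaussian weight, with the center/width values and the stand-in blob response being assumptions rather than the paper's formulation.

```python
import numpy as np

def intensity_target_weight(volume: np.ndarray, center: float = 60.0, width: float = 40.0) -> np.ndarray:
    """Gaussian weight emphasising voxels inside an expected intensity range (here a
    hypothetical lymph-node CT value in HU); center, width, and the weighting form
    are assumptions, not the paper's formulation."""
    return np.exp(-((volume - center) ** 2) / (2.0 * width ** 2))

ct_volume = np.random.normal(60.0, 80.0, size=(32, 32, 32))
blob_response = np.random.rand(32, 32, 32)       # stand-in for an RST blob-filter output
itrst_like = blob_response * intensity_target_weight(ct_volume)
print(itrst_like.shape)
```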

9.
Int J Comput Assist Radiol Surg ; 8(3): 353-63, 2013 May.
Article in English | MEDLINE | ID: mdl-23225021

ABSTRACT

PURPOSE: Chronic obstructive pulmonary disease (COPD) is characterized by airflow limitation. Physicians frequently assess its stage using pulmonary function tests and chest CT images. This paper describes a novel method to assess COPD severity by combining measurements from pulmonary function tests (PFTs) with the results of chest CT image analysis. METHODS: The proposed method uses measurements from PFTs and chest CT scans to assess COPD severity. It automatically classifies COPD severity into the five stages described in the GOLD guidelines using a multi-class AdaBoost classifier. The classifier uses 24 measurements as feature values: 18 measurements from PFTs and six measurements derived from chest CT image analysis. A total of 3 normal and 46 abnormal (COPD) examinations performed in adults were evaluated with the proposed method to test its diagnostic capability. RESULTS: The experimental results revealed accuracy rates of 100.0% (resubstitution scheme) and 53.1% (leave-one-out scheme). A total of 95.7% of misclassifications were assigned to a neighboring severity stage. CONCLUSIONS: These results demonstrate that the proposed method is a feasible means of assessing COPD severity. A much larger sample size will be required to establish the limits of the method and provide clinical validation.
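A minimal sketch of a multi-class AdaBoost classifier evaluated with the leave-one-out scheme mentioned above is shown below using scikit-learn; the synthetic 49 x 24 feature matrix and the hyperparameters are placeholders for the PFT and CT measurements described in the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in: 49 examinations x 24 features (18 PFT + 6 CT measurements),
# labelled with five GOLD-style stages. The real feature definitions are in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(49, 24))
y = rng.integers(0, 5, size=49)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```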


Asunto(s)
Enfermedad Pulmonar Obstructiva Crónica/clasificación , Enfermedad Pulmonar Obstructiva Crónica/diagnóstico , Pruebas de Función Respiratoria , Tomografía Computarizada por Rayos X , Adulto , Factores de Edad , Anciano , Anciano de 80 o más Años , Algoritmos , Índice de Masa Corporal , Femenino , Humanos , Masculino , Persona de Mediana Edad , Enfermedad Pulmonar Obstructiva Crónica/fisiopatología , Reproducibilidad de los Resultados , Índice de Severidad de la Enfermedad
10.
Med Image Anal ; 16(3): 577-96, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21334250

ABSTRACT

This paper presents a new hybrid camera motion tracking method for bronchoscopic navigation combining SIFT, epipolar geometry analysis, Kalman filtering, and image registration. In a thorough evaluation, we compare it to state-of-the-art tracking methods. Our hybrid algorithm for predicting bronchoscope motion uses SIFT features and epipolar constraints to obtain an estimate for inter-frame pose displacements and Kalman filtering to find an estimate for the magnitude of the motion. We then execute bronchoscope tracking by performing image registration initialized by these estimates. This procedure registers the actual bronchoscopic video and the virtual camera images generated from 3D chest CT data taken prior to bronchoscopic examination for continuous bronchoscopic navigation. A comparative assessment of our new method and the state-of-the-art methods is performed on actual patient data and phantom data. Experimental results from both datasets demonstrate a significant performance boost of navigation using our new method. Our hybrid method is a promising means for bronchoscope tracking, and outperforms other methods based solely on Kalman filtering or image features and image registration.
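A sketch of the SIFT-plus-epipolar-geometry portion of such a pipeline is given below using OpenCV; the camera intrinsics, the brute-force matching, and the omission of the Kalman filtering and intensity-based registration steps are all simplifying assumptions.

```python
import cv2
import numpy as np

def relative_pose(img1: np.ndarray, img2: np.ndarray, K: np.ndarray):
    """Inter-frame rotation and unit-scale translation from SIFT matches and the
    essential matrix (the 'SIFT + epipolar geometry' part only; Kalman filtering and
    the final intensity-based registration are not shown)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# K is an assumed pinhole calibration of the bronchoscope camera.
K = np.array([[300.0, 0.0, 160.0], [0.0, 300.0, 120.0], [0.0, 0.0, 1.0]])
```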


Asunto(s)
Algoritmos , Broncoscopía/métodos , Interpretación de Imagen Asistida por Computador/métodos , Imagenología Tridimensional/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Cirugía Asistida por Computador/métodos , Humanos , Aumento de la Imagen/métodos , Movimiento (Física) , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
11.
Int J Comput Assist Radiol Surg ; 7(3): 465-82, 2012 May.
Article in English | MEDLINE | ID: mdl-21739111

ABSTRACT

PURPOSE: Pulmonary nodules may indicate the early stage of lung cancer, and the progression of lung cancer causes associated changes in the shape and number of pulmonary blood vessels. Automatic segmentation of pulmonary nodules and blood vessels is desirable for chest computer-aided diagnosis (CAD) systems. Since pulmonary nodules and blood vessels are often attached to each other, conventional nodule detection methods usually produce many false positives (FPs) in blood vessel regions, and blood vessel segmentation methods may incorrectly segment nodules that are attached to blood vessels. A method to simultaneously and separately segment pulmonary nodules and blood vessels was developed and tested. METHOD: A line structure enhancement (LSE) filter and a blob-like structure enhancement (BSE) filter were used for the initial selection of vessel regions and nodule candidates, respectively. A front surface propagation (FSP) procedure was then employed for precise segmentation of blood vessels and nodules. By using a speed function that is fast at the initial vessel regions and slow at the nodule candidates, the front surface can be propagated to cover the blood vessel regions while suppressing the nodules; the region covered by the front surface therefore indicates the pulmonary blood vessels. The lung nodule regions were finally obtained by removing the nodule candidates covered by the front surface. RESULTS: A test data set was assembled including 20 standard-dose chest CT images obtained from a local database and 20 low-dose chest CT images obtained from the Lung Image Database Consortium (LIDC). The average extraction rate of the pulmonary blood vessels was about 93%. The average TP rate of nodule detection was 95% with 9.8 FPs/case in standard-dose CT images and 91.5% with 10.5 FPs/case in low-dose CT images. CONCLUSION: A pulmonary blood vessel and nodule segmentation method based on local intensity structure analysis and front surface propagation was developed. The method was shown to be feasible for nodule detection and vessel extraction in chest CAD.
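The speed function described above can be illustrated with a simple sigmoid that is fast where the line-structure response dominates and slow where the blob-structure response dominates; the exact functional form and weighting below are assumptions, not the paper's definition.

```python
import numpy as np

def speed_function(vessel_response: np.ndarray, nodule_response: np.ndarray,
                   alpha: float = 5.0) -> np.ndarray:
    """Speed map for front propagation: fast where the line-structure (vessel) response
    dominates, slow where the blob-structure (nodule) response dominates. The sigmoid
    form and the weight alpha are illustrative assumptions."""
    return 1.0 / (1.0 + np.exp(-alpha * (vessel_response - nodule_response)))

vessel = np.random.rand(64, 64, 64)              # stand-in for the LSE filter output
nodule = np.random.rand(64, 64, 64)              # stand-in for the BSE filter output
F = speed_function(vessel, nodule)
print(round(float(F.min()), 3), round(float(F.max()), 3))   # speeds bounded in (0, 1)
```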


Asunto(s)
Adenocarcinoma/diagnóstico por imagen , Diagnóstico por Computador/métodos , Imagenología Tridimensional , Neoplasias Pulmonares/diagnóstico por imagen , Programas Informáticos , Nódulo Pulmonar Solitario/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Adenocarcinoma del Pulmón , Diagnóstico Diferencial , Reacciones Falso Positivas , Humanos , Intensificación de Imagen Radiográfica/métodos
12.
Med Image Anal ; 13(4): 621-33, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19592291

ABSTRACT

We propose a selective measurement method for computing image similarity based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required by image-guided treatment or therapy systems. In recent years, an ultra-tiny electromagnetic sensor has become commercially available, and many image-guided treatment or therapy systems use this sensor to track the camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Because global schemes for measuring image similarity, such as mutual information, squared gray-level difference, or cross-correlation, average differences in intensity values over an entire region, they fail to track scenes in which few characteristic structures can be observed. The proposed method divides an entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method using squared gray-level differences over the entire images.
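A minimal sketch of the subblock-selective similarity idea follows, using intensity variance as the per-block feature value and the squared gray-level difference within selected blocks; the block size, threshold, and feature choice are assumptions.

```python
import numpy as np

def selective_ssd(real: np.ndarray, virtual: np.ndarray, block: int = 32,
                  var_threshold: float = 50.0) -> float:
    """Squared gray-level difference computed only inside subblocks whose intensity
    variance is high enough to contain characteristic structure (the variance feature
    and threshold are assumptions; the paper defines its own feature values)."""
    h, w = real.shape
    total, n = 0.0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch_r = real[y:y + block, x:x + block].astype(np.float64)
            patch_v = virtual[y:y + block, x:x + block].astype(np.float64)
            if patch_r.var() > var_threshold:     # keep only characteristic subblocks
                total += float(np.mean((patch_r - patch_v) ** 2))
                n += 1
    return total / max(n, 1)

real = np.random.randint(0, 256, (256, 256))
virtual = np.random.randint(0, 256, (256, 256))
print(round(selective_ssd(real, virtual), 1))
```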


Asunto(s)
Algoritmos , Broncoscopía/métodos , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/métodos , Imagenología Tridimensional/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Técnica de Sustracción , Inteligencia Artificial , Humanos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Interfaz Usuario-Computador
13.
Acad Radiol ; 16(4): 486-94, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19268861

ABSTRACT

RATIONALE AND OBJECTIVES: Fecal-tagging computed tomographic colonography (ftCTC) reduces the discomfort and inconvenience for patients associated with bowel-cleansing procedures before CT scanning. In conventional colonic polyp detection techniques for ftCTC, a digital bowel cleansing (DBC) technique is applied to detect polyps in tagged fecal materials (TFM). However, DBC removes the surface of soft tissues and hampers polyp detection. We developed a colonic polyp detection method for CT colonographic examinations that detects polyps surrounded by air and polyps surrounded by TFM without DBC. MATERIALS AND METHODS: CT values inside polyps surrounded by air tend to increase gradually from the outside inward (a blob structure), whereas CT values inside polyps surrounded by TFM tend to decrease (an inverse-blob structure). We developed blob and inverse-blob structure enhancement filters based on the eigenvalues of the Hessian matrix to detect polyps using this intensity characteristic. False-positive elimination is performed using three feature values: volume, the maximum filter output, and the standard deviation of CT values inside the polyp candidates. RESULTS: The proposed method was applied to 104 ftCTC cases that include 57 polyps larger than 6 mm in diameter. The sensitivity of the method was 91.2% (52/57) with 11.4 false positives per case. CONCLUSIONS: The proposed method detects polyps with high sensitivity and 11.4 false positives per case, without the adverse effects of DBC.
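A 2-D sketch of Hessian-eigenvalue-based blob and inverse-blob enhancement is given below using scikit-image; the paper works on 3-D CT colonography volumes with its own response functions, so the 2-D formulation and thresholding here are simplifications and assumptions.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def blob_filters(image: np.ndarray, sigma: float = 3.0):
    """Bright-blob and dark-blob (inverse-blob) responses from Hessian eigenvalues:
    two strongly negative eigenvalues indicate a bright blob, two strongly positive
    ones a dark blob. The response uses the weaker of the two curvatures so that
    elongated (line-like) structures are suppressed."""
    H = hessian_matrix(image.astype(float), sigma=sigma, order="rc")
    l1, l2 = hessian_matrix_eigvals(H)            # eigenvalues with l1 >= l2
    blob = np.where((l1 < 0) & (l2 < 0), np.abs(l1), 0.0)
    inverse_blob = np.where((l1 > 0) & (l2 > 0), l2, 0.0)
    return blob, inverse_blob

img = np.zeros((64, 64))
img[28:36, 28:36] = 100.0                         # synthetic bright blob
b, ib = blob_filters(img)
print(round(float(b.max()), 2), round(float(ib.max()), 2))
```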


Asunto(s)
Pólipos del Colon/diagnóstico por imagen , Colonografía Tomográfica Computarizada/métodos , Heces , Imagenología Tridimensional/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Intensificación de Imagen Radiográfica/métodos , Interpretación de Imagen Radiográfica Asistida por Computador/métodos , Algoritmos , Inteligencia Artificial , Catárticos , Humanos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Procesamiento de Señales Asistido por Computador , Técnica de Sustracción
14.
Med Image Comput Comput Assist Interv ; 12(Pt 2): 707-14, 2009.
Article in English | MEDLINE | ID: mdl-20426174

ABSTRACT

This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images based on machine learning and combination optimization. We also show applications of the anatomical labeling in a bronchoscopy guidance system. The procedure consists of four steps: (a) extraction of tree structures from the bronchus regions segmented from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. We also overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.
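Step (d), selecting the best combination of anatomical names, can be read as an assignment problem over per-branch classifier scores. The sketch below illustrates that reading with SciPy's Hungarian solver; the score matrix, the branch names, and the one-name-per-branch constraint are illustrative assumptions, not the paper's exact optimization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical per-branch classifier scores: rows are extracted branches, columns are
# candidate anatomical names, and score[i, j] is the classifier's confidence that
# branch i should receive name j. The combination step picks one name per branch so
# that the total confidence is maximal and no name is used twice.
scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.8, 0.4],
                   [0.2, 0.5, 0.7]])
names = ["RB1", "RB2", "RB3"]                     # illustrative branch names
rows, cols = linear_sum_assignment(-scores)       # negate to maximise the total score
for i, j in zip(rows, cols):
    print(f"branch {i} -> {names[j]} (score {scores[i, j]:.2f})")
```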


Asunto(s)
Inteligencia Artificial , Broncografía/métodos , Broncoscopía/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Interpretación de Imagen Radiográfica Asistida por Computador/métodos , Cirugía Asistida por Computador/métodos , Tomografía Computarizada por Rayos X/métodos , Algoritmos , Humanos , Intensificación de Imagen Radiográfica/métodos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
15.
Med Image Comput Comput Assist Interv ; 11(Pt 2): 535-42, 2008.
Article in English | MEDLINE | ID: mdl-18982646

ABSTRACT

This paper presents a study on improving the tracking accuracy of marker-free bronchoscope tracking using an electromagnetic tracking system. Bronchoscope tracking is an important function of a bronchoscope navigation system that assists a physician during bronchoscopic examination. Several research groups have presented methods for bronchoscope tracking using an ultra-tiny electromagnetic tracker (UEMT) that can be inserted into the working channel of a bronchoscope. In such a system, it is necessary to find the matrix T describing the relation between the coordinate systems of the CT image and the UEMT. This paper aims to improve the accuracy of this matrix by using not only the position information of the UEMT but also its orientation information. The proposed algorithm uses the running-direction information of the bronchial branches and the orientation information of the UEMT in the computation of T. In experiments using a bronchial phantom, the tracking accuracy was improved from 2.2 mm to 1.8 mm.
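The matrix T relating the CT and UEMT coordinate systems is essentially a rigid transform. A standard least-squares (Kabsch/SVD) estimate from paired points is sketched below; how the paper additionally exploits branch running directions and sensor orientation is not reproduced, and the toy point sets are assumptions.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst points,
    returned as a 4x4 homogeneous matrix."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T

sensor_pts = np.random.rand(20, 3) * 100.0         # positions measured by the UEMT (toy)
ct_pts = sensor_pts + np.array([5.0, -3.0, 10.0])   # the same points in CT coordinates (toy shift)
print(np.round(rigid_transform(sensor_pts, ct_pts), 2))
```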


Asunto(s)
Bronquios/anatomía & histología , Broncoscopios , Broncoscopía/métodos , Aumento de la Imagen/instrumentación , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/instrumentación , Interpretación de Imagen Asistida por Computador/métodos , Algoritmos , Campos Electromagnéticos , Diseño de Equipo , Análisis de Falla de Equipo , Fantasmas de Imagen , Sensibilidad y Especificidad
16.
Med Image Comput Comput Assist Interv ; 10(Pt 2): 644-51, 2007.
Article in English | MEDLINE | ID: mdl-18044623

ABSTRACT

This paper presents a method for bronchoscope tracking without any fiducial markers using an ultra-tiny electromagnetic tracker (UEMT) for a bronchoscopy guidance system. The proposed method calculates the transformation matrix describing the relationship between the coordinate systems of the pre-operative CT images and the UEMT by registering bronchial branches segmented from the CT images to points measured by the UEMT attached to the tip of a bronchoscope. We dynamically recompute the transformation matrix after every pre-defined number of measurements. We applied the proposed method to a bronchial phantom in several experimental environments. The experimental results showed that the proposed method can track a bronchoscope camera with a target registration error (TRE) of about 3.3 mm in the wooden-table environment and about 4.0 mm in the examination-table environment.


Asunto(s)
Algoritmos , Broncoscopios , Broncoscopía/métodos , Fenómenos Electromagnéticos/instrumentación , Aumento de la Imagen/instrumentación , Interpretación de Imagen Asistida por Computador/métodos , Imagenología Tridimensional/métodos , Fenómenos Electromagnéticos/métodos , Diseño de Equipo , Análisis de Falla de Equipo , Humanos , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/instrumentación , Imagenología Tridimensional/instrumentación , Miniaturización , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Telemetría/instrumentación , Telemetría/métodos
17.
Comput Aided Surg ; 11(3): 109-17, 2006 May.
Article in English | MEDLINE | ID: mdl-16829504

ABSTRACT

This paper describes a method for tracking a bronchoscope by combining a position sensor and image registration. A bronchoscopy guidance system is a tool for providing real-time navigation information acquired from pre-operative CT images to a physician during a bronchoscopic examination. In this system, one of the fundamental functions is tracking a bronchoscope's camera motion. Recently, a very small electromagnetic position sensor has become available. It is possible to insert this sensor into a bronchoscope's working channel to obtain the bronchoscope's camera motion. However, the accuracy of its output is inadequate for bronchoscope tracking. The proposed combination of the sensor and image registration between real and virtual bronchoscopic images derived from CT images is quite useful for improving tracking accuracy. Furthermore, this combination has enabled us to achieve a real-time bronchoscope guidance system. We performed evaluation experiments for the proposed method using a rubber phantom model. The experimental results showed that the proposed system allowed the bronchoscope's camera motion to be tracked at 2.5 frames per second.
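A sketch of how a sensor pose can initialize intensity-based registration is shown below; render_virtual_view is a hypothetical placeholder for CT-based virtual bronchoscopy rendering, and the pose parameterization, similarity measure, and optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def render_virtual_view(pose: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for CT-based virtual bronchoscopy rendering; here it
    simply returns a flat image whose value depends on the translation parameters so
    that the sketch runs end to end."""
    return np.full((64, 64), pose[:3].sum())

def neg_similarity(pose: np.ndarray, real_frame: np.ndarray) -> float:
    """Negative similarity between the real frame and the virtual view at `pose`
    (tx, ty, tz, rx, ry, rz); mean squared difference is an assumed measure."""
    virtual = render_virtual_view(pose)
    return float(np.mean((real_frame - virtual) ** 2))

sensor_pose = np.zeros(6)                          # pose reported by the position sensor
real_frame = np.full((64, 64), 4.0)
result = minimize(neg_similarity, sensor_pose, args=(real_frame,), method="Powell")
print(np.round(result.x[:3], 2))                   # refined pose, initialised at the sensor pose
```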


Asunto(s)
Inteligencia Artificial , Broncoscopía/métodos , Interpretación de Imagen Radiográfica Asistida por Computador/instrumentación , Técnica de Sustracción , Fenómenos Electromagnéticos , Humanos , Imagenología Tridimensional , Reconocimiento de Normas Patrones Automatizadas , Fantasmas de Imagen , Interpretación de Imagen Radiográfica Asistida por Computador/métodos , Reproducibilidad de los Resultados , Integración de Sistemas
18.
Am J Surg Pathol ; 30(6): 750-3, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16723854

ABSTRACT

We report 2 cases of capillary hemangioma, each presenting as a solitary nodule in the peripheral lung. Both patients were asymptomatic, with a small solitary nodule that had been revealed by computed tomography. In both cases, the nodule was resected surgically under a clinical diagnosis of early lung cancer. Macroscopically, each lesion was ill defined and irregular in shape with a dark brown cut surface. Microscopically, the alveolar septa in both nodules were thickened by accumulations of numerous thin-walled capillary vessels, which characteristically extended along, or infiltrated, each septum. We diagnosed these lesions as "solitary capillary hemangioma" of the peripheral lung. Tumors or tumor-like lesions of capillary vessels in the lung are rare. Among them, pulmonary capillary hemangiomatosis (PCH) has been described as multiple nodules in the lung parenchyma or bronchovascular walls composed of infiltrating thin-walled capillary blood vessels. Moreover, PCH-like foci have been found in a retrospective study of autopsy cases. However, the presented cases should be differentiated from PCH in terms of their clinical setting, such as a history of hypertension or veno-occlusive disease, and the multiplicity of the lesions. This is a rare case series of solitary capillary hemangioma discovered incidentally during life, and the lesions were difficult to differentiate radiologically from early lung cancer. Given the recent advances in imaging for the early detection of peripheral lung cancer, these lesions are important to bear in mind in the differential diagnosis of bronchioloalveolar carcinoma.


Asunto(s)
Hemangioma Capilar/patología , Neoplasias Pulmonares/patología , Diagnóstico Diferencial , Femenino , Hemangioma Capilar/fisiopatología , Hemangioma Capilar/cirugía , Humanos , Enfermedades Pulmonares/patología , Neoplasias Pulmonares/fisiopatología , Neoplasias Pulmonares/cirugía , Masculino , Persona de Mediana Edad , Tomografía Computarizada por Rayos X
19.
Article in English | MEDLINE | ID: mdl-17354827

ABSTRACT

This paper presents a method for tracking a bronchoscope based on motion prediction and image registration from multiple initial starting points, as a component of a bronchoscope navigation system. We aim to improve the performance of bronchoscope tracking based on image registration by using multiple initial guesses estimated through motion prediction. The method tracks the bronchoscopic camera by image registration between real bronchoscopic images and virtual ones derived from CT images taken prior to the bronchoscopic examination. As initial guesses for the image registration, we use multiple starting points to avoid falling into local minima. These initial guesses are computed from the motion prediction results obtained from the Kalman filter's output. We applied the proposed method to nine pairs of X-ray CT images and real bronchoscopic video images. The experimental results demonstrated strong continuous-tracking performance without using any positional sensors.
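A minimal sketch of the motion-prediction stage with a constant-velocity Kalman filter is given below, reduced to a single coordinate; the real system predicts full camera poses and perturbs the prediction into several starting points, and all noise settings here are assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter over a single camera coordinate, as a sketch of the
# motion-prediction stage; all noise settings and the 1-D reduction are assumptions.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])                         # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                         # only the position is observed
Q = 0.01 * np.eye(2)                               # process noise
R = np.array([[0.5]])                              # measurement noise

x = np.array([[0.0], [1.0]])                       # state estimate
P = np.eye(2)                                      # state covariance

for z in [1.1, 2.0, 2.9]:                          # registration results as measurements
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    y = np.array([[z]]) - H @ x                    # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x, P = x + K @ y, (np.eye(2) - K @ H) @ P      # update

prediction = float((F @ x)[0, 0])                  # predicted position for the next frame
initial_guesses = prediction + np.array([-0.5, 0.0, 0.5])   # multiple starting points
print(round(prediction, 2), initial_guesses)
```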


Asunto(s)
Algoritmos , Bronquios/anatomía & histología , Broncoscopía/métodos , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/métodos , Movimiento , Técnica de Sustracción , Inteligencia Artificial , Broncografía/métodos , Humanos , Reconocimiento de Normas Patrones Automatizadas/métodos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
20.
Article in English | MEDLINE | ID: mdl-16686002

ABSTRACT

In this paper, we propose a hybrid method for tracking a bronchoscope that combines magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation are used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz.


Asunto(s)
Inteligencia Artificial , Broncoscopía/métodos , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/métodos , Magnetismo , Técnica de Sustracción , Interfaz Usuario-Computador , Algoritmos , Artefactos , Humanos , Imagenología Tridimensional/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Reproducibilidad de los Resultados , Mecánica Respiratoria , Sensibilidad y Especificidad , Integración de Sistemas , Grabación en Video/métodos