Results 1 - 20 of 24
1.
J Med Imaging (Bellingham) ; 11(3): 036002, 2024 May.
Article in English | MEDLINE | ID: mdl-38827776

ABSTRACT

Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution. Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions. Results: Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach. Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.
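To make the second-stage tracking idea concrete, the sketch below shows a constant-velocity Kalman filter that smooths a candidate lesion's bounding-box center across video frames. The state layout, noise levels, and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter for a candidate lesion's bounding-box
# center (cx, cy). State x = [cx, cy, vx, vy]; noise values are illustrative.
dt = 1.0                                    # one video frame per step
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = np.eye(4) * 1e-2                        # process noise (assumed)
R = np.eye(2) * 4.0                         # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle given a detected center z = [cx, cy]."""
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: smooth two noisy detections of the same candidate lesion.
x, P = np.array([100.0, 120.0, 0.0, 0.0]), np.eye(4) * 10.0
for z in (np.array([102.0, 121.0]), np.array([105.0, 124.0])):
    x, P = kalman_step(x, P, z)
    print("smoothed center:", x[:2])
```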

2.
J Imaging ; 10(8)2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39194980

ABSTRACT

For patients at risk of developing either lung cancer or colorectal cancer, the identification of suspect lesions in endoscopic video is an important procedure. The physician performs an endoscopic exam by navigating an endoscope through the organ of interest, be it the lungs or intestinal tract, and performs a visual inspection of the endoscopic video stream to identify lesions. Unfortunately, this entails a tedious, error-prone search over a lengthy video sequence. We propose a deep learning architecture that enables the real-time detection and segmentation of lesion regions from endoscopic video, with our experiments focused on autofluorescence bronchoscopy (AFB) for the lungs and colonoscopy for the intestinal tract. Our architecture, dubbed ESFPNet, draws on a pretrained Mix Transformer (MiT) encoder and a decoder structure that incorporates a new Efficient Stage-Wise Feature Pyramid (ESFP) to promote accurate lesion segmentation. In comparison to existing deep learning models, the ESFPNet model gave superior lesion segmentation performance for an AFB dataset. It also produced superior segmentation results for three widely used public colonoscopy databases and nearly the best results for two other public colonoscopy databases. In addition, the lightweight ESFPNet architecture requires fewer model parameters and less computation than other competing models, enabling the real-time analysis of input video frames. Overall, these studies point to the combined superior analysis performance and architectural efficiency of the ESFPNet for endoscopic video analysis. Lastly, additional experiments with the public colonoscopy databases demonstrate the learning ability and generalizability of ESFPNet, implying that the model could be effective for region segmentation in other domains.
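The following PyTorch sketch illustrates the general idea of a stage-wise feature-pyramid decoder over multi-scale encoder outputs; the channel widths, fusion scheme, and class name are assumptions and do not reproduce the actual ESFP decoder or MiT encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageWiseFusionDecoder(nn.Module):
    """Generic stage-wise feature-pyramid decoder (not the actual ESFP):
    project each encoder stage to a common width, upsample everything to
    the finest stage, concatenate, fuse, and predict a 1-channel mask."""
    def __init__(self, in_channels=(64, 128, 320, 512), width=64):
        super().__init__()
        self.project = nn.ModuleList([nn.Conv2d(c, width, 1) for c in in_channels])
        self.fuse = nn.Conv2d(width * len(in_channels), width, 3, padding=1)
        self.head = nn.Conv2d(width, 1, 1)        # lesion-mask logits

    def forward(self, feats):
        # feats: per-stage encoder outputs, finest first (e.g., strides 4/8/16/32)
        size = feats[0].shape[-2:]
        up = [F.interpolate(proj(f), size=size, mode="bilinear", align_corners=False)
              for proj, f in zip(self.project, feats)]
        return self.head(torch.relu(self.fuse(torch.cat(up, dim=1))))

# Example with MiT-like stage shapes for a 352x352 input (assumed sizes).
feats = [torch.randn(1, c, 352 // s, 352 // s)
         for c, s in zip((64, 128, 320, 512), (4, 8, 16, 32))]
print(StageWiseFusionDecoder()(feats).shape)      # torch.Size([1, 1, 88, 88])
```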

3.
IEEE Trans Biomed Eng ; 70(1): 318-330, 2023 01.
Article in English | MEDLINE | ID: mdl-35819999

ABSTRACT

BACKGROUND/OBJECTIVE: Accurate disease diagnosis and staging are essential for patients suspected of having lung cancer. The state-of-the-art minimally invasive tools used by physicians to perform these operations are bronchoscopy, for navigating the lung airways, and endobronchial ultrasound (EBUS), for localizing suspect extraluminal cancer lesions. While new image-guided systems enable accurate bronchoscope navigation close to a lesion, no means exists for guiding the final EBUS localization of an extraluminal lesion. We propose an EBUS simulation method to assist with EBUS localization. METHODS: The method draws on a patient's chest computed-tomography (CT) scan to model the ultrasound signal propagation through the tissue media. The method, which is suitable for simulating EBUS images for both radial-probe and convex-probe EBUS devices, entails three steps: 1) image preprocessing, which generates a 2D CT equivalent of the EBUS scan plane; 2) EBUS scan-line computation, which models ultrasound transmission to map the CT plane into a preliminary simulated EBUS image; and 3) image post-processing, which increases realism by introducing simulated EBUS imaging effects and artifacts. RESULTS: Results show that the method produces simulated EBUS images that strongly resemble images generated live by a real device and compares favorably to an existing ultrasound simulation method. It also produces images at a rate greater than real time (i.e., 53 frames/sec). We also demonstrate a successful integration of the method into an image-guided EBUS bronchoscopy system. CONCLUSION/SIGNIFICANCE: The method is effective and practical for procedure planning/preview and follow-on live guidance of EBUS bronchoscopy.
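As a rough illustration of scan-line computation, the toy model below casts radial beams over a 2D CT plane, treats local intensity change as an impedance-change proxy, and damps echoes by cumulative attenuation. The parameters and the echo model itself are simplifying assumptions, not the paper's simulation method.

```python
import numpy as np

def simulate_scanlines(ct_plane, probe_rc, n_beams=180, n_samples=200,
                       max_depth_mm=100.0, atten_per_mm=0.01, pix_mm=0.5):
    """Toy radial scan-line model over a 2D CT plane: along each beam, the
    echo is the local intensity change (an impedance-change proxy) damped
    by cumulative attenuation. All values here are simplifying assumptions."""
    h, w = ct_plane.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    depths = np.linspace(0.0, max_depth_mm, n_samples)
    step_mm = max_depth_mm / n_samples
    image = np.zeros((n_beams, n_samples))
    for b, a in enumerate(angles):
        dr, dc = np.sin(a), np.cos(a)
        prev_hu, atten = None, 0.0
        for s, d in enumerate(depths):
            r = int(round(probe_rc[0] + dr * d / pix_mm))
            c = int(round(probe_rc[1] + dc * d / pix_mm))
            if not (0 <= r < h and 0 <= c < w):
                break                              # beam left the CT plane
            hu = ct_plane[r, c]
            if prev_hu is not None:
                image[b, s] = abs(hu - prev_hu) * np.exp(-atten)
            atten += atten_per_mm * step_mm
            prev_hu = hu
    return image    # beams x samples; scan-convert to Cartesian for display

# Example: a synthetic plane with one bright circular "lesion" off-center.
plane = np.full((256, 256), -50.0)
yy, xx = np.mgrid[:256, :256]
plane[(yy - 170) ** 2 + (xx - 128) ** 2 < 400] = 60.0
print(simulate_scanlines(plane, probe_rc=(128, 128)).shape)   # (180, 200)
```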


Subject(s)
Bronchoscopy; Lung Neoplasms; Humans; Bronchoscopy/methods; Lung Neoplasms/diagnostic imaging; Endosonography/methods; Lung/diagnostic imaging; Ultrasonography
4.
J Digit Imaging ; 25(2): 307-17, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22083553

ABSTRACT

Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.


Subject(s)
Bronchography/methods; Bronchoscopy/methods; Imaging, Three-Dimensional/methods; Radiography, Interventional/methods; Tomography, X-Ray Computed/methods; User-Computer Interface; Algorithms; Humans; Pattern Recognition, Automated/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods
5.
J Med Imaging (Bellingham) ; 9(5): 055001, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36090959

ABSTRACT

Purpose: For a patient at risk of having lung cancer, accurate disease staging is vital as it dictates disease prognosis and treatment. Accurate staging requires a comprehensive sampling of lymph nodes within the chest via bronchoscopy. Unfortunately, physicians are generally unable to plan and perform sufficiently comprehensive procedures to ensure accurate disease staging. We propose a method for planning comprehensive lymph node staging procedures. Approach: Drawing on a patient's chest CT scan, the method derives a multi-destination tour for efficient navigation to a set of lymph nodes. We formulate the planning task as a traveling salesman problem. To solve the problem, we apply the concept of ant colony optimization (ACO) to derive an efficient airway tour connecting the target nodes. The method has three main steps: (1) CT preprocessing, to define important chest anatomy; (2) graph and staging zone construction, to set up the necessary data structures and clinical constraints; and (3) tour computation, to derive the staging plan. The plan conforms to the world standard International Association for the Study of Lung Cancer (IASLC) lymph node map and recommended clinical staging guidelines. Results: Tests with a patient database indicate that the method derives optimal or near-optimal tours in under a few seconds, regardless of the number of target lymph nodes (mean tour length = 1.4% longer than the optimum). A brute force optimal search, on the other hand, generally cannot reach a solution in under 10 min. for patients exhibiting > 16 nodes, and other methods provide poor solutions. We also demonstrate the method's utility in an image-guided bronchoscopy system. Conclusions: The method provides an efficient computational approach for planning a comprehensive lymph node staging bronchoscopy. In addition, the method shows promise for driving an image-guided bronchoscopy system or robotics-assisted bronchoscopy system tailored to lymph node staging.
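A minimal ant-colony-optimization tour solver over a distance matrix is sketched below to illustrate the tour-computation step; the parameter choices are assumptions, and the IASLC staging zones and clinical constraints described above are not modeled.

```python
import numpy as np

def aco_tour(dist, n_ants=20, n_iters=100, alpha=1.0, beta=3.0, rho=0.1, seed=0):
    """Toy ant colony optimization for a shortest closed tour over a distance
    matrix. Parameters are assumptions; the staging-zone and clinical
    constraints described in the abstract are not modeled here."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                       # pheromone levels
    eta = 1.0 / (dist + np.eye(n))              # heuristic visibility
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                j = int(rng.choice(cand, p=w / w.sum()))
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                      # evaporation
        for tour, length in tours:              # pheromone deposit
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
    return best_tour, best_len

# Example: 6 target "stations" at random 2D positions.
pts = np.random.default_rng(1).random((6, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_tour(d))
```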

6.
Article in English | MEDLINE | ID: mdl-34532565

ABSTRACT

The staging of the central-chest lymph nodes is a major step in the management of lung-cancer patients. For this purpose, the physician uses a device that integrates videobronchoscopy and an endobronchial ultrasound (EBUS) probe. To biopsy a lymph node, the physician first uses videobronchoscopy to navigate through the airways and then invokes EBUS to localize and biopsy the node. Unfortunately, this process proves difficult for many physicians, with the choice of biopsy site found by trial and error. We present a complete image-guided EBUS bronchoscopy system tailored to lymph-node staging. The system accepts a patient's 3D chest CT scan, an optional PET scan, and the EBUS bronchoscope's video sources as inputs. System workflow follows two phases: (1) procedure planning and (2) image-guided EBUS bronchoscopy. Procedure planning derives airway guidance routes that facilitate optimal EBUS scanning and nodal biopsy. During the live procedure, the system's graphical display suggests a series of device maneuvers to perform and provides multimodal visual cues for locating suitable biopsy sites. To this end, the system exploits data fusion to drive a multimodal virtual bronchoscope and other visualization tools that lead the physician through the process of device navigation and localization. A retrospective lung-cancer patient study and follow-on prospective patient study, performed within the standard clinical workflow, demonstrate the system's feasibility and functionality. For the prospective study, 60/60 selected lymph nodes (100%) were correctly localized using the system, and 30/33 biopsied nodes (91%) gave adequate tissue samples. Also, the mean procedure time, including all user interactions, was 6 min 43 s. All of these measures improve upon benchmarks reported for other state-of-the-art systems and current practice. Overall, the system enabled safe, efficient EBUS-based localization and biopsy of lymph nodes.

7.
J Digit Imaging ; 23(1): 39-50, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19050956

ABSTRACT

Bronchoscopy is often performed for staging lung cancer. The recent development of multidetector computed tomography (MDCT) scanners and ultrathin bronchoscopes now enable the bronchoscopic biopsy and treatment of peripheral diagnostic regions of interest (ROIs). Because these ROIs are often located several generations within the airway tree, careful planning and interpretation of the bronchoscopic route is required prior to a procedure. The current practice for planning bronchoscopic procedures, however, is difficult, error prone, and time consuming. To alleviate these issues, we propose a method for producing and previewing reports for bronchoscopic procedures using patient-specific MDCT chest scans. The reports provide quantitative data about the bronchoscopic routes and both static and dynamic previews of the proper airway route. The previews consist of virtual bronchoscopic endoluminal renderings along the route and three-dimensional cues for a final biopsy site. The reports require little storage space and computational resources, enabling physicians to view the reports on a portable tablet PC. To evaluate the efficacy of the reporting system, we have generated reports for 22 patients in a human lung cancer patient pilot study. For 17 of these patients, we used the reports in conjunction with live image-based bronchoscopic guidance to direct physicians to central chest and peripheral ROIs for subsequent diagnostic evaluation. Our experience shows that the tool enabled useful procedure preview and an effective means for planning strategy prior to a live bronchoscopy.


Subject(s)
Bronchoscopy; Diagnosis, Computer-Assisted/instrumentation; Lung Neoplasms/pathology; Radiography, Interventional; Tomography, X-Ray Computed/methods; Biopsy; Humans; Imaging, Three-Dimensional; Lung Neoplasms/diagnostic imaging; Neoplasm Staging; User-Computer Interface
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1556-1559, 2020 07.
Article in English | MEDLINE | ID: mdl-33018289

ABSTRACT

Because of the significance of bronchial lesions as indicators of early lung cancer and squamous cell carcinoma, a critical need exists for early detection of bronchial lesions. Autofluorescence bronchoscopy (AFB) is a primary modality used for bronchial lesion detection, as it shows high sensitivity to suspicious lesions. The physician, however, must interactively browse a long video stream to locate lesions, making the search exceedingly tedious and error prone. Unfortunately, limited research has explored the use of automated AFB video analysis for efficient lesion detection. We propose a robust automatic AFB analysis approach that distinguishes informative and uninformative frames in an AFB video. In addition, for the informative frames, we determine the frames containing potential lesions and delineate candidate lesion regions. Our approach draws upon a combination of computer-based image analysis, machine learning, and deep learning. Thus, the analysis of an AFB video stream becomes more tractable. Using patient AFB video, 99.5%/90.2% of test frames were correctly labeled as informative/uninformative by our method versus 99.2%/47.6% by ResNet. In addition, ≥ 97% of lesion frames were correctly identified, with false-positive and false-negative rates ≤ 3%. Clinical relevance: The method makes AFB-based bronchial lesion analysis more efficient, thereby helping to advance the goal of better early lung cancer detection.
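As a hedged illustration of the informative/uninformative split, the sketch below computes a few hand-crafted frame cues (sharpness, brightness, color balance) and feeds them to an off-the-shelf classifier; the features and classifier choice are assumptions, not the paper's actual combination of image analysis, machine learning, and deep learning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(rgb):
    """Hand-crafted cues for an 'informative vs. uninformative' decision:
    sharpness (Laplacian variance), brightness, and red/green balance.
    These particular cues are assumptions for illustration only."""
    gray = rgb.mean(axis=2)
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return np.array([lap.var(),                                  # focus measure
                     gray.mean(),                                # brightness
                     rgb[..., 0].mean() / (rgb[..., 1].mean() + 1e-6)])

# Example: train any off-the-shelf classifier on such per-frame features.
rng = np.random.default_rng(0)
frames = rng.random((40, 64, 64, 3))          # stand-in video frames
labels = rng.integers(0, 2, 40)               # dummy labels (1 = informative)
X = np.stack([frame_features(f) for f in frames])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```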


Subject(s)
Bronchoscopy; Lung Neoplasms; Precancerous Conditions; Bronchi; Fluorescence; Humans; Lung Neoplasms/diagnostic imaging; Precancerous Conditions/diagnostic imaging
9.
Comput Biol Med ; 112: 103361, 2019 09.
Article in English | MEDLINE | ID: mdl-31362107

ABSTRACT

The staging of the central-chest lymph nodes is a major lung-cancer management procedure. To perform a staging procedure, the physician first uses a patient's 3D X-ray computed-tomography (CT) chest scan to interactively plan airway routes leading to selected target lymph nodes. Next, using an integrated EBUS bronchoscope (EBUS = endobronchial ultrasound), the physician uses videobronchoscopy to navigate through the airways toward a target node's general vicinity and then invokes EBUS to localize the node for biopsy. Unfortunately, during the procedure, the physician has difficulty in translating the preplanned airway routes into safe, effective biopsy sites. We propose an automatic route-planning method for EBUS bronchoscopy that gives optimal localization of safe, effective nodal biopsy sites. To run the method, a 3D chest model is first computed from a patient's chest CT scan. Next, an optimization method derives feasible airway routes that enable maximal tissue sampling of target lymph nodes while safely avoiding major blood vessels. In a lung-cancer patient study entailing 31 nodes (long axis range: [9.0 mm, 44.5 mm]), 25/31 nodes yielded safe airway routes having an optimal tissue sample size = 8.4 mm (range: [1.0 mm, 18.6 mm]) and sample adequacy = 0.42 (range: [0.05, 0.93]). Quantitative results indicate that the method potentially enables successful biopsies in essentially 100% of selected lymph nodes versus the 70-94% success rate of other approaches. The method also potentially facilitates adequate tissue biopsies for nearly 100% of selected nodes, as opposed to the 55-77% tissue adequacy rates of standard methods. The remaining nodes did not yield a safe route within the preset safety-margin constraints, with 3 nodes never yielding a route even under the most lenient safety-margin conditions. Thus, the method not only helps determine effective airway routes and expected sample quality for nodal biopsy, but it also helps point out situations where biopsy may not be advisable. We also demonstrate the methodology in an image-guided EBUS bronchoscopy system, used successfully in live lung-cancer patient studies. During a live procedure, the method provides dynamic real-time sample size visualization in an enhanced virtual bronchoscopy viewer. In this way, the physician vividly sees the most promising biopsy sites along the airway walls as the bronchoscope moves through the airways.
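The toy routine below illustrates the flavor of the site-selection step: filter candidate airway-wall sites by a vessel safety margin, then rank by achievable sample length. The data fields, numbers, and margin value are hypothetical and far simpler than the paper's optimization.

```python
def select_biopsy_sites(sites, safety_margin_mm=5.0):
    """Keep candidate airway-wall sites whose needle path stays at least
    `safety_margin_mm` from major vessels, then rank by achievable tissue
    sample length. Field names and the margin value are hypothetical."""
    feasible = [s for s in sites if s["vessel_clearance_mm"] >= safety_margin_mm]
    return sorted(feasible, key=lambda s: s["sample_length_mm"], reverse=True)

# Example candidate sites along a planned airway route (made-up numbers).
sites = [
    {"id": 0, "sample_length_mm": 8.4,  "vessel_clearance_mm": 6.2},
    {"id": 1, "sample_length_mm": 12.1, "vessel_clearance_mm": 3.1},  # unsafe
    {"id": 2, "sample_length_mm": 5.0,  "vessel_clearance_mm": 9.5},
]
for s in select_biopsy_sites(sites):
    print(s["id"], s["sample_length_mm"])
```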


Subject(s)
Bronchoscopy; Decision Making, Computer-Assisted; Lung Neoplasms; Surgery, Computer-Assisted; Tomography, X-Ray Computed; Female; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/surgery; Male
10.
Chest ; 133(4): 897-905, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18263679

ABSTRACT

BACKGROUND: Endobronchial path selection is important for the bronchoscopic diagnosis of focal lung lesions. Path selection typically involves mentally reconstructing a three-dimensional path by interpreting a stack of two-dimensional (2D) axial plane CT scan sections. The hypotheses of our study about path selection were as follows: (1) bronchoscopists are inaccurate and overly confident when making endobronchial path selections based on 2D CT scan analysis; and (2) path selection accuracy and confidence improve and become better aligned when bronchoscopists employ path-planning methods based on virtual bronchoscopy (VB). METHODS: Studies of endobronchial path selection comparing three path-planning methods (ie, the standard 2D CT scan analysis and two new VB-based techniques) were performed. The task was to navigate to discrete lesions located between the third-order and fifth-order bronchi of the right upper and middle lobes. Outcome measures were the cumulative accuracy of making four sequential path selection decisions and self-reported confidence (1, least confident; 5, most confident). Both experienced and inexperienced bronchoscopists participated in the studies. RESULTS: In the first study involving a static paper-based tool, the mean (+/- SD) cumulative accuracy was 14 +/- 3% using 2D CT scan analysis (confidence, 3.4 +/- 1.3) and 49 +/- 15% using a VB-based technique (confidence, 4.2 +/- 1.1; p = 0.0001 across all comparisons). For a second study using an interactive computer-based tool, the mean accuracy was 40 +/- 28% using 2D CT scan analysis (confidence, 3.0 +/- 0.3) and 96 +/- 3% using a dynamic VB-based technique (confidence, 4.6 +/- 0.2). Regardless of the experience level of the bronchoscopist, use of the standard 2D CT scan analysis resulted in poor path selection accuracy and misaligned confidence. Use of the VB-based techniques resulted in considerably higher accuracy and better aligned decision confidence. CONCLUSIONS: Endobronchial path selection is a source of error in the bronchoscopy workflow. The use of VB-based path-planning techniques significantly improves path selection accuracy over use of the standard 2D CT scan section analysis in this simulation format.


Subject(s)
Bronchi/pathology; Bronchoscopy/methods; Computer Simulation; Observer Variation; Bronchography; Humans; Imaging, Three-Dimensional; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Software; Tomography, X-Ray Computed; User-Computer Interface
11.
Chest ; 134(5): 1017-1026, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18583513

ABSTRACT

BACKGROUND: Ultrathin bronchoscopy guided by virtual bronchoscopy (VB) techniques shows promise for the diagnosis of peripheral lung lesions. In a phantom study, we evaluated a new real-time, VB-based, image-guided system for guiding the bronchoscopic biopsy of peripheral lung lesions and compared its performance to that of standard bronchoscopy practice. METHODS: Twelve bronchoscopists of varying experience levels participated in the study. The task was to use an ultrathin bronchoscope and a biopsy forceps to localize 10 synthetically created lesions situated at varying airway depths. For route planning and guidance, the bronchoscopists employed either standard bronchoscopy practice or the real-time image-guided system. Outcome measures were biopsy site position error, which was defined as the distance from the forceps contact point to the ground-truth lesion boundary, and localization success, which was defined as a site identification having a biopsy site position error of ≤ 5 mm. RESULTS: Mean (+/- SD) localization success more than doubled from 43 +/- 16% using standard practice to 94 +/- 7.9% using image guidance (p < 10⁻¹⁵ [McNemar paired test]). The mean biopsy site position error dropped from 9.7 +/- 9.1 mm for standard practice to 2.2 +/- 2.3 mm for image guidance. For standard practice, localization success decreased from 56% for generation 3 to 4 lesions to 31% for generation 6 to 8 lesions and also decreased from 51% for lesions on a carina to 23% for lesions situated away from a carina. These factors were far less pronounced when using image guidance, as follows: success for generation 3 to 4 lesions, 97%; success for generation 6 to 8 lesions, 91%; success for lesions on a carina, 98%; success for lesions away from a carina, 86%. Bronchoscopist experience did not significantly affect performance using the image-guided system. CONCLUSIONS: Real-time, VB-based image guidance can potentially far exceed standard bronchoscopy practice for enabling the bronchoscopic biopsy of peripheral lung lesions.


Subject(s)
Bronchoscopy/methods; Lung Diseases/diagnosis; Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation; Biopsy/methods; Equipment Design; Humans; Reproducibility of Results
12.
Comput Med Imaging Graph ; 32(3): 159-73, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18096365

ABSTRACT

Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.


Subject(s)
Bronchoscopy/methods; Imaging, Three-Dimensional; Lymphography/methods; Radiography, Interventional; Tomography, X-Ray Computed; Video Recording; Animals; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Phantoms, Imaging; Sentinel Lymph Node Biopsy; Swine; User-Computer Interface
13.
Comput Biol Med ; 37(12): 1802-20, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17669390

ABSTRACT

Modern micro-CT and multi-detector helical CT scanners can produce high-resolution 3D digital images of various anatomical trees. The large size and complexity of these trees make it essentially impossible to define them interactively. Automatic approaches have been proposed for a few specific problems, but none of these approaches guarantee extracting geometrically accurate multi-generational tree structures. This paper proposes an interactive system for defining and visualizing large anatomical trees and for subsequent quantitative data mining. The system consists of a large number of tools for automatic image analysis, semi-automatic and interactive tree editing, and an assortment of visualization tools. Results are presented for a variety of 3D high-resolution images.


Subject(s)
Cardiovascular System/diagnostic imaging; Imaging, Three-Dimensional; Tomography, X-Ray Computed/methods; Angiography; Blood Vessels; Heart/diagnostic imaging; Humans; Liver/blood supply; Liver/diagnostic imaging; Lung/blood supply; Lung/diagnostic imaging; User-Computer Interface
14.
IEEE Trans Biomed Eng ; 62(12): 2794-811, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25675452

ABSTRACT

Bronchoscopy is a commonly used minimally invasive procedure for lung-cancer staging. In standard practice, however, physicians differ greatly in their levels of performance. To address this concern, image-guided intervention (IGI) systems have been devised to improve procedure success. Current IGI bronchoscopy systems based on virtual bronchoscopic navigation (VBN), however, require involvement from the attending technician. This lessens physician control and hinders the overall acceptance of such systems. We propose a hands-free VBN system for planning and guiding bronchoscopy. The system introduces two major contributions. First, it incorporates a new procedure-planning method that automatically computes airway navigation plans conforming to the physician's bronchoscopy training and manual dexterity. Second, it incorporates a guidance strategy for bronchoscope navigation that enables user-friendly system control via a foot switch, coupled with a novel position-verification mechanism. Phantom studies verified that the system enables smooth operation under physician control, while also enabling faster navigation than an existing technician-assisted VBN system. In a clinical human study, we noted a 97% bronchoscopy navigation success rate, in line with existing VBN systems, and a mean guidance time per diagnostic site = 52 s. This represents a guidance time often nearly 3 min faster per diagnostic site than guidance times reported for other technician-assisted VBN systems. Finally, an ergonomic study further asserts the system's acceptability to the physician and long-term potential.


Subject(s)
Bronchoscopy/methods; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/surgery; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Aged; Bronchoscopes; Bronchoscopy/instrumentation; Equipment Design; Female; Humans; Male; Middle Aged; Phantoms, Imaging; Surgery, Computer-Assisted/instrumentation
15.
Comput Biol Med ; 62: 222-38, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25957746

ABSTRACT

X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement with ground-truth regions, with a mean coverage = 99.2% and leakage = 0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by > 5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment.
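For reference, the snippet below computes coverage and leakage for a predicted mask against ground truth under one common set of definitions; these definitions are an assumption and may differ from the paper's exact formulas.

```python
import numpy as np

def coverage_and_leakage(pred, truth):
    """Coverage = fraction of the ground-truth region captured by the
    prediction; leakage = predicted volume outside the truth, relative to
    the truth volume. These definitions are assumed for illustration."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    coverage = (pred & truth).sum() / truth.sum()
    leakage = (pred & ~truth).sum() / truth.sum()
    return coverage, leakage

# Example on toy 3D masks.
truth = np.zeros((20, 20, 20), bool); truth[5:15, 5:15, 5:15] = True
pred = np.zeros_like(truth);          pred[5:15, 5:15, 4:15] = True
print(coverage_and_leakage(pred, truth))   # (1.0, 0.1)
```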


Subject(s)
Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Positron-Emission Tomography/methods; Thoracic Cavity; Tomography, X-Ray Computed/methods; Female; Humans; Male
16.
Acad Radiol ; 9(10): 1153-68, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12385510

ABSTRACT

RATIONALE AND OBJECTIVES: The segmentation of airways from CT images is a critical first step for numerous virtual bronchoscopic (VB) applications. Automatic or semiautomatic methods are necessary, since manual segmentation is prohibitively time consuming. The methods must be robust and operate within a reasonable time frame to be useful for clinical VB use. The authors developed an integrated airway segmentation system and demonstrated its effectiveness on a series of human images. MATERIALS AND METHODS: The authors' airway segmentation system draws on two segmentation algorithms: (a) an adaptive region-growing algorithm and (b) a new hybrid algorithm that uses both region growing and mathematical morphology. Images from an ongoing VB study were segmented by means of both the adaptive region-growing and the new hybrid methods. The segmentation volume, branch number estimate, and segmentation quality were determined for each case. RESULTS: The results demonstrate the need for an integrated segmentation system, since no single method is superior for all clinically relevant cases. The region-growing algorithm is the fastest and provides acceptable segmentations for most VB applications, but the hybrid method provides superior airway edge localization, making it better suited for quantitative applications. In addition, the authors show that prefiltering the image data before airway segmentation increases the robustness of both region-growing and hybrid methods. CONCLUSION: The combination of these two algorithms with the prefiltering options allowed the successful segmentation of all test images. The times required for all segmentations were acceptable, and the results were suitable for the authors' VB application needs.
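The sketch below shows plain 6-connected region growing from a trachea seed with a fixed HU threshold, the core operation underlying such airway segmentation; the adaptive threshold selection and morphological hybrid steps of the published system are not reproduced.

```python
import numpy as np
from collections import deque

def region_grow(ct, seed, threshold_hu=-950):
    """6-connected region growing from a trachea seed: accept a voxel if
    its value is below `threshold_hu` (air-like). The adaptive threshold
    selection and leakage checks of the published method are not shown."""
    mask = np.zeros(ct.shape, bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nbr = (z + dz, y + dy, x + dx)
            if (all(0 <= nbr[i] < ct.shape[i] for i in range(3))
                    and not mask[nbr] and ct[nbr] < threshold_hu):
                mask[nbr] = True
                queue.append(nbr)
    return mask

# Example on a synthetic volume containing an air-filled tube.
ct = np.zeros((40, 64, 64))
ct[:, 30:34, 30:34] = -1000.0              # fake airway lumen
print(region_grow(ct, seed=(0, 31, 31)).sum())   # 640 lumen voxels
```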


Subject(s)
Bronchoscopy; Imaging, Three-Dimensional; Respiratory Tract Diseases/diagnosis; Algorithms; Computers, Hybrid; Follow-Up Studies; Humans; Image Processing, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Phantoms, Imaging; Tomography, X-Ray Computed/instrumentation; User-Computer Interface
17.
IEEE Trans Image Process ; 12(9): 1007-15, 2003.
Article in English | MEDLINE | ID: mdl-18237973

ABSTRACT

Of the many proposed image segmentation methods, region growing has been one of the most popular. Research on region growing, however, has focused primarily on the design of feature measures and on growing and merging criteria. Most of these methods have an inherent dependence on the order in which the points and regions are examined. This weakness implies that a desired segmented result is sensitive to the selection of the initial growing points. We define a set of theoretical criteria for a subclass of region-growing algorithms that are insensitive to the selection of the initial growing points. This class of algorithms, referred to as symmetric region growing algorithms, leads to a single-pass region-growing algorithm applicable to any dimensionality of images. Furthermore, they lead to region-growing algorithms that are both memory- and computation-efficient. Results illustrate the method's efficiency and its application to 3D medical image segmentation.
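A small illustration of the order-independence idea: when the merge test depends only on the unordered pixel pair, regions can be built with a union-find structure, and the result cannot depend on seed choice or scan order. The specific criterion used below (a gray-level difference bound) is an assumed example, not the paper's criteria.

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def symmetric_grow(img, max_diff=10):
    """Order-independent growing on a 2D image: merge neighboring pixels
    whenever a symmetric pairwise test holds (here |I(p) - I(q)| <= max_diff,
    an assumed criterion). Because the test depends only on the unordered
    pair, the final regions do not depend on seeds or processing order."""
    h, w = img.shape
    flat = img.ravel()
    uf = UnionFind(flat.size)
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((0, 1), (1, 0)):                 # right/down neighbors
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    if abs(int(flat[i]) - int(flat[j])) <= max_diff:
                        uf.union(i, j)
    labels = np.array([uf.find(i) for i in range(flat.size)])
    return labels.reshape(img.shape)

# Example: two flat regions separated by a sharp edge -> exactly 2 labels.
img = np.zeros((4, 8), int)
img[:, 4:] = 100
print(len(np.unique(symmetric_grow(img))))   # 2
```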

18.
Comput Biol Med ; 32(2): 55-71, 2002 Mar.
Article in English | MEDLINE | ID: mdl-11879820

ABSTRACT

Micro-CT scanners can generate large high-resolution three-dimensional (3D) digital images of small-animal organs, such as rat hearts. Such images enable studies of basic physiologic questions on coronary branching geometry and fluid transport. Performing such an analysis requires three steps: (1) extract the arterial tree from the image; (2) compute quantitative geometric data from the extracted tree; and (3) perform a numerical analysis of the computed data. Because a typical coronary arterial tree consists of hundreds of branches and many generations, it is impractical to perform such an integrated study manually. An automatic method exists for performing step (1), extracting the tree, but little effort has been made on the other two steps. We propose an environment for performing a complete study. Quantitative measures for arterial-lumen cross-sectional area, inter-branch segment length, branch surface area and others at the generation, inter-branch, and intra-branch levels are computed. A human user can then work with the quantitative data in an interactive visualization system. The system provides various forms of viewing and permits interactive tree editing for "on the fly" correction of the quantitative data. We illustrate the methodology for 3D micro-CT rat heart images.
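The following sketch computes a few of the named per-branch measures from centerline samples, assuming a circular lumen; the field layout and formulas are illustrative rather than the system's actual definitions.

```python
import numpy as np

def branch_measures(points_mm, radii_mm):
    """Per-branch geometry from centerline samples, assuming a circular
    lumen: length from summed point-to-point distances, mean cross-sectional
    area from the sampled radii, and a frustum-style lateral surface-area
    estimate. The fields and formulas are illustrative assumptions."""
    pts = np.asarray(points_mm, float)
    r = np.asarray(radii_mm, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # per-segment lengths
    length = seg.sum()
    mean_area = np.pi * (r ** 2).mean()
    surface = np.pi * (r[:-1] + r[1:]) @ seg             # sum of pi*(r1+r2)*L
    return {"length_mm": length, "mean_area_mm2": mean_area,
            "surface_mm2": surface}

# Example: one short branch sampled at three centerline points.
print(branch_measures([(0, 0, 0), (0, 0, 4), (0, 3, 8)], [2.0, 1.8, 1.5]))
```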


Subject(s)
Coronary Angiography/instrumentation; Image Processing, Computer-Assisted/instrumentation; Imaging, Three-Dimensional/instrumentation; Radiographic Magnification/instrumentation; Tomography, X-Ray Computed/instrumentation; Animals; Rats; Rats, Sprague-Dawley; Software
19.
IEEE Trans Biomed Eng ; 61(3): 638-57, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24235246

ABSTRACT

With the development of multidetector computed-tomography (MDCT) scanners and ultrathin bronchoscopes, the use of bronchoscopy for diagnosing peripheral lung-cancer nodules is becoming a viable option. The work flow for assessing lung cancer consists of two phases: 1) 3-D MDCT analysis and 2) live bronchoscopy. Unfortunately, the yield rates for peripheral bronchoscopy have been reported to be as low as 14%, and bronchoscopy performance varies considerably between physicians. Recently proposed image-guided systems have shown promise for assisting with peripheral bronchoscopy. Yet, MDCT-based route planning to target sites has relied on tedious, error-prone techniques. In addition, route planning tends not to incorporate known anatomical, device, and procedural constraints that impact a feasible route. Finally, existing systems do not effectively integrate MDCT-derived route information into the live guidance process. We propose a system that incorporates an automatic optimal route-planning method, which integrates known route constraints. Furthermore, our system offers a natural translation of the MDCT-based route plan into the live guidance strategy via MDCT/video data fusion. An image-based study demonstrates the route-planning method's functionality. Next, we present a prospective lung-cancer patient study in which our system achieved a successful navigation rate of 91% to target sites. Furthermore, when compared to a competing commercial system, our system enabled bronchoscopy more than two airways deeper into the airway-tree periphery, with a sample time that was nearly 2 min shorter on average. Finally, our system's ability to almost perfectly predict the depth of a bronchoscope's navigable route in advance represents a substantial benefit of optimal route planning.
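As a simplified picture of constraint-aware route planning, the sketch below runs Dijkstra over an airway-tree graph while skipping branches too narrow for the device; the single diameter constraint and the graph format are assumptions standing in for the richer anatomical, device, and procedural constraints described above.

```python
import heapq

def feasible_route(graph, diameters_mm, start, target, scope_diameter_mm=2.8):
    """Dijkstra over an airway-tree graph, skipping branches narrower than
    the device. This single diameter constraint and the graph format are
    assumptions, not the paper's full constraint set.
    graph: {node: [(child, length_mm), ...]}; diameters_mm: {node: mm}."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph.get(u, []):
            if diameters_mm[v] < scope_diameter_mm:      # device cannot pass
                continue
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None                                      # no feasible route
    path, node = [target], target
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Example: trachea -> right main bronchus -> two hypothetical branches.
graph = {"trachea": [("rmb", 40.0)], "rmb": [("b1", 25.0), ("b2", 25.0)],
         "b1": [], "b2": []}
diam = {"trachea": 18.0, "rmb": 12.0, "b1": 3.5, "b2": 2.0}
print(feasible_route(graph, diam, "trachea", "b1"))   # (['trachea', 'rmb', 'b1'], 65.0)
```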


Subject(s)
Bronchoscopy/methods; Image Processing, Computer-Assisted/methods; Radiography, Thoracic/methods; Surgery, Computer-Assisted/methods; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Female; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/surgery; Male; Middle Aged; Tomography, X-Ray Computed/methods; Video Recording; Young Adult
20.
IEEE Trans Med Imaging ; 32(8): 1376-96, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23508260

ABSTRACT

Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient's 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope's live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas-Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥ 1 s/frame speeds of other methods and indicates the method's potential for real-time continuous registration. A human phantom study confirms the method's efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method's efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients.
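The toy code below captures the inverse-compositional idea for the simplest case, a 2D translation: template gradients and the Hessian are precomputed, so each iteration needs only a warp and a residual. It is a didactic stand-in, not the paper's 3D CT-video registration method.

```python
import numpy as np

def shift_image(img, t):
    """Integer circular shift by t = (tx, ty); kept trivial on purpose."""
    return np.roll(np.roll(img, int(round(t[1])), axis=0),
                   int(round(t[0])), axis=1)

def inverse_compositional_translation(template, image, n_iters=50, tol=1e-4):
    """Inverse-compositional alignment for a pure 2D translation: template
    gradients, steepest-descent images, and the Hessian are precomputed, so
    each iteration only warps the image and solves a tiny linear system."""
    gy, gx = np.gradient(template.astype(float))
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)     # steepest-descent images
    H_inv = np.linalg.inv(J.T @ J)                     # precomputed Hessian
    p = np.zeros(2)                                    # current (tx, ty)
    for _ in range(n_iters):
        warped = shift_image(image, -p)                # approximates I(x + p)
        error = (warped - template).ravel()
        dp = H_inv @ (J.T @ error)
        p -= dp                                        # inverse-compositional update
        if np.linalg.norm(dp) < tol:
            break
    return p

# Example: recover a known (3, -2) pixel shift of a smooth synthetic image.
yy, xx = np.mgrid[:64, :64]
tmpl = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 200.0)
img = shift_image(tmpl, (3, -2))
print(np.round(inverse_compositional_translation(tmpl, img)))   # ~[ 3. -2.]
```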


Subject(s)
Bronchoscopy/methods; Image Processing, Computer-Assisted/methods; Surgery, Computer-Assisted/methods; Thoracic Surgery, Video-Assisted/methods; Tomography, X-Ray Computed/methods; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Female; Humans; Lung/anatomy & histology; Lung/diagnostic imaging; Lung/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Male; Middle Aged