Results 1 - 20 of 42
1.
Article in English | MEDLINE | ID: mdl-38720159

ABSTRACT

PURPOSE: This paper considers a new problem setting for multi-organ segmentation based on the following observations. In practice, (1) collecting a large-scale dataset from various institutes is usually impeded by privacy issues; (2) many images are left unlabeled because slice-by-slice annotation is costly; and (3) datasets may exhibit inconsistent, partial annotations across institutes. Learning a federated model from these distributed, partially labeled, and unlabeled samples is an unexplored problem. METHODS: To simulate this multi-organ segmentation problem, several distributed clients and a central server are maintained. The central server coordinates with the clients to learn a global model from distributed private datasets, each comprising a small portion of partially labeled images and a large portion of unlabeled images. To address this problem, a practical framework is proposed that unifies the partially supervised learning (PSL), semi-supervised learning (SSL), and federated learning (FL) paradigms through dedicated PSL, SSL, and FL modules. The PSL module learns from partially labeled samples. The SSL module extracts valuable information from unlabeled data. Finally, the FL module aggregates local information from the distributed clients to generate a global statistical model. Through the collaboration of the three modules, the presented scheme can exploit these distributed, imperfect datasets to train a generalizable model. RESULTS: The proposed method was extensively evaluated on multiple abdominal CT datasets, achieving an average Dice of 84.83% and a 95HD of 41.62 mm for multi-organ (liver, spleen, and stomach) segmentation. Moreover, its efficacy in transfer learning further demonstrated its good generalization ability for downstream segmentation tasks. CONCLUSION: This study considers a novel multi-organ segmentation problem, which aims to develop a generalizable model using distributed, partially labeled, and unlabeled CT images. A practical framework is presented and, through extensive validation, shown to be an effective solution with strong potential for this challenging problem.
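A minimal sketch of what the FL module's server-side aggregation could look like, assuming a FedAvg-style weighted average of client parameters; the abstract does not specify the aggregation rule, and all names here are illustrative:

```python
# Hypothetical FedAvg-style aggregation: average per-client parameter
# dicts, weighted by each client's local dataset size.

def fedavg(client_params, client_sizes):
    """Aggregate client parameter dicts into one global parameter dict."""
    total = sum(client_sizes)
    global_params = {}
    for key in client_params[0]:
        global_params[key] = sum(
            params[key] * (n / total)
            for params, n in zip(client_params, client_sizes)
        )
    return global_params

clients = [{"w": 1.0}, {"w": 3.0}]   # toy one-parameter "models"
sizes = [10, 30]                     # local dataset sizes
print(fedavg(clients, sizes))        # {'w': 2.5}
```

The larger client contributes proportionally more, which is the usual FedAvg design choice when local datasets differ in size.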

2.
Healthc Technol Lett ; 11(2-3): 146-156, 2024.
Article in English | MEDLINE | ID: mdl-38638500

ABSTRACT

This paper focuses on a new and challenging problem in instrument segmentation: learning a generalizable model from distributed datasets with various imperfect annotations. Collecting a large-scale dataset for centralized learning is usually impeded by data silos and privacy issues. Besides, local clients, such as hospitals or medical institutes, may hold datasets with diverse and imperfect annotations. These datasets can include scarce annotations (many samples are unlabeled), noisy labels prone to errors, and scribble annotations with less precision. Federated learning (FL) has emerged as an attractive paradigm for developing global models with these locally distributed datasets. However, its potential in instrument segmentation has yet to be fully investigated. Moreover, the problem of learning from various imperfect annotations in an FL setup is rarely studied, even though it presents a more practical and beneficial scenario. This work rethinks instrument segmentation in such a setting and proposes a practical FL framework for this issue. Notably, this approach surpassed centralized learning under various imperfect annotation settings. This method establishes a foundational benchmark, and future work can build upon it by considering each client owning various annotations, aligning more closely with real-world complexities.

3.
Healthc Technol Lett ; 11(2-3): 157-166, 2024.
Article in English | MEDLINE | ID: mdl-38638498

ABSTRACT

This study focuses on enhancing the inference speed of laparoscopic tool detection on embedded devices. Laparoscopy, a minimally invasive surgery technique, markedly reduces patient recovery times and postoperative complications. Real-time laparoscopic tool detection assists laparoscopy by providing information for surgical navigation, and its implementation on embedded devices is gaining interest due to the portability, network independence, and scalability of the devices. However, embedded devices often face computation resource limitations, potentially hindering inference speed. To mitigate this concern, this work introduces a two-fold modification to the YOLOv7 model: the feature channels are halved and RepBlock is integrated, yielding the YOLOv7-RepFPN model. This configuration leads to a significant reduction in computational complexity. Additionally, the focal EIoU (efficient intersection over union) loss function is employed for bounding box regression. Experimental results on an embedded device demonstrate that for frame-by-frame laparoscopic tool detection, the proposed YOLOv7-RepFPN achieved an mAP of 88.2% (with IoU set to 0.5) on a custom dataset based on EndoVis17, and an inference speed of 62.9 FPS. Contrasting with the original YOLOv7, which attained an 89.3% mAP and 41.8 FPS under identical conditions, the proposed methodology increases speed by 21.1 FPS while maintaining detection accuracy. These results confirm the effectiveness of the approach.
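As background for the bounding-box regression loss, the IoU core can be sketched as follows; the focal EIoU loss used in the paper adds center-distance and width/height penalty terms on top of this, which are omitted here:

```python
# Plain IoU for axis-aligned boxes given as (x1, y1, x2, y2).
# EIoU-style losses subtract extra penalty terms from 1 - IoU;
# only the IoU core is shown in this sketch.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7 ≈ 0.1429
```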

4.
Int J Comput Assist Radiol Surg ; 18(5): 945-952, 2023 May.
Article in English | MEDLINE | ID: mdl-36894738

ABSTRACT

PURPOSE: Minimally invasive surgery (MIS) using a thoraco- or laparoscope is becoming a more common surgical technique. In MIS, a magnified view from a thoracoscope helps surgeons conduct precise operations. However, there is a risk of the visible area becoming narrow. To confirm that the operative field is safe, the surgeon must draw the thoracoscope back to check the marginal area of the target and insert it again many times during MIS. To reduce the surgeon's load, we aim to visualize the entire thoracic cavity using a newly developed device called the "panorama vision ring" (PVR). METHOD: The PVR is used instead of a wound retractor or a trocar. It is a ring-type socket with one big hole for the thoracoscope and four small holes for tiny cameras placed around the big hole. The views from the tiny cameras are fused into one wider view that visualizes the entire thoracic cavity. A surgeon can proceed with an operation while checking what lies outside of the thoracoscopic view, and can also check from the image of the entire cavity whether or not bleeding has occurred. RESULTS: We evaluated the view-expansion ability of the PVR using a three-dimensional full-scale thoracic model. The experimental results showed that the entire thoracic cavity could be made visible in a panoramic view generated by the PVR. We also demonstrated pulmonary lobectomy in virtual MIS using the PVR: surgeons could perform a pulmonary lobectomy while checking the entire cavity. CONCLUSION: We developed the PVR, which uses tiny auxiliary cameras to create a panoramic view of the entire thoracic cavity during MIS. We aim to make MIS safer for patients and more comfortable for surgeons through the development of the PVR.


Subject(s)
Surgeons , Thoracoscopy , Female , Humans , Thoracoscopy/methods , Minimally Invasive Surgical Procedures/methods
5.
Int J Comput Assist Radiol Surg ; 18(3): 461-472, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36273078

ABSTRACT

PURPOSE: This paper proposes a deep learning-based method for abdominal artery segmentation. Blood vessel structure information is essential for diagnosis and treatment, and accurate blood vessel segmentation is critical to preoperative planning. Although deep learning-based methods perform well on large organs, segmenting small organs such as blood vessels is challenging due to their complicated branching structures and positions. We propose a 3D deep learning network designed from a skeleton context-aware perspective to improve segmentation accuracy. In addition, we propose a novel 3D patch generation method that strengthens the structural diversity of the training data set. METHOD: The proposed method segments abdominal arteries from an abdominal computed tomography (CT) volume using a 3D fully convolutional network (FCN). We add two auxiliary tasks to the network to extract the skeleton context of the abdominal arteries. In addition, our skeleton-based patch generation (SBPG) method further enables the FCN to segment small arteries. SBPG generates a 3D patch from a CT volume by leveraging artery skeleton information. These methods improve the segmentation accuracy of small arteries. RESULTS: We used 20 cases of abdominal CT volumes to evaluate the proposed method. The experimental results showed that our method outperformed previous methods in segmentation accuracy. The averaged precision rate, recall rate, and F-measure were 95.5%, 91.0%, and 93.2%, respectively. Compared to a baseline method, our method improved the averaged recall rate by 1.5% and the averaged F-measure by 0.7%. CONCLUSIONS: We present a skeleton context-aware 3D FCN to segment abdominal arteries from an abdominal CT volume, together with a 3D patch generation method. Our fully automated method segmented most of the abdominal artery regions and produced competitive segmentation performance compared to previous methods.
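The reported precision, recall, and F-measure follow the standard definitions, which can be sketched voxel-wise over flattened boolean masks:

```python
# Standard voxel-wise precision, recall, and F-measure from two
# equally sized boolean masks (here flat lists of 0/1 values).

def prf(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

p, r, f = prf([1, 1, 1, 0], [1, 1, 0, 1])
print(p, r, f)   # all three equal 2/3 for this toy example
```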


Subject(s)
Abdomen , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Arteries , Skeleton
6.
Int J Comput Assist Radiol Surg ; 18(3): 473-482, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36271215

ABSTRACT

PURPOSE: Segmentation tasks are important for computer-assisted surgery systems because they provide the shapes of organs and the locations of instruments. What prevents the most powerful segmentation approaches from becoming practical applications is the requirement for annotated data. Active learning provides strategies to dynamically select the most informative samples and thereby reduce the annotation workload. However, most previous active learning work fails to select frames containing classes that appear with low frequency, even though such classes are common in laparoscopic videos, resulting in poor segmentation performance. Furthermore, few previous works have explored the unselected data to improve active learning. Therefore, in this work, we focus on these classes to improve segmentation performance. METHODS: We propose a class-wise confidence bank that stores and updates the confidence scores for each class, and a new acquisition function based on the confidence bank. We apply the confidence scores to explore an unlabeled dataset by combining them with a class-wise data mixture method, exploiting unlabeled datasets without any annotation. RESULTS: We validated our proposal on two open-source datasets, CholecSeg8k and RobSeg2017, and observed that its performance surpassed previous active learning studies with about [Formula: see text] improvement on CholecSeg8k, especially for classes with a low appearance frequency. For RobSeg2017, we conducted experiments with small and large annotation budgets to validate the effectiveness of our proposal in both situations. CONCLUSIONS: We presented a class-wise confidence score to improve the acquisition function for active learning and explored unlabeled data with the proposed score, which results in a large improvement over the compared methods. The experiments also showed that our proposal improves segmentation performance for classes with a low appearance frequency.
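A hypothetical sketch of a class-wise confidence bank and its acquisition function, assuming an exponential-moving-average update and a "prefer low-confidence classes" score; the paper's exact update and scoring rules may differ, and all names are illustrative:

```python
# Toy class-wise confidence bank: keep a running confidence per class
# and score frames higher when they contain low-confidence classes.

class ConfidenceBank:
    def __init__(self, num_classes, momentum=0.9):
        self.conf = [0.5] * num_classes   # running confidence per class
        self.momentum = momentum

    def update(self, cls, score):
        """Exponential moving average of the observed confidence."""
        m = self.momentum
        self.conf[cls] = m * self.conf[cls] + (1 - m) * score

    def acquisition(self, frame_classes):
        """Frames whose classes are least confident score highest."""
        return sum(1.0 - self.conf[c] for c in frame_classes)

bank = ConfidenceBank(num_classes=3)
bank.update(2, 0.1)   # the model is unsure about class 2
# a frame containing class 2 is now preferred over one without it
print(bank.acquisition([0, 2]) > bank.acquisition([0, 1]))   # True
```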


Subject(s)
Laparoscopy , Problem-Based Learning , Humans , Image Processing, Computer-Assisted/methods
7.
Int J Comput Assist Radiol Surg ; 16(10): 1795-1804, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34392469

ABSTRACT

PURPOSE: Bronchoscopists rely on navigation systems during bronchoscopy to reduce the risk of getting lost in the complex, tree-like structure of the bronchus and its homogeneous lumens. We propose a patient-specific branching level estimation method for bronchoscopic navigation because it is vital to identify which branches in the bronchial tree are being examined. METHODS: We estimate the branching level by integrating the changes in the number of bronchial orifices and the camera motions among frames. We extract the bronchial orifice regions from a depth image, which is generated from real bronchoscopic images using a cycle generative adversarial network (CycleGAN). We calculate the number of orifice regions using the vertical and horizontal projection profiles of the depth images and obtain the camera-moving direction using feature point-based camera motion estimation. The changes in the number of bronchial orifices are combined with the camera-moving direction to estimate the branching level. RESULTS: We used three in vivo cases and one phantom case to train the CycleGAN model and four in vivo cases to validate the proposed method. We manually created the ground truth of the branching level. The experimental results showed that the proposed method can estimate the branching level with an average accuracy of 87.6%. The processing time per frame was about 61 ms. CONCLUSION: The experimental results show that it is feasible to estimate the branching level using the number of bronchial orifices and camera-motion estimation from real bronchoscopic images.
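Counting orifices from a projection profile of the depth image can be sketched as threshold-crossing peak counting; this toy example assumes a single fixed threshold and a vertical (column-sum) profile, which the paper may handle differently:

```python
# Count orifice-like peaks in a column-sum projection profile of a
# depth image (toy integer data; a real depth image would be float).

def count_orifices(depth, threshold):
    # vertical projection profile: sum each column
    profile = [sum(row[i] for row in depth) for i in range(len(depth[0]))]
    peaks, above = 0, False
    for v in profile:
        if v > threshold and not above:   # rising edge = new peak
            peaks += 1
            above = True
        elif v <= threshold:
            above = False
    return peaks

depth = [
    [0, 5, 0, 0, 6, 0],
    [0, 4, 0, 0, 7, 0],
]
print(count_orifices(depth, threshold=3))   # 2
```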


Subject(s)
Algorithms , Imaging, Three-Dimensional , Bronchi/diagnostic imaging , Bronchoscopy , Humans , Phantoms, Imaging
8.
Int J Comput Assist Radiol Surg ; 15(10): 1619-1630, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32770324

ABSTRACT

PURPOSE: Due to the complex anatomical structure of the bronchi and the resembling inner surfaces of airway lumina, bronchoscopic examinations require additional 3D navigational information to assist physicians. A bronchoscopic navigation system provides the position of the endoscope in CT images together with augmented anatomical information. To overcome the shortcomings of previous navigation systems, we propose using visual simultaneous localization and mapping (SLAM) to improve bronchoscope tracking in navigation systems. METHODS: We propose an improved version of the visual SLAM algorithm and use it to estimate the bronchoscope camera pose, taking patient-specific bronchoscopic video as input. We improve the tracking procedure by adding narrower criteria in feature matching to avoid mismatches. For validation, we collected several trials of bronchoscopic videos with a bronchoscope camera by exploring synthetic rubber bronchus phantoms. We simulated breathing by adding a periodic force that deforms the phantom. We compared the camera positions from visual SLAM with a manually created ground truth of the camera pose. The number of successfully tracked frames was also compared between the original SLAM and the proposed method. RESULTS: We successfully tracked 29,559 frames at a speed of 80 ms per frame, corresponding to 78.1% of all acquired frames. The average root mean square error of our technique was 3.02 mm, while that of the original was 3.61 mm. CONCLUSION: We present a novel methodology using visual SLAM for bronchoscope tracking. Our experimental results showed that it is feasible to use visual SLAM to estimate the bronchoscope camera pose during bronchoscopic navigation. Our proposed method tracked more frames and showed higher accuracy than the original technique. Future work will include combining the tracking results with virtual bronchoscopy and validation on in vivo cases.
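The "narrower criteria in feature matching" can be illustrated with a Lowe-style ratio test, which is an assumption here rather than necessarily the paper's exact criterion: a match is kept only when the best candidate is clearly better than the second best, rejecting ambiguous matches:

```python
# Ratio-test feature matching on toy 2D descriptors: for each query
# descriptor, accept the nearest neighbour only if its squared distance
# is well below that of the second-nearest neighbour.

def match_features(desc_a, desc_b, ratio=0.7):
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)), j)
            for j, db in enumerate(desc_b)
        )
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:     # unambiguous match only
            matches.append((i, best[1]))
    return matches

a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (5.0, 5.1), (0.2, 0.1)]
print(match_features(a, b))   # [(0, 0), (1, 1)]
```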


Subject(s)
Bronchi/diagnostic imaging , Bronchoscopes , Bronchoscopy/methods , Algorithms , Computer Simulation , Humans , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Reproducibility of Results
9.
Comput Med Imaging Graph ; 77: 101642, 2019 10.
Article in English | MEDLINE | ID: mdl-31525543

ABSTRACT

This paper presents a new approach for precisely estimating the renal vascular dominant regions using a Voronoi diagram. To provide computer-assisted diagnostics for the pre-surgical simulation of partial nephrectomy, we must obtain information on the renal arteries and the renal vascular dominant regions. We propose a fully automatic segmentation method that combines a neural network and a tensor-based graph-cut method to precisely extract the kidney and renal arteries. First, we use a convolutional neural network to localize the kidney regions and extract tiny renal arteries with the tensor-based graph-cut method. Then we generate a Voronoi diagram to estimate the renal vascular dominant regions based on the segmented kidney and renal arteries. Kidney segmentation in 27 cases with 8-fold cross-validation reached a Dice score of 95%, and renal artery segmentation in 8 cases achieved a centerline overlap ratio of 80%. Each partition region corresponds to a renal vascular dominant region; the final dominant-region estimation achieved a Dice coefficient of 80%. A clinical application showed the potential of our proposed approach in a real clinical surgical environment. Further validation using a large-scale database is left as future work.
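The Voronoi-based dominant-region estimation amounts to assigning each kidney voxel to its nearest artery branch point; a brute-force sketch with toy 3D coordinates (the paper's actual implementation details may differ):

```python
# Nearest-seed (Voronoi) labeling: every voxel gets the index of the
# closest artery point, partitioning the kidney into dominant regions.

def dominant_regions(voxels, artery_points):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [
        min(range(len(artery_points)), key=lambda k: dist2(v, artery_points[k]))
        for v in voxels
    ]

arteries = [(0, 0, 0), (10, 0, 0)]          # two artery branch points
voxels = [(1, 1, 0), (9, 0, 1), (4, 0, 0)]  # toy kidney voxels
print(dominant_regions(voxels, arteries))   # [0, 1, 0]
```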


Subject(s)
Arteries/anatomy & histology , Kidney/blood supply , Neural Networks, Computer , Tomography, X-Ray Computed , Deep Learning , Humans , Image Processing, Computer-Assisted , Nephrectomy
10.
Int J Comput Assist Radiol Surg ; 14(12): 2069-2081, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31493112

ABSTRACT

PURPOSE: The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment. Information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Although deep learning-based segmentation methods achieve good accuracy on large organs, small organs such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because arteries are among the small organs that are difficult to segment, we introduce an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. METHOD: Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts the imbalance between the number of patches containing artery regions and the number of patches without them. These methods improved the segmentation accuracy of small artery regions. Furthermore, we introduced a three-plane segmentation approach to obtain clear 3D segmentation results from the 2D patch-based processes. In the three-plane approach, we performed three segmentation processes using patches generated on the axial, coronal, and sagittal planes and combined the results into a 3D segmentation result. RESULTS: The evaluation results of the proposed method using 20 cases of abdominal CT volumes show that the averaged F-measure, precision, and recall rates were 87.1%, 85.8%, and 88.4%, respectively. These results outperform our previous automated FCN-based segmentation method, and the method offers competitive performance compared to previous blood vessel segmentation methods for 3D volumes. CONCLUSIONS: We developed an abdominal artery segmentation method using an FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions. In addition, the three-plane approach generated good 3D segmentation results.
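The three-plane combination step can be sketched as a per-voxel majority vote, assuming that is how the axial, coronal, and sagittal results are merged (the abstract says only that the results are combined):

```python
# Majority vote over three binary per-voxel predictions (flattened):
# a voxel is foreground when at least two of the three planes agree.

def combine_three_planes(axial, coronal, sagittal):
    return [
        1 if (a + c + s) >= 2 else 0
        for a, c, s in zip(axial, coronal, sagittal)
    ]

print(combine_three_planes([1, 0, 1], [1, 1, 0], [0, 1, 0]))   # [1, 1, 0]
```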


Subject(s)
Abdomen/blood supply , Arteries/diagnostic imaging , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Cone-Beam Computed Tomography , Humans
11.
J Med Imaging (Bellingham) ; 4(4): 044502, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29152534

ABSTRACT

This paper presents a local intensity structure analysis based on an intensity-targeted radial structure tensor (ITRST) and a blob-like structure enhancement filter based on it (the ITRST filter) for a mediastinal lymph node detection algorithm from chest computed tomography (CT) volumes. Although a filter based on conventional radial structure tensor (RST) analysis (the RST filter) can be utilized to detect lymph nodes, some lymph nodes adjacent to regions with extremely high or low intensities cannot be detected. Therefore, we propose the ITRST filter, which integrates prior knowledge of the detection target's intensity range into the RST filter. Our lymph node detection algorithm consists of two steps: (1) obtaining candidate regions using the ITRST filter and (2) removing false positives (FPs) using a support vector machine classifier. We evaluated the lymph node detection performance of the ITRST filter on 47 contrast-enhanced chest CT volumes and compared it with the RST and Hessian filters. The detection rate of the ITRST filter was 84.2% with 9.1 FPs/volume for lymph nodes whose short axis was at least 10 mm, outperforming both the RST and Hessian filters.

12.
Med Image Anal ; 39: 18-28, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28410505

ABSTRACT

Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size, and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on the vessel structure around the pancreatic tissue and demonstrate its application to multi-atlas pancreas segmentation. Our method utilizes the vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. We also investigate two ways of applying the vessel structure information to atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%.
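The atlas selection step reduces to ranking atlases by a pancreas-focused similarity score and keeping the top-k; in this sketch the vessel-derived similarity is abstracted into one hypothetical scalar per atlas:

```python
# Top-k atlas selection by similarity score (higher = more similar).
# In the paper the score is derived from vessel structure around the
# pancreas; here it is a stand-in scalar per atlas.

def select_atlases(similarities, k):
    order = sorted(range(len(similarities)), key=lambda i: -similarities[i])
    return order[:k]

# atlases 1 and 3 resemble the target most, so they are selected
print(select_atlases([0.2, 0.9, 0.5, 0.7], k=2))   # [1, 3]
```

The selected atlases would then be registered to the target volume and their labels fused, as in standard multi-atlas segmentation.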


Subject(s)
Pancreas/diagnostic imaging , Tomography, X-Ray Computed/methods , Abdomen/diagnostic imaging , Adult , Aged , Aged, 80 and over , Algorithms , Female , Humans , Male , Middle Aged , Reproducibility of Results
13.
Int J Comput Assist Radiol Surg ; 12(6): 1041-1048, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28275889

ABSTRACT

PURPOSE: For safe and reliable laparoscopic surgery, it is important to determine individual differences in blood vessels, such as their positions, shapes, and branching structures. Consequently, a computer-assisted laparoscopy system that displays blood vessel structures with anatomical labels would be extremely beneficial. This paper details an automated anatomical labeling method for abdominal arteries and veins extracted from 3D CT volumes. METHODS: The proposed method represents a blood vessel tree as a probabilistic graphical model using conditional random fields (CRFs). An adaptive gradient algorithm is adopted for structure learning. The anatomical labeling of blood vessel branches is performed by maximum a posteriori estimation. RESULTS: We applied the proposed method to 50 cases of arterial- and portal-phase abdominal X-ray CT volumes. The experimental results showed that the F-measures of the proposed method for abdominal arteries and veins were 94.4% and 86.9%, respectively. CONCLUSION: We developed an automated anatomical labeling method that annotates each blood vessel branch of the abdominal arteries and veins using CRFs. The proposed method outperformed a state-of-the-art method.


Subject(s)
Arteries/diagnostic imaging , Radiography, Abdominal , Veins/diagnostic imaging , Abdomen/diagnostic imaging , Algorithms , Humans , Laparoscopy/methods , Models, Statistical , Tomography, X-Ray Computed/methods
14.
Int J Comput Assist Radiol Surg ; 12(1): 39-50, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27431209

ABSTRACT

PURPOSE: Polyps found during CT colonography can be removed by colonoscopic polypectomy. A colonoscope navigation system that guides a physician to polyp positions while performing colonoscopic polypectomy is required, and colonoscope tracking methods are essential for implementing such systems. Previous colonoscope tracking methods have failed when the colon deforms during colonoscope insertion. This paper proposes a colonoscope tracking method that is robust against colon deformations. METHOD: The proposed method generates a colon centerline from a CT volume and a curved line representing the colonoscope shape (colonoscope line) using electromagnetic sensors. We find correspondences between points on a deformed colon centerline and the colonoscope line by a landmark-based coarse correspondence-finding process and a length-based fine correspondence-finding process. Even if the coarse process fails to find some correspondences, which occurs under colon deformation, the fine process can find correct correspondences by using previously recorded line lengths. RESULT: Experimental results using a colon phantom showed that the proposed method finds the colonoscope tip position with tracking errors smaller than 50 mm in most trials. A physician specializing in gastroenterology commented that tracking errors smaller than 50 mm are acceptable, because polyps are observable from the colonoscope camera when the colonoscope tip is within 50 mm of the polyp. CONCLUSIONS: We developed a colonoscope tracking method that is robust against deformations of the colon. Because the process was designed with colon deformations in mind, the proposed method can track the colonoscope tip position even when the colon deforms.
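The length-based fine correspondence finding can be illustrated by mapping a traversed arc length onto a polyline centerline; coordinates here are toy 2D data, whereas the method operates on 3D centerlines:

```python
# Walk a given arc length along a polyline and return the point reached;
# this maps an insertion length measured on the colonoscope line onto
# the colon centerline.

def point_at_length(polyline, target):
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if travelled + seg >= target:
            t = (target - travelled) / seg          # fraction along this segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += seg
    return polyline[-1]                             # past the end: clamp

centerline = [(0, 0), (4, 0), (4, 4)]   # L-shaped toy centerline
print(point_at_length(centerline, 6.0))  # (4.0, 2.0)
```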


Subject(s)
Colon/surgery , Colonic Polyps/surgery , Computed Tomographic Colonography , Colonoscopes , Colonoscopy/methods , Phantoms, Imaging , Surgery, Computer-Assisted/methods , Colon/diagnostic imaging , Colonic Polyps/diagnostic imaging , Humans , Magnets , Models, Anatomic
15.
Int J Comput Assist Radiol Surg ; 12(2): 245-261, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27796791

ABSTRACT

PURPOSE: Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for computerized lung cancer detection, emphysema diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3D airway tree structure from a CT volume is quite a challenging task. Several researchers have proposed automated airway segmentation algorithms based primarily on region growing and machine learning techniques. However, these methods fail to detect the peripheral bronchial branches, which results in a large amount of leakage. This paper presents a novel approach for more accurate extraction of the complex airway tree. METHODS: The proposed segmentation method is composed of three steps. First, Hessian analysis is utilized to enhance tube-like structures in the CT volume; then, an adaptive multiscale cavity enhancement filter is employed to detect cavity-like structures of different radii. In the second step, a support vector machine is utilized to remove false positive (FP) regions from the result obtained in the previous step. Finally, a graph-cut algorithm refines the candidate voxels into an integrated airway tree. RESULTS: A test dataset including 50 standard-dose chest CT volumes was used to evaluate the proposed method. The average extraction rate was about 79.1% with a significantly decreased FP rate. CONCLUSION: A new airway segmentation method based on local intensity structure and machine learning was developed. The method was shown to be feasible for airway segmentation in computer-aided diagnosis systems for the lung and in bronchoscope guidance systems.


Subject(s)
Algorithms , Bronchi/diagnostic imaging , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Lung Diseases/diagnostic imaging , Machine Learning , Automation , Bronchoscopy , Diagnosis, Computer-Assisted/methods , Humans , Lung/diagnostic imaging , Organ Size , Support Vector Machine , Thorax , Tomography, X-Ray Computed/methods
16.
J Med Imaging (Bellingham) ; 2(4): 044004, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26697510

ABSTRACT

Laparoscopic surgery, a minimally invasive surgical technique that is now widely performed, requires creating a working space (pneumoperitoneum) by infusing carbon dioxide ([Formula: see text]) gas into the abdominal cavity. A virtual pneumoperitoneum method is proposed that simulates the motion of the abdominal wall and viscera under pneumoperitoneum based on mass-spring-damper models (MSDMs) with mechanical properties. The proposed method simulates the pneumoperitoneum based on MSDMs and Newton's equations of motion. The parameters of the MSDMs are determined from anatomical knowledge of the mechanical properties of human tissues. Virtual [Formula: see text] gas pressure is applied to the boundary surface of the abdominal cavity, and the abdominal shape after creation of the pneumoperitoneum is computed by solving the equations of motion. The mean position error of the proposed method using 10 mmHg virtual gas pressure was [Formula: see text], while the position error of the previous method proposed by Kitasaka et al. was 35.6 mm. The difference in the errors was statistically significant ([Formula: see text], Student's [Formula: see text]-test). The position error of the proposed method was reduced from [Formula: see text] to [Formula: see text] using 30 mmHg virtual gas pressure. The proposed method simulated abdominal wall motion under the infused gas pressure and generated deformed volumetric images from a preoperative volumetric image. Our method predicted abdominal wall deformation given only the [Formula: see text] gas pressure and the tissue properties. Measurement of the visceral displacement will be required to validate the visceral motion.
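In one dimension, the MSDM dynamics reduce to integrating Newton's equation m·a = F_pressure − k·x − c·v; a sketch with purely illustrative parameters (the paper uses anatomical tissue properties on a 3D mesh, not these values):

```python
# Semi-implicit Euler integration of a single mass-spring-damper node
# driven by a constant pressure force. At steady state the displacement
# approaches force / k.

def simulate(m, k, c, force, dt, steps):
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (force - k * x - c * v) / m   # Newton's equation of motion
        v += a * dt                       # update velocity first
        x += v * dt                       # then position (semi-implicit)
    return x

x = simulate(m=1.0, k=10.0, c=5.0, force=2.0, dt=0.01, steps=5000)
print(round(x, 3))   # 0.2, i.e. force / k once the node settles
```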

17.
Med Image Anal ; 20(1): 152-61, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25484019

ABSTRACT

This paper proposes a method for automated anatomical labeling of the abdominal arteries and the hepatic portal system. In abdominal surgeries, understanding the blood vessel structure is critical because it is highly complicated. The input of the proposed method is the blood vessel region extracted from a CT volume. The blood vessel region is expressed as a tree structure by applying a thinning process, and the mapping from branches in the tree structure to anatomical names is computed. First, several characteristic anatomical names are assigned by rule-based pre-processing, and the branches assigned these names are used as references. The remaining blood vessel names are assigned using a likelihood function trained with a machine-learning technique. Simple rule-based post-processing corrects several blood vessel names. The output of the proposed method is a tree structure with anatomical names. In an experiment using 50 blood vessel regions manually extracted from abdominal CT volumes, the recall and precision rates were 86.2% and 85.3% for the abdominal arteries, and 86.5% and 79.5% for the hepatic portal system.


Subject(s)
Abdomen/blood supply , Liver/blood supply , Portal System/anatomy & histology , Radiography, Abdominal , Tomography, X-Ray Computed , Automation , Humans
18.
IEEE Trans Med Imaging ; 32(10): 1745-64, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23686944

ABSTRACT

This paper presents a new endoscope motion tracking method based on a novel external tracking device and a modified stochastic optimization method for boosting endoscopic navigation. We designed a tracking prototype in which a 2-D motion sensor directly measures the insertion-retreat (linear) motion and the rotation of the endoscope. With our stochastic optimization method, which embeds traceable particle swarm optimization in the Condensation algorithm, the full six-degrees-of-freedom endoscope pose (position and orientation) can be recovered from the 2-D motion sensor measurements. Experiments were performed on a dynamic bronchial phantom with a maximal simulated respiratory motion of about 24.0 mm. The experimental results demonstrate that the proposed method offers more effective and robust tracking than several currently available techniques: the average position tracking accuracy improved from 6.5 to 3.3 mm, approaching the clinical requirement of 2.0 mm.
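The Condensation algorithm at the heart of this tracker is a particle filter: predict, weight by measurement likelihood, resample. A minimal 1-D sketch of one cycle; the noise level and Gaussian measurement model are illustrative assumptions, and the paper's method additionally embeds particle swarm optimization and tracks a full 6-DoF pose:

```python
import math
import random

def condensation_step(particles, measurement, motion_noise=0.5):
    # 1) Predict: diffuse each particle with the motion model.
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # 2) Weight: likelihood of the sensor measurement for each particle.
    weights = [math.exp(-0.5 * (p - measurement) ** 2) for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) Resample: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))
```

Iterating this step pulls the particle cloud toward the measured pose while the diffusion term keeps it responsive to motion.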


Subject(s)
Algorithms , Bronchoscopy/instrumentation , Bronchoscopy/methods , Surgery, Computer-Assisted/instrumentation , Surgery, Computer-Assisted/methods , Bronchi/anatomy & histology , Bronchi/physiology , Humans , Imaging, Three-Dimensional/methods , Models, Biological , Movement/physiology , Phantoms, Imaging , Respiration
19.
Comput Med Imaging Graph ; 37(2): 131-41, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23562139

ABSTRACT

Precise annotation of vascular structure is desirable in computer-assisted systems to help surgeons identify each vessel branch. This paper proposes a method that annotates vessels on volume-rendered images by rendering their names onto them with a two-pass rendering process. In the first pass, vessel surface models are generated from properties such as centerlines, radii, and running directions, and the vessel names are drawn on the vessel surfaces; the vessel name image and the corresponding depth buffer are then generated by a virtual camera at the viewpoint. In the second pass, volume-rendered images are generated by a ray-casting volume rendering algorithm that takes the depth buffer from the first pass into account. After the two-pass rendering, an annotated image is produced by blending the volume-rendered image with the surface-rendered image. To confirm the effectiveness of the proposed method, we implemented a computer-assisted system for the automated annotation of abdominal arteries. The experimental results show that vessel names can be drawn on the corresponding vessel surfaces in the volume-rendered images at a computational cost nearly equal to that of volume rendering alone. The proposed method is therefore well suited to annotating vessels in 3D medical images in clinical applications such as image-guided surgery.
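The final blend step can be illustrated per pixel: a name pixel is composited only where the label surface is not behind the volume's first visible sample, which is what the first-pass depth buffer encodes. A simplified sketch with lists standing in for image buffers and an assumed constant blend alpha:

```python
def blend_labels(volume_rgb, label_rgb, label_depth, volume_depth, alpha=0.7):
    """Per-pixel depth-aware blend; use float('inf') where nothing was drawn."""
    out = []
    for v, l, ld, vd in zip(volume_rgb, label_rgb, label_depth, volume_depth):
        if ld <= vd:  # label surface is in front of the volume sample
            out.append(tuple(alpha * lc + (1 - alpha) * vc
                             for lc, vc in zip(l, v)))
        else:         # label occluded: keep the volume-rendered colour
            out.append(v)
    return out
```

In the actual pipeline the depth comparison happens inside the ray-casting loop, which is why the annotated rendering costs little more than volume rendering alone.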


Subject(s)
Angiography/methods , Artificial Intelligence , Blood Vessels/anatomy & histology , Documentation/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Humans , Natural Language Processing , Terminology as Topic
20.
Int J Comput Assist Radiol Surg ; 8(3): 353-63, 2013 May.
Article in English | MEDLINE | ID: mdl-23225021

ABSTRACT

PURPOSE: Chronic obstructive pulmonary disease (COPD) is characterized by airflow limitation. Physicians frequently assess its stage using pulmonary function tests and chest CT images. This paper describes a novel method for assessing COPD severity that combines measurements from pulmonary function tests (PFTs) with the results of chest CT image analysis. METHODS: The proposed method automatically classifies COPD severity into the five stages described in the GOLD guidelines using a multi-class AdaBoost classifier. The classifier uses 24 feature values: 18 measurements from PFTs and six measurements derived from chest CT image analysis. A total of 3 normal and 46 abnormal (COPD) examinations of adults were evaluated to test the method's diagnostic capability. RESULTS: The accuracy rates were 100.0% (resubstitution scheme) and 53.1% (leave-one-out scheme), and 95.7% of misclassifications were assigned to a neighboring severity stage. CONCLUSIONS: These results demonstrate that the proposed method is a feasible means of assessing COPD severity. A much larger sample size will be required to establish the limits of the method and provide clinical validation.
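The GOLD staging that the learned classifier targets is itself defined from spirometry, so a simple threshold rule makes a natural baseline beside it. A sketch using the conventional spirometric cut-offs (FEV1/FVC < 0.70 confirming obstruction, FEV1 % predicted setting the stage); note this is a generic reading of the GOLD thresholds, not the paper's 24-feature classifier:

```python
def gold_stage(fev1_fvc_ratio, fev1_pct_predicted):
    """Return 0 (no airflow limitation) or GOLD stage 1-4."""
    if fev1_fvc_ratio >= 0.70:
        return 0  # no obstruction
    if fev1_pct_predicted >= 80:
        return 1  # mild
    if fev1_pct_predicted >= 50:
        return 2  # moderate
    if fev1_pct_predicted >= 30:
        return 3  # severe
    return 4      # very severe
```

The AdaBoost classifier goes beyond this rule by weighing 18 PFT measurements together with six CT-derived measurements, which is what lets it separate cases the spirometric thresholds alone would confuse.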


Subject(s)
Pulmonary Disease, Chronic Obstructive/classification , Pulmonary Disease, Chronic Obstructive/diagnosis , Respiratory Function Tests , Tomography, X-Ray Computed , Adult , Age Factors , Aged , Aged, 80 and over , Algorithms , Body Mass Index , Female , Humans , Male , Middle Aged , Pulmonary Disease, Chronic Obstructive/physiopathology , Reproducibility of Results , Severity of Illness Index