Results 1 - 20 of 57
1.
Comput Med Imaging Graph ; 116: 102418, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39079410

ABSTRACT

Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of machine learning approaches have been considered. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the domain gap between synthetic images and real scenes degrades estimation. In this study, we propose a self-supervised offline learning framework for model-based registration that uses image features obtainable from both synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from the features common to synthetic and real images, we reduce the registration error by adding the shading and distance information that is available as prior knowledge in the synthetic images. Shape registration with real camera images is performed by learning to predict the differential model parameters between two synthetic images. The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in a thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.

2.
Sci Rep ; 14(1): 9686, 2024 04 27.
Article in English | MEDLINE | ID: mdl-38678091

ABSTRACT

In robot-assisted surgery, in which haptic feedback is absent, surgeons experience haptics-like sensations known as "pseudo-haptic feedback". As surgeons who routinely perform robot-assisted laparoscopic surgery, we wondered whether we could make these "pseudo-haptics" explicit to surgeons. We therefore created a simulation model that estimates manipulation forces from visual images alone. This study aimed to achieve vision-based estimation of the magnitude of forces during forceps manipulation of organs. We also attempted to detect over-force exceeding the threshold of safe manipulation. We built sensorized forceps that detect precise pressure at the tips along three vectors. Using an endoscopic system employed in actual surgery, images of the manipulation of excised pig kidneys were recorded with synchronized force data. A force estimation model was then trained using deep learning. Over-force was detected effectively when the visual input was restricted to a region of interest around the tips of the forceps. In this paper, we emphasize the importance of limiting the region of interest in vision-based force estimation tasks.
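The ROI restriction emphasized above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; the tip coordinate and ROI size are assumed inputs (e.g., from a hypothetical forceps-tip detector):

```python
import numpy as np

def crop_roi(frame: np.ndarray, tip_xy: tuple, size: int = 64) -> np.ndarray:
    """Crop a square region of interest centered on the forceps tip.

    Zero-pads when the ROI extends past the image border, so the output
    is always (size, size[, channels]). `tip_xy` is assumed to come from
    a separate tip detector (hypothetical here).
    """
    h, w = frame.shape[:2]
    half = size // 2
    x, y = tip_xy
    roi = np.zeros((size, size) + frame.shape[2:], dtype=frame.dtype)
    # Clamp the window to the image bounds.
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    # Paste the in-bounds patch into the padded output at the right offset.
    roi[y0 - (y - half):y1 - (y - half),
        x0 - (x - half):x1 - (x - half)] = frame[y0:y1, x0:x1]
    return roi
```

Restricting the network input this way forces a force-estimation model to attend to the tip-tissue interaction rather than the global scene.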


Subject(s)
Deep Learning, Kidney, Laparoscopy, Animals, Swine, Kidney/surgery, Kidney/physiology, Laparoscopy/methods, Robotic Surgical Procedures/methods
3.
Article in English | MEDLINE | ID: mdl-38083713

ABSTRACT

A model that represents the shapes and positions of organs or skeletal structures with a small number of parameters is expected to have a wide range of clinical applications, such as radiotherapy and surgical guidance. However, because soft organs vary in shape and position between patients, linear models struggle to reconstruct locally variable shapes, and nonlinear models are prone to overfitting, particularly when data are scarce. The aim of this study was to construct a shape atlas with high accuracy and good generalization performance. We designed a mesh variational autoencoder that reconstructs both nonlinear shape and position with high accuracy. We validated the trained model on liver meshes from 125 cases and found that positions and shapes could be reconstructed with an average accuracy of 4.3 mm on the test data of 19 cases.


Subject(s)
Computer-Assisted Image Processing, Three-Dimensional Imaging, Liver, Humans, Liver/diagnostic imaging
4.
J Thorac Dis ; 15(9): 4736-4744, 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37868873

ABSTRACT

Background: Preoperative three-dimensional (3D) computed tomography (CT) images have been widely used as surgical guides in lung surgery; however, the lungs tend to be deflated during surgery. Discrepancies between the preoperatively constructed 3D image and the intraoperative view of the deflated lungs often require preoperative and/or intraoperative marking methods for sublobar pulmonary resection. We have developed a lung deflation simulation algorithm in which 3D CT images of the deflated lungs are predicted based only on the preoperative CT taken in an inflated phase of respiration. Using this system, we conducted a preliminary study to retrospectively compare the intersegmental line predicted by our lung deflation simulation algorithm with the intersegmental line delineated by intravenous administration of indocyanine green. Methods: Sixteen patients who underwent unilateral segmentectomy between January 1, 2020, and June 30, 2022, were included in the study. The identified intersegmental lines were confirmed intraoperatively using indocyanine green. These actual intersegmental lines were compared with those delineated on 3D images using the lung deflation simulation algorithm. Results: Of the 16 patients who underwent pulmonary segmentectomy, the intersegmental lines were in complete agreement in twelve patients, partial agreement in three patients, and disagreement in one patient. The concordance rate of the intersegmental lines was 75%. Conclusions: The lung deflation simulation algorithm provides a new surgical guide in addition to those currently in use. Continuous innovation might lead to a less invasive surgical technique for delineating the intersegmental line.

5.
J Appl Clin Med Phys ; 24(10): e14073, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37317937

ABSTRACT

PURPOSE: This study was conducted to determine the margins and timing of replanning by assessing the daily interfractional cervical and uterine motions using magnetic resonance (MR) images. METHODS: Eleven patients with cervical cancer who underwent intensity-modulated radiotherapy (IMRT) in 23-25 fractions were considered in this study. The daily and reference MR images were converted into three-dimensional (3D) shape models. Patient-specific anisotropic margins were calculated from the proximal 95% of vertices located outside the surface of the reference model. Population-based margins were defined as the 90th percentile values of the patient-specific margins. The expanded volume of interest (expVOI) for the cervix and uterus was generated by expanding the reference model based on the population-based margin to calculate the coverage for daily deformable mesh models. For comparison, expVOIconv was generated using conventional margins: right (R), left (L), anterior (A), posterior (P), superior (S), and inferior (I) were (5, 5, 15, 15, 10, 10) and (10, 10, 20, 20, 15, 15) mm for the cervix and uterus, respectively. Subsequently, a replanning scenario was developed based on the cervical volume change. ExpVOIini and expVOIreplan were generated before and after replanning, respectively. RESULTS: Population-based margins were (R, L, A, P, S, I) of (7, 7, 11, 6, 11, 8) and (14, 13, 27, 19, 15, 21) mm for the cervix and uterus, respectively. The timing of replanning was found to be the 16th fraction, and the volume of expVOIreplan decreased by >30% compared with that of expVOIini. However, the margins could not be reduced while ensuring equivalent coverage after replanning. CONCLUSION: We determined the margins and timing of replanning through a detailed daily analysis. The margins of the cervix were smaller than conventional margins in some directions, while the margins of the uterus were larger in almost all directions. A margin equivalent to that at initial planning was required for replanning.
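As a minimal sketch of the population-margin construction described above: stack the patient-specific anisotropic margins and take the 90th percentile per direction. The numbers below are illustrative placeholders, not the study's data.

```python
import numpy as np

# Per-patient anisotropic margins (mm) in the order (R, L, A, P, S, I);
# illustrative values only.
patient_margins = np.array([
    [5.1, 4.8,  9.0, 4.2,  8.5, 6.1],
    [6.3, 6.9, 10.2, 5.5, 10.8, 7.4],
    [7.2, 7.1, 11.3, 6.0, 11.1, 8.0],
    [4.4, 5.2,  8.1, 3.9,  7.9, 5.8],
])

# Population-based margin = 90th percentile over patients, per direction.
population_margin = np.percentile(patient_margins, 90, axis=0)
```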


Subject(s)
Intensity-Modulated Radiotherapy, Uterine Cervical Neoplasms, Female, Humans, Cervix Uteri/diagnostic imaging, Cervix Uteri/pathology, Uterus/diagnostic imaging, Uterus/pathology, Motion (Physics), Magnetic Resonance Imaging/methods, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/radiotherapy, Uterine Cervical Neoplasms/pathology, Computer-Assisted Radiotherapy Planning/methods, Intensity-Modulated Radiotherapy/methods, Radiotherapy Dosage
6.
Article in English | MEDLINE | ID: mdl-37079768

ABSTRACT

The Resection Process Map (RPM) is a surgical simulation system that uses preoperative three-dimensional computed tomography. Unlike the usual static simulation, this system provides surgeons with an individualized dynamic deformation of the lung parenchyma and vessels. The RPM was first introduced in 2020. Although the intraoperative usefulness of this system has been evaluated experimentally, there have been no reports on its clinical use. Herein, we present in detail the first experience with the RPM during robot-assisted anatomical lung resection in a real clinical setting.

7.
J Appl Clin Med Phys ; 24(5): e13912, 2023 May.
Article in English | MEDLINE | ID: mdl-36659871

ABSTRACT

PURPOSE: The aim of this study was to evaluate the generalization ability of segmentation for limited-FOV CBCT in the male pelvic region using a full-image CNN. Auto-segmentation accuracy was evaluated on various datasets with different intensity distributions and FOV sizes. METHODS: A total of 171 CBCT datasets from patients with prostate cancer were enrolled: 151, 10, and 10 datasets acquired from Vero4DRT, TrueBeam STx, and Clinac-iX, respectively. The FOV for Vero4DRT, TrueBeam STx, and Clinac-iX was 20, 26, and 25 cm, respectively. The ROIs, including the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U2-Net CNN architecture was used to train the segmentation model. A total of 131 limited-FOV CBCT datasets from Vero4DRT were used for training (104 datasets) and validation (27 datasets); the remaining datasets were used for testing. The training routine was set to save the best weight values when the DSC in the validation set was maximized. Segmentation accuracy was qualitatively and quantitatively evaluated between the ground-truth and predicted ROIs in the different testing datasets. RESULTS: The mean scores ± standard deviation of visual evaluation for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84-0.87 for the prostate and rectum, and 0.48-0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles differed significantly among the three imaging devices, the DSC value for the bladder changed by less than 1 percentage point. The median MSD values for all imaging devices were ≤1.2 mm for the bladder and 1.4-2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles differed significantly between the three imaging devices. CONCLUSION: The proposed method is effective for testing datasets whose intensity distributions and FOVs differ from those of the training datasets.
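The DSC used for evaluation above is the standard Dice overlap measure; a minimal NumPy version (not tied to the paper's pipeline) is:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A DSC of 1.0 means perfect overlap; the ≥0.94 bladder values above indicate near-perfect agreement, while 0.48-0.69 for the seminal vesicles reflects a much harder structure.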


Subject(s)
Deep Learning, Spiral Cone-Beam Computed Tomography, Humans, Male, Computer-Assisted Radiotherapy Planning/methods, Computer-Assisted Image Processing/methods, Pelvis/diagnostic imaging
8.
PLoS One ; 17(12): e0279005, 2022.
Article in English | MEDLINE | ID: mdl-36520814

ABSTRACT

Large slice thickness or slice increment causes insufficient information in computed tomography (CT) data in the longitudinal direction, which degrades the quality of CT-based diagnosis. Traditional approaches such as high-resolution computed tomography (HRCT) and linear interpolation can address this problem; however, HRCT increases the radiation dose, and linear interpolation causes artifacts. In this study, we propose a deep-learning-based approach to reconstruct densely sliced CT from sparsely sliced CT data without any dose increase. The proposed method reconstructs CT images from neighboring slices using a U-net architecture. To prevent multiple reconstructed slices from influencing one another, we propose a parallel architecture in which multiple U-net architectures work independently. Moreover, for a specific organ (i.e., the liver), we propose a range-clip technique that improves reconstruction quality by enlarging the range of the training data, enhancing the learning of CT values within the organ. CT data from 130 patients were collected, with 80% used for training and the remaining 20% used for testing. Experiments showed that our parallel U-net architecture reduced the mean absolute error of CT values in the reconstructed slices by 22.05% and reduced the incidence of artifacts around the boundaries of target organs compared with linear interpolation. Further improvements of 15.12%, 11.04%, 10.94%, and 10.63% were achieved for the liver, left kidney, right kidney, and stomach, respectively, using the proposed range-clip algorithm. We also compared the proposed architecture with the original U-net, and the experimental results demonstrated the superiority of our approach.
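The range-clip idea can be sketched as clipping CT values to an organ-specific HU window and rescaling, which stretches the organ's values over the training range. The window below (-100 to 200 HU) is an assumed soft-tissue range for illustration, not the paper's setting.

```python
import numpy as np

def range_clip(volume: np.ndarray, lo: float = -100.0, hi: float = 200.0) -> np.ndarray:
    """Clip CT values to an organ-specific HU window and rescale to [0, 1].

    The default window is an illustrative soft-tissue range for the liver,
    not the values used in the paper.
    """
    clipped = np.clip(volume.astype(np.float64), lo, hi)
    return (clipped - lo) / (hi - lo)
```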


Subject(s)
Deep Learning, Humans, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Artifacts, Algorithms
9.
JTCVS Tech ; 15: 181-191, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36276675

ABSTRACT

Objective: Prolonged air leak is the most common complication of thoracic surgery. Intraoperative leak site detection is the first step in decreasing the risk of leak-related postoperative complications. Methods: We retrospectively reviewed the surgical videos of patients who underwent lung resection at our institution. In the training phase, deep learning-based air leak detection software was developed using leak-positive endoscopic images. In the testing phase, a different data set was used to evaluate our proposed application for each predicted box. Results: A total of 110 originally captured and labeled images obtained from 70 surgeries were preprocessed for the training data set. The testing data set contained 64 leak-positive and 45 leak-negative sites. The testing data set was obtained from 93 operations, including 58 patients in whom an air leak was present and 35 patients in whom an air leak was absent. In the testing phase, our software detected leak sites with a sensitivity and specificity of 81.3% and 68.9%, respectively. Conclusions: We have successfully developed a deep learning-based leak site detection application, which can be used in deflated lungs. Although the current version is still a prototype with a limited training data set, it is a novel concept of leak detection based entirely on visual information.

10.
IEEE Trans Med Imaging ; 41(12): 3747-3761, 2022 12.
Article in English | MEDLINE | ID: mdl-35901001

ABSTRACT

Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a three-dimensional (3D) organ mesh for a low-contrast two-dimensional (2D) projection image. This framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometrical constraint of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.


Subject(s)
Three-Dimensional Imaging, Image-Guided Radiotherapy, Three-Dimensional Imaging/methods, Algorithms, Image-Guided Radiotherapy/methods, Motion (Physics), Liver/diagnostic imaging
11.
Interact Cardiovasc Thorac Surg ; 34(5): 808-813, 2022 05 02.
Article in English | MEDLINE | ID: mdl-35018431

ABSTRACT

OBJECTIVES: Recently, preoperative and intraoperative simulation using three-dimensional computed tomography (CT) has attracted much attention in thoracic surgery. However, because conventional three-dimensional CT only shows static images, dynamic simulation is required for a more precise operation. We previously reported on a resection process map for pulmonary resection, which we developed to generate virtual dynamic images from preoperative patient-specific CT scans. The goal of this study was to evaluate the feasibility of the clinical use of the resection process map for anatomical lung resection. METHODS: This study included 5 lobectomies for different lobes and 4 representative segmentectomies. Dissection of the pulmonary arteries, veins, and bronchi was considered a key part of each procedure. To assess the description of images obtained from the resection process map, relevant clips from the actual surgical videos were collected, retrospectively replicated, and superimposed on the resection process map to explain the procedures. RESULTS: In all surgical procedures, the resection process map successfully and semiautomatically generated a virtual dynamic image from the patient-specific CT data. Moreover, superimposition of the virtual images on the selected clips from the surgical videos showed no major differences. CONCLUSIONS: The resection process map could generate virtual images that corresponded to the actual surgical videos and has the potential for clinical use as preoperative and intraoperative simulation.


Subject(s)
Lung Neoplasms, Pneumonectomy, Humans, Three-Dimensional Imaging/methods, Lung/surgery, Lung Neoplasms/surgery, Pneumonectomy/methods, Retrospective Studies
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2843-2846, 2021 11.
Article in English | MEDLINE | ID: mdl-34891840

ABSTRACT

Artifacts and defects in cone-beam computed tomography (CBCT) images are a problem in radiotherapy and surgical procedures. Unsupervised learning-based image translation techniques have been studied to improve the image quality of head and neck CBCT images, but there have been few studies on improving the image quality of abdominal CBCT images, which are strongly affected by organ deformation due to posture and breathing. In this study, we propose a method for improving the image quality of abdominal CBCT images by translating their voxel values to those of corresponding paired CT images using an unsupervised CycleGAN framework. This method preserves anatomical structure through adversarial learning that translates voxel values according to corresponding regions between CBCT and CT images of the same case. The image translation model was trained on 68 CT-CBCT datasets and then applied to 8 test datasets, and the effectiveness of the proposed method for improving the image quality of CBCT images was confirmed.


Subject(s)
Computer-Assisted Image Processing, Computer-Assisted Radiotherapy Planning, Artifacts, Cone-Beam Computed Tomography
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2960-2963, 2021 11.
Article in English | MEDLINE | ID: mdl-34891866

ABSTRACT

Computed tomography and magnetic resonance imaging produce high-resolution images; however, during surgery or radiotherapy, only low-resolution cone-beam CT and low-dimensional X-ray images can be obtained. Furthermore, because the duodenum and stomach are filled with air, it is hard to segment their contours accurately even in high-resolution CT images. In this paper, we propose a method based on a graph convolutional network (GCN) to reconstruct organs that are hard to detect in medical images. The method uses surrounding detectable-organ features to determine the shape and location of the target organ and learns mesh deformation parameters, which are applied to a target organ template. The role of the template is to establish an initial topological structure for the target organ. We conducted experiments with both single and multiple organ meshes to verify the performance of our proposed method.


Subject(s)
Cone-Beam Computed Tomography, X-Ray Computed Tomography, Abdomen, Magnetic Resonance Imaging
14.
Med Image Anal ; 73: 102181, 2021 10.
Article in English | MEDLINE | ID: mdl-34303889

ABSTRACT

The positions of nodules can change because of intraoperative lung deflation, and the modeling of pneumothorax-associated deformation remains a challenging issue for intraoperative tumor localization. In this study, we introduce spatial and geometric analysis methods for inflated/deflated lungs and discuss heterogeneity in pneumothorax-associated lung deformation. Contrast-enhanced CT images simulating intraoperative conditions were acquired from live Beagle dogs. The images contain the overall shape of the lungs, including all lobes and internal bronchial structures, and were analyzed to provide a statistical deformation model that could be used as prior knowledge to predict pneumothorax. To address the difficulties of mapping pneumothorax CT images with topological changes and CT intensity shifts, we designed deformable mesh registration techniques for mixed data structures including the lobe surfaces and the bronchial centerlines. Three global-to-local registration steps were performed under the constraint that the deformation was spatially continuous and smooth, while matching visible bronchial tree structures as much as possible. The developed framework achieved stable registration with a Hausdorff distance of less than 1 mm and a target registration error of less than 5 mm, and visualized deformation fields that demonstrate per-lobe contractions and rotations with high variability between subjects. The deformation analysis results show that the strain of lung parenchyma was 35% higher than that of bronchi, and that deformation in the deflated lung is heterogeneous.
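The Hausdorff distance reported above measures the worst-case mismatch between two surfaces. A brute-force NumPy sketch for small point sets (illustrative only, not the registration code itself) is:

```python
import numpy as np

def hausdorff(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 3) point sets.

    Builds the full pairwise distance matrix, so it is only suitable for
    small point sets; mesh registration pipelines use spatial indexing.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # Worst nearest-neighbor distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

By contrast, the target registration error is typically the distance between corresponding landmarks after registration, which is why the two thresholds above (1 mm and 5 mm) differ.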


Subject(s)
Lung Neoplasms, Pneumothorax, Algorithms, Animals, Bronchi/diagnostic imaging, Dogs, Computer-Assisted Image Processing, Lung/diagnostic imaging, Pneumothorax/diagnostic imaging, Surgical Mesh
15.
Radiat Oncol ; 16(1): 96, 2021 Jun 06.
Article in English | MEDLINE | ID: mdl-34092240

ABSTRACT

BACKGROUND: We investigated the geometric and dosimetric impact of three-dimensional (3D) generative adversarial network (GAN)-based metal artifact reduction (MAR) algorithms on volumetric-modulated arc therapy (VMAT) and intensity-modulated proton therapy (IMPT) for the head and neck region, based on artifact-free computed tomography (CT) volumes with dental fillings. METHODS: Thirteen metal-free CT volumes of the head and neck region were obtained from The Cancer Imaging Archive. To simulate metal artifacts, we defined 3D regions of the teeth for pseudo-dental fillings on the metal-free CT volumes. A value of 4000 HU was assigned to the selected teeth regions of interest. Two different CT volumes, one with four (m4) and the other with eight (m8) pseudo-dental fillings, were generated for each case. These CT volumes were used as the Reference. CT volumes with metal artifacts were then generated from the Reference CT volumes (Artifacts). On the Artifacts CT volumes, metal artifacts were manually corrected using the water-density override method with a value of 1.0 g/cm3 (Water). In addition, CT volumes with metal artifacts reduced using a 3D GAN extension of CycleGAN were generated (GAN-MAR). The structural similarity (SSIM) index within the planning target volume was calculated as a quantitative error metric between the Reference CT volumes and the other volumes. After creating VMAT and IMPT plans on the Reference CT volumes, the reference plans were recalculated for the remaining CT volumes. RESULTS: The time required to generate a single GAN-MAR CT volume was approximately 30 s. The median SSIMs were lower in the m8 group than in the m4 group, and ANOVA showed a significant difference in the SSIM for the m8 group (p < 0.05). Although the median differences in D98%, D50%, and D2% were larger in the m8 group than in the m4 group, the differences from the reference plans were within 3% for VMAT and 1% for IMPT. CONCLUSIONS: The GAN-MAR CT volumes generated in a short time were closer to the Reference CT volumes than the Water and Artifacts CT volumes. The observed dosimetric differences compared with the reference plans were clinically acceptable.


Subject(s)
Algorithms, Head/radiation effects, Neck/radiation effects, Intensity-Modulated Radiotherapy/methods, Artifacts, Head/diagnostic imaging, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Metals, Neck/diagnostic imaging, Neural Networks (Computer), Radiometry, Computer-Assisted Radiotherapy Planning, X-Ray Computed Tomography
16.
Med Image Anal ; 67: 101829, 2021 01.
Article in English | MEDLINE | ID: mdl-33129146

ABSTRACT

Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single-organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching their region labels defined on four-dimensional computed tomography images. A total of 250 volumes were measured from 25 pancreatic cancer patients. This paper also proposes per-region-based deformation learning using a non-linear kernel model to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed concept estimates deformations better than general per-patient-based learning models and achieves a clinically acceptable estimation error with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion.
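A per-region non-linear kernel model of the kind described above could take the form of kernel ridge regression from shape features to a displacement. This is a generic sketch under assumed feature and displacement arrays, not the authors' implementation; the RBF bandwidth and regularization values are placeholders.

```python
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Gaussian RBF kernel matrix between two feature arrays (n, d), (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(x_train, y_train, x_test, gamma: float = 0.5, lam: float = 1e-3):
    """Kernel ridge regression: organ shape features -> tumor displacement."""
    k = rbf_kernel(x_train, x_train, gamma)
    # Ridge term lam stabilizes the solve against an ill-conditioned kernel.
    alpha = np.linalg.solve(k + lam * np.eye(len(x_train)), y_train)
    return rbf_kernel(x_test, x_train, gamma) @ alpha
```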


Subject(s)
Four-Dimensional Computed Tomography, Pancreatic Neoplasms, Abdomen, Humans, Motion (Physics), Pancreatic Neoplasms/diagnostic imaging
17.
Phys Med Biol ; 66(1): 014001, 2021 01 14.
Article in English | MEDLINE | ID: mdl-33227722

ABSTRACT

PURPOSE: To introduce the concept of statistical shape model (SSM)-based planning organ-at-risk volume (sPRV) for pancreatic cancer patients. METHODS: A total of 120 pancreatic cancer patients were enrolled in this study. After correcting inter-patient variations in the centroid position of the planning target volume (PTV), four different SSMs were constructed by registering a deformable template model to an individual model for the stomach and duodenum. The sPRV, which focused on the following different components of the inter-patient variations, was then created: Scenario A: shape, rotational angle, volume, and centroid position; Scenario B: shape, rotational angle, and volume; Scenario C: shape and rotational angle; and Scenario D: shape. The conventional PRV (cPRV) was created by adding an isotropic margin R (3-15 mm) to the mean shape model. The corresponding sPRV was created from the SSM until the volume difference between the cPRV and sPRV was less than 1%. Thereafter, we computed the overlapping volume between the PTV and cPRV (OLc) or sPRV (OLs) in each patient. OLs being larger than OLc implies that the local shape variations in the corresponding OAR close to the PTV were large. Therefore, OLs/OLc was calculated in each patient for each R-value, and the median value of OLs/OLc was regarded as a surrogate for plan quality for each R-value. RESULTS: For R = 3 and 5 mm, OLs/OLc exceeded 1 for the stomach and duodenum in all scenarios, with a maximum OLs/OLc of 1.21. This indicates that smaller isotropic margins did not sufficiently account for the local shape changes close to the PTV. CONCLUSIONS: Our results indicated that, in contrast to conventional PRV, SSM-based PRVs, which account for local shape changes, would result in better dose sparing for the stomach and duodenum in pancreatic cancer patients.
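The cPRV construction and overlap metric above can be sketched on a voxel grid: expand a mask by an isotropic margin and compare PTV overlap between two PRVs. This is brute-force and illustrative only; the study worked with surface meshes, not this voxel code.

```python
import numpy as np

def expand(mask: np.ndarray, margin_vox: float) -> np.ndarray:
    """Isotropically expand a boolean voxel mask by a margin (in voxels).

    Brute-force distance check against every organ voxel; fine for tiny
    grids, far too slow for clinical volumes.
    """
    organ = np.argwhere(mask)
    grid = np.indices(mask.shape).reshape(mask.ndim, -1).T
    d2 = ((grid[:, None, :] - organ[None, :, :]) ** 2).sum(-1).min(axis=1)
    return (d2 <= margin_vox ** 2).reshape(mask.shape)

def overlap_ratio(ptv: np.ndarray, prv_a: np.ndarray, prv_b: np.ndarray) -> float:
    """Ratio of PTV overlap volumes, i.e. OLs/OLc in the abstract's notation."""
    return np.logical_and(ptv, prv_a).sum() / np.logical_and(ptv, prv_b).sum()
```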


Subject(s)
Statistical Models, Organs at Risk/radiation effects, Pancreatic Neoplasms/radiotherapy, Computer-Assisted Radiotherapy Planning/methods, Humans, Radiotherapy Dosage
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1278-1281, 2020 07.
Article in English | MEDLINE | ID: mdl-33018221

ABSTRACT

In endoscopic surgery, it is necessary to understand the three-dimensional structure of the target region to improve safety. For organs that do not deform much during surgery, preoperative computed tomography (CT) images can be used to understand their three-dimensional structure; however, deformation estimation is necessary for organs that deform substantially. Although intraoperative deformation estimation of organs has been widely studied, two-dimensional organ region segmentation from camera images is necessary to perform this estimation. In this paper, we propose a region segmentation method using U-net for the lung, an organ that deforms substantially during surgery. Because the accuracy of the results for smokers' lungs is lower than that for non-smokers' lungs, we improved the accuracy by translating the texture of the lung surface using a CycleGAN.


Subject(s)
Deep Learning, Lung, Endoscopy, Lung/diagnostic imaging, X-Ray Computed Tomography
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1608-1611, 2020 07.
Article in English | MEDLINE | ID: mdl-33018302

ABSTRACT

Computed tomography (CT) and magnetic resonance imaging (MRI) scanners measure three-dimensional (3D) images of patients. However, only local two-dimensional (2D) images may be obtained during surgery or radiotherapy. Although computer vision techniques have shown that 3D shapes can be estimated from multiple 2D images, shape reconstruction from a single 2D image, such as an endoscopic image or an X-ray image, remains a challenge. In this study, we propose X-ray2Shape, which permits deep learning-based reconstruction of a 3D organ mesh from a single 2D projection image. The method learns the mesh deformation from a mean template and deep features computed from the individual projection images. Experiments with organ meshes and digitally reconstructed radiograph (DRR) images of abdominal regions confirmed the estimation performance of the method.


Subject(s)
Three-Dimensional Imaging, X-Ray Computed Tomography, Humans, Liver/diagnostic imaging, Magnetic Resonance Imaging
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 5519-5522, 2020 07.
Article in English | MEDLINE | ID: mdl-33019229

ABSTRACT

Because medical treatment draws on implicit medical knowledge and experience, such decisions must be made explicit when systematizing surgical procedures. We propose an algorithm that extracts low-dimensional features that are important for determining the number of fibular segments in mandibular reconstruction, using the enumeration of Lasso solutions (eLasso). To perform multi-class classification, we extend eLasso with an importance evaluation criterion that quantifies the contribution of the extracted features. Experimental results show that the extracted 7-dimensional feature set has the same estimation performance as the full 49-dimensional feature set.


Subject(s)
Mandibular Reconstruction, Algorithms, Fibula/surgery